2197. Replace Non-Coprime Numbers in Array

You are given an array of integers `nums`. Perform the following steps:

1. Find any two adjacent numbers in `nums` that are non-coprime.
2. If no such numbers are found, stop the process.
3. Otherwise, delete the two numbers and replace them with their LCM (Least Common Multiple).
4. Repeat this process as long as you keep finding two adjacent non-coprime numbers.

Return the final modified array. It can be shown that replacing adjacent non-coprime numbers in any arbitrary order leads to the same result. The test cases are generated such that the values in the final array are less than or equal to 10^8. Two values x and y are non-coprime if GCD(x, y) > 1, where GCD(x, y) is the Greatest Common Divisor of x and y.
/**
 * @param {number[]} nums
 * @return {number[]}
 */
var replaceNonCoprimes = function(nums) {
    // Helper function to compute GCD using Euclidean algorithm
    const gcd = (a, b) => {
        while (b !== 0) {
            let temp = b;
            b = a % b;
            a = temp;
        }
        return a;
    };

    // Helper function to compute LCM (divide before multiplying so the
    // intermediate value stays within Number.MAX_SAFE_INTEGER)
    const lcm = (a, b) => (a / gcd(a, b)) * b;

    const stack = [];

    for (let num of nums) {
        stack.push(num); // Add current number to the stack

        // Merge while the top two stack entries share a factor > 1
        while (stack.length > 1) {
            const b = stack[stack.length - 1];
            const a = stack[stack.length - 2];
            if (gcd(a, b) === 1) break;
            stack.pop();
            stack.pop();
            stack.push(lcm(a, b));
        }
    }

    return stack;
};
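
For example, `replaceNonCoprimes([6, 4, 3, 2, 7, 6, 2])` returns `[12, 7, 6]`: 6 and 4 merge into 12, which then absorbs 3 and 2; 7 is coprime with its neighbors; and the trailing 6 and 2 merge into 6. The stack ensures each new LCM is re-checked against the value to its left, so a single pass suffices.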

Header-footer-BDI

Go to:
admin/config/bdi/settings --> Site Tools
and add the files that must be made public, e.g.:
/en/header-footer.html
/en/header-footer.js

Then go to:
.com/ca/en/__include-links and verify the resources that will be exported

☁️ AWS - Investigating Network

# VPCs
## Counting VPCs
```bash
aws ec2 describe-vpcs --query "length(Vpcs)"
```

## Displaying Attributes
```bash
aws ec2 describe-vpcs | jq '.Vpcs[0] | keys'
```

***
## AWS VPC Attributes

| Attribute | Description |
| :--- | :--- |
| **VpcId** | 🆔 The VPC's unique, immutable identifier, effectively its "license plate" (e.g. `vpc-0123456789abcdef0`). |
| **CidrBlock** | 🌐 The VPC's **primary** IPv4 address range, defined in CIDR notation (e.g. `10.0.0.0/16`). This is the address space from which its subnets are carved. |
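
For scripted checks, here is a minimal boto3 equivalent of the two CLI queries above, assuming credentials and a default region are already configured:

```python
import boto3

ec2 = boto3.client("ec2")
vpcs = ec2.describe_vpcs()["Vpcs"]
print(len(vpcs))                           # same as --query "length(Vpcs)"
for vpc in vpcs:
    print(vpc["VpcId"], vpc["CidrBlock"])  # the attributes described above
```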

☁️ 🐧 CloudShell - `.bashrc`

# Organizing .bashrc and AWS scripts

## Core principle: keep .bashrc lean and fast

The `.bashrc` file runs every time a terminal opens, so it should stay fast and contain only the essentials. Think of it as the entryway of your house: it holds the essentials (coat rack, keys), not your entire workbench.

## Decision table: what goes where?

| Type of code | Location | Reason | Example |
|-------------|-------------|---------|---------|

IntentGraphNet-CognAlign

# IntentGraphNet-CognAlign: Symbolic and Context-Aware Modeling for Employment Intention Prediction

> This repository provides a research-oriented scaffold based on the paper *“Deep Learning Models for Predicting Employment Intentions of Rural College Students”* (Xiangqian Liu, Lihao Shang, Shengjuan Liu, 2024). The framework integrates **IntentGraphNet**, a symbolic graph encoding model, and **CognAlign**, a context-aware alignment mechanism.

## Motivation

Predicting employment intentions of rural college students is critical for:

- Addressing **structural inequalities** in the labor market.
- Informing **policy and career guidance**.
- Enhancing **social mobility and equitable access** to opportunities.

Conventional models suffer from:

- Rule-based rigidity,
- Shallow statistical learning,
- Deep learning's interpretability and fairness challenges.

Our dual-component framework unifies symbolic reasoning and deep learning to provide **transparent, scalable, and generalizable predictions** across socio-economic contexts.

## Key Components

- **IntentGraphNet**:
  - Constructs personalized semantic graphs over symbolic nodes (e.g., family income, motivation, orientation).
  - Uses attention-based message passing and disentangled representation learning.
  - Produces transparent, interpretable embeddings of student attributes.
- **CognAlign**:
  - Projects symbolic embeddings into contextual manifolds.
  - Uses adversarial domain adaptation and entropy-aware prediction.
  - Ensures alignment across regions and fairness across demographic groups.
```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphEncoder(nn.Module):
    """Multimodal Encoder + Graph message passing (IntentGraphNet, Sec. 3.3)."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden_dim)
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.update = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x, adj):
        """
        x:   [N, in_dim]  node features (one row per symbolic attribute)
        adj: [N, N]       binary adjacency of the semantic graph
        """
        h = self.embed(x)                                   # [N, hidden_dim]
        n = h.size(0)
        # Attention logits for every node pair (completion sketch; assumes
        # every node has at least one neighbor, e.g. self-loops in adj)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = self.attn(pair).squeeze(-1)                # [N, N]
        logits = logits.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(logits, dim=-1)               # neighbor weights
        msg = alpha @ h                                     # aggregated messages
        return F.relu(self.update(h + msg))
```
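
A quick shape check for the sketch above; the `forward` body past its docstring is a hedged completion of the truncated source, not the paper's exact formulation:

```python
enc = GraphEncoder(in_dim=16, hidden_dim=32)
x = torch.randn(5, 16)    # 5 symbolic attribute nodes
adj = torch.eye(5)        # self-loops keep every attention row well-defined
print(enc(x, adj).shape)  # torch.Size([5, 32])
```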

LinguaSphere

# LinguaSphere: Speech-Interactive 3D Language Learning with Adaptive Feedback

> A research-grade skeleton implementation inspired by the paper *“Speech-Interactive 3D Game Architecture for Language Learning with Feedback Mechanisms”* (Meizi Zhang, Northeastern University). The code organizes the core modules (intent classification, grammar-aware parsing, symbolic execution, semantic anchoring, and adaptive curriculum) in a clean, extensible Python package suitable for prototyping and integration with a 3D engine.

## Why this project?

Traditional CALL tools rely on static drills and pre-scripted flows. LinguaSphere treats **language as action** inside a simulated world: utterances map to in-game objects, agents, and tasks; feedback loops are **multimodal** (visual, auditory, textual) and **adaptive** to proficiency. (See the paper's Section 3.3 "LinguaSphere" and Figure 1 for the modular architecture.)

## Key Features (scaffold)

- **Hybrid Communication Layer**: intent classification → grammar-aware parsing → symbolic execution.
- **Semantic Anchoring**: phrases are grounded to objects/actions/goals in the world.
- **Adaptive Curriculum Control**: tracks mastery over constructs and schedules tasks to target weaknesses.
- **Feedback Loop**: generates context-sensitive corrective tuples `(highlight, corrected_expression, explanation)`.

> This repo provides a runnable *skeleton* with clean interfaces and mock logic so you can plug in ASR/NLP, or connect to Unity/Unreal.
```python
# model.py
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Dict, List, Tuple, Optional, Any
import math
import random

# -----------------------------
# Data structures
# -----------------------------

@dataclass
class WorldObject:
    obj_id: str
    cls: str
    affordances: List[str]
    linguistic_tags: List[str]
    semantic_anchors: List[str]
    state: Dict[str, Any] = field(default_factory=dict)

@dataclass
class WorldState:
    """Minimal world container (a completion sketch for the truncated source)."""
    objects: Dict[str, WorldObject] = field(default_factory=dict)

    def add(self, obj: WorldObject) -> None:
        self.objects[obj.obj_id] = obj

    def find_by_tag(self, tag: str) -> List[WorldObject]:
        """Return all objects whose linguistic tags include `tag`."""
        return [o for o in self.objects.values() if tag in o.linguistic_tags]
```
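
A minimal smoke test for the dataclasses above; `WorldState` and its methods are completion sketches, not names taken from the paper:

```python
world = WorldState()
world.add(WorldObject(
    obj_id="door-1", cls="door",
    affordances=["open", "close"],
    linguistic_tags=["door"],
    semantic_anchors=["entrance"],
))
print([o.obj_id for o in world.find_by_tag("door")])  # ['door-1']
```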

AeroPerceptNet + GeoNarrative Alignment

# AeroPerceptNet + GeoNarrative Alignment

This repository provides a minimal PyTorch implementation of the framework introduced in **“Multi-Source Fusion Architecture for Intelligent Evaluation of Low-Altitude Tourism Experience”** by *Lina Fu, Yanlong Fu, Hua Su, and Yan Wang*.

---

## Highlights

- **AeroPerceptNet:** a hybrid neural-symbolic model integrating:
  - **Multimodal Fusion Encoding**: fuses panoramic visual features, geospatial semantics, and geometric flight trajectories.
  - **Graphical Temporal Aggregation**: BiGRU-based sequence modeling with attention pooling for a coherent route-level representation.
  - **Interpretable Local Scoring**: localized perceptual scores mapped to trajectory coordinates.
- **GeoNarrative Alignment (GNA):**
  - Captures **semantic cohesion** across route segments.
  - Detects **thematic discontinuities** and clusters.
  - Measures **symbol–perception synchrony** (aligns salience with symbolic meaning).
  - Outperforms SOTA baselines (OC-SVM, Isolation Forest, DAGMM, DeepSVDD, TranAD) by **3–5%** in Accuracy/F1 on four benchmark datasets.
```python
# model.py
# AeroPerceptNet + GeoNarrative Alignment (simplified PyTorch version)
# Author: your-name
# License: MIT

import torch
import torch.nn as nn
import torch.nn.functional as F


# ----------------------------
# Multimodal Fusion Encoding
# ----------------------------
class FusionEncoder(nn.Module):
    def __init__(self, input_dim=64, latent_dim=64):
        super().__init__()
        self.vis_proj = nn.Linear(input_dim, latent_dim)
        self.sem_proj = nn.Linear(input_dim, latent_dim)
        self.geo_proj = nn.Linear(input_dim, latent_dim)
        self.fuse = nn.Linear(3 * latent_dim, latent_dim)

    def forward(self, vis, sem, geo):
        # Project each modality, then mix by concatenation + linear fusion
        # (completion sketch for the truncated source)
        z = torch.cat([self.vis_proj(vis),
                       self.sem_proj(sem),
                       self.geo_proj(geo)], dim=-1)
        return F.relu(self.fuse(z))
```
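
A shape check for the encoder; the fusion-by-concatenation in the completed `forward` is an assumption standing in for the paper's fusion operator:

```python
enc = FusionEncoder(input_dim=64, latent_dim=64)
vis, sem, geo = torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 64)
print(enc(vis, sem, geo).shape)  # torch.Size([8, 64]) per trajectory step
```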

MoLENet-KAST

# MoLENet-KAST: Heritage-Inspired Deep Action Analytics

This repository implements the framework introduced in **“Heritage-Inspired Deep Action Analytics for Sports Movement Recognition and Behavioral Profiling”** by *Zonghao Wang*. It provides:

- **MoLENet (Motion-Oriented Latent Encoding Network)**
  Tokenized dual-branch encoders + temporal graph propagation + manifold-aligned regularization for structured motion representation.
- **KAST (Kinematic-Aware Strategy Transfer)**
  Domain adaptation method using behavior-centric alignment, manifold preservation, and kinematic distribution matching.

---

## Highlights

- **Tokenized Dual-Branch Encoding:** disentangles static vs. dynamic motion patterns into semantically coherent tokens.
- **Graph-Based Temporal Propagation:** captures inter-token spatiotemporal dependencies with an affinity-weighted GCN.
- **Manifold-Aligned Regularization:** enforces orthogonality, temporal smoothness, and expert-informed alignment.
- **KAST:** aligns behavioral manifolds across domains with contrastive, Laplacian, and temporal-statistical losses.
- **Superior Results:** achieves up to **89–90% accuracy** on the Athlete Movement Heritage and Sports Behavioral Profiling datasets, outperforming SOTA baselines (ResNet-50, SlowFast, ViViT, I3D, TimeSformer).
```python
# model.py
# MoLENet + KAST minimal PyTorch implementation
# Author: your-name
# License: MIT

import torch
import torch.nn as nn
import torch.nn.functional as F


# ----------------------------
# Tokenized Dual-Branch Encoder
# ----------------------------
class DualBranchEncoder(nn.Module):
    def __init__(self, input_dim, latent_dim):
        super().__init__()
        # static branch
        self.static = nn.Sequential(
            nn.Linear(input_dim, latent_dim),
            nn.ReLU(),
        )
        # dynamic branch (mirrors the static branch; captures motion change)
        self.dynamic = nn.Sequential(
            nn.Linear(input_dim, latent_dim),
            nn.ReLU(),
        )

    def forward(self, x):
        # Returns (static tokens, dynamic tokens) for downstream propagation
        return self.static(x), self.dynamic(x)
```
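
A quick usage sketch (the 34-dim input is an illustrative pose vector, e.g. 17 joints × 2 coordinates, not a dataset-specific choice):

```python
enc = DualBranchEncoder(input_dim=34, latent_dim=64)
x = torch.randn(8, 34)                  # batch of flattened pose frames
static_t, dynamic_t = enc(x)
print(static_t.shape, dynamic_t.shape)  # torch.Size([8, 64]) each
```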

PBEN-CAAS

# PBEN-CAAS for Cultural-Sports-Tourism Consumption Modeling

This repository implements a **deep feature mining framework** based on the paper **“Consumption Behavior Characterization in Cultural-Sports-Tourism Integration via Deep Feature Mining”** by *Runtao Zhang*. It introduces:

- **PBEN (Polycentric Behavioral Embedding Network):** Learns multi-source latent representations by fusing intrinsic preferences with contextual signals, using domain-specific branches and higher-order polycentric integration tensors.
- **CAAS (Contextual Adaptive Anchoring Strategy):** Recalibrates consumer intent dynamically under changing contextual conditions via contextual modulation kernels, memory-based anchoring, and time-conditioned intent blending.

---

## Highlights

- Multi-source latent embedding of consumer + offering features
- Three-branch **domain-specific fusion** (Culture, Sports, Tourism)
- **Trajectory-aware personalization** with recurrent updates
- Contextual anchoring via **CAAS** modules:
  - Contextual Modulation Kernel
  - Memory-Based Anchoring
  - Time-Conditioned Intent Blending
- State-of-the-art performance on four benchmark datasets (Cultural Tourism, Sports Events, Leisure Preferences, Tourism Spending)
```python
# model.py
# Minimal PBEN + CAAS implementation in PyTorch
# Author: your-name
# License: MIT

import torch
import torch.nn as nn
import torch.nn.functional as F


# ----------------------------
# PBEN: Polycentric Behavioral Embedding Network
# ----------------------------
class PBEN(nn.Module):
    def __init__(self, input_dim=64, latent_dim=32, hidden_dim=64):
        super().__init__()
        self.latent_dim = latent_dim
        # Linear projections
        self.user_proj = nn.Linear(input_dim, latent_dim)
        self.ctx_proj = nn.Linear(input_dim, latent_dim)
        # Domain-specific branches (Culture, Sports, Tourism); a completion
        # sketch of the truncated source, layer names are illustrative
        self.branches = nn.ModuleList(
            [nn.Linear(2 * latent_dim, hidden_dim) for _ in range(3)]
        )
        self.out = nn.Linear(3 * hidden_dim, latent_dim)

    def forward(self, user, ctx):
        # Fuse intrinsic preferences with contextual signals, then mix
        # the three domain branches into one behavioral embedding
        z = torch.cat([self.user_proj(user), self.ctx_proj(ctx)], dim=-1)
        h = torch.cat([F.relu(b(z)) for b in self.branches], dim=-1)
        return self.out(h)
```
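
A shape check; the two-input signature (`user`, `ctx`) is part of the completion sketch rather than the paper's interface:

```python
model = PBEN(input_dim=64, latent_dim=32, hidden_dim=64)
user = torch.randn(4, 64)      # intrinsic preference features
ctx = torch.randn(4, 64)       # contextual signals
print(model(user, ctx).shape)  # torch.Size([4, 32])
```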

BOPPPS-Intelligent-Analysis

# BOPPPS-Intelligent-Analysis

This repository provides a PyTorch-based reference implementation of the framework described in **“Modeling BOPPPS Classroom Interaction Patterns with Intelligent Analysis Tools”** by *Yumei Yang* and *Qijia Xuan*.

The framework integrates intelligent analysis into each **BOPPPS phase** (Bridge-in, Objectives, Pre-assessment, Participatory Learning, Post-assessment, and Summary), enabling:

- Real-time multimodal monitoring of classroom signals
- Adaptive feedback and personalized engagement modeling
- Phase-aware representation learning (via PRIT: Phase-aware Reflective Interaction Transformer)
- Engagement-driven policy optimization (via REDIP: Reflective Engagement-Driven Instructional Policy)

---

## Highlights

- **Phase-aware Transformer (PRIT):** Models temporal/semantic dynamics across BOPPPS phases using constrained self-attention.
- **Reflective Response Adapter (RRA):** Personalizes predictions using learner-specific engagement signals.
- **Inter-phase Contrastive Supervision:** Encourages semantic divergence between instructional phases.
- **REDIP Policy Layer:** Reinforcement learning strategy adjusting instructional time allocation, guided by engagement signals.
```python
# model.py
# Minimal PRIT + RRA + inter-phase contrastive loss + REDIP policy hooks
# Author: your-name
# License: MIT

import torch
import torch.nn as nn
import torch.nn.functional as F


# ----------------------------
# Phase-aware Reflective Interaction Transformer (PRIT)
# ----------------------------
class PhaseAwareAttention(nn.Module):
    def __init__(self, embed_dim, num_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, x, phase_mask=None):
        # phase_mask (bool, [T, T]): True blocks attention across phases
        out, _ = self.attn(x, x, x, attn_mask=phase_mask)
        return out
```
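
A smoke test for the attention block; an all-False boolean mask leaves attention unrestricted, while a real phase mask would block cross-phase token pairs:

```python
attn = PhaseAwareAttention(embed_dim=64, num_heads=4)
x = torch.randn(2, 10, 64)             # (batch, tokens, dim)
mask = torch.zeros(10, 10, dtype=torch.bool)
print(attn(x, phase_mask=mask).shape)  # torch.Size([2, 10, 64])
```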

GlypticNet-3CL

# GlypticNet-3CL

A minimal, practical PyTorch implementation inspired by **“Digital Sculpture Style Analysis through Image Recognition and Voxel-Based Techniques”**, featuring:

- **GlypticNet**: a mesh/graph encoder for style embeddings (multimodal cues → local geometry + graph features → global embedding)
- **3CL (Curated Contextual Contrastive Learning)**: metric-learning objectives combining triplet, contextual contrastive, and optional hard-negative terms

> Paper authors: **Jiaqing Lyu** and **Xue Bai**.

## Highlights

- Multimodal-style **feature encoding** (curvature-like cues as node features)
- **Graph propagation** with GCN-style layers and residual connections
- **Semantic attention** to aggregate part/patch features into a sculpture-level embedding
- **3CL** objectives: triplet loss + neighborhood-aware contrastive loss (+ hooks for hard negatives)
```python
# model.py
# Minimal GlypticNet + 3CL-style losses in PyTorch
# Author: your-name
# License: MIT

from typing import Optional, Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F


# ----------------------------
# Utility: simple GCN layer
# ----------------------------
class GCNLayer(nn.Module):
    """
    Basic GCN layer with A_hat = A + I and symmetric normalization.
    X_{l+1} = ReLU( D^{-1/2} A_hat D^{-1/2} X_l W )
    """
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, adj):
        # A_hat = A + I (clamped so existing self-loops are not double-counted)
        a_hat = (adj + torch.eye(adj.size(0), device=adj.device)).clamp(max=1.0)
        deg = a_hat.sum(dim=-1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        norm = d_inv_sqrt @ a_hat @ d_inv_sqrt  # D^{-1/2} A_hat D^{-1/2}
        return F.relu(self.lin(norm @ x))
```
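
A quick check on a random symmetric graph (illustrative only):

```python
layer = GCNLayer(in_dim=16, out_dim=32)
x = torch.randn(6, 16)             # 6 mesh patches
adj = (torch.rand(6, 6) > 0.5).float()
adj = ((adj + adj.T) > 0).float()  # symmetrize
print(layer(x, adj).shape)         # torch.Size([6, 32])
```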

OcuInflameFusion

# OcuInflameFusion: Cross-Modal Trajectory Modeling for Postoperative Inflammation

This repository provides an implementation of **OcuInflameFusion**, a multimodal deep learning framework for estimating and managing postoperative inflammation in refractive surgeries. It integrates **InflamNet**, a trajectory-based predictive model, with **ImmunoMod-Flow**, a personalized intervention controller.

> Reference: *"OcuInflameFusion: Cross-Modal Feature Fusion for Postoperative Inflammation Estimation in Refractive Surgeries"* by Yan et al. (2025).

---

## Features

- **Cross-modal fusion**: integrates OCT, slit-lamp photography, tomography, and clinical metadata.
- **InflamNet**: continuous-time latent trajectory model using neural ODEs (*Figure 1, page 6*).
- **Multimodal latent encoding** with modality-specific networks and cross-modal attention (*Figure 2, page 7*).
- **Causal and geometric regularization** for interpretable and clinically meaningful latent spaces (*page 8*).
- **ImmunoMod-Flow**: adaptive intervention strategy leveraging Hamilton–Jacobi–Bellman optimal control (*Figure 3, page 9*).
- **Risk-aware gating & personalization**: adjusts therapy intensity by patient profile (*pages 11–12*).

---

## Quick Start

### 1) Install dependencies

```bash
pip install torch torchvision torchaudio
```

```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

# -------------------------------
# Modality-Specific Encoders
# -------------------------------
class ModalityEncoder(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.ReLU(),
            nn.LayerNorm(out_dim)
        )
    def forward(self, x):
        return self.net(x)
```
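
A minimal fusion sketch with two encoders; the mean pooling here is a stand-in for the paper's cross-modal attention, and the dimensions are illustrative:

```python
oct_enc = ModalityEncoder(in_dim=128, out_dim=64)  # e.g. OCT features
meta_enc = ModalityEncoder(in_dim=16, out_dim=64)  # e.g. clinical metadata
z = torch.stack([oct_enc(torch.randn(4, 128)),
                 meta_enc(torch.randn(4, 16))], dim=1).mean(dim=1)
print(z.shape)  # torch.Size([4, 64]) fused latent
```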

StyleFusionNet-Styloformer

# StyleFusionNet-Styloformer: Symbolic Attention for Artistic Style Recognition

This repository provides an implementation of **StyleFusionNet**, a symbolic attention-based framework for the **visual recognition of artistic styles**. The system combines **Styloformer**, a transformer backbone enriched with symbolic motifs and spatial reasoning, with **StyloScope**, a domain-guided optimization strategy for art-historically consistent training.

> Reference: *"StyleFusionNet: Visual Feature-Driven Classification with Attention Refinement for Artistic Work Style Recognition"* by Du & Yao (2025).

---

## Features

- **Styloformer**: symbolic-aware transformer with motif vocabulary, positional priors, and a graph reasoning layer (*Figure 1, page 5*).
- **Symbolic attention**: aligns patch embeddings with a curated art-historical motif vocabulary (*Figure 2, page 6*).
- **StyloScope**: domain-guided optimization with curriculum scheduling, entropy-adjusted learning rate, and motif-consistency loss (*Figures 3–4, pages 7–8*).
- **Interpretability**: motif classification head for symbolic transparency.
- **Robustness**: state-of-the-art accuracy on 4 benchmark datasets (*Tables 1–2, pages 10–11*).

---

## Quick Start

### 1) Install dependencies

```bash
pip install torch torchvision torchaudio
```

```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

# ----------------------------
# Patch Embedding
# ----------------------------
class PatchEmbed(nn.Module):
    def __init__(self, in_ch=3, embed_dim=128, patch_size=16):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch_size, stride=patch_size)
    def forward(self, x):
        x = self.proj(x)  # (B,D,H/ps,W/ps)
        x = x.flatten(2).transpose(1, 2)  # (B, N, D) patch tokens
        return x
```
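
A shape check: with 224×224 inputs and 16×16 patches, the embedding yields 14×14 = 196 tokens:

```python
pe = PatchEmbed(in_ch=3, embed_dim=128, patch_size=16)
x = torch.randn(2, 3, 224, 224)
print(pe(x).shape)  # torch.Size([2, 196, 128])
```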

SpineNet++-AnatoRisk

# SpineNet++-AnatoRisk: Anatomy-Aware Lumbar MRI Classification and Risk Prediction

This repository provides an implementation of **SpineNet++**, a vertebra-aware deep residual network architecture, combined with **AnatoRisk**, a structured post-hoc risk inference module. Together, they enable **intelligent lumbar MRI classification and clinically interpretable risk prediction**.

> Reference: *"Deep Residual Network-Driven Classification and Risk Prediction of Lumbar MRI Images"* by Guo, Li, and Zhu (2025).

---

## Features

- **Dual-pathway encoding**: combines global spinal context and localized vertebra-specific embeddings.
- **Anatomical Graph Attention Module (AGAM)**: captures inter-vertebral relationships (see *Figure 1, page 6*).
- **Hierarchical fusion**: integrates multi-scale features with contrastive regularization.
- **AnatoRisk**: post-hoc module for calibrated and interpretable risk prediction (see *Figure 3, page 8*).
- **Counterfactual sensitivity analysis**: highlights vertebral contributions to overall risk.

---

## Quick Start

### 1) Install dependencies

```bash
pip install torch torchvision torchaudio
```

```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

# -------------------------------
# Basic Convolutional Encoder
# -------------------------------
class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, s=1, p=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, k, s, p),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True)
        )
    def forward(self, x):
        return self.net(x)
```
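
A quick check; with k=3, s=1, p=1 the block preserves spatial resolution:

```python
block = ConvBlock(in_ch=1, out_ch=32)  # single-channel MRI slice
x = torch.randn(2, 1, 224, 224)
print(block(x).shape)                  # torch.Size([2, 32, 224, 224])
```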

SpineFormer-NeuroDiscAlign

# SpineFormer-NeuroDiscAlign: Transformer-based Lumbar MRI Risk Prediction

This repository provides an implementation of **SpineFormer**, a domain-adapted transformer with axial-aware tokenization and disc-level positional embeddings, combined with **NeuroDiscAlign**, a training strategy for uncertainty calibration and risk-aligned contrastive learning. Together, they enable **intelligent classification and risk prediction of lumbar MRI images**.

> Reference: *"Intelligent Classification and Risk Prediction of Lumbar MRI Images Using Deep Residual Networks"* by Guo, Li, and Zhu (2025).

---

## Features

- **Axial-aware tokenization** and disc-level embeddings for anatomical consistency.
- **Transformer backbone (SpineFormer)** with modality-sensitive attention.
- **Uncertainty modeling** using Dirichlet priors.
- **Risk prediction** with saliency-weighted aggregation.
- **NeuroDiscAlign** for Bayesian calibration, risk-aligned contrastive learning, and multi-view consistency.

---

## Quick Start

### 1) Install dependencies

```bash
pip install torch torchvision torchaudio
```

### 2) Usage

```python
import torch
from model import SpineFormer

model = SpineFormer(
    in_channels=1,   # MRI modality channels
    embed_dim=128,
    depth=4,
    num_heads=8,
    num_classes=5,
)

x = torch.randn(2, 1, 224, 224)       # Example batch (B, C, H, W)
logits, risk = model(x)
print("Class logits:", logits.shape)  # (B, num_classes)
print("Risk score:", risk.shape)      # (B, 1)
```

### 3) Datasets

- Lumbar Spine MRI Image Collection
- Deep Learning Lumbar MRI Dataset
- Spinal Health MRI Risk Assessment Data

You need to preprocess MRI volumes into disc-level patches.

## Model Outputs

- `logits`: per-disc class probabilities
- `risk`: patient-level risk score (0–1)
- `uncertainty`: Dirichlet-based confidence measure

```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbed(nn.Module):
    """Flatten MRI patch into token embeddings"""
    def __init__(self, in_channels=1, embed_dim=128, patch_size=16):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, embed_dim, kernel_size=patch_size, stride=patch_size)
    def forward(self, x):
        x = self.proj(x)  # (B, embed_dim, H/ps, W/ps)
        x = x.flatten(2).transpose(1, 2)  # (B, N, embed_dim) token sequence
        return x
```
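
A tokenization smoke test for the fragment above (the full `SpineFormer` used in the Quick Start is not included in this excerpt):

```python
pe = PatchEmbed(in_channels=1, embed_dim=128, patch_size=16)
x = torch.randn(2, 1, 224, 224)  # (B, C, H, W) MRI slices
print(pe(x).shape)               # torch.Size([2, 196, 128])
```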