
# IntentGraphNet-CognAlign: Symbolic and Context-Aware Modeling for Employment Intention Prediction

> This repository provides a research-oriented scaffold based on the paper *“Deep Learning Models for Predicting Employment Intentions of Rural College Students”* (Xiangqian Liu, Lihao Shang, Shengjuan Liu, 2024). The framework integrates **IntentGraphNet**, a symbolic graph encoding model, and **CognAlign**, a context-aware alignment mechanism.

## Motivation

Predicting employment intentions of rural college students is critical for:

- Addressing **structural inequalities** in the labor market.
- Informing **policy and career guidance**.
- Enhancing **social mobility and equitable access** to opportunities.

Conventional models suffer from:

- Rule-based rigidity,
- Shallow statistical learning,
- Deep learning's interpretability and fairness challenges.

Our dual-component framework unifies symbolic reasoning and deep learning to provide **transparent, scalable, and generalizable predictions** across socio-economic contexts.

## Key Components

- **IntentGraphNet**:
  - Constructs personalized semantic graphs over symbolic nodes (e.g., family income, motivation, orientation).
  - Uses attention-based message passing and disentangled representation learning.
  - Produces transparent, interpretable embeddings of student attributes.
- **CognAlign**:
  - Projects symbolic embeddings into contextual manifolds.
  - Uses adversarial domain adaptation and entropy-aware prediction.
  - Ensures alignment across regions and fairness across demographic groups.
```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphEncoder(nn.Module):
    """Multimodal Encoder + Graph message passing (IntentGraphNet, Sec. 3.3)."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden_dim)
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.update = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x, adj):
        """
        x:   [N, in_dim] node features (symbolic student attributes)
        adj: [N, N] adjacency of the personalized semantic graph
        """
        h = F.relu(self.embed(x))                                    # [N, H]
        n = h.size(0)
        # attention-based message passing over the semantic graph
        hi = h.unsqueeze(1).expand(n, n, -1)
        hj = h.unsqueeze(0).expand(n, n, -1)
        scores = self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1)  # [N, N]
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.nan_to_num(torch.softmax(scores, dim=-1))      # guard isolated nodes
        msg = alpha @ h                                              # aggregate neighbor messages
        return F.relu(self.update(h + msg))
```
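
The scaffold above covers the IntentGraphNet encoder. The README also describes CognAlign in terms of adversarial domain adaptation and entropy-aware prediction; the sketch below illustrates one way such a head could sit on top of the `GraphEncoder` embeddings. It is a minimal sketch under assumed design choices: the names `CognAlignHead` and `GradReverse`, the gradient-reversal discriminator, and the entropy penalty are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Gradient reversal, as used in standard adversarial domain adaptation."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lamb * grad, None

class CognAlignHead(nn.Module):
    """Hypothetical alignment head: intent classifier + domain discriminator + entropy penalty."""
    def __init__(self, hidden_dim, num_intents, num_domains):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, num_intents)
        self.domain_disc = nn.Linear(hidden_dim, num_domains)

    def forward(self, z, lamb=1.0):
        logits = self.classifier(z)                                   # intent prediction
        domain_logits = self.domain_disc(GradReverse.apply(z, lamb))  # adversarial region alignment
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
        return logits, domain_logits, entropy
```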


# LinguaSphere: Speech-Interactive 3D Language Learning with Adaptive Feedback

> A research-grade skeleton implementation inspired by the paper *“Speech-Interactive 3D Game Architecture for Language Learning with Feedback Mechanisms”* (Meizi Zhang, Northeastern University). The code organizes core modules—intent classification, grammar-aware parsing, symbolic execution, semantic anchoring, and adaptive curriculum—in a clean, extensible Python package suitable for prototyping and integration with a 3D engine.

## Why this project?

Traditional CALL tools rely on static drills and pre-scripted flows. LinguaSphere treats **language as action** inside a simulated world: utterances map to in-game objects, agents, and tasks; feedback loops are **multimodal** (visual, auditory, textual) and **adaptive** to proficiency. (See the paper's Section 3.3 “LinguaSphere” and Figure 1 for the modular architecture.)

## Key Features (scaffold)

- **Hybrid Communication Layer**: intent classification → grammar-aware parsing → symbolic execution.
- **Semantic Anchoring**: phrases are grounded to objects/actions/goals in the world.
- **Adaptive Curriculum Control**: tracks mastery over constructs and schedules tasks to target weaknesses.
- **Feedback Loop**: generates context-sensitive corrective tuples `(highlight, corrected_expression, explanation)`.

> This repo provides a runnable *skeleton* with clean interfaces and mock logic so you can plug in ASR/NLP, or connect to Unity/Unreal.
```python
# model.py
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Dict, List, Tuple, Optional, Any
import math
import random

# -----------------------------
# Data structures
# -----------------------------

@dataclass
class WorldObject:
    obj_id: str
    cls: str
    affordances: List[str]
    linguistic_tags: List[str]
    semantic_anchors: List[str]
    state: Dict[str, Any] = field(default_factory=dict)

@dataclass
class GameTask:
    """A world task the learner completes through language (illustrative continuation of the scaffold)."""
    task_id: str
    instruction: str
    target_objects: List[str] = field(default_factory=list)
    required_constructs: List[str] = field(default_factory=list)  # grammar constructs exercised
    completed: bool = False
```
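
The Feedback Loop feature above returns corrective tuples `(highlight, corrected_expression, explanation)`. In the spirit of the repo's mock logic, here is a tiny illustrative sketch of that interface; the function name `generate_feedback` and the word-level comparison rule are assumptions, not the paper's mechanism.

```python
from typing import Tuple

def generate_feedback(utterance: str, expected: str) -> Tuple[str, str, str]:
    """Return a (highlight, corrected_expression, explanation) tuple for a learner utterance."""
    said, target = utterance.split(), expected.split()
    for s, t in zip(said, target):
        if s != t:  # mock rule: flag the first mismatching word
            return s, expected, f"Expected '{t}' instead of '{s}' in this context."
    if len(said) != len(target):
        return "", expected, "The utterance is missing or adding words."
    return "", expected, "Utterance matches the expected expression."

print(generate_feedback("open door red", "open the red door"))
# ('door', 'open the red door', "Expected 'the' instead of 'door' in this context.")
```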


# AeroPerceptNet + GeoNarrative Alignment

This repository provides a minimal PyTorch implementation of the framework introduced in **“Multi-Source Fusion Architecture for Intelligent Evaluation of Low-Altitude Tourism Experience”** by *Lina Fu, Yanlong Fu, Hua Su, and Yan Wang*.

---

## Highlights

- **AeroPerceptNet:** hybrid neural-symbolic model integrating:
  - **Multimodal Fusion Encoding**: fuses panoramic visual features, geospatial semantics, and geometric flight trajectories.
  - **Graphical Temporal Aggregation**: BiGRU-based sequence modeling with attention pooling for coherent route-level representation.
  - **Interpretable Local Scoring**: localized perceptual scores mapped to trajectory coordinates.
- **GeoNarrative Alignment (GNA):**
  - Captures **semantic cohesion** across route segments.
  - Detects **thematic discontinuities** and clusters.
  - Measures **symbol–perception synchrony** (aligns salience with symbolic meaning).
- Outperforms SOTA baselines (OC-SVM, Isolation Forest, DAGMM, DeepSVDD, TranAD) by **3–5%** in Accuracy/F1 on four benchmark datasets.
```python
# model.py
# AeroPerceptNet + GeoNarrative Alignment (simplified PyTorch version)
# Author: your-name
# License: MIT

import torch
import torch.nn as nn
import torch.nn.functional as F


# ----------------------------
# Multimodal Fusion Encoding
# ----------------------------
class FusionEncoder(nn.Module):
    def __init__(self, input_dim=64, latent_dim=64):
        super().__init__()
        self.vis_proj = nn.Linear(input_dim, latent_dim)
        self.sem_proj = nn.Linear(input_dim, latent_dim)    # geospatial semantics
        self.traj_proj = nn.Linear(input_dim, latent_dim)   # flight trajectory geometry
        self.fuse = nn.Linear(3 * latent_dim, latent_dim)

    def forward(self, vis, sem, traj):
        # vis / sem / traj: [B, T, input_dim] per-segment features along a route
        z = torch.cat([self.vis_proj(vis), self.sem_proj(sem), self.traj_proj(traj)], dim=-1)
        return F.relu(self.fuse(z))                          # [B, T, latent_dim]
```
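
The Highlights describe Graphical Temporal Aggregation as BiGRU-based sequence modeling with attention pooling over route segments. Below is a minimal sketch of that stage, assuming it consumes the fused per-segment features produced by `FusionEncoder`; the class name `TemporalAggregator` and the layer sizes are placeholders.

```python
import torch
import torch.nn as nn

class TemporalAggregator(nn.Module):
    """Illustrative BiGRU + attention pooling over route segments (not the authors' exact layer)."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.bigru = nn.GRU(latent_dim, latent_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * latent_dim, 1)

    def forward(self, z):                           # z: [B, T, latent_dim] fused segment features
        h, _ = self.bigru(z)                        # [B, T, 2*latent_dim]
        w = torch.softmax(self.attn(h), dim=1)      # attention weights over segments
        return (w * h).sum(dim=1)                   # [B, 2*latent_dim] route-level representation

print(TemporalAggregator()(torch.randn(2, 10, 64)).shape)   # torch.Size([2, 128])
```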


# MoLENet-KAST: Heritage-Inspired Deep Action Analytics

This repository implements the framework introduced in **“Heritage-Inspired Deep Action Analytics for Sports Movement Recognition and Behavioral Profiling”** by *Zonghao Wang*. It provides:

- **MoLENet (Motion-Oriented Latent Encoding Network)**
  Tokenized dual-branch encoders + temporal graph propagation + manifold-aligned regularization for structured motion representation.
- **KAST (Kinematic-Aware Strategy Transfer)**
  Domain adaptation method using behavior-centric alignment, manifold preservation, and kinematic distribution matching.

---

## Highlights

- **Tokenized Dual-Branch Encoding:** disentangles static vs. dynamic motion patterns into semantically coherent tokens.
- **Graph-Based Temporal Propagation:** captures inter-token spatiotemporal dependencies with affinity-weighted GCN.
- **Manifold-Aligned Regularization:** enforces orthogonality, temporal smoothness, and expert-informed alignment.
- **KAST:** aligns behavioral manifolds across domains with contrastive, Laplacian, and temporal-statistical losses.
- **Superior Results:** achieves up to **89–90% accuracy** on Athlete Movement Heritage and Sports Behavioral Profiling datasets, outperforming SOTA (ResNet-50, SlowFast, ViViT, I3D, TimeSformer).
```python
# model.py
# MoLENet + KAST minimal PyTorch implementation
# Author: your-name
# License: MIT

import torch
import torch.nn as nn
import torch.nn.functional as F


# ----------------------------
# Tokenized Dual-Branch Encoder
# ----------------------------
class DualBranchEncoder(nn.Module):
    def __init__(self, input_dim, latent_dim):
        super().__init__()
        # static branch
        self.static = nn.Sequential(
            nn.Linear(input_dim, latent_dim),
            nn.ReLU(),
        )
        # dynamic branch: frame-to-frame motion differences
        self.dynamic = nn.Sequential(
            nn.Linear(input_dim, latent_dim),
            nn.ReLU(),
        )

    def forward(self, x):
        # x: [B, T, input_dim] per-frame motion features
        static_tok = self.static(x)
        motion = torch.diff(x, dim=1, prepend=x[:, :1])   # temporal differences (zero at t=0)
        dynamic_tok = self.dynamic(motion)
        return static_tok, dynamic_tok
```
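
The Highlights also mention graph-based temporal propagation with an affinity-weighted GCN over motion tokens. The sketch below is one plausible reading of that step: a cosine-similarity affinity matrix with a residual update. The name `AffinityGCN` and these design choices are assumptions, not the paper's layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffinityGCN(nn.Module):
    """Illustrative affinity-weighted graph propagation over motion tokens (assumed design)."""
    def __init__(self, latent_dim):
        super().__init__()
        self.proj = nn.Linear(latent_dim, latent_dim)

    def forward(self, tokens):                                      # tokens: [B, T, D]
        z = F.normalize(tokens, dim=-1)
        affinity = torch.softmax(z @ z.transpose(1, 2), dim=-1)     # [B, T, T] token affinities
        return F.relu(self.proj(affinity @ tokens)) + tokens        # residual update

print(AffinityGCN(32)(torch.randn(2, 16, 32)).shape)                # torch.Size([2, 16, 32])
```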


# PBEN-CAAS for Cultural-Sports-Tourism Consumption Modeling

This repository implements a **deep feature mining framework** based on the paper **“Consumption Behavior Characterization in Cultural-Sports-Tourism Integration via Deep Feature Mining”** by *Runtao Zhang*. It introduces:

- **PBEN (Polycentric Behavioral Embedding Network):** Learns multi-source latent representations by fusing intrinsic preferences with contextual signals, using domain-specific branches and higher-order polycentric integration tensors.
- **CAAS (Contextual Adaptive Anchoring Strategy):** Recalibrates consumer intent dynamically under changing contextual conditions via contextual modulation kernels, memory-based anchoring, and time-conditioned intent blending.

---

## Highlights

- Multi-source latent embedding of consumer + offering features
- Three-branch **domain-specific fusion** (Culture, Sports, Tourism)
- **Trajectory-aware personalization** with recurrent updates
- Contextual anchoring via **CAAS** modules:
  - Contextual Modulation Kernel
  - Memory-Based Anchoring
  - Time-Conditioned Intent Blending
- State-of-the-art performance on four benchmark datasets (Cultural Tourism, Sports Events, Leisure Preferences, Tourism Spending)
```python
# model.py
# Minimal PBEN + CAAS implementation in PyTorch
# Author: your-name
# License: MIT

import torch
import torch.nn as nn
import torch.nn.functional as F


# ----------------------------
# PBEN: Polycentric Behavioral Embedding Network
# ----------------------------
class PBEN(nn.Module):
    def __init__(self, input_dim=64, latent_dim=32, hidden_dim=64):
        super().__init__()
        self.latent_dim = latent_dim
        # Linear projections
        self.user_proj = nn.Linear(input_dim, latent_dim)   # intrinsic consumer preferences
        self.item_proj = nn.Linear(input_dim, latent_dim)   # offering / contextual signals
        # domain-specific branches: Culture, Sports, Tourism
        # (higher-order polycentric integration is simplified here to a mean over branches)
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(2 * latent_dim, hidden_dim), nn.ReLU(),
                          nn.Linear(hidden_dim, latent_dim))
            for _ in range(3)
        ])

    def forward(self, consumer, offering):
        u, v = self.user_proj(consumer), self.item_proj(offering)
        pair = torch.cat([u, v], dim=-1)                                     # [B, 2*latent_dim]
        branch_outs = torch.stack([b(pair) for b in self.branches], dim=1)   # [B, 3, latent_dim]
        return branch_outs.mean(dim=1)                                       # fused polycentric embedding
```
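
CAAS is described above as recalibrating intent through contextual modulation kernels, memory-based anchoring, and time-conditioned intent blending. The sketch below illustrates only the last of these, assuming a simple learned gate over a normalized time signal; `TimeConditionedBlending` and the gating form are placeholders rather than the paper's equations.

```python
import torch
import torch.nn as nn

class TimeConditionedBlending(nn.Module):
    """Illustrative time-conditioned intent blending (names and gating form are assumptions)."""
    def __init__(self, latent_dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(latent_dim + 1, latent_dim), nn.Sigmoid())

    def forward(self, long_term_intent, contextual_intent, t):
        # t: [B, 1] normalized time signal; the gate decides how much context overrides habit
        g = self.gate(torch.cat([contextual_intent, t], dim=-1))
        return g * contextual_intent + (1 - g) * long_term_intent

blend = TimeConditionedBlending(latent_dim=32)
out = blend(torch.randn(4, 32), torch.randn(4, 32), torch.rand(4, 1))
print(out.shape)   # torch.Size([4, 32])
```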


# BOPPPS-Intelligent-Analysis

This repository provides a PyTorch-based reference implementation of the framework described in **“Modeling BOPPPS Classroom Interaction Patterns with Intelligent Analysis Tools”** by *Yumei Yang* and *Qijia Xuan*.

The framework integrates intelligent analysis into each **BOPPPS phase** (Bridge-in, Objectives, Pre-assessment, Participatory Learning, Post-assessment, and Summary), enabling:

- Real-time multimodal monitoring of classroom signals
- Adaptive feedback and personalized engagement modeling
- Phase-aware representation learning (via PRIT: Phase-aware Reflective Interaction Transformer)
- Engagement-driven policy optimization (via REDIP: Reflective Engagement-Driven Instructional Policy)

---

## Highlights

- **Phase-aware Transformer (PRIT):** Models temporal/semantic dynamics across BOPPPS phases using constrained self-attention.
- **Reflective Response Adapter (RRA):** Personalizes predictions using learner-specific engagement signals.
- **Inter-phase Contrastive Supervision:** Encourages semantic divergence between instructional phases.
- **REDIP Policy Layer:** Reinforcement learning strategy adjusting instructional time allocation, guided by engagement signals.
```python
# model.py
# Minimal PRIT + RRA + inter-phase contrastive loss + REDIP policy hooks
# Author: your-name
# License: MIT

import torch
import torch.nn as nn
import torch.nn.functional as F


# ----------------------------
# Phase-aware Reflective Interaction Transformer (PRIT)
# ----------------------------
class PhaseAwareAttention(nn.Module):
    def __init__(self, embed_dim, num_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, x, phase_ids):
        """
        x:         [B, T, embed_dim] interaction embeddings
        phase_ids: [B, T] integer BOPPPS phase label per time step
        """
        # Constrained self-attention: a step may only attend to steps in the same phase.
        same_phase = phase_ids.unsqueeze(2) == phase_ids.unsqueeze(1)       # [B, T, T]
        block_mask = ~same_phase                                            # True = disallowed
        block_mask = block_mask.repeat_interleave(self.attn.num_heads, dim=0)
        out, _ = self.attn(x, x, x, attn_mask=block_mask)
        return out
```
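
A quick smoke test of `PhaseAwareAttention` as completed above, assuming integer BOPPPS phase labels per time step (the shapes and phase values are arbitrary):

```python
import torch
from model import PhaseAwareAttention   # the module sketched above (assumed saved as model.py)

x = torch.randn(2, 6, 32)                          # (B, T, embed_dim) interaction features
phase_ids = torch.tensor([[0, 0, 1, 1, 2, 2],
                          [0, 1, 1, 2, 2, 2]])     # BOPPPS phase id per step
attn = PhaseAwareAttention(embed_dim=32, num_heads=4)
print(attn(x, phase_ids).shape)                    # torch.Size([2, 6, 32])
```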


# GlypticNet-3CL

A minimal, practical PyTorch implementation inspired by **“Digital Sculpture Style Analysis through Image Recognition and Voxel-Based Techniques”**, featuring:

- **GlypticNet**: a mesh/graph encoder for style embeddings (multimodal cues → local geometry + graph features → global embedding)
- **3CL (Curated Contextual Contrastive Learning)**: metric-learning objectives combining triplet, contextual contrastive, and optional hard-negatives

> Paper authors: **Jiaqing Lyu** and **Xue Bai**.

## Highlights

- Multimodal-style **feature encoding** (curvature-like cues as node features)
- **Graph propagation** with GCN-style layers and residual connections
- **Semantic attention** to aggregate part/patch features into a sculpture-level embedding
- **3CL** objectives: triplet loss + neighborhood-aware contrastive loss (+ hooks for hard negatives)
```python
# model.py
# Minimal GlypticNet + 3CL-style losses in PyTorch
# Author: your-name
# License: MIT

from typing import Optional, Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F


# ----------------------------
# Utility: simple GCN layer
# ----------------------------
class GCNLayer(nn.Module):
    """
    Basic GCN layer with A_hat = A + I and symmetric normalization.
    X_{l+1} = ReLU( D^{-1/2} A_hat D^{-1/2} X_l W )
    """
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: [N, in_dim] node features, adj: [N, N] adjacency (self-loops added here)
        a_hat = adj + torch.eye(adj.size(-1), device=adj.device)
        d_inv_sqrt = a_hat.sum(-1).clamp_min(1e-8).pow(-0.5)
        norm = d_inv_sqrt.unsqueeze(-1) * a_hat * d_inv_sqrt.unsqueeze(-2)
        return F.relu(self.lin(norm @ x))
```
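
The 3CL objectives listed above combine a triplet term with a neighborhood-aware contrastive term. The helpers below sketch both losses, assuming L2-normalized sculpture embeddings and style labels as the notion of context; the exact weighting and the hard-negative mining hooks are left out.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet margin loss on L2-normalized embeddings."""
    a, p, n = (F.normalize(t, dim=-1) for t in (anchor, positive, negative))
    return F.relu((a - p).pow(2).sum(-1) - (a - n).pow(2).sum(-1) + margin).mean()

def contextual_contrastive_loss(emb, labels, temperature=0.1):
    """Supervised contrastive-style loss: sculptures sharing a style label act as positives."""
    z = F.normalize(emb, dim=-1)
    sim = z @ z.t() / temperature
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))              # exclude self-similarity
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    per_sample = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp_min(1)
    return -per_sample.mean()
```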


# OcuInflameFusion: Cross-Modal Trajectory Modeling for Postoperative Inflammation

This repository provides an implementation of **OcuInflameFusion**, a multimodal deep learning framework for estimating and managing postoperative inflammation in refractive surgeries. It integrates **InflamNet**, a trajectory-based predictive model, with **ImmunoMod-Flow**, a personalized intervention controller.

> Reference: *"OcuInflameFusion: Cross-Modal Feature Fusion for Postoperative Inflammation Estimation in Refractive Surgeries"* by Yan et al. (2025).

---

## Features

- **Cross-modal fusion**: integrates OCT, slit-lamp photography, tomography, and clinical metadata.
- **InflamNet**: continuous-time latent trajectory model using neural ODEs (*Figure 1, page 6*).
- **Multimodal latent encoding** with modality-specific networks and cross-modal attention (*Figure 2, page 7*).
- **Causal and geometric regularization** for interpretable and clinically meaningful latent spaces (*page 8*).
- **ImmunoMod-Flow**: adaptive intervention strategy leveraging Hamilton–Jacobi–Bellman optimal control (*Figure 3, page 9*).
- **Risk-aware gating & personalization**: adjusts therapy intensity by patient profile (*page 11–12*).

---

## Quick Start

### 1) Install dependencies

```bash
pip install torch torchvision torchaudio
```
```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

# -------------------------------
# Modality-Specific Encoders
# -------------------------------
class ModalityEncoder(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.ReLU(),
            nn.LayerNorm(out_dim)
        )
    def forward(self, x):
        return self.net(x)
```
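
The Features list pairs modality-specific networks with cross-modal attention. A minimal sketch of that fusion step is shown below, reusing `ModalityEncoder` from above; the dimensions, token counts, and the choice of clinical metadata as the query are illustrative assumptions.

```python
import torch
import torch.nn as nn
from model import ModalityEncoder   # encoder defined above (assumed importable from model.py)

class CrossModalAttentionFusion(nn.Module):
    """Illustrative cross-modal attention: one modality queries tokens from another."""
    def __init__(self, dim=64, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, query_tokens, context_tokens):
        fused, _ = self.attn(query_tokens, context_tokens, context_tokens)
        return fused

oct_enc, meta_enc = ModalityEncoder(128, 64), ModalityEncoder(16, 64)
oct_tokens = oct_enc(torch.randn(2, 10, 128))    # e.g. 10 OCT-derived feature vectors per patient
meta_tokens = meta_enc(torch.randn(2, 1, 16))    # clinical metadata as a single token
print(CrossModalAttentionFusion()(meta_tokens, oct_tokens).shape)   # torch.Size([2, 1, 64])
```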


# StyleFusionNet-Styloformer: Symbolic Attention for Artistic Style Recognition

This repository provides an implementation of **StyleFusionNet**, a symbolic attention-based framework for the **visual recognition of artistic styles**. The system combines **Styloformer**, a transformer backbone enriched with symbolic motifs and spatial reasoning, with **StyloScope**, a domain-guided optimization strategy for art-historically consistent training.

> Reference: *"StyleFusionNet: Visual Feature-Driven Classification with Attention Refinement for Artistic Work Style Recognition"* by Du & Yao (2025).

---

## Features

- **Styloformer**: symbolic-aware transformer with motif vocabulary, positional priors, and graph reasoning layer (*Figure 1, page 5*).
- **Symbolic attention**: aligns patch embeddings with curated art-historical motif vocabulary (*Figure 2, page 6*).
- **StyloScope**: domain-guided optimization with curriculum scheduling, entropy-adjusted learning rate, and motif-consistency loss (*Figures 3–4, pages 7–8*).
- **Interpretability**: motif classification head for symbolic transparency.
- **Robustness**: state-of-the-art accuracy on 4 benchmark datasets (e.g., *Tables 1–2, pages 10–11*).

---

## Quick Start

### 1) Install dependencies

```bash
pip install torch torchvision torchaudio
```
```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

# ----------------------------
# Patch Embedding
# ----------------------------
class PatchEmbed(nn.Module):
    def __init__(self, in_ch=3, embed_dim=128, patch_size=16):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch_size, stride=patch_size)
    def forward(self, x):
        x = self.proj(x)  # (B,D,H/ps,W/ps)
        x = x.flatten(2).transpose(1, 2)  # (B, N_patches, embed_dim)
        return x
```
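
Symbolic attention is described above as aligning patch embeddings with a curated motif vocabulary. The sketch below captures that idea with a learned motif matrix and a softmax alignment map; `SymbolicMotifAttention`, the residual refinement, and the returned attention map are assumptions rather than the paper's exact layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SymbolicMotifAttention(nn.Module):
    """Illustrative symbolic attention: patch tokens attend to a learned motif vocabulary."""
    def __init__(self, embed_dim=128, num_motifs=32):
        super().__init__()
        self.motifs = nn.Parameter(torch.randn(num_motifs, embed_dim))

    def forward(self, patch_tokens):                       # (B, N, D)
        scores = patch_tokens @ self.motifs.t()            # similarity to each motif
        motif_attn = F.softmax(scores, dim=-1)             # (B, N, num_motifs)
        motif_context = motif_attn @ self.motifs           # motif-aligned representation
        return patch_tokens + motif_context, motif_attn    # residual refinement + interpretable map

tokens = torch.randn(2, 196, 128)
refined, attn_map = SymbolicMotifAttention()(tokens)
print(refined.shape, attn_map.shape)                       # torch.Size([2, 196, 128]) torch.Size([2, 196, 32])
```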


# SpineNet++-AnatoRisk: Anatomy-Aware Lumbar MRI Classification and Risk Prediction

This repository provides an implementation of **SpineNet++**, a vertebra-aware deep residual network architecture, combined with **AnatoRisk**, a structured post-hoc risk inference module. Together, they enable **intelligent lumbar MRI classification and clinically interpretable risk prediction**.

> Reference: *"Deep Residual Network-Driven Classification and Risk Prediction of Lumbar MRI Images"* by Guo, Li, and Zhu (2025).

---

## Features

- **Dual-pathway encoding**: combines global spinal context and localized vertebra-specific embeddings.
- **Anatomical Graph Attention Module (AGAM)**: captures inter-vertebral relationships (see *Figure 1, page 6*).
- **Hierarchical fusion**: integrates multi-scale features with contrastive regularization.
- **AnatoRisk**: post-hoc module for calibrated and interpretable risk prediction (see *Figure 3, page 8*).
- **Counterfactual sensitivity analysis**: highlights vertebral contributions to overall risk.

---

## Quick Start

### 1) Install dependencies

```bash
pip install torch torchvision torchaudio
```
```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

# -------------------------------
# Basic Convolutional Encoder
# -------------------------------
class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, s=1, p=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, k, s, p),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True)
        )
    def forward(self, x):
        return self.net(x)
```
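
SpineNet++ is described above as a deep residual architecture. The block below shows, under obvious assumptions, how the `ConvBlock` scaffold could be stacked into a residual unit; `ResidualBlock` is an illustrative name, not the paper's module.

```python
import torch
import torch.nn as nn
from model import ConvBlock   # the block completed above (assumed saved as model.py)

class ResidualBlock(nn.Module):
    """Illustrative residual unit built from two ConvBlocks with an identity shortcut."""
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(ConvBlock(ch, ch), ConvBlock(ch, ch))

    def forward(self, x):
        return x + self.block(x)

print(ResidualBlock(16)(torch.randn(1, 16, 64, 64)).shape)   # torch.Size([1, 16, 64, 64])
```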


# SpineFormer-NeuroDiscAlign: Transformer-based Lumbar MRI Risk Prediction

This repository provides an implementation of **SpineFormer**, a domain-adapted transformer with axial-aware tokenization and disc-level positional embeddings, combined with **NeuroDiscAlign**, a training strategy for uncertainty calibration and risk-aligned contrastive learning. Together, they enable **intelligent classification and risk prediction of lumbar MRI images**.

> Reference: *"Intelligent Classification and Risk Prediction of Lumbar MRI Images Using Deep Residual Networks"* by Guo, Li, and Zhu (2025).

---

## Features

- **Axial-aware tokenization** and disc-level embeddings for anatomical consistency.
- **Transformer backbone (SpineFormer)** with modality-sensitive attention.
- **Uncertainty modeling** using Dirichlet priors.
- **Risk prediction** with saliency-weighted aggregation.
- **NeuroDiscAlign** for Bayesian calibration, risk-aligned contrastive learning, and multi-view consistency.

---

## Quick Start

### 1) Install dependencies

```bash
pip install torch torchvision torchaudio
```

### 2) Usage

```python
import torch
from model import SpineFormer

model = SpineFormer(
    in_channels=1,    # MRI modality channels
    embed_dim=128,
    depth=4,
    num_heads=8,
    num_classes=5
)

x = torch.randn(2, 1, 224, 224)         # Example batch (B, C, H, W)
logits, risk = model(x)
print("Class logits:", logits.shape)    # (B, num_classes)
print("Risk score:", risk.shape)        # (B, 1)
```

### 3) Datasets

- Lumbar Spine MRI Image Collection
- Deep Learning Lumbar MRI Dataset
- Spinal Health MRI Risk Assessment Data

You need to preprocess MRI volumes into disc-level patches.

## Model Outputs

- `logits`: per-disc class probabilities
- `risk`: patient-level risk score (0–1)
- `uncertainty`: Dirichlet-based confidence measure
```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbed(nn.Module):
    """Flatten MRI patch into token embeddings"""
    def __init__(self, in_channels=1, embed_dim=128, patch_size=16):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, embed_dim, kernel_size=patch_size, stride=patch_size)
    def forward(self, x):
        x = self.proj(x)  # (B, embed_dim, H/ps, W/ps)
        x = x.flatten(2).transpose(1, 2)  # (B, N_patches, embed_dim)
        return x
```
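
The model outputs include a Dirichlet-based confidence measure. The helper below sketches the common evidential parameterization (softplus evidence, uncertainty `K / S`); it is a generic illustration, and the paper's exact uncertainty head may differ.

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(logits):
    """Map class logits to Dirichlet-based probabilities and an uncertainty score."""
    evidence = F.softplus(logits)              # non-negative evidence per class
    alpha = evidence + 1.0                     # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)
    probs = alpha / strength                   # expected class probabilities
    uncertainty = logits.size(-1) / strength   # K / S, in (0, 1]
    return probs, uncertainty

probs, u = dirichlet_uncertainty(torch.randn(2, 5))
print(probs.shape, u.shape)                    # torch.Size([2, 5]) torch.Size([2, 1])
```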


# RecurAlignNet-TGEP: Multimodal Functional Recovery Prediction

RecurAlignNet-TGEP is a PyTorch implementation of a recurrent, intervention-aware multimodal model (**RecurAlignNet**) with a knowledge-infused therapeutic graph embedding (**T-GEP**) for forecasting functional recovery in low back pain (LBP) patients.

> Key ideas come from the paper *"Prediction of Functional Recovery in Low Back Pain Patients Using Multimodal Data Fusion."*
> Highlights and diagrams: see the architecture overview (Figures 1–2) and graph-conditioned embedding (Figures 3–4).

## Features

- **Multimodal encoder** for clinical/biomechanical/sensor inputs with learnable projection.
- **Interventional alignment**: domain-specific encoders + soft attention weights over therapeutic components.
- **Intervention-aware GRU**: control gates modulated by aligned interventions and time encoding.
- **Trajectory pooling**: attention over time to obtain a global recovery signature.
- **T-GEP**: light-weight therapeutic concept graph embedding with single-step propagation and intent decoding.
- **Counterfactual hooks**: run the model with alternative intervention sequences to simulate "what-if" outcomes.

## Quick Start

### 1) Setup

```bash
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121   # choose your CUDA/CPU build
```
```python
# model.py
# RecurAlignNet + T-GEP (reference implementation)
# Author: Legend Co., Ltd. (implementation based on the uploaded paper)
# License: MIT (adjust as needed)

from typing import Dict, List, Optional, Tuple
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


# ----------------------------
# Utilities
# ----------------------------

class TimeEncoding(nn.Module):
    """Sinusoidal or learned time encoding; here we use a learned embedding over visit index."""
    def __init__(self, max_steps: int, dim: int):
        super().__init__()
        self.embed = nn.Embedding(max_steps, dim)

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: [B, T] integer visit indices -> [B, T, dim]
        return self.embed(t)
```

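
The Features list above describes an intervention-aware GRU whose control gates are modulated by aligned interventions. A minimal single-cell sketch of that mechanism follows; `InterventionAwareGRUCell` and the sigmoid gating form are assumptions, not the repository's actual recurrent block.

```python
import torch
import torch.nn as nn

class InterventionAwareGRUCell(nn.Module):
    """Illustrative GRU cell whose update is gated by an aligned intervention vector."""
    def __init__(self, input_dim, hidden_dim, interv_dim):
        super().__init__()
        self.cell = nn.GRUCell(input_dim, hidden_dim)
        self.interv_gate = nn.Sequential(nn.Linear(interv_dim, hidden_dim), nn.Sigmoid())

    def forward(self, x_t, h_prev, interv_t):
        h_new = self.cell(x_t, h_prev)
        g = self.interv_gate(interv_t)          # how strongly the intervention reshapes the state
        return g * h_new + (1 - g) * h_prev

cell = InterventionAwareGRUCell(input_dim=16, hidden_dim=32, interv_dim=8)
h = cell(torch.randn(4, 16), torch.zeros(4, 32), torch.randn(4, 8))
print(h.shape)   # torch.Size([4, 32])
```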

# SRE-CAPS: Structured Recovery Encoder with Curriculum-Aligned Progression Strategy

This repository implements the framework proposed in **"Multimodal Data Fusion for Predicting Functional Recovery in Patients with Low Back Pain"** by Shaojuan Tian, Xiaoxia Fang, Zhendan Xu, and Rui Shi.

## Overview

Functional recovery prediction for **low back pain (LBP) patients** is challenging due to heterogeneous data, variability in recovery patterns, and lack of interpretability in existing models. **SRE-CAPS** introduces a two-part framework:

- **Structured Recovery Encoder (SRE):** Encodes multimodal biomechanical and clinical data into a continuous latent manifold using BiGRUs and neural ODEs, ensuring temporal smoothness and clinical interpretability.
- **Curriculum-Aligned Progression Strategy (CAPS):** A training strategy that enforces stage-aware recovery constraints, monotonic progression, and clinically grounded forecasting.

### Key Features

- **Multimodal Temporal Encoding** with BiGRUs
- **Neural ODE Latent Dynamics** for recovery trajectory modeling
- **Clinical Decoder** mapping latent states to interpretable outcomes
- **Stage-aware Constraints** with intra/inter-stage regularization
- **Forecast-aware Penalization** for clinically plausible predictions
- **Improved interpretability and patient-specific adaptation**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchdiffeq import odeint


class TemporalEncoder(nn.Module):
    """BiGRU-based encoder for multimodal temporal data."""
    def __init__(self, input_dim, hidden_dim, latent_dim):
        super().__init__()
        self.bigru = nn.GRU(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, latent_dim)

    def forward(self, x):
        # x: [B, T, input_dim] multimodal biomechanical/clinical sequence
        h, _ = self.bigru(x)      # [B, T, 2*hidden_dim]
        return self.fc(h)         # [B, T, latent_dim] latent recovery states
```
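
SRE couples the BiGRU encoder with neural ODE latent dynamics, and the block above already imports `odeint` from `torchdiffeq`. The sketch below shows a generic latent ODE rollout under that assumption; the dynamics network, latent size, and time grid are placeholders.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint

class LatentODEFunc(nn.Module):
    """Illustrative ODE dynamics for the latent recovery trajectory (form is an assumption)."""
    def __init__(self, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.Tanh(),
                                 nn.Linear(latent_dim, latent_dim))

    def forward(self, t, z):
        return self.net(z)

z0 = torch.randn(4, 16)                   # initial latent state (e.g. from TemporalEncoder)
t = torch.linspace(0.0, 1.0, steps=8)     # normalized follow-up times
traj = odeint(LatentODEFunc(16), z0, t)   # [8, 4, 16] latent recovery trajectory
print(traj.shape)
```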


# LearnFusionNet: Multimodal Transformer for Student Learning Behavior Profiling

This repository implements the multimodal transformer-based framework proposed in **"LearnFusionNet: Multimodal Transformer for Student Learning Behavior Profiling and Smart Teaching Management"** by Lefei Xu, Zhi Fang, and Jingjing Lan.

## Overview

Traditional methods of student behavior profiling rely on unimodal data (e.g., logs, video), which often fail to capture the multifaceted dynamics of learning engagement. **LearnFusionNet** introduces a multimodal transformer framework that integrates:

- **Textual interactions**
- **Visual cues**
- **Behavioral logs**

to generate comprehensive student profiles and support smart teaching management.

### Key Features

- **Latent Multi-Agent Encoding (LME):** Compresses heterogeneous student/teacher states into a shared latent space.
- **Structure-Aware Temporal Dynamics (STD):** Captures peer influence, policy effects, and temporal evolution.
- **Constraint-Driven Latent Control:** Ensures fairness, equity, and compliance with institutional constraints.
- **Goal-Aligned Recursive Intervention Planning (GRIP):** Optimizes policies for engagement and learning outcomes.
- **Fairness-Aware Planning:** Integrates demographic fairness into decision making.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentMultiAgentEncoding(nn.Module):
    """Latent Multi-Agent Encoding (LME) module."""
    def __init__(self, input_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Linear(input_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, input_dim)

    def forward(self, x):
        z = F.relu(self.encoder(x))
        recon = self.decoder(z)
        return z, recon
```
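
A quick smoke test of the LME module above, assuming the class is saved in `model.py`; the feature dimensions and the plain MSE reconstruction term are placeholders for whatever fused multimodal features the full pipeline produces.

```python
import torch
import torch.nn.functional as F
from model import LatentMultiAgentEncoding   # assumed module path for the class above

lme = LatentMultiAgentEncoding(input_dim=32, latent_dim=8)
x = torch.randn(8, 32)                       # 8 students, 32-dim fused features
z, recon = lme(x)
recon_loss = F.mse_loss(recon, x)            # keeps the shared latent space informative
print(z.shape, recon.shape, recon_loss.item())
```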



# CPE-CARN: Graph Neural Network Framework for Postoperative Complication Forecasting

This repository implements the two-stage framework introduced in **"Application of Graph Neural Networks in Identifying Postoperative Complications after Artificial Joint Replacement"** by Xiao Yun, Liping Feng, and Zhizhong Tian.

## Overview

Postoperative complication identification after joint replacement is critical for patient recovery and long-term implant success. Traditional methods often fail to capture the **temporal dynamics** and **inter-complication dependencies** required in clinical risk prediction. This work proposes:

- **CPE (Complication Progression Encoder):** Encodes postoperative trajectories using gated temporal convolutions, hierarchical attention, and complication-specific latent embeddings.
- **CARN (Context-Aware Risk Navigation):** Provides context-sensitive forecasting through temporal propagation, adaptive realignment with new observations, and evidence fusion across multimodal inputs.

Together, **CPE + CARN** deliver accurate, interpretable, and adaptive forecasting of postoperative risks.

## Key Features

- Temporal encoding with dilated convolutions and sinusoidal positional embeddings.
- Multi-head complication-specific decoders with correlation calibration.
- Regularized objective combining binary cross-entropy, co-occurrence penalties, and parameter norms.
- Context-sensitive inference with conditional propagation, adaptive realignment, and multimodal evidence fusion.
- Prioritization of high-severity risks through selective calibration.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalConvEncoder(nn.Module):
    """Temporal convolution + positional encoding for patient trajectories."""
    def __init__(self, input_dim, hidden_dim, num_layers=3, kernel_size=3):
        super().__init__()
        self.input_fc = nn.Linear(input_dim, hidden_dim)
        self.convs = nn.ModuleList([
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size, padding="same", dilation=2**i)
            for i in range(num_layers)
        ])

    def forward(self, x):
        # x: [B, T, input_dim] postoperative trajectory features
        h = self.input_fc(x).transpose(1, 2)        # [B, hidden_dim, T] for Conv1d
        for conv in self.convs:
            h = F.relu(conv(h)) + h                 # dilated temporal convolution with residual
        return h.transpose(1, 2)                    # [B, T, hidden_dim]
```

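
The Key Features above mention a regularized objective combining binary cross-entropy with co-occurrence penalties. The function below is a compact sketch of such an objective for multi-label complication targets; the MSE form of the co-occurrence term and its weight are illustrative simplifications of the correlation calibration described above.

```python
import torch
import torch.nn.functional as F

def complication_loss(logits, targets, cooc_weight=0.1):
    """BCE plus a co-occurrence consistency penalty.

    logits, targets: [B, C]; targets are float 0/1 multi-label complication indicators.
    """
    bce = F.binary_cross_entropy_with_logits(logits, targets)
    probs = torch.sigmoid(logits)
    pred_cooc = probs.t() @ probs / probs.size(0)         # [C, C] predicted co-occurrence
    true_cooc = targets.t() @ targets / targets.size(0)   # [C, C] empirical co-occurrence
    return bce + cooc_weight * F.mse_loss(pred_cooc, true_cooc)
```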

# CANE-RCLIS: Postoperative Complication Identification with GNNs

This repository implements the methods proposed in **"Graph Neural Network-Based Research on Postoperative Complication Identification in Artificial Joint Replacement"** by Xiao Yun, Liping Feng, and Zhizhong Tian.

## Overview

Accurate identification of complications after joint replacement is critical for patient safety and recovery. Traditional methods fail to capture complex temporal and relational patterns in multimodal clinical data. This project introduces:

- **CANE (Complication-Aware Neural Estimator)**: A hybrid neural framework combining latent temporal dynamics, structured variational inference, and multi-label prediction.
- **RCLIS (Risk-Conditioned Latent Intervention Strategy)**: A counterfactual training paradigm that enhances sensitivity to rare but high-risk complications by perturbing latent health trajectories.

Together, CANE + RCLIS leverage **graph neural networks (GNNs)**, structured priors, and attention mechanisms to provide robust, interpretable, and clinically actionable predictions.

## Key Features

- Variational recurrent encoders for latent health dynamics.
- Multi-label complication prediction with CRF-based dependency modeling.
- Temporal calibration for consistent longitudinal risk estimation.
- Counterfactual latent interventions to model rare events.
- Adversarial alignment for clinically plausible simulations.

## Project Structure

```
.
├── model.py        # Implementation of CANE + RCLIS
├── README.md       # Project documentation
├── data/           # Placeholder for datasets
└── experiments/    # Training and evaluation scripts
```
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalEncoder(nn.Module):
    """GRU-based temporal encoder for patient clinical sequences."""
    def __init__(self, input_dim, hidden_dim):
        super(TemporalEncoder, self).__init__()
        self.fc_in = nn.Linear(input_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, x):
        # x: [batch, seq_len, input_dim]
        x = F.relu(self.fc_in(x))
        h, h_last = self.gru(x)            # h: per-step latent states, h_last: [1, B, hidden_dim]
        return h, h_last.squeeze(0)        # sequence states and final latent summary
```
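
RCLIS is described above as perturbing latent health trajectories to increase sensitivity to rare, high-risk complications. The helper below is a minimal sketch of one such latent intervention; the risk direction, the scaling, and the function name are assumptions, not the paper's training procedure.

```python
import torch

def counterfactual_perturbation(h, risk_direction, scale=0.5):
    """Push latent summaries along a hypothetical risk direction to probe model sensitivity."""
    direction = risk_direction / risk_direction.norm().clamp_min(1e-8)
    return h + scale * direction

h = torch.randn(4, 64)                     # latent summaries from TemporalEncoder
h_cf = counterfactual_perturbation(h, torch.randn(64))
print((h_cf - h).norm(dim=-1))             # every trajectory is shifted by `scale`
```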