Decompose Flow

If the task has the type **decomp**

![](https://cdn.cacher.io/attachments/u/3kcbpjvt3jkry/PNj6DgUJ0EqxM_I_aaMwYKXgEuvfRa3G/wq1oqnqid.png)

- create a new development task in which the work is described.
- estimate the task yourself, then hand it over to QA for estimation by moving it to the Requirements Review (RR) status.
  QA will then move it to Ready to Develop.

### Linking
- The decomposition task is linked to the development task with a children link
- The development task is linked to the decomposition task with a parent link

🛠️ Setting Up Pre-commit Hooks with UV

# Setting Up Pre-commit Hooks with UV

Pre-commit hooks automatically check your code before each commit, catching issues early and enforcing consistent code quality.

## Installation

Add pre-commit as a development dependency:

```bash
uv add --dev pre-commit
```

## Configuration

Create `.pre-commit-config.yaml` in your project root:

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
```
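
Once the configuration file is in place, the hooks still need to be registered with Git. A minimal follow-up using the same uv-managed environment:

```bash
uv run pre-commit install          # register pre-commit as the git commit hook
uv run pre-commit run --all-files  # optionally run every hook once against the existing codebase
```
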
git

# View the configured remote repositories
git remote -v

# Disconnect (remove the remote)
git remote remove origin

cartopy

import os
import sys

import matplotlib.pyplot as plt

import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.io.shapereader import Reader

sys.path.insert(0, "/data8/xuyf/Project/shouxian")
from configs import MAP_DIR
sheng = Reader(os.path.join(MAP_DIR, 'sheng.shp'))

BDY_DIR = "/data8/xuyf/Data/Static/boundary/GS(2024)0650-SHP"
sheng = Reader(os.path.join(BDY_DIR, 'sheng.shp'))

fig = plt.figure(figsize=(12, 8), dpi=300)
ax = fig.subplots(1, 1, subplot_kw={'projection': ccrs.PlateCarree()})

ax.add_feature(cfeature.COASTLINE, linewidth=0.5)  # original line truncated; COASTLINE is an assumed example feature
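
The Reader above only loads the shapefile; a minimal sketch of how the provincial boundaries are typically drawn onto the axes (the styling and gridline settings are assumptions, not from the original snippet):

ax.add_geometries(sheng.geometries(), crs=ccrs.PlateCarree(),
                  facecolor='none', edgecolor='black', linewidth=0.8)
ax.gridlines(draw_labels=True, linestyle='--', linewidth=0.3)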

Mount/Dismount Registry Hive File

function Mount-RegistryHive {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory = $true, Position = 0, ValueFromPipeline = $true, ValueFromPipelineByPropertyName = $true)]
        $KeyName
        ,
        [Parameter(Mandatory = $true, Position = 1, ValueFromPipeline = $true, ValueFromPipelineByPropertyName = $true)]
        $FileName
    )

    begin {
            Add-Type -Name LoadHive -NameSpace RegistryHelper -MemberDefinition @"

[DllImport("advapi32.dll", SetLastError = true)]
public static extern int RegLoadKey(uint hKey, string lpSubKey, string lpFile);
"@
    }

    process {
        # HKEY_LOCAL_MACHINE = 0x80000002. Loading a hive requires an elevated session
        # with the SeBackupPrivilege/SeRestorePrivilege available.
        [RegistryHelper.LoadHive]::RegLoadKey(0x80000002, $KeyName, $FileName)
    }
}

3349. Adjacent Increasing Subarrays Detection I

Given an array nums of n integers and an integer k, determine whether there exist two adjacent subarrays of length k such that both subarrays are strictly increasing. Specifically, check if there are two subarrays starting at indices a and b (a < b), where:

- Both subarrays nums[a..a + k - 1] and nums[b..b + k - 1] are strictly increasing.
- The subarrays must be adjacent, meaning b = a + k.

Return true if it is possible to find two such subarrays, and false otherwise.
/**
 * @param {number[]} nums
 * @param {number} k
 * @return {boolean}
 */
var hasIncreasingSubarrays = function(nums, k) {
    // Helper function to check if a subarray is strictly increasing
    function isStrictlyIncreasing(start, end) {
        for (let i = start; i < end; i++) {
            if (nums[i] >= nums[i + 1]) {
                return false; // Not strictly increasing
            }
        }
        return true;
    }

    // Slide the first window's start a over the array; the adjacent window starts
    // at b = a + k, so the two windows together need 2 * k elements.
    for (let a = 0; a + 2 * k <= nums.length; a++) {
        const b = a + k;
        if (isStrictlyIncreasing(a, a + k - 1) && isStrictlyIncreasing(b, b + k - 1)) {
            return true;
        }
    }
    return false;
};

Vanilla Stats CountUp

Using a matrix and some JS, count up/down to the selected values. Respects decimals.
<div class="pp-stats-countup" data-id="pp-{{Matrix.MatrixId}}">
   <div class="section-copy">{{Module.FieldValues.SectionCopy}}</div>
   <ul class="stats-countup-list" data-layout="{{Module.FieldValues.Layout | default: 'quarter'}}">
      {% for item in List.Items %}
         {% assign el = item.FieldValues %}
         <li
            class="single-stat"
            data-start-value="{{el.StartingValue}}"
            data-value="{{el.Value}}"
         >
            <h3>
             

SFAN-CMEFS

# SFAN-CMEFS: Semantic Fusion Attention Network for Music Culture Communication

This repository provides a PyTorch implementation of the **Semantic Fusion Attention Network (SFAN)** with the **Cross-Modal Emotion Fusion Strategy (CMEFS)** proposed in:

> Feng Liu. *Semantic Modeling of Music Culture Communication Using Attention Network with Cross-Modal Emotion Fusion Mechanism.* Xinyang Vocational College of Arts & Wuhan Sports University (2025).

---

## 🎵 Overview

Music culture communication represents a complex, multimodal process involving **audio**, **text**, **visual**, and **emotional** channels. Traditional rule-based models often fail to generalize across cultural contexts. The **SFAN-CMEFS** model integrates **deep attention networks** and **cross-modal emotion fusion** to dynamically align semantic and emotional cues across modalities.

---

## ⚙️ Model Architecture

### 🧠 Semantic Fusion Attention Network (SFAN)

As shown in *Figure 1* (p. 7), SFAN consists of:

- **Multimodal Encoders** for audio, text, and emotion features.
- **Cross-Modal Fusion Mechanism** with 3D Adapters for dynamic weighting.
- **Graphical Propagation Layer** modeling inter-modal semantic relations.
- **Temporal Multi-Head Self-Attention (MSA)** capturing global and local dependencies.

Equations (9)–(15) define the modality encoding and attention-weighted fusion:

\[
H_{fusion} = f_{fusion}(H_a, H_t, H_e; \theta)
\]
\[
Y = f_{decoder}(A H_{fusion})
\]

where \( A_{ij} = softmax(q_i^T k_j) \) is the semantic attention matrix.

---

### 🎭 Cross-Modal Emotion Fusion Strategy (CMEFS)

Described in *Figures 3–4* (pp. 10–11), CMEFS aligns emotional and cultural semantics by:

- Extracting modality-specific features (audio, text, visual).
- Computing attention weights \( \alpha_m = softmax(W_m f_m + b_m) \).
- Projecting attended features into a shared latent space \( L \).
- Aggregating via weighted fusion \( R = \sum_m \beta_m l_m \).
- Minimizing the combined loss \( L_{total} = L_{align} + \lambda L_{recon} \) to preserve both emotional alignment and modality-specific information.

---

## 🧩 Dataset Summary

Datasets used for evaluation include:

| Dataset | Modalities | Focus |
|---------|------------|-------|
| Music Emotion Communication Dataset | Audio + Text | Emotion perception |
| Cross-Modal Music Culture Dataset | Audio + Text + Visual | Cultural interpretation |
| Semantic Music Interaction Dataset | Audio + User Interaction | Behavior modeling |
| Attention-Based Emotion Fusion Dataset | Audio + Physiological Signals | Affective response |

---

## 🚀 Experimental Setup

- **Framework:** PyTorch
- **Optimizer:** Adam (β₁=0.9, β₂=0.999, lr=1e-3 → cosine decay)
- **Batch size:** 64 | **Epochs:** 100
- **Augmentation:** random crop, flip, color jitter, CutMix
- **Loss:** \( L_{total} = L_{align} + \lambda L_{recon} \), with λ=0.3
- **Metrics:** Accuracy, Precision, Recall, AUC

SFAN-CMEFS achieves **89.1–90.3% accuracy** across all benchmark datasets, outperforming ViT, ResNet, DenseNet, and I3D by 3–4%.

---

## 🧠 Example Usage

```python
import torch
from model import SFAN_CMEFS

# Example multimodal input
audio = torch.randn(8, 128, 64)       # (batch, time, audio_dim)
text = torch.randn(8, 64, 256)        # (batch, seq_len, text_dim)
emotion = torch.randn(8, 32, 64)      # (batch, emo_features)
visual = torch.randn(8, 3, 224, 224)

model = SFAN_CMEFS(audio_dim=64, text_dim=256, emo_dim=64, hidden_dim=128, out_dim=8)
out = model(audio, text, emotion, visual)
print(out.shape)  # torch.Size([8, 8])
```

```python
# model.py
# SFAN + CMEFS unified model for semantic music culture modeling
# Based on Feng Liu (2025)

import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadAttention(nn.Module):
    """Multi-head self-attention for multimodal fusion."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        self.heads = heads

    def forward(self, x):
        # x: (batch, tokens, dim)
        B, N, D = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.heads, D // self.heads).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]
        attn = torch.softmax(q @ k.transpose(-2, -1) / (D // self.heads) ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, D)
        return self.proj(out)
```
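
The CMEFS weighting described above (attention weights \( \alpha_m \) and the weighted aggregation \( R \)) is not reached in the truncated fragment. A minimal stand-alone sketch of that step, with the class name and dimensions chosen purely for illustration:

```python
# Illustrative sketch (not the paper's code): CMEFS-style weighted fusion.
# Each modality feature f_m is projected into a shared latent space, a scalar score
# per modality is softmax-normalized, and the weighted sum gives the fused vector R.
import torch
import torch.nn as nn


class EmotionFusionSketch(nn.Module):
    def __init__(self, dims=(64, 256, 64), latent=128):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, latent) for d in dims])
        self.score = nn.Linear(latent, 1)

    def forward(self, feats):
        # feats: list of per-modality vectors, each (batch, dim_m)
        latents = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)  # (B, M, latent)
        alpha = torch.softmax(self.score(latents), dim=1)                       # (B, M, 1)
        return (alpha * latents).sum(dim=1)                                     # (B, latent)


fused = EmotionFusionSketch()([torch.randn(8, 64), torch.randn(8, 256), torch.randn(8, 64)])
print(fused.shape)  # torch.Size([8, 128])
```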

GEMFEN-AKIS

# GEMFEN-AKIS: Intelligent Evaluation Framework for Rural Revitalization

A PyTorch implementation of the **Graph-Enhanced Multi-Source Feature Embedding Network (GEMFEN)** and the **Adaptive Knowledge Integration Strategy (AKIS)**, proposed by *Hongduan Zhu et al. (Sichuan Polytechnic University, 2025)*.

This project builds an intelligent evaluation system for **rural revitalization**, integrating **graph neural networks (GNNs)**, **multi-source feature embeddings**, and **domain knowledge–driven attention mechanisms** to deliver interpretable and data-driven insights.

---

## 🌾 Overview

Rural revitalization involves complex socio-economic, environmental, and infrastructural factors. This framework formalizes the evaluation task as a **graph-based learning problem**, where:

- **Nodes** represent rural entities (e.g., farms, communities, enterprises),
- **Edges** represent structural or semantic relationships (e.g., trade, geography, communication),
- **Features** come from heterogeneous data (satellite imagery, census data, IoT sensors, etc.).

The system learns to **capture dependencies**, **fuse multi-source features**, and **generate interpretable evaluations** for decision-makers.

---

## ⚙️ Framework Components

### 1. Graph-Enhanced Multi-Source Feature Embedding Network (GEMFEN)

- Builds a **heterogeneous graph** \( G = (V, E) \) with node features from multiple data sources.
- Embeds features using `Embed(D)` and propagates information via GNN layers:
  \[
  H^{(l+1)} = \sigma(\tilde{A} H^{(l)} W^{(l)})
  \]
- Integrates **attention-based propagation**:
  \[
  \alpha_{ij} = \frac{\exp(score(h_i, h_j))}{\sum_k \exp(score(h_i, h_k))}
  \]
- Outputs the evaluation embedding via:
  \[
  Y = Aggregate(H^{(K)})
  \]
- See **Figure 1** (*page 5*) and **Figure 2** (*page 6*) for the architecture and data fusion flow.

### 2. Adaptive Knowledge Integration Strategy (AKIS)

- Enhances interpretability and adaptability.
- Fuses **data-driven embeddings** \( F' \) and **domain knowledge** \( K \):
  \[
  H = \alpha F' + (1 - \alpha) K
  \]
- Applies **context-aware attention modulation**:
  \[
  H' = A \odot H, \quad A = softmax(W_c C)
  \]
- The loss combines prediction and regularization:
  \[
  L = L_{task}(Y, \hat{Y}) + \lambda L_{reg}(H')
  \]
- Figures **3–4** (*pages 7–8*) visually show the multimodal embedding and knowledge-driven fusion.

---

## 🧩 Datasets Used

| Dataset | Description | Domain |
|---------|-------------|--------|
| Rural Infrastructure Feature Dataset | Roads, utilities, communications | Infrastructure |
| Multi-Source Agricultural Productivity Dataset | Crop, soil, irrigation, sensors | Agriculture |
| Intelligent Community Development Dataset | IoT, energy, social metrics | Smart Villages |
| GNN Evaluation Metrics Dataset | Benchmarking node/graph tasks | Machine Learning |

---

## 🚀 Experimental Setup

- Framework: **PyTorch**
- Backbone: **ResNet-50 (ImageNet pretrained)**
- Optimizer: `Adam` (lr=1e-3, decay×0.1 every 10 epochs)
- Batch size: 64; Epochs: 100
- Data Augmentation: random crop, flip, jitter, temporal augmentation
- Hardware: 4× NVIDIA Tesla V100 GPUs

Performance Summary (Top-1 Accuracy):

| Dataset | Ours | ViT | DenseNet | MobileNet |
|---------|------|-----|----------|-----------|
| Rural Infrastructure | **89.34%** | 86.89% | 86.78% | 85.98% |
| Agricultural Productivity | **91.12%** | 89.12% | 89.03% | 87.89% |
| Intelligent Community | **89.34%** | 86.89% | 86.78% | 85.89% |

---

## 💡 Example Usage

```python
import torch
from model import GEMFEN_AKIS

x = torch.randn(32, 128, 64)  # 32 nodes, 128 features
adj = torch.randint(0, 2, (32, 32)).float()

model = GEMFEN_AKIS(in_dim=64, hid_dim=128, out_dim=10)
out = model(x, adj)
print(out.shape)  # torch.Size([32, 10])
```

```python
# model.py
# GEMFEN + AKIS for Intelligent Rural Evaluation System
# Based on Zhu et al. (2025)

import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphConvLayer(nn.Module):
    """Basic GNN layer (Eq. 1 & 4)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        h = torch.matmul(adj, x)
        return F.relu(self.linear(h))
```
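
The AKIS fusion step summarized above (\( H = \alpha F' + (1 - \alpha) K \) followed by context-aware modulation) is not reached in the truncated fragment. A minimal stand-alone sketch; the class name and dimensions are illustrative assumptions:

```python
# Illustrative sketch (not the authors' release): AKIS-style fusion of data-driven
# embeddings F' with a domain-knowledge vector K, then attention modulation
# H' = A ⊙ H with A = softmax(W_c C).
import torch
import torch.nn as nn


class AKISFusionSketch(nn.Module):
    def __init__(self, dim=128, ctx_dim=32):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learnable balance between data and knowledge
        self.context = nn.Linear(ctx_dim, dim)

    def forward(self, f_data, k_knowledge, context):
        h = self.alpha * f_data + (1 - self.alpha) * k_knowledge
        attn = torch.softmax(self.context(context), dim=-1)
        return attn * h


out = AKISFusionSketch()(torch.randn(32, 128), torch.randn(32, 128), torch.randn(32, 32))
print(out.shape)  # torch.Size([32, 128])
```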

EVCAN-AVPAM

# EVCAN-AVPAM: Enhanced Visual Convolutional Attention Network for Art Style Recognition

This repository provides a PyTorch implementation of the **Enhanced Visual Convolutional Attention Network (EVCAN)** combined with the **Adaptive Visual Perception and Attention Mechanism (AVPAM)**, proposed in the paper:

> **Design of Art Style Recognition Method Based on Visual Convolution Perception and Attention Enhancement Mechanism**
> *Authors: Xiang Shi, Feng He (2025)*

---

## Overview

Art style recognition is a complex and multifaceted challenge due to stylistic diversity, abstraction, and subtle variations in brushstrokes, color, and texture. EVCAN integrates **multi-scale CNN feature extraction** with **hierarchical attention**, while AVPAM adds a **domain-specific adaptive perception and attention strategy** to highlight salient regions and improve robustness to stylistic variability.

### 🔍 Framework Components

- **EVCAN (Enhanced Visual Convolutional Attention Network):**
  - Multi-scale feature extraction across various kernel sizes.
  - Adaptive feature fusion with learnable weights \( \alpha_k \) (Eq. 9–10).
  - Hierarchical attention layers refine local and global stylistic features (Eq. 11–12).
  - End-to-end classification optimized with cross-entropy loss (Eq. 13–14).
- **AVPAM (Adaptive Visual Perception and Attention Mechanism):**
  - Domain-aware convolutional encoder with hierarchical residual blocks.
  - Multi-scale attention fusion \( F_{multi} = \sum_s w_s F^{(s)} \) (Eq. 18).
  - Graphical propagation and attention refinement (Figure 3, p. 7).
  - Style regularization \( L_{style} = \|F_i - F_j\|_2^2 \) (Eq. 20) to enforce stylistic coherence.

Figures 1–4 in the paper (pp. 5–8) illustrate the multimodal feature fusion, attention hierarchy, and adaptive visual perception architecture with residual and pooling pathways.

---

## 🚀 Key Features

- **Hierarchical visual perception** across multiple convolutional layers.
- **Multi-scale adaptive fusion** balancing fine-grained and global style features.
- **Hierarchical attention** emphasizing stylistically relevant regions.
- **Domain-specific regularization** for robust and interpretable classification.
- **PyTorch-based modular architecture** for easy extension and training.

---

## 🧩 Model Architecture

```text
Input Image
  ↓
[Convolution Blocks] → [Multi-Scale Feature Fusion (α_k softmax weighting)]
  ↓
[Hierarchical Attention Layers (QK^T / √d)]
  ↓
[Adaptive Visual Perception Module + Graphical Propagation]
  ↓
[Domain-Specific Regularization + Classification Head]
  ↓
Softmax Output (Art Style Category)
```

```python
# model.py
# Implementation of EVCAN + AVPAM for Art Style Recognition
# Based on equations and architecture in Shi & He (2025)

import torch
import torch.nn as nn
import torch.nn.functional as F
import math

class MultiScaleConv(nn.Module):
    """Multi-scale feature extraction (Eq. 7–10)."""
    def __init__(self, in_ch, out_ch, scales=(3,5,7)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=k, padding=k // 2) for k in scales
        ])
        # Learnable per-scale fusion weights alpha_k, softmax-normalized (Eq. 9-10)
        self.alpha = nn.Parameter(torch.ones(len(scales)))

    def forward(self, x):
        weights = torch.softmax(self.alpha, dim=0)
        return sum(w * F.relu(branch(x)) for w, branch in zip(weights, self.branches))
```
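
Two of the remaining ingredients listed above, the scaled dot-product attention (QKᵀ/√d) and the style regularization of Eq. (20), in a minimal stand-alone sketch; the shapes are assumptions, not taken from the paper's code:

```python
# Illustrative sketch: hierarchical-attention building block and the style
# regularization term L_style = ||F_i - F_j||_2^2 for two same-style images.
import torch


def style_regularization(f_i: torch.Tensor, f_j: torch.Tensor) -> torch.Tensor:
    # f_i, f_j: (batch, feature_dim) features extracted from two images of one style
    return ((f_i - f_j) ** 2).sum(dim=-1).mean()


def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # q, k, v: (batch, tokens, d); softmax(QK^T / sqrt(d)) V
    scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5
    return scores.softmax(dim=-1) @ v


x = torch.randn(4, 49, 64)
print(attention(x, x, x).shape, style_regularization(torch.randn(4, 64), torch.randn(4, 64)))
```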

TSLM-AKIS

# TSLM-AKIS: Transformer-Driven Student Behavior Prediction

A PyTorch implementation of the **Transformer-Driven Sequential Learning Model (TSLM)** with an **Adaptive Knowledge Integration Strategy (AKIS)** for analyzing online education behavior sequences and predicting student learning behaviors.

> Paper: *Transformer-Driven Online Education Behavior Sequence Analysis for Student Learning Behavior Prediction* (Qing Wang, Henan Open University). Key ideas, terminology, and equations follow the paper’s Sections 3.2–3.4. See the architecture diagrams (e.g., Figure 1 on page 5 and Figure 2 on page 6) for a visual overview of the encoder and attention workflow.

---

## Features

- **TSLM encoder**: Multi-head self-attention over behavior-event embeddings with positional encodings, residual connections, and FFN blocks.
- **Multimodal-ready inputs**: Action/resource/timestamp embeddings are concatenated to form an event vector (per Eq. (3) in the paper).
- **AKIS module**: Lightweight domain-knowledge attention over a learnable concept bank + gating to adaptively fuse knowledge with sequence states (Eqs. (18)–(22)).
- **Prediction heads**: Classification (softmax) or next-event prediction.
- **Clean, minimal training loop hooks** so you can plug in your own datasets and losses.

---

## Quick Start

### 1) Install

```bash
pip install torch torchvision torchaudio  # pick versions compatible with your CUDA
```

```python
# model.py
# TSLM + AKIS (minimal PyTorch implementation)
# References to equations and sections follow the paper.

from typing import Optional, Tuple
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class SinusoidalPositionalEncoding(nn.Module):
    """
    Classic sinusoidal positional encodings (paper Eq. (9)-(10)).
    """
    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe.unsqueeze(0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        return x + self.pe[:, : x.size(1)]
```
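
The AKIS module itself is not reached in the fragment above. A minimal stand-alone sketch of the concept-bank attention plus gating it describes (class name, dimensions, and the scaling factor are assumptions):

```python
# Illustrative sketch: attention over a learnable concept bank, with a gate that
# fuses the retrieved knowledge with the encoder's sequence states.
import torch
import torch.nn as nn


class ConceptBankAKIS(nn.Module):
    def __init__(self, d_model: int = 128, n_concepts: int = 32):
        super().__init__()
        self.concepts = nn.Parameter(torch.randn(n_concepts, d_model))
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, d_model) encoder states
        attn = torch.softmax(h @ self.concepts.t() / h.size(-1) ** 0.5, dim=-1)  # (B, T, n_concepts)
        knowledge = attn @ self.concepts                                          # (B, T, d_model)
        g = torch.sigmoid(self.gate(torch.cat([h, knowledge], dim=-1)))
        return g * h + (1 - g) * knowledge


print(ConceptBankAKIS()(torch.randn(2, 10, 128)).shape)  # torch.Size([2, 10, 128])
```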

MPFN-AMFS

# MPFN-AMFS: Multimodal Pose Fusion Network with Adaptive Multimodal Fusion Strategy

This repository provides a PyTorch reference implementation of the research paper **“System Implementation of Gymnastics Teaching and Training System Based on Multimodal Perception Fusion and Pose Estimation”** by **Chen Niyun**, Macao Polytechnic University.

The proposed system integrates the **Multimodal Pose Fusion Network (MPFN)** and the **Adaptive Multimodal Fusion Strategy (AMFS)** to enhance gymnastics teaching and training. It fuses **visual, inertial, and skeletal** data modalities to achieve robust, accurate, and real-time pose estimation with domain-specific optimization.

---

## 🌟 Highlights

- **Multimodal Perception Fusion:** Integrates video, IMU, and skeleton data for comprehensive human motion understanding.
- **MPFN Architecture:** Combines CNN (vision), RNN (inertial), and GNN (skeleton) encoders with a *graph propagation* layer.
- **AMFS Strategy:** Adapts fusion weights dynamically based on reliability, contextual significance, and temporal consistency.
- **Real-Time Feedback Loop:** Provides actionable feedback for gymnastics coaching and pose correction.
- **Domain-Specific Optimization:** Incorporates biomechanical constraints for joint angles, symmetry, and temporal smoothing.

---

## 🧠 Conceptual Overview

The system consists of two major modules:

### 🧩 1. Multimodal Pose Fusion Network (MPFN)

- **Inputs:**
  - Visual stream (images or video frames) → CNN
  - Inertial stream (accelerometer/gyroscope) → RNN
  - Skeletal stream (3D joint coordinates) → GNN
- **Fusion Mechanism:**
  \[
  F_m = α_v F_v' + α_i F_i' + α_s F_s'
  \]
  where \( α_v, α_i, α_s \) are self-attention weights learned via:
  \[
  α_v, α_i, α_s = Softmax(W_a[F_v'; F_i'; F_s'])
  \]

### 🧩 2. Adaptive Multimodal Fusion Strategy (AMFS)

- Dynamically adjusts modality weights according to reliability probabilities:
  \[
  α_v = \frac{P_v}{P_v + P_i + P_s}, \; α_i = \frac{P_i}{P_v + P_i + P_s}, \; α_s = \frac{P_s}{P_v + P_i + P_s}
  \]
- Temporal smoothing via exponential moving average:
  \[
  X^*_t = β X_t + (1-β) X_{t-1}
  \]
- Corrected pose refinement:
  \[
  X_c = X_p + ΔX
  \]
  where \(ΔX\) applies joint-angle limits and domain constraints.

---

## 🧩 Architecture Diagram Summary

- **Figure 1 (p. 5):** Shows the full MPFN pipeline with CNN, RNN, and GNN backbones, graphical propagation, and temporal modeling.
- **Figure 3 (p. 7):** Depicts the AMFS block, including hierarchical HC-SSM modules and domain-specific optimization layers.
- **Figure 4 (p. 8):** Details the multimodal encoder: stacked convolutions, batch normalization, ELU activation, and max pooling before fusion.

---

## 🧪 Experimental Summary

| Dataset | Accuracy | Recall | F1 Score | AUC |
|---------|----------|--------|----------|-----|
| Gymnastics Pose Estimation | **89.78%** | **89.19%** | **88.61%** | **88.90%** |
| Multimodal Athlete Movement | **91.45%** | **90.92%** | **90.38%** | **90.65%** |
| Gymnastics Training Perception | **89.34%** | **88.72%** | **88.19%** | **88.53%** |
| Human Motion Fusion | **90.12%** | **89.56%** | **89.03%** | **89.37%** |

> Achieved +3–4% over ViT, DenseNet, and MobileNet baselines while maintaining real-time performance.

---

## 🧰 Installation

```bash
python -m venv .venv
source .venv/bin/activate
pip install torch torchvision numpy matplotlib
```

```python
# model.py
# MPFN-AMFS: Multimodal Pose Fusion Network + Adaptive Multimodal Fusion Strategy
# Author: Chen Niyun (Macao Polytechnic University, 2024)

import torch
import torch.nn as nn
import torch.nn.functional as F


# ======== Encoders ========

class CNNEncoder(nn.Module):
    """Visual feature extractor (2D Conv + BN + ReLU)."""
    def __init__(self, in_ch=3, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, out_dim)

    def forward(self, x):
        # x: (batch, in_ch, H, W) -> (batch, out_dim)
        return self.fc(self.net(x).flatten(1))
```
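
A stand-alone sketch of the AMFS formulas quoted above, the reliability-normalized weights and the exponential moving average; the function names and the 17-joint example are illustrative assumptions:

```python
# Illustrative sketch (not the paper's code): AMFS reliability weighting
# alpha_m = P_m / (P_v + P_i + P_s) and temporal smoothing X*_t = beta*X_t + (1-beta)*X_{t-1}.
import torch


def reliability_weights(p_v: torch.Tensor, p_i: torch.Tensor, p_s: torch.Tensor):
    total = p_v + p_i + p_s
    return p_v / total, p_i / total, p_s / total


def ema_smooth(x_t: torch.Tensor, x_prev: torch.Tensor, beta: float = 0.8) -> torch.Tensor:
    return beta * x_t + (1 - beta) * x_prev


a_v, a_i, a_s = reliability_weights(torch.tensor(0.9), torch.tensor(0.7), torch.tensor(0.4))
print(a_v + a_i + a_s)  # sums to 1 (up to float error)
print(ema_smooth(torch.randn(17, 3), torch.randn(17, 3)).shape)  # 17 joints x 3D coords
```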

DCMFN-AMFS

# DCMFN-AMFS: Deep Convolutional Multi-Scale Fusion Network for Green Building Evaluation

This repository implements the **Deep Convolutional Multi-Scale Fusion Network (DCMFN)** and **Adaptive Multi-Scale Fusion Strategy (AMFS)** proposed in **“Comprehensive Evaluation of Green Building Indoor Environment Using Deep Convolution and Multi-Scale Feature Fusion”** by **Xinyi Zhang**, **Jia Ji**, **Qiong Yang**, and **Guanhua Li**.

The model integrates **multi-scale convolution**, **spatial attention**, and **adaptive fusion** mechanisms to evaluate indoor environmental quality (IEQ) comprehensively, covering **thermal comfort, air quality, lighting, and acoustics**.

---

## 🌱 Motivation

Traditional indoor environment assessment methods rely on manual scoring or rigid rule-based systems, which lack scalability and adaptability. DCMFN-AMFS enables **automated, data-driven evaluation** by learning hierarchical spatial-temporal relationships from building sensor data, providing **interpretable and efficient** insight for sustainable design.

---

## 🧠 Framework Overview

The proposed **DCMFN-AMFS** combines:

- **Multi-Scale Convolutional Extraction:** Local and global environmental features are captured with kernels of varying sizes (e.g., 3×3, 5×5, 7×7).
- **Spatial Attention Fusion:** Enhances feature coherence by assigning attention weights to key indoor zones.
- **Adaptive Multi-Scale Fusion Strategy (AMFS):** Dynamically reweights feature scales based on learned relevance (via softmax over φ(Fᵢ)).
- **Residual Refinement:** Strengthens fused representations through 3×3 convolutions and skip connections.
- **Domain-Specific Modulation:** Integrates priors about correlations between environmental factors (e.g., temperature-humidity coupling).

📊 *Figure 1 on page 5* shows the full pipeline, from hyperspectral patch tokenization and transformer-based cross attention to the fusion and prediction modules.

---

## 🔬 Architecture Components

### 🧩 1. Multi-Scale Feature Extraction

Extracts hierarchical patterns across environmental sensors:
\[
F_k = σ(W_k * X + b_k)
\]
with kernels \(k_1, k_2, ..., k_n\) capturing different spatial resolutions.

### 🧩 2. Spatial Attention Fusion

Integrates and weights spatial regions:
\[
A = \text{softmax}(W_a * F_{fused} + b_a)
\]
\[
F_{attended} = A ⊙ F_{fused}
\]

### 🧩 3. Adaptive Multi-Scale Fusion (AMFS)

Softmax-based attention weighting across scales:
\[
α_i = \frac{\exp(φ(F_i))}{\sum_j \exp(φ(F_j))}
\]
and aligned fusion:
\[
F_{fusion} = \sum_i α_i F_{aligned_i}
\]
followed by residual refinement:
\[
F_{refined} = F_{fusion} + ReLU(Conv_{3×3}(F_{fusion}))
\]

---

## 🧪 Experimental Results

| Dataset | Accuracy | Precision | Recall | AUC |
|---------|----------|-----------|--------|-----|
| Green Building Indoor Air Quality | **89.23%** | **88.67%** | **88.89%** | **89.12%** |
| Sustainable Architecture Thermal Comfort | **90.45%** | **89.89%** | **90.12%** | **90.36%** |
| Indoor Environmental Quality Dataset | **89.34%** | **88.72%** | **88.91%** | **89.15%** |
| Multi-Scale Building Energy Efficiency | **90.12%** | **89.67%** | **89.83%** | **90.05%** |

> These results outperform ResNet, ViT, DenseNet, and MobileNet by 3–4% while being 30% faster due to efficient depthwise convolutions.

---

## 🧰 Installation

```bash
python -m venv .venv
source .venv/bin/activate
pip install torch torchvision numpy matplotlib
```

```python
# model.py
# DCMFN-AMFS: Deep Convolutional Multi-Scale Fusion Network with Adaptive Multi-Scale Fusion Strategy
# Based on Zhang et al. (2024), Comprehensive Evaluation of Green Building Indoor Environment

import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvBlock(nn.Module):
    """Convolution + BN + ReLU"""
    def __init__(self, in_ch, out_ch, k=3, s=1, p=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=k, stride=s, padding=p)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        return F.relu(self.bn(self.conv(x)))
```
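
A stand-alone sketch of the AMFS stage described above, softmax weights over φ(Fᵢ), weighted fusion, and the residual refinement; the class name and channel sizes are assumptions:

```python
# Illustrative sketch (not the authors' release): adaptive multi-scale fusion with
# alpha_i = softmax(phi(F_i)) and F_refined = F_fusion + ReLU(Conv3x3(F_fusion)).
import torch
import torch.nn as nn
import torch.nn.functional as F


class AMFSSketch(nn.Module):
    def __init__(self, channels=32, n_scales=3):
        super().__init__()
        self.phi = nn.ModuleList([nn.Linear(channels, 1) for _ in range(n_scales)])
        self.refine = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, feats):
        # feats: list of n_scales tensors, each (B, C, H, W), already spatially aligned
        scores = torch.stack([phi(f.mean(dim=(2, 3))) for phi, f in zip(self.phi, feats)], dim=1)
        alpha = torch.softmax(scores, dim=1)  # (B, n_scales, 1)
        fused = sum(alpha[:, i].unsqueeze(-1).unsqueeze(-1) * f for i, f in enumerate(feats))
        return fused + F.relu(self.refine(fused))


feats = [torch.randn(2, 32, 16, 16) for _ in range(3)]
print(AMFSSketch()(feats).shape)  # torch.Size([2, 32, 16, 16])
```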

RSAN-ARSAS

# RSAN-ARSAS: Residual Self-Attention Network with Adaptive Strategy for Fault Diagnosis

This repository provides a PyTorch reference implementation of the model proposed in **“Data-Driven Fault Diagnosis of Single-Phase Cascaded H-Bridge Rectifier Using Residual Network and Self-Attention Mechanism”** by **Lihui Zhou** and **Chunjie Li**, Jiangsu Normal University, China.

The paper introduces a **Residual Self-Attention Network (RSAN)** integrated with an **Adaptive Residual Self-Attention Strategy (AR-SAS)**, designed to enhance diagnostic accuracy, interpretability, and real-time performance in **power electronic fault detection**.

---

## 🌟 Highlights

- **Residual Network Backbone (ResNet-like):** Enables stable deep feature extraction and mitigates vanishing gradients.
- **Self-Attention Integration:** Captures global dependencies in voltage/current signals for better fault discrimination.
- **Adaptive Strategy (AR-SAS):** Dynamically reweights attention using domain-specific priors and gating mechanisms.
- **Explainable & Efficient:** Supports interpretability through attention visualization, adaptable to edge deployment.
- **Real-Time Capability:** Lightweight enough for on-device or distributed power systems.

---

## ⚙️ Architecture Overview

- **RSAN Core:** Multi-level residual feature extractor + embedded self-attention for temporal and spatial dependency modeling.
- **AR-SAS (Adaptive Residual Self-Attention Strategy):**
  - Dynamically updates attention weights via residual feature gating.
  - Integrates domain priors to bias toward known fault signatures.
  - Includes multimodal encoder & graphical propagation modules.

📘 *See Figure 1 on page 5*, which illustrates the RSAN workflow, showing **visual feature refinement**, **residual feature extraction**, and **multiscale self-attention** paths fused into a unified classification head.

---

## 🧠 Method Summary

### Mathematical Framework

The system models fault diagnosis as:
\[
\hat{F} = \arg\max_{F_j \in F} P(F_j | O)
\]
where \(O\) represents system observations (voltage, current, and switching states), and \(M(z)\) predicts the fault class from extracted features \(z = \Phi(O)\).

### Residual Self-Attention

Each block combines:
\[
H_i = H_{i-1} + \sigma(W_i H_{i-1} + b_i)
\]
\[
A = \text{softmax}\left(\frac{QK^\top}{\sqrt{d}}\right), \quad H_{att} = AV
\]
\[
H_{final} = H_i + H_{att}
\]
as described in *Equations (7)–(12)*.

### AR-SAS Adaptive Mechanism

Adds a **gating** and **prior-biased attention**:
\[
G = \sigma(W_G R), \quad Z_{gated} = G \odot Z
\]
\[
A_{biased} = \text{softmax}\left(\frac{QK^\top + P}{\sqrt{D}}\right)
\]
where \(P\) encodes domain-specific fault priors.

---

## 🧩 Dataset References

The experiments use four curated datasets (Section 4.1):

- **Single-Phase Rectifier Fault Signals Dataset** (fault transients)
- **Cascaded H-Bridge Rectifier Performance Dataset**
- **Residual Network Fault Diagnosis Dataset**
- **Self-Attention Rectifier Fault Detection Dataset**

Each includes open/short-circuit faults, sensor failures, and dynamic load variations, supporting both time- and frequency-domain learning.

---

## 🧪 Experimental Results

| Dataset | Accuracy | Precision | Recall | AUC |
|---------|----------|-----------|--------|-----|
| Single-Phase Rectifier | **89.45%** | **88.72%** | **88.94%** | **89.18%** |
| Cascaded H-Bridge | **90.12%** | **89.45%** | **89.68%** | **89.91%** |
| Residual Net Fault | **89.45%** | **88.92%** | **89.08%** | **89.32%** |
| Self-Attention Rectifier | **90.12%** | **89.56%** | **89.73%** | **89.95%** |

(*from Tables 1–2, pp. 9–10*)

> The model outperforms CNN-, ViT-, and DenseNet-based baselines by 2.5–4.5% on all benchmarks.

---

## 🧰 Installation

```bash
python -m venv .venv
source .venv/bin/activate
pip install torch numpy matplotlib
```

```python
# model.py
# RSAN-ARSAS: Residual Self-Attention Network with Adaptive Residual Strategy
# Based on: Zhou & Li, "Data-Driven Fault Diagnosis..." Jiangsu Normal University, 2024

import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock(nn.Module):
    """Standard residual block with GELU activation"""
    def __init__(self, dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, dim)
        self.fc2 = nn.Linear(dim, dim)

    def forward(self, x):
        # H_i = H_{i-1} + sigma(W_i H_{i-1} + b_i), cf. Eq. (7)
        return x + self.fc2(F.gelu(self.fc1(x)))
```
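
A stand-alone sketch of the AR-SAS mechanism quoted above, residual-feature gating and prior-biased attention; the shapes and the zero prior in the demo are illustrative assumptions:

```python
# Illustrative sketch (not the paper's code): Z_gated = sigma(W_G R) ⊙ Z and
# A_biased = softmax((QK^T + P) / sqrt(D)), where P encodes fault priors.
import torch
import torch.nn as nn


class ARSASSketch(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.gate = nn.Linear(dim, dim)

    def forward(self, z, r, q, k, v, prior):
        # z, r, q, k, v: (B, T, dim); prior: (T, T) bias toward known fault signatures
        z_gated = torch.sigmoid(self.gate(r)) * z
        attn = torch.softmax((q @ k.transpose(-2, -1) + prior) / q.size(-1) ** 0.5, dim=-1)
        return z_gated, attn @ v


B, T, D = 4, 20, 64
x = torch.randn(B, T, D)
z_gated, h_att = ARSASSketch()(x, x, x, x, x, torch.zeros(T, T))
print(z_gated.shape, h_att.shape)  # torch.Size([4, 20, 64]) torch.Size([4, 20, 64])
```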

SARP-ASAUE

# SARP-ASAUE: Sparse Attention & Uncertainty Estimation for Risk Prediction

**SARP-ASAUE** is an open-source PyTorch implementation of the research paper *“Sparse Attention and Uncertainty Estimation for Risk Prediction in College Student Innovation and Entrepreneurship Projects”* by **Bin Zhan**, Zhejiang A&F University.

The framework integrates a **Sparse Attention-based Risk Predictor (SARP)** with an **Adaptive Sparse Attention and Uncertainty Estimation (ASAUE)** strategy, aimed at improving both **interpretability** and **robustness** in risk assessment.

---

## ✨ Highlights

- **Sparse Attention (SARP):** Efficiently focuses on the most critical project features, reducing complexity while improving interpretability.
- **ASAUE Strategy:** Dynamically adjusts attention thresholds based on predictive uncertainty.
- **Uncertainty Estimation:** Quantifies confidence via Monte Carlo Dropout & variance-based modeling.
- **Dual-Branch Architecture:** Combines Convolutional and Transformer branches for local + global context (see Fig. 1 on p. 5 of the paper).
- **Federated + Meta-Learning Ready:** Compatible with decentralized educational data sources (see Fig. 4 on p. 14).

---

## 🧠 Motivation

Predicting risks in student innovation and entrepreneurship projects is difficult due to sparse, heterogeneous, and uncertain data. Traditional rule-based or statistical methods are rigid and opaque. Deep learning brings end-to-end modeling but lacks interpretability. SARP-ASAUE bridges this gap through **sparse attention** + **uncertainty-aware optimization**, enabling reliable, explainable decisions for educators and policymakers.

---

## 🏗 Architecture Overview

- **Convolution Branch:** Captures local spatial patterns via down/up sampling (1×1 and 3×3 Convs + BN).
- **Transformer Branch:** Employs multi-head sparse attention to model long-range dependencies efficiently.
- **Fusion Head:** Merges both branches and outputs risk scores through a lightweight MLP.
- **ASAUE Module:** Adjusts the sparsity threshold τ based on uncertainty u → τ = τ₀ (1 + αu).

(See detailed diagrams on pp. 5–7 and 14.)

```python
# model.py
# SARP-ASAUE: Sparse Attention & Uncertainty Estimation for Risk Prediction
# Based on Bin Zhan (Zhejiang A&F University) 2024

import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import Optional, Tuple


class SparseAttention(nn.Module):
    """Sparse multi-head attention selecting top-k keys per query."""

    def __init__(self, dim: int, num_heads: int = 4, topk: int = 16):
        super().__init__()
        self.num_heads, self.topk = num_heads, topk
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        B, N, D = x.shape
        h = D // self.num_heads
        q, k, v = (t.view(B, N, self.num_heads, h).transpose(1, 2) for t in self.qkv(x).chunk(3, dim=-1))
        scores = q @ k.transpose(-2, -1) / h ** 0.5
        # Sparsify: keep only the top-k scores per query and mask out the rest.
        kth = scores.topk(min(self.topk, N), dim=-1).values[..., -1:]
        attn = torch.softmax(scores.masked_fill(scores < kth, float("-inf")), dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, D)
        return self.proj(out)
```
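
A stand-alone sketch of the ASAUE idea described above, Monte Carlo Dropout uncertainty feeding the threshold update τ = τ₀ (1 + αu); the toy prediction head and constants are assumptions:

```python
# Illustrative sketch (not the paper's code): MC-Dropout variance as the uncertainty u,
# used to adapt the sparsity threshold tau = tau_0 * (1 + alpha * u).
import torch
import torch.nn as nn


def mc_dropout_uncertainty(model: nn.Module, x: torch.Tensor, passes: int = 10) -> torch.Tensor:
    model.train()  # keep dropout active so repeated passes differ
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(passes)])
    return preds.var(dim=0).mean()  # scalar uncertainty estimate u


def adapt_threshold(tau0: float, u: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    return tau0 * (1 + alpha * u)


head = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Dropout(0.3), nn.Linear(64, 1))
u = mc_dropout_uncertainty(head, torch.randn(16, 32))
print(float(adapt_threshold(0.1, u)))  # sparsity threshold grows with uncertainty
```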

CMARL-COS

# CMARL-COS: Collaborative Multi-Agent RL for Public Space Optimization

**CMARL-COS** is a research-oriented PyTorch project that reproduces the core ideas from the paper *“Multi-Agent Collaboration and Reinforcement Learning-Driven Framework for Public Space Design and Management Optimization”* by **Yu Caixia**. It combines **Collaborative Multi-Agent Reinforcement Learning (CMARL)** with a **Collaborative Optimization Strategy (COS)** to optimize public-space design and management with multiple cooperating agents.

> Paper reference: Multi-Agent Collaboration and Reinforcement Learning-Driven Framework for Public Space Design and Management Optimization (author: **Yu Caixia**).

---

## Motivation

Designing and managing public spaces must balance **efficiency, inclusivity, and sustainability** while handling **multiple stakeholders** and **dynamic environments**. Traditional rule-based or single-agent approaches struggle with sequential, interactive decisions. CMARL-COS models each stakeholder as an agent, uses **centralized training with decentralized execution**, and coordinates agents through **graph-based communication** and **constraint-aware optimization**.

Key ideas inspired by the paper:

- **Decentralized multi-agent collaboration** with shared yet local policies.
- **Graph-based message passing** for agent coordination.
- **Centralized critic** optimizing joint value while preserving scalable, decentralized execution.
- **COS**: domain constraints via Lagrangian penalties + adaptive policy mechanisms for non-stationary environments.

---

## Features

- ✅ PyTorch implementation of **multi-agent policies** (independent actor heads)
- ✅ **Centralized critic** for joint Q-value estimation
- ✅ **Graph communication module** (adjacency-controlled message passing)
- ✅ **Constraint-aware optimization (COS)** with Lagrangian multipliers
- ✅ Hooks for **adaptive/meta updates** (regret-minimizing placeholder)
- ✅ Minimal interfaces to plug in simulators of public-space scenarios

```python
# model.py
# CMARL-COS: Collaborative Multi-Agent RL with Constraint-Aware Optimization
# Author: Legend Co., Ltd. (Sharon's workspace)
# Inspired by: Yu Caixia, "Multi-Agent Collaboration and Reinforcement Learning-Driven Framework..."

from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F


def mlp(sizes: List[int], activation=nn.ReLU, out_activation=nn.Identity) -> nn.Sequential:
    """Stack Linear layers with `activation` between them and `out_activation` at the end."""
    layers: List[nn.Module] = []
    for i in range(len(sizes) - 1):
        act = activation if i < len(sizes) - 2 else out_activation
        layers += [nn.Linear(sizes[i], sizes[i + 1]), act()]
    return nn.Sequential(*layers)
```
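
A stand-alone sketch of the adjacency-controlled message passing listed under Features (the graph communication module); the class name and the row normalization are assumptions, not taken from the paper:

```python
# Illustrative sketch: each agent aggregates messages from its neighbours,
# with neighbourhoods defined by a 0/1 adjacency matrix.
import torch
import torch.nn as nn


class GraphCommSketch(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.msg = nn.Linear(dim, dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h: (n_agents, dim) local embeddings, adj: (n_agents, n_agents) communication links
        norm = adj / adj.sum(dim=-1, keepdim=True).clamp(min=1)
        return torch.relu(h + norm @ self.msg(h))


h = torch.randn(5, 64)
adj = torch.ones(5, 5)
print(GraphCommSketch()(h, adj).shape)  # torch.Size([5, 64])
```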