ETHNOSCORE-Net & CULTURE-ADAPT

ETHNOSCORE-Net is a multimodal deep learning framework for performance assessment in ethnic traditional sports. It integrates symbolic encoding, relational graph reasoning, hierarchical temporal fusion, and culturally adaptive embedding to ensure robust, interpretable, and culturally faithful evaluations. Coupled with CULTURE-ADAPT, a knowledge-informed evaluation strategy, this system aligns machine learning outputs with indigenous knowledge systems and cultural rules.

# 🌍 Motivation

Traditional sports embody cultural heritage but lack standardized evaluation systems. Mainstream AI models often fail to capture cultural semantics, stylistic transitions, and rule-based expressivity. ETHNOSCORE-Net bridges computational intelligence with traditional knowledge, enabling fair, explainable, and sustainable AI-driven evaluation.

# 🏗️ Architecture

## ETHNOSCORE-Net (see Figure 1, page 6)
- **Stage 1: Symbolic Encoding** – a temporal CNN encodes raw pose, velocity, and contextual features into symbolic sequences (Eq. 9–10).
- **Stage 2: Relational Reasoning** – a spatio-temporal graph encoder models dependencies among actions and body segments (Eq. 11–12).
- **Stage 3: Hierarchical Temporal Fusion** – BiLSTM + temporal attention aggregates context into a global cultural representation (Eq. 13–15).
- **Stage 4: Cultural Alignment** – a multi-headed decoder maps outputs to cultural expressivity dimensions with manifold regularization (Eq. 16–18).

## CULTURE-ADAPT (see Figures 3 & 4, pages 9–10)
- **Temporal Consensus Filtering** – Gaussian-weighted smoothing to reduce noise (Eq. 19).
- **Symbolic Alignment** – compares symbolic trajectories with expert references via a symbolic kernel (Eq. 20–21).
- **Domain-Specific Rule Embedding** – automaton-based compliance scoring of symbolic rules (Eq. 22–23).
- **Expressivity Mapping** – maps numeric outputs into culturally intelligible categories (Eq. 24).
- **Consensus-Aware Evaluation** – an ensemble of models calibrated on cultural folds, producing composite performance scores (Eq. 25–27).

# 📊 Datasets

Validated on four diverse datasets:
1. **Traditional Sports Performance Metrics** – wrestling, archery, martial arts (reaction time, force, endurance).
2. **Ethnic Sports Athlete Evaluation** – indigenous sports with expert scores + biometrics (stride, HRV, smoothness).
3. **Cultural Sports Skill Assessment Records** – annotated videos with ethnographic commentary.
4. **Indigenous Games Performance Analysis** – tribal sports like stick-fighting & traditional ball games with community scoring.

# 🚀 Results

ETHNOSCORE-Net consistently outperforms YOLOv5, Faster R-CNN, DETR, RetinaNet, and Mask R-CNN:

| Dataset | Accuracy | F1 | AUC |
| :--- | :--- | :--- | :--- |
| Traditional Sports | 91.86% | 89.91% | 93.12% |
| Ethnic Sports | 90.44% | 88.20% | 92.07% |
| Cultural Sports | 90.76% | 88.87% | 92.30% |
| Indigenous Games | 89.42% | 87.84% | 91.45% |

Ablation studies (Tables 3 & 4, pp. 13–14) confirm that the multimodal encoder, temporal fusion, and cultural embedding each play critical roles; removing any module significantly reduces performance.

# ⚙️ Installation

```bash
git clone https://github.com/yourusername/ethnoscore-net.git
cd ethnoscore-net
pip install -r requirements.txt
```

# 📂 Usage

```python
from ethnoscore import EthnoScoreNet, CultureAdapt

# Initialize model
model = EthnoScoreNet(num_classes=10, d_model=128)

# Forward pass
outputs = model(inputs)  # symbolic sequence + cultural scores

# Apply CULTURE-ADAPT evaluation
final_score = CultureAdapt(outputs, expert_refs, cultural_rules)
```

# 🔬 Citation

If you use this framework, please cite:

> Xv Yibing. TradTrainEval: Intelligent algorithms for performance assessment in ethnic traditional sports. *PLoS ONE*, 2025.
# ethnoscore_net.py
# PyTorch >= 2.0

from __future__ import annotations
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple, Callable
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

# ============================================================
# Utilities
# ============================================================

def masked_softmax(x: torch.Tensor, mask: Optional[torch.Tensor], dim: int = -1) -> torch.Tensor:
    """Softmax along `dim` that ignores positions where `mask` == 0."""
    if mask is not None:
        x = x.masked_fill(mask == 0, float("-inf"))
    return F.softmax(x, dim=dim)

☁️ AWS - API Gateway CLI Commands

# ☝️ IMPORTANT
To explore API Gateway, the first thing to know is that there are two versions of the service, each with its own set of commands:
1. **API Gateway v1**: for **REST** APIs. Commands live under `aws apigateway`.
2. **API Gateway v2**: for **HTTP** APIs (more modern and cheaper) and **WebSocket** APIs.
Commands live under `aws apigatewayv2`.

You should therefore start by listing the APIs in each of the two versions to see what you have, as shown below.
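
# List the APIs in Both Versions
As a starting point (assuming configured credentials and region), list both API families:
```bash
# v1 (REST APIs)
aws apigateway get-rest-apis --query "items[].name"

# v2 (HTTP and WebSocket APIs)
aws apigatewayv2 get-apis --query "Items[].Name"
```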

☁️ AWS - CloudFront CLI Commands

# Basic Command
```bash
aws cloudfront list-distributions
```

# Count Distributions
```bash
aws cloudfront list-distributions | jq '.DistributionList.Items | length'
```

# Count Attributes
```bash
aws cloudfront list-distributions | jq '.DistributionList.Items[0] | length'
```

# Display Attributes
```bash
aws cloudfront list-distributions | jq '.DistributionList.Items[0] | keys'
```

Here is a description of the attributes of a CloudFront distribution, organized logically.
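
# Display Key Fields
For a quick overview, a few commonly inspected fields (`Id`, `DomainName`, `Status`, all standard fields of the `list-distributions` output) can be pulled directly:
```bash
aws cloudfront list-distributions \
  --query "DistributionList.Items[].{Id:Id, Domain:DomainName, Status:Status}" \
  --output table
```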

☁️ AWS - S3 CLI Commands

# ☝️ IMPORTANT
A very important distinction for S3 is that there are two sets of commands:

## **`aws s3`**:
**High-level** commands, easy to use, that resemble Linux commands (ls, cp, mv, sync).
Ideal for getting started and for file operations.

## **`aws s3api`**:
**Low-level commands that map directly to S3 API calls.**
They are more verbose and more powerful.

👉 **Necessary for advanced bucket configuration** (see the example below).
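
# Example: Same Listing at Both Levels
For example (assuming configured credentials), the same bucket listing can be done with either command set:
```bash
# High-level: behaves like a directory listing
aws s3 ls

# Low-level: the raw API call, with structured JSON output
aws s3api list-buckets --query "Buckets[].Name"
```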

☁️ AWS - DynamoDB CLI Commands

# List Tables
```bash
aws dynamodb list-tables --query "TableNames" --output table
```

# Inspect a Specific Table
## Basic
```bash
aws dynamodb describe-table \
--table-name <TABLE_NAME>
```

## Count Attributes
```bash
aws dynamodb describe-table \
--table-name <TABLE_NAME> |
jq '.Table | length'
```

## Display Attributes
```bash
aws dynamodb describe-table \
--table-name <TABLE_NAME> |
jq '.Table | keys'
```

Here is a description of the attributes of a DynamoDB table, organized logically.
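
## Query Key Fields
For a quick overview, a few standard fields of the `describe-table` output can be queried directly:
```bash
aws dynamodb describe-table \
--table-name <TABLE_NAME> \
--query "Table.{Name:TableName, Status:TableStatus, Items:ItemCount}"
```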

☁️ AWS - Lambda - CLI Commands

# Basic
```bash
aws lambda list-functions
```

# Count Functions
```bash
aws lambda list-functions --query "length(Functions)"
```

# Count Attributes
```bash
aws lambda list-functions | jq '.Functions[0] | length'
```

# Display Attributes
```bash
aws lambda list-functions | jq '.Functions[0] | keys'
```

# Lambda Function Attributes

| Attribute | Description |
| :--- | :--- |
| **--- Identity and Code ---** | |
| **FunctionName** | 🆔 The **unique name** of the function within your account and region. |
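
# Display Key Fields
A quick way to list those identity fields (both standard `list-functions` output fields) across all functions:
```bash
aws lambda list-functions \
--query "Functions[].{Name:FunctionName, Runtime:Runtime}" \
--output table
```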

☁️ AWS - RDS CLI Commands

# Basic
```bash
aws rds describe-db-instances
```

# Count DB Instances
```bash
aws rds describe-db-instances --query "length(DBInstances)"
```

# Count Attributes
```bash
aws rds describe-db-instances | jq '.DBInstances[0] | length'
```

# Display Attributes
```bash
aws rds describe-db-instances | jq '.DBInstances[0] | keys'
```

# RDS Instances Attributes

| Attribute | Description |
| :--- | :--- |
| **--- Identity and Engine ---** | |
| **DBInstanceIdentifier** | 🆔 The **unique name** you choose for the instance. |
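
# Query Key Fields
To pull the identity fields above (standard `describe-db-instances` output fields) for all instances:
```bash
aws rds describe-db-instances \
--query "DBInstances[].{Id:DBInstanceIdentifier, Engine:Engine}" \
--output table
```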


2197. Replace Non-Coprime Numbers in Array

You are given an array of integers nums. Perform the following steps:

1. Find any two adjacent numbers in nums that are non-coprime. If no such numbers are found, stop the process.
2. Otherwise, delete the two numbers and replace them with their LCM (Least Common Multiple).
3. Repeat this process as long as you keep finding two adjacent non-coprime numbers.

Return the final modified array. It can be shown that replacing adjacent non-coprime numbers in any arbitrary order leads to the same result. The test cases are generated such that the values in the final array are less than or equal to 10^8. Two values x and y are non-coprime if GCD(x, y) > 1, where GCD(x, y) is the Greatest Common Divisor of x and y. For example, [6, 4, 3, 2, 7, 6, 2] becomes [12, 7, 6]: 6 and 4 merge to 12, which then absorbs the 3 and the 2, and the trailing 2 merges into the 6.
/**
 * @param {number[]} nums
 * @return {number[]}
 */
var replaceNonCoprimes = function(nums) {
    // Helper function to compute GCD using Euclidean algorithm
    const gcd = (a, b) => {
        while (b !== 0) {
            let temp = b;
            b = a % b;
            a = temp;
        }
        return a;
    };

    // Helper function to compute LCM.
    // Divide before multiplying so the intermediate value stays within
    // the safe integer range (a * b may exceed Number.MAX_SAFE_INTEGER).
    const lcm = (a, b) => (a / gcd(a, b)) * b;

    const stack = [];

    for (const num of nums) {
        stack.push(num); // Add the current number to the stack
        // Merge the top two elements while they are non-coprime
        while (stack.length > 1) {
            const b = stack[stack.length - 1];
            const a = stack[stack.length - 2];
            if (gcd(a, b) > 1) {
                stack.pop();
                stack.pop();
                stack.push(lcm(a, b));
            } else {
                break;
            }
        }
    }

    return stack;
};

Header-footer-BDI

Go to:
admin/config/bdi/settings --> site Tools
and add the files that must be made public, e.g.:
/en/header-footer.html
/en/header-footer.js

Then go to:
.com/ca/en/__include-links and verify the resources that will be exported

☁️ AWS - Investigating Network

# VPCs
## Counting VPCs
```bash
aws ec2 describe-vpcs --query "length(Vpcs)"
```

## Displaying Attributes
```bash
aws ec2 describe-vpcs | jq '.Vpcs[0] | keys'
```

***
## AWS VPCs Attributes

| Attribute | Description |
| :--- | :--- |
| **VpcId** | 🆔 The unique, immutable identifier of the VPC. It is its "license plate" (e.g., `vpc-0123456789abcdef0`). |
| **CidrBlock** | 🌐 The **primary** IPv4 address range of the VPC, in CIDR notation (e.g., `10.0.0.0/16`). This is the address space from which its subnets are allocated. |
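
## Query Key Fields
Both attributes can be read together (standard `describe-vpcs` output fields):
```bash
aws ec2 describe-vpcs \
--query "Vpcs[].{Id:VpcId, Cidr:CidrBlock}" \
--output table
```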

☁️ 🐧 CloudShell - `.bashrc`

# Organization guide: .bashrc and AWS scripts

## Core principle: keep .bashrc light and fast

The `.bashrc` file runs every time you open a terminal. It should stay fast and contain only the essentials. Think of it as the entryway of your house: you keep the essentials there (coat rack, keys), not your entire workbench.

## Decision table: what goes where?

| Code type | Location | Reason | Example |
|-----------|----------|--------|---------|
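
As a minimal sketch of the principle (the helper-script path `~/scripts/aws-tools.sh` is a placeholder), keep `.bashrc` down to aliases plus a lazy loader, and move heavier AWS helpers into a separate file sourced on demand:
```bash
# ~/.bashrc — essentials only, fast to source on every new shell
alias ll='ls -alF'
export AWS_PAGER=""   # avoid paging CLI output in CloudShell

# Lazy-load heavier AWS helpers on first use instead of at shell startup
aws_tools() {
    source "$HOME/scripts/aws-tools.sh"   # placeholder path
    unset -f aws_tools                    # the sourced file takes over from here
}
```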

IntentGraphNet-CognAlign

# IntentGraphNet-CognAlign: Symbolic and Context-Aware Modeling for Employment Intention Prediction

> This repository provides a research-oriented scaffold based on the paper *"Deep Learning Models for Predicting Employment Intentions of Rural College Students"* (Xiangqian Liu, Lihao Shang, Shengjuan Liu, 2024). The framework integrates **IntentGraphNet**, a symbolic graph encoding model, and **CognAlign**, a context-aware alignment mechanism.

## Motivation

Predicting employment intentions of rural college students is critical for:
- Addressing **structural inequalities** in the labor market.
- Informing **policy and career guidance**.
- Enhancing **social mobility and equitable access** to opportunities.

Conventional models suffer from:
- Rule-based rigidity,
- Shallow statistical learning,
- Deep learning's interpretability and fairness challenges.

Our dual-component framework unifies symbolic reasoning and deep learning to provide **transparent, scalable, and generalizable predictions** across socio-economic contexts.

## Key Components

- **IntentGraphNet**:
  - Constructs personalized semantic graphs over symbolic nodes (e.g., family income, motivation, orientation).
  - Uses attention-based message passing and disentangled representation learning.
  - Produces transparent, interpretable embeddings of student attributes.
- **CognAlign**:
  - Projects symbolic embeddings into contextual manifolds.
  - Uses adversarial domain adaptation and entropy-aware prediction.
  - Ensures alignment across regions and fairness across demographic groups.
```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphEncoder(nn.Module):
    """Multimodal Encoder + Graph message passing (IntentGraphNet, Sec. 3.3)."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden_dim)
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.update = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x, adj):
        """
        x:   [N, in_dim]  symbolic node features
        adj: [N, N]       adjacency matrix (1 = edge, 0 = no edge)
        """
        # Minimal completion of the truncated forward pass: attention-based
        # message passing as described in the README (Sec. 3.3).
        h = self.embed(x)                                       # [N, hidden]
        n = h.size(0)
        # Attention logits for every ordered node pair (i, j)
        hi = h.unsqueeze(1).expand(n, n, -1)
        hj = h.unsqueeze(0).expand(n, n, -1)
        e = self.attn(torch.cat([hi, hj], dim=-1)).squeeze(-1)  # [N, N]
        e = e.masked_fill(adj == 0, float("-inf"))
        alpha = torch.nan_to_num(F.softmax(e, dim=-1))          # isolated nodes -> 0
        msg = alpha @ h                                         # aggregate neighbor messages
        return F.relu(h + self.update(msg))                     # residual node update
```

LinguaSphere

# LinguaSphere: Speech-Interactive 3D Language Learning with Adaptive Feedback

> A research-grade skeleton implementation inspired by the paper *"Speech-Interactive 3D Game Architecture for Language Learning with Feedback Mechanisms"* (Meizi Zhang, Northeastern University). The code organizes core modules (intent classification, grammar-aware parsing, symbolic execution, semantic anchoring, and adaptive curriculum) in a clean, extensible Python package suitable for prototyping and integration with a 3D engine.

## Why this project?

Traditional CALL tools rely on static drills and pre-scripted flows. LinguaSphere treats **language as action** inside a simulated world: utterances map to in-game objects, agents, and tasks; feedback loops are **multimodal** (visual, auditory, textual) and **adaptive** to proficiency. (See the paper's Section 3.3 "LinguaSphere" and Figure 1 for the modular architecture.)

## Key Features (scaffold)

- **Hybrid Communication Layer**: intent classification → grammar-aware parsing → symbolic execution.
- **Semantic Anchoring**: phrases are grounded to objects/actions/goals in the world.
- **Adaptive Curriculum Control**: tracks mastery over constructs and schedules tasks to target weaknesses.
- **Feedback Loop**: generates context-sensitive corrective tuples `(highlight, corrected_expression, explanation)`.

> This repo provides a runnable *skeleton* with clean interfaces and mock logic so you can plug in ASR/NLP, or connect to Unity/Unreal.
```python
# model.py
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Dict, List, Tuple, Optional, Any
import math
import random

# -----------------------------
# Data structures
# -----------------------------

@dataclass
class WorldObject:
    obj_id: str
    cls: str
    affordances: List[str]
    linguistic_tags: List[str]
    semantic_anchors: List[str]
    state: Dict[str, Any] = field(default_factory=dict)

@dataclass
class FeedbackTuple:
    # The class definition was truncated here; this completion follows the
    # README's corrective tuple `(highlight, corrected_expression, explanation)`.
    highlight: str
    corrected_expression: str
    explanation: str
```

AeroPerceptNet + GeoNarrative Alignment

# AeroPerceptNet + GeoNarrative Alignment

This repository provides a minimal PyTorch implementation of the framework introduced in **"Multi-Source Fusion Architecture for Intelligent Evaluation of Low-Altitude Tourism Experience"** by *Lina Fu, Yanlong Fu, Hua Su, and Yan Wang*.

---

## Highlights

- **AeroPerceptNet:** hybrid neural-symbolic model integrating:
  - **Multimodal Fusion Encoding**: fuses panoramic visual features, geospatial semantics, and geometric flight trajectories.
  - **Graphical Temporal Aggregation**: BiGRU-based sequence modeling with attention pooling for a coherent route-level representation.
  - **Interpretable Local Scoring**: localized perceptual scores mapped to trajectory coordinates.
- **GeoNarrative Alignment (GNA):**
  - Captures **semantic cohesion** across route segments.
  - Detects **thematic discontinuities** and clusters.
  - Measures **symbol–perception synchrony** (aligns salience with symbolic meaning).
- Outperforms SOTA baselines (OC-SVM, Isolation Forest, DAGMM, DeepSVDD, TranAD) by **3–5%** in Accuracy/F1 on four benchmark datasets.
```python
# model.py
# AeroPerceptNet + GeoNarrative Alignment (simplified PyTorch version)
# Author: your-name
# License: MIT

import torch
import torch.nn as nn
import torch.nn.functional as F


# ----------------------------
# Multimodal Fusion Encoding
# ----------------------------
class FusionEncoder(nn.Module):
    def __init__(self, input_dim=64, latent_dim=64):
        super().__init__()
        self.vis_proj = nn.Linear(input_dim, latent_dim)
        self.sem_proj = nn.Linear(input_dim, latent_dim)
        self.geo_proj = nn.Linear(input_dim, latent_dim)
        # Completion of the truncated encoder (a sketch): per-modality projections
        # are concatenated and fused, matching the README's three input streams.
        self.fuse = nn.Linear(3 * latent_dim, latent_dim)

    def forward(self, vis, sem, geo):
        h = torch.cat(
            [self.vis_proj(vis), self.sem_proj(sem), self.geo_proj(geo)], dim=-1
        )
        return F.relu(self.fuse(h))
```