SRE-CAPS

# SRE-CAPS: Structured Recovery Encoder with Curriculum-Aligned Progression Strategy

This repository implements the framework proposed in:

**"Multimodal Data Fusion for Predicting Functional Recovery in Patients with Low Back Pain" by Shaojuan Tian, Xiaoxia Fang, Zhendan Xu, and Rui Shi**

## Overview

Functional recovery prediction for **low back pain (LBP) patients** is challenging due to heterogeneous data, variability in recovery patterns, and lack of interpretability in existing models. **SRE-CAPS** introduces a two-part framework:

- **Structured Recovery Encoder (SRE):** Encodes multimodal biomechanical and clinical data into a continuous latent manifold using BiGRUs and neural ODEs, ensuring temporal smoothness and clinical interpretability.
- **Curriculum-Aligned Progression Strategy (CAPS):** A training strategy that enforces stage-aware recovery constraints, monotonic progression, and clinically grounded forecasting.

### Key Features

- **Multimodal Temporal Encoding** with BiGRUs
- **Neural ODE Latent Dynamics** for recovery trajectory modeling
- **Clinical Decoder** mapping latent states to interpretable outcomes
- **Stage-aware Constraints** with intra/inter-stage regularization
- **Forecast-aware Penalization** for clinically plausible predictions
- **Improved interpretability and patient-specific adaptation**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchdiffeq import odeint


class TemporalEncoder(nn.Module):
    """BiGRU-based encoder for multimodal temporal data."""
    def __init__(self, input_dim, hidden_dim, latent_dim):
        super().__init__()
        self.bigru = nn.GRU(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, latent_dim)

    def forward(self, x):
        # x: [batch, seq_len, input_dim]
        h, _ = self.bigru(x)
        # Project the bidirectional hidden states into the latent space
        return self.fc(h)
```
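The snippet ends before the neural-ODE component appears. Below is a rough sketch of how the encoder's latent state could be evolved with `torchdiffeq` (the class name `LatentODEDynamics` and the MLP vector field are illustrative assumptions, not the paper's exact design):

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint


class LatentODEDynamics(nn.Module):
    """Sketch: evolve the latent recovery state z(t) as dz/dt = f(z)."""
    def __init__(self, latent_dim):
        super().__init__()
        # Assumed vector field: a small MLP defining dz/dt
        self.f = nn.Sequential(
            nn.Linear(latent_dim, latent_dim),
            nn.Tanh(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, t, z):
        # torchdiffeq passes (t, z); this field is time-invariant
        return self.f(z)


def rollout(dynamics, z0, timestamps):
    """Integrate z0 to each follow-up time; returns [T, batch, latent_dim]."""
    return odeint(dynamics, z0, timestamps)
```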

LearnFusionNet

# LearnFusionNet: Multimodal Transformer for Student Learning Behavior Profiling

This repository implements the multimodal transformer-based framework proposed in:

**"LearnFusionNet: Multimodal Transformer for Student Learning Behavior Profiling and Smart Teaching Management" by Lefei Xu, Zhi Fang, and Jingjing Lan**

## Overview

Traditional methods of student behavior profiling rely on unimodal data (e.g., logs, video), which often fail to capture the multifaceted dynamics of learning engagement. **LearnFusionNet** introduces a multimodal transformer framework that integrates:

- **Textual interactions**
- **Visual cues**
- **Behavioral logs**

to generate comprehensive student profiles and support smart teaching management.

### Key Features

- **Latent Multi-Agent Encoding (LME):** Compresses heterogeneous student/teacher states into a shared latent space.
- **Structure-Aware Temporal Dynamics (STD):** Captures peer influence, policy effects, and temporal evolution.
- **Constraint-Driven Latent Control:** Ensures fairness, equity, and compliance with institutional constraints.
- **Goal-Aligned Recursive Intervention Planning (GRIP):** Optimizes policies for engagement and learning outcomes.
- **Fairness-Aware Planning:** Integrates demographic fairness into decision making.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentMultiAgentEncoding(nn.Module):
    """Latent Multi-Agent Encoding (LME) module."""
    def __init__(self, input_dim, latent_dim):
        super().__init__()
        self.encoder = nn.Linear(input_dim, latent_dim)
        self.decoder = nn.Linear(latent_dim, input_dim)

    def forward(self, x):
        z = F.relu(self.encoder(x))
        recon = self.decoder(z)
        return z, recon
```
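The snippet stops at LME; the transformer fusion itself is not shown. Below is a minimal sketch of how the per-modality latents might be fused with a standard `nn.TransformerEncoder` (class name, token layout, and hyperparameters are assumptions):

```python
import torch
import torch.nn as nn


class MultimodalFusion(nn.Module):
    """Sketch: fuse text/visual/log latents with a shared transformer."""
    def __init__(self, latent_dim, num_heads=4, num_layers=2):
        super().__init__()
        # latent_dim is assumed divisible by num_heads
        layer = nn.TransformerEncoderLayer(
            d_model=latent_dim, nhead=num_heads, batch_first=True
        )
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, text_z, visual_z, log_z):
        # One token per modality: [batch, 3, latent_dim]
        tokens = torch.stack([text_z, visual_z, log_z], dim=1)
        fused = self.transformer(tokens)
        # Mean-pool modality tokens into one student profile vector
        return fused.mean(dim=1)
```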

CPE-CARN

# CPE-CARN: Graph Neural Network Framework for Postoperative Complication Forecasting

This repository implements the two-stage framework introduced in:

**"Application of Graph Neural Networks in Identifying Postoperative Complications after Artificial Joint Replacement" by Xiao Yun, Liping Feng, and Zhizhong Tian**

## Overview

Postoperative complication identification after joint replacement is critical for patient recovery and long-term implant success. Traditional methods often fail to capture the **temporal dynamics** and **inter-complication dependencies** required in clinical risk prediction. This work proposes:

- **CPE (Complication Progression Encoder):** Encodes postoperative trajectories using gated temporal convolutions, hierarchical attention, and complication-specific latent embeddings.
- **CARN (Context-Aware Risk Navigation):** Provides context-sensitive forecasting through temporal propagation, adaptive realignment with new observations, and evidence fusion across multimodal inputs.

Together, **CPE + CARN** deliver accurate, interpretable, and adaptive forecasting of postoperative risks.

## Key Features

- Temporal encoding with dilated convolutions and sinusoidal positional embeddings.
- Multi-head complication-specific decoders with correlation calibration.
- Regularized objective combining binary cross-entropy, co-occurrence penalties, and parameter norms.
- Context-sensitive inference with conditional propagation, adaptive realignment, and multimodal evidence fusion.
- Prioritization of high-severity risks through selective calibration.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalConvEncoder(nn.Module):
    """Temporal convolution + positional encoding for patient trajectories."""
    def __init__(self, input_dim, hidden_dim, num_layers=3, kernel_size=3):
        super().__init__()
        self.input_fc = nn.Linear(input_dim, hidden_dim)
        self.convs = nn.ModuleList([
            nn.Conv1d(hidden_dim, hidden_dim, kernel_size, padding="same", dilation=2**i)
            for i in range(num_layers)
        ])

    def forward(self, x):
        # x: [batch, seq_len, input_dim]
        h = self.input_fc(x)
        # Conv1d expects [batch, channels, seq_len]
        h = h.transpose(1, 2)
        for conv in self.convs:
            # Residual dilated convolution over the trajectory
            h = h + F.relu(conv(h))
        return h.transpose(1, 2)
```
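The key features mention sinusoidal positional embeddings, which the snippet never reaches. A minimal sketch of the standard fixed sinusoidal encoding follows (the class name and the additive combination with the hidden states are assumptions):

```python
import math

import torch
import torch.nn as nn


class SinusoidalPositionalEncoding(nn.Module):
    """Sketch: fixed sinusoidal positions added to the hidden states."""
    def __init__(self, hidden_dim, max_len=512):
        super().__init__()
        position = torch.arange(max_len).unsqueeze(1).float()
        div_term = torch.exp(
            torch.arange(0, hidden_dim, 2).float() * (-math.log(10000.0) / hidden_dim)
        )
        pe = torch.zeros(max_len, hidden_dim)
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe)  # fixed, not a learnable parameter

    def forward(self, x):
        # x: [batch, seq_len, hidden_dim] (hidden_dim assumed even)
        return x + self.pe[: x.size(1)]
```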

CANE-RCLIS

# CANE-RCLIS: Postoperative Complication Identification with GNNs

This repository implements the methods proposed in:

**"Graph Neural Network-Based Research on Postoperative Complication Identification in Artificial Joint Replacement" by Xiao Yun, Liping Feng, and Zhizhong Tian**

## Overview

Accurate identification of complications after joint replacement is critical for patient safety and recovery. Traditional methods fail to capture complex temporal and relational patterns in multimodal clinical data. This project introduces:

- **CANE (Complication-Aware Neural Estimator):** A hybrid neural framework combining latent temporal dynamics, structured variational inference, and multi-label prediction.
- **RCLIS (Risk-Conditioned Latent Intervention Strategy):** A counterfactual training paradigm that enhances sensitivity to rare but high-risk complications by perturbing latent health trajectories.

Together, CANE + RCLIS leverage **graph neural networks (GNNs)**, structured priors, and attention mechanisms to provide robust, interpretable, and clinically actionable predictions.

## Key Features

- Variational recurrent encoders for latent health dynamics.
- Multi-label complication prediction with CRF-based dependency modeling.
- Temporal calibration for consistent longitudinal risk estimation.
- Counterfactual latent interventions to model rare events.
- Adversarial alignment for clinically plausible simulations.

## Project Structure

```
.
├── model.py       # Implementation of CANE + RCLIS
├── README.md      # Project documentation
├── data/          # Placeholder for datasets
└── experiments/   # Training and evaluation scripts
```
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TemporalEncoder(nn.Module):
    """GRU-based temporal encoder for patient clinical sequences."""
    def __init__(self, input_dim, hidden_dim):
        super(TemporalEncoder, self).__init__()
        self.fc_in = nn.Linear(input_dim, hidden_dim)
        self.gru = nn.GRU(hidden_dim, hidden_dim, batch_first=True)

    def forward(self, x):
        # x: [batch, seq_len, input_dim]
        x = F.relu(self.fc_in(x))
        # Per-timestep hidden states summarizing the clinical sequence
        h, _ = self.gru(x)
        return h
```
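The README lists variational recurrent encoders, but the snippet truncates before any variational component. Here is a minimal sketch of a Gaussian latent head over the GRU states using the standard reparameterization trick (class name and layout are assumptions):

```python
import torch
import torch.nn as nn


class VariationalHead(nn.Module):
    """Sketch: map GRU states to a Gaussian latent via reparameterization."""
    def __init__(self, hidden_dim, latent_dim):
        super().__init__()
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization: z = mu + sigma * eps, with eps ~ N(0, I)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)
        return z, mu, logvar
```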

KINETIX-BIOALIGN

# KINETIX + BIOALIGN: Athlete Action Recognition and Technical Evaluation

This repository implements the deep learning framework proposed in the paper:

**"Research on action pattern recognition and technical evaluation method of track and field athletes based on deep learning" by Jiaxian Wu**

## Overview

Traditional evaluation of athletic performance relies on manual observation and handcrafted biomechanical models. This project introduces a **transformer-based spatiotemporal architecture (KINETIX)** combined with a **biomechanical alignment strategy (BIOALIGN)** to deliver robust and interpretable performance analysis.

### Key Features

- **Motion-Aware Encoding:** Embeds joint positions, velocities, and accelerations.
- **Transformer Backbone:** Captures long-range dependencies in motion sequences.
- **Biomechanical Integration:** Enforces torque and angular momentum constraints for physical plausibility.
- **BIOALIGN Strategy:** Aligns predictions with expert templates and optimizes energy efficiency.
- **Datasets:** Evaluated on Kinetics-700, Human3.6M, UCF101, and PoseTrack.

## Project Structure

```
.
├── model.py       # Core implementation of KINETIX + BIOALIGN
├── README.md      # Documentation
├── data/          # Placeholder for datasets
└── experiments/   # Training and evaluation scripts
```

## Installation

```bash
git clone https://snippets.cacher.io/snippet/ae11dd685fae35590680
cd kinetix-bioalign-athlete-evaluation
pip install -r requirements.txt
```
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MotionAwareEncoding(nn.Module):
    """Embed pose, velocity, and acceleration into a unified token."""
    def __init__(self, input_dim, hidden_dim):
        super(MotionAwareEncoding, self).__init__()
        self.fc = nn.Linear(input_dim * 3, hidden_dim)

    def forward(self, pose, velocity, acceleration):
        # Concatenate [x, v, a] -> motion-aware token
        z = torch.cat([pose, velocity, acceleration], dim=-1)
        return F.relu(self.fc(z))
```
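The transformer backbone named in the key features is not included in the snippet. A minimal sketch follows (class name `KinetixBackbone`, the mean-pooling choice, and the hyperparameters are assumptions):

```python
import torch
import torch.nn as nn


class KinetixBackbone(nn.Module):
    """Sketch: transformer over motion tokens plus an action classifier."""
    def __init__(self, hidden_dim, num_classes, num_heads=8, num_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True
        )
        self.backbone = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens):
        # tokens: [batch, seq_len, hidden_dim] from MotionAwareEncoding
        h = self.backbone(tokens)
        # Mean-pool over time, then classify the action
        return self.classifier(h.mean(dim=1))
```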

Generic Pagination Logic (Next.js)

### 1. Required Inputs

- **Total item count**: how many items exist in total (e.g., 100)
- **Items per page**: how many items to display per page (e.g., 3)
- **Current page number**: which page is currently being viewed (e.g., page 5)

### 2. Derived Values

```javascript
/* Total pages = total items ÷ items per page (rounded up) */
const totalPages = Math.ceil(totalItems / itemsPerPage);
// e.g., Math.ceil(100 / 3) = Math.ceil(33.333...) = 34 pages

/* Start index = (current page - 1) × items per page */
const startIndex = (currentPage - 1) * itemsPerPage;
// e.g., page 5: (5 - 1) × 3 = 12 (display starts at the 13th item)

/* End index = start index + items per page (capped at the total item count) */
const endIndex = Math.min(startIndex + itemsPerPage, totalItems);
// e.g., page 5: Math.min(12 + 3, 100) = 15 (display up to the 15th item)
```

URL Detection in Next.js

import { NextResponse } from "next/server";
import type { NextRequest } from "next/server";

export function middleware(request: NextRequest) {
  /*
  If src/ is aliased, middleware.ts must be placed directly under src/,
  not at the project root.
  */
  const requestHeaders = new Headers(request.headers);
  requestHeaders.set("x-pathname", request.nextUrl.pathname);

  return NextResponse.next({
    request: {
      headers: requestHeaders,
    },
  });
}

export const config = {
  // Skip API routes and Next.js static assets (standard matcher pattern)
  matcher: "/((?!api|_next/static|_next/image|favicon.ico).*)",
};

1935. Maximum Number of Words You Can Type

There is a malfunctioning keyboard where some letter keys do not work. All other keys on the keyboard work properly. Given a string text of words separated by a single space (no leading or trailing spaces) and a string brokenLetters of all distinct letter keys that are broken, return the number of words in text you can fully type using this keyboard.
/**
 * @param {string} text
 * @param {string} brokenLetters
 * @return {number}
 */
var canBeTypedWords = function(text, brokenLetters) {
    // Step 1: Split the input text into individual words
    const words = text.split(" ");

    // Step 2: Convert brokenLetters into a Set for fast lookup
    const brokenSet = new Set(brokenLetters);

    // Step 3: Initialize a counter for typeable words
    let count = 0;

    // Step 4: Loop through each word in the text
    for (let word of words) {
        // A word is typeable if none of its letters is broken
        let typeable = true;
        for (let ch of word) {
            if (brokenSet.has(ch)) {
                typeable = false;
                break;
            }
        }
        if (typeable) count++;
    }

    // Step 5: Return the number of fully typeable words
    return count;
};

Generate a Blank Image

Add-Type -AssemblyName System.Drawing
$width = 300
$height = 200
$bitmap = New-Object System.Drawing.Bitmap $width, $height
$graphics = [System.Drawing.Graphics]::FromImage($bitmap)
$graphics.Clear([System.Drawing.Color]::White)
$bitmap.Save("C:\blank.png", [System.Drawing.Imaging.ImageFormat]::Png)
$graphics.Dispose()
$bitmap.Dispose()

ChatGPT Gratuit: a digital revolution within reach

In today's digital world, artificial intelligence occupies an increasingly central place. Among the tools transforming the way we work, learn, and communicate, ChatGPT Gratuit stands out as an essential solution. Accessible online, without a costly subscription, it offers individuals and professionals alike a new way to interact with technology.

What is ChatGPT Gratuit?

ChatGPT Gratuit is an advanced language model, capable of generating

Audit serial number against Active Directory

$CommandText = @"
SELECT
	[t1].[computer_id]
	, [t1].[name]
    , [t1].[serial_num]
    , [t1].[last_inventory]
	, [GS] = (SELECT TOP 1 UPPER([hostname]) FROM mmsettings)
FROM
	[computer] [t1]
"@

$GSSQLData = @()
$GSServers = Get-GSServers
foreach ($GSServer in $GSServers) {
    $GSSQLData += Get-GSSQLData -Instance "$GSServer\SQLEXPRESS" -Database "eXpress" -CommandText $CommandText
}
$GSSQLDataByGS = $GSSQLData | Sort-Object -Property GS
 
if ( ($GSSQLDataByGS | Measure-Object).Count -gt 0 ) {
    # Hypothetical continuation: emit the collected rows for review
    $GSSQLDataByGS | Format-Table -AutoSize
}

Identify serial devices tty

# How to identify serial device tty
[Source](https://superuser.com/a/1095432/393146)

You can check the current tty connection configuration using `cat`.

```
# cat /proc/tty/driver/serial
serinfo:1.0 driver revision:
0: uart:16550A port:000003F8 irq:4 tx:0 rx:0
1: uart:16550A port:000002F8 irq:3 tx:111780 rx:1321 RTS|DTR|DSR
2: uart:unknown port:000003E8 irq:4
3: uart:unknown port:000002E8 irq:3
```

* A line with uart:unknown means: nothing detected (and likely not existent)
* CTS, DSR (and DCD, RI) are control signals driven by the attached device, so if any of them appear on a line, something is likely connected to that port
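For convenience, here is a small Python sketch (not part of the source answer) that parses `/proc/tty/driver/serial` and prints only the ports where a UART was detected; the parsing assumes the format shown above and typically requires root:

```python
# Sketch: list serial ports whose UART type was detected.
with open("/proc/tty/driver/serial") as f:
    lines = f.read().splitlines()[1:]  # skip the "serinfo" header line

for line in lines:
    index, rest = line.split(":", 1)
    # Turn "uart:16550A port:000003F8 irq:4 ..." into a dict
    fields = dict(item.split(":", 1) for item in rest.split() if ":" in item)
    if fields.get("uart", "unknown") != "unknown":
        print(f"/dev/ttyS{index.strip()} -> uart {fields['uart']}")
```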

☁️ AWS VPC Learning

# VPC and Subnets
## The Big Picture
A VPC is like an entire floor rented in the enormous AWS skyscraper.
That floor is completely isolated from the other tenants; nobody can enter or leave it without authorization.

## Subnets
An empty floor is not very useful. We will want to fit out rooms for different functions.
That is where **subnets** come in.
A subnet is simply a subdivision of a VPC, a bit like one or more rooms on the floor.

#

brew user security

#!/bin/bash
# setup-brew-system-user.sh - Complete production-ready setup for _brew system user

set -e  # Exit on any error
set -u  # Treat unset variables as error

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Logging functions
log_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
log_success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
log_warning() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }

966. Vowel Spellchecker

Given a wordlist, we want to implement a spellchecker that converts a query word into a correct word. For a given query word, the spell checker handles two categories of spelling mistakes:

- Capitalization: If the query matches a word in the wordlist (case-insensitive), then the query word is returned with the same case as the case in the wordlist.
  - Example: wordlist = ["yellow"], query = "YellOw": correct = "yellow"
  - Example: wordlist = ["Yellow"], query = "yellow": correct = "Yellow"
  - Example: wordlist = ["yellow"], query = "yellow": correct = "yellow"
- Vowel Errors: If after replacing the vowels ('a', 'e', 'i', 'o', 'u') of the query word with any vowel individually, it matches a word in the wordlist (case-insensitive), then the query word is returned with the same case as the match in the wordlist.
  - Example: wordlist = ["YellOw"], query = "yollow": correct = "YellOw"
  - Example: wordlist = ["YellOw"], query = "yeellow": correct = "" (no match)
  - Example: wordlist = ["YellOw"], query = "yllw": correct = "" (no match)

In addition, the spell checker operates under the following precedence rules:

- When the query exactly matches a word in the wordlist (case-sensitive), you should return the same word back.
- When the query matches a word up to capitalization, you should return the first such match in the wordlist.
- When the query matches a word up to vowel errors, you should return the first such match in the wordlist.
- If the query has no matches in the wordlist, you should return the empty string.

Given some queries, return a list of words answer, where answer[i] is the correct word for query = queries[i].
/**
 * @param {string[]} wordlist
 * @param {string[]} queries
 * @return {string[]}
 */
var spellchecker = function(wordlist, queries) {
    // Helper function to normalize vowels in a word
    const normalizeVowels = (word) => {
        return word.toLowerCase().replace(/[aeiou]/g, '*');
    };

    // Step 1: Build lookup structures (first occurrence wins for each key)
    const exactWords = new Set(wordlist);
    const caseInsensitiveMap = new Map();
    const vowelErrorMap = new Map();
    for (const word of wordlist) {
        const lower = word.toLowerCase();
        if (!caseInsensitiveMap.has(lower)) caseInsensitiveMap.set(lower, word);
        const key = normalizeVowels(word);
        if (!vowelErrorMap.has(key)) vowelErrorMap.set(key, word);
    }

    // Step 2: Answer queries by precedence: exact > case > vowel > ""
    return queries.map((q) => {
        if (exactWords.has(q)) return q;
        const lower = q.toLowerCase();
        if (caseInsensitiveMap.has(lower)) return caseInsensitiveMap.get(lower);
        return vowelErrorMap.get(normalizeVowels(q)) ?? "";
    });
};

K8S | MariaDB Galera

# ===============================
# Secrets for MariaDB and MaxScale
# ===============================
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-root-secret
  namespace: mariadb-ha
stringData:
  password: aaaaaaPDaA8fKDv

# ===============================
# MySQL Configuration (ConfigMap)
# ===============================

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: mariadb-config
  namespace: mariadb-ha
data:
  my.cnf: |
    [mariadb]
    skip-name-resolve
    
    max_allowed_packet = 256M