A sample showing why frontend-only validation is not enough

<form id="purchase-form">
  <label>
    Quantity (max 2):
    <input type="number" id="quantity" name="quantity" min="1" max="2" required />
  </label>
  <button type="submit">Purchase</button>
</form>

<script>
  document.getElementById("purchase-form").addEventListener("submit", async (e) => {
    e.preventDefault();

    const quantity = parseInt(document.getElementById("quantity").value, 10);

    // Client-side validation
    if (quantity > 2) {
      alert("You can purchase at most 2 items");
      return;
    }

    // Send to the API (the endpoint path here is illustrative)
    const res = await fetch("/api/purchase", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ quantity }),
    });
    console.log(await res.json());

    // This client-side check (and the min/max attributes) can be bypassed
    // with DevTools or by calling the API directly, so the server must
    // validate the quantity again.
  });
</script>

php artisan tinker - add user

Open PHP Artisan Tinker
```bash
php artisan tinker
```

```php
use App\Models\User;
User::create([
    'name' => 'test',
    'email' => 'test@testuser.com',
    'password' => bcrypt('0QcrNCKSQ7uuay2@'),
]);

```

🚧 Ingester

## Example URL to request
https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata_2025-01.parquet

## `__init__` parameters
BASE_URL = 'https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata'
YEAR = '2025'
DATA_DIR = 🚧 to be defined as a path relative to the project root

🚧 The `data` folder may also need to be created (in `__init__` or in a
separate script?); a sketch of one option follows below.
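
A minimal sketch of what this could look like, assuming a class named `Ingester`, a `data/` directory resolved relative to the project root, and folder creation handled in `__init__` (the class name, method names, and placement are illustrative choices, not decisions):

```python
from pathlib import Path

BASE_URL = 'https://d37ci6vzurychx.cloudfront.net/trip-data/yellow_tripdata'
YEAR = '2025'


class Ingester:
    """Builds download URLs for the monthly Parquet files (sketch only)."""

    def __init__(self, base_url=BASE_URL, year=YEAR, data_dir=None):
        self.base_url = base_url
        self.year = year
        # Resolve data/ relative to the project root, here assumed to be the
        # parent of the directory containing this file; adjust to the real layout.
        project_root = Path(__file__).resolve().parent.parent
        self.data_dir = Path(data_dir) if data_dir else project_root / 'data'
        # Creating the folder here is one answer to the open question above;
        # it could also live in a separate setup script.
        self.data_dir.mkdir(parents=True, exist_ok=True)

    def url_for(self, month: str) -> str:
        # e.g. month='01' -> .../yellow_tripdata_2025-01.parquet
        return f"{self.base_url}_{self.year}-{month}.parquet"
```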

3350. Adjacent Increasing Subarrays Detection II

Given an array nums of n integers, your task is to find the maximum value of k for which there exist two adjacent subarrays of length k each, such that both subarrays are strictly increasing. Specifically, check if there are two subarrays of length k starting at indices a and b (a < b), where:

- Both subarrays nums[a..a + k - 1] and nums[b..b + k - 1] are strictly increasing.
- The subarrays must be adjacent, meaning b = a + k.

Return the maximum possible value of k. A subarray is a contiguous non-empty sequence of elements within an array.
/**
 * @param {number[]} nums
 * @return {number}
 */
var maxIncreasingSubarrays = function (nums) {
    let maxK = 0;       // Stores the maximum valid k found so far
    let curLen = 1;     // Length of the current strictly increasing run
    let prevLen = 0;    // Length of the previous strictly increasing run

    for (let i = 1; i < nums.length; i++) {
        if (nums[i] > nums[i - 1]) {
            // Continue the current increasing run
            curLen++;
        } else {
            // The run broke: the finished run becomes the previous run
            prevLen = curLen;
            curLen = 1;
        }

        // Two adjacent increasing subarrays of length k fit either entirely
        // inside the current run (k = floor(curLen / 2)) or across the
        // boundary of the previous and current runs (k = min(prevLen, curLen)).
        maxK = Math.max(maxK, Math.floor(curLen / 2), Math.min(prevLen, curLen));
    }

    return maxK;
};

START_END

import pandas as pd
from datetime import datetime, timedelta
import yfinance as yf

# Define the end date
end = pd.Timestamp.today().strftime('%Y-%m-%d')

# Calculate the start date (20 years before the end date)
no_years = 20
start = (datetime.strptime(end, '%Y-%m-%d') - timedelta(days=no_years*365)).strftime('%Y-%m-%d')

# Generate the date range
date_range = pd.date_range(start, end, freq='D')

print(date_range, '\n\n')

tickers = ['SPY', 'MDY']
data = yf.download(tickers, start=start, end=end)

print(data.head())

Decompose Flow

If the task is of type **decomp**

![](https://cdn.cacher.io/attachments/u/3kcbpjvt3jkry/PNj6DgUJ0EqxM_I_aaMwYKXgEuvfRa3G/wq1oqnqid.png)

- a new development task must be created, containing the description of the work.
- estimate the task yourself and hand it over to QA for estimation by moving it to the Requirements Review (RR) status.
QA will then move it to Ready to Develop.

### Linking
- On the decomposition task, link the development task as a child
- On the development task, link the decomposition task as its parent
- The task

🛠️ Setting Up Pre-commit Hooks with UV

# Setting Up Pre-commit Hooks with UV

Pre-commit hooks automatically check your code before each commit, catching issues early and enforcing consistent code quality.

## Installation

Add pre-commit as a development dependency:

```bash
uv add --dev pre-commit
```

## Configuration

Create `.pre-commit-config.yaml` in your project root:

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files
```
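
## Installing the Hooks

With the config in place, install the hook scripts into the repository and, optionally, run them once against the whole codebase. These are the standard `pre-commit` commands, run here through `uv run` so they use the project environment:

```bash
uv run pre-commit install
uv run pre-commit run --all-files
```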

git

# Show configured remotes
git remote -v

# Disconnect from the remote
git remote remove origin

cartopy

import os
import sys

import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.io.shapereader import Reader

# Option 1: shapefile directory from the project config
sys.path.insert(0, "/data8/xuyf/Project/shouxian")
from configs import MAP_DIR
sheng = Reader(os.path.join(MAP_DIR, 'sheng.shp'))

# Option 2: read the provincial boundary shapefile directly
BDY_DIR = "/data8/xuyf/Data/Static/boundary/GS(2024)0650-SHP"
sheng = Reader(os.path.join(BDY_DIR, 'sheng.shp'))

fig = plt.figure(figsize=(12, 8), dpi=300)
ax = fig.subplots(1, 1, subplot_kw={'projection': ccrs.PlateCarree()})

ax.add_feature(cfeature.COASTLINE, linewidth=0.5)
ax.add_geometries(sheng.geometries(), crs=ccrs.PlateCarree(),
                  facecolor='none', edgecolor='black', linewidth=0.8)

Mount/Dismount Registry Hive File

function Mount-RegistryHive {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory = $true, Position = 0, ValueFromPipeline = $true, ValueFromPipelineByPropertyName = $true)]
        $KeyName
        ,
        [Parameter(Mandatory = $true, Position = 1, ValueFromPipeline = $true, ValueFromPipelineByPropertyName = $true)]
        $FileName
    )

    begin {
            Add-Type -Name LoadHive -NameSpace RegistryHelper -MemberDefinition @"

[DllImport("advapi32.dll", SetLastError = true)]
public static extern int RegLoadKey(uint hKey, string lpSubKey, string lpFile);
"@
    }

    process {
        # 0x80000002 = HKEY_LOCAL_MACHINE. RegLoadKey requires the calling
        # process to hold the SE_BACKUP_NAME and SE_RESTORE_NAME privileges;
        # it returns 0 (ERROR_SUCCESS) on success.
        $result = [RegistryHelper.LoadHive]::RegLoadKey([uint32]0x80000002, $KeyName, $FileName)
        if ($result -ne 0) {
            Write-Error "RegLoadKey failed with error code $result."
        }
    }
}

3349. Adjacent Increasing Subarrays Detection I

Given an array nums of n integers and an integer k, determine whether there exist two adjacent subarrays of length k such that both subarrays are strictly increasing. Specifically, check if there are two subarrays starting at indices a and b (a < b), where:

- Both subarrays nums[a..a + k - 1] and nums[b..b + k - 1] are strictly increasing.
- The subarrays must be adjacent, meaning b = a + k.

Return true if it is possible to find two such subarrays, and false otherwise.
/**
 * @param {number[]} nums
 * @param {number} k
 * @return {boolean}
 */
var hasIncreasingSubarrays = function(nums, k) {
    // Helper function to check if a subarray is strictly increasing
    function isStrictlyIncreasing(start, end) {
        for (let i = start; i < end; i++) {
            if (nums[i] >= nums[i + 1]) {
                return false; // Not strictly increasing
            }
        }
        return true;
    }

    // Two adjacent subarrays of length k need 2k elements in total
    if (2 * k > nums.length) return false;

    // For each start a, the second subarray starts at b = a + k
    for (let a = 0; a + 2 * k <= nums.length; a++) {
        if (isStrictlyIncreasing(a, a + k - 1) &&
            isStrictlyIncreasing(a + k, a + 2 * k - 1)) {
            return true;
        }
    }

    return false;
};

Vanilla Stats CountUp

Using a Matrix list and some JS, count up/down to the selected values. Respects decimals.
<div class="pp-stats-countup" data-id="pp-{{Matrix.MatrixId}}">
   <div class="section-copy">{{Module.FieldValues.SectionCopy}}</div>
   <ul class="stats-countup-list" data-layout="{{Module.FieldValues.Layout | default: 'quarter'}}">
      {% for item in List.Items %}
         {% assign el = item.FieldValues %}
         <li
            class="single-stat"
            data-start-value="{{el.StartingValue}}"
            data-value="{{el.Value}}"
         >
            <h3>{{el.Value}}</h3>
         </li>
      {% endfor %}
   </ul>
</div>

SFAN-CMEFS

# SFAN-CMEFS: Semantic Fusion Attention Network for Music Culture Communication

This repository provides a PyTorch implementation of the **Semantic Fusion Attention Network (SFAN)** with **Cross-Modal Emotion Fusion Strategy (CMEFS)** proposed in:

> Feng Liu. *Semantic Modeling of Music Culture Communication Using Attention Network with Cross-Modal Emotion Fusion Mechanism.* Xinyang Vocational College of Arts & Wuhan Sports University (2025).

---

## 🎵 Overview

Music culture communication represents a complex, multimodal process involving **audio**, **text**, **visual**, and **emotional** channels. Traditional rule-based models often fail to generalize across cultural contexts. The **SFAN-CMEFS** model integrates **deep attention networks** and **cross-modal emotion fusion** to dynamically align semantic and emotional cues across modalities.

---

## ⚙️ Model Architecture

### 🧠 Semantic Fusion Attention Network (SFAN)

As shown in *Figure 1* (p.7), SFAN consists of:

- **Multimodal Encoders** for audio, text, and emotion features.
- **Cross-Modal Fusion Mechanism** with 3D Adapters for dynamic weighting.
- **Graphical Propagation Layer** modeling inter-modal semantic relations.
- **Temporal Multi-Head Self-Attention (MSA)** capturing global and local dependencies.

Equations (9)–(15) define the modality encoding and attention-weighted fusion:

\[
H_{fusion} = f_{fusion}(H_a, H_t, H_e; \theta)
\]
\[
Y = f_{decoder}(A H_{fusion})
\]

where \( A_{ij} = softmax(q_i^T k_j) \) is the semantic attention matrix.

---

### 🎭 Cross-Modal Emotion Fusion Strategy (CMEFS)

Described in *Figures 3–4* (pp.10–11), CMEFS aligns emotional and cultural semantics by:

- Extracting modality-specific features (audio, text, visual).
- Computing attention weights \( \alpha_m = softmax(W_m f_m + b_m) \).
- Projecting attended features into a shared latent space \( L \).
- Aggregating via weighted fusion \( R = \sum_m \beta_m l_m \).
- Minimizing combined losses
  \[
  L_{total} = L_{align} + \lambda L_{recon}
  \]
  to preserve both emotional alignment and modality-specific information.

---

## 🧩 Dataset Summary

Datasets used for evaluation include:

| Dataset | Modalities | Focus |
|----------|-------------|-------|
| Music Emotion Communication Dataset | Audio + Text | Emotion perception |
| Cross-Modal Music Culture Dataset | Audio + Text + Visual | Cultural interpretation |
| Semantic Music Interaction Dataset | Audio + User Interaction | Behavior modeling |
| Attention-Based Emotion Fusion Dataset | Audio + Physiological Signals | Affective response |

---

## 🚀 Experimental Setup

- **Framework:** PyTorch
- **Optimizer:** Adam (β₁=0.9, β₂=0.999, lr=1e-3 → cosine decay)
- **Batch size:** 64 | **Epochs:** 100
- **Augmentation:** random crop, flip, color jitter, CutMix
- **Loss:** \( L_{total} = L_{align} + \lambda L_{recon} \), with λ=0.3
- **Metrics:** Accuracy, Precision, Recall, AUC

SFAN-CMEFS achieves **89.1–90.3% accuracy** across all benchmark datasets, outperforming ViT, ResNet, DenseNet, and I3D by 3–4%.
---

## 🧠 Example Usage

```python
import torch
from model import SFAN_CMEFS

# Example multimodal input
audio = torch.randn(8, 128, 64)       # (batch, time, audio_dim)
text = torch.randn(8, 64, 256)        # (batch, seq_len, text_dim)
emotion = torch.randn(8, 32, 64)      # (batch, emo_features)
visual = torch.randn(8, 3, 224, 224)

model = SFAN_CMEFS(audio_dim=64, text_dim=256, emo_dim=64, hidden_dim=128, out_dim=8)
out = model(audio, text, emotion, visual)
print(out.shape)  # torch.Size([8, 8])
```
```python
# model.py
# SFAN + CMEFS unified model for semantic music culture modeling
# Based on Feng Liu (2025)

import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiHeadAttention(nn.Module):
    """Multi-head self-attention for multimodal fusion."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.heads = heads
        self.head_dim = dim // heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (batch, seq_len, dim)
        B, N, D = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)              # each: (B, heads, N, head_dim)
        attn = (q @ k.transpose(-2, -1)) / (self.head_dim ** 0.5)
        attn = attn.softmax(dim=-1)                       # A_ij = softmax(q_i^T k_j)
        out = (attn @ v).transpose(1, 2).reshape(B, N, D)
        return self.proj(out)
```
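
The CMEFS weighted fusion and the combined objective described above (\( R = \sum_m \beta_m l_m \), \( L_{total} = L_{align} + \lambda L_{recon} \)) fall outside the truncated excerpt; below is a minimal sketch, in which the per-modality projections, the scoring heads, and the loss inputs are illustrative assumptions rather than the paper's exact layers:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CMEFSFusion(nn.Module):
    """Sketch: project each modality into a shared latent space and fuse with softmax weights."""
    def __init__(self, dims, latent_dim):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(d, latent_dim) for d in dims])  # l_m projections
        self.score = nn.ModuleList([nn.Linear(d, 1) for d in dims])          # beta_m scores

    def forward(self, feats):
        # feats: list of per-modality vectors, each of shape (batch, dim_m)
        l = torch.stack([p(f) for p, f in zip(self.proj, feats)], dim=1)              # (B, M, latent)
        beta = F.softmax(torch.cat([s(f) for s, f in zip(self.score, feats)], 1), 1)  # (B, M)
        return (beta.unsqueeze(-1) * l).sum(dim=1)                                    # R = sum_m beta_m l_m


def cmefs_loss(align_loss, recon_loss, lam=0.3):
    # L_total = L_align + lambda * L_recon (lambda = 0.3 in the setup above)
    return align_loss + lam * recon_loss
```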

GEMFEN-AKIS

# GEMFEN-AKIS: Intelligent Evaluation Framework for Rural Revitalization

A PyTorch implementation of the **Graph-Enhanced Multi-Source Feature Embedding Network (GEMFEN)** and the **Adaptive Knowledge Integration Strategy (AKIS)**, proposed by *Hongduan Zhu et al. (Sichuan Polytechnic University, 2025)*.

This project builds an intelligent evaluation system for **rural revitalization**, integrating **graph neural networks (GNNs)**, **multi-source feature embeddings**, and **domain knowledge–driven attention mechanisms** to deliver interpretable and data-driven insights.

---

## 🌾 Overview

Rural revitalization involves complex socio-economic, environmental, and infrastructural factors. This framework formalizes the evaluation task as a **graph-based learning problem**, where:

- **Nodes** represent rural entities (e.g., farms, communities, enterprises),
- **Edges** represent structural or semantic relationships (e.g., trade, geography, communication),
- **Features** come from heterogeneous data (satellite imagery, census data, IoT sensors, etc.).

The system learns to **capture dependencies**, **fuse multi-source features**, and **generate interpretable evaluations** for decision-makers.

---

## ⚙️ Framework Components

### 1. Graph-Enhanced Multi-Source Feature Embedding Network (GEMFEN)

- Builds a **heterogeneous graph** \( G = (V, E) \) with node features from multiple data sources.
- Embeds features using `Embed(D)` and propagates information via GNN layers:
  \[
  H^{(l+1)} = \sigma(\tilde{A} H^{(l)} W^{(l)})
  \]
- Integrates **attention-based propagation**:
  \[
  \alpha_{ij} = \frac{\exp(score(h_i, h_j))}{\sum_k \exp(score(h_i, h_k))}
  \]
- Outputs the evaluation embedding via:
  \[
  Y = Aggregate(H^{(K)})
  \]
- See **Figure 1** (*page 5*) and **Figure 2** (*page 6*) for the architecture and data fusion flow.

### 2. Adaptive Knowledge Integration Strategy (AKIS)

- Enhances interpretability and adaptability.
- Fuses **data-driven embeddings** \( F' \) and **domain knowledge** \( K \):
  \[
  H = \alpha F' + (1 - \alpha) K
  \]
- Applies **context-aware attention modulation**:
  \[
  H' = A \odot H, \quad A = softmax(W_c C)
  \]
- The loss combines prediction and regularization:
  \[
  L = L_{task}(Y, \hat{Y}) + \lambda L_{reg}(H')
  \]
- Figures **3–4** (*pages 7–8*) show the multimodal embedding and knowledge-driven fusion.
---

## 🧩 Datasets Used

| Dataset | Description | Domain |
|----------|--------------|---------|
| Rural Infrastructure Feature Dataset | Roads, utilities, communications | Infrastructure |
| Multi-Source Agricultural Productivity Dataset | Crop, soil, irrigation, sensors | Agriculture |
| Intelligent Community Development Dataset | IoT, energy, social metrics | Smart Villages |
| GNN Evaluation Metrics Dataset | Benchmarking node/graph tasks | Machine Learning |

---

## 🚀 Experimental Setup

- Framework: **PyTorch**
- Backbone: **ResNet-50 (ImageNet pretrained)**
- Optimizer: `Adam` (lr=1e-3, decay ×0.1 every 10 epochs)
- Batch size: 64; Epochs: 100
- Data augmentation: random crop, flip, jitter, temporal augmentation
- Hardware: 4× NVIDIA Tesla V100 GPUs

Performance summary (Top-1 Accuracy):

| Dataset | Ours | ViT | DenseNet | MobileNet |
|----------|------|------|-----------|-----------|
| Rural Infrastructure | **89.34%** | 86.89% | 86.78% | 85.98% |
| Agricultural Productivity | **91.12%** | 89.12% | 89.03% | 87.89% |
| Intelligent Community | **89.34%** | 86.89% | 86.78% | 85.89% |

---

## 💡 Example Usage

```python
import torch
from model import GEMFEN_AKIS

x = torch.randn(32, 128, 64)                 # 32 nodes, 128 features
adj = torch.randint(0, 2, (32, 32)).float()

model = GEMFEN_AKIS(in_dim=64, hid_dim=128, out_dim=10)
out = model(x, adj)
print(out.shape)  # torch.Size([32, 10])
```
```python
# model.py
# GEMFEN + AKIS for Intelligent Rural Evaluation System
# Based on Zhu et al. (2025)

import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphConvLayer(nn.Module):
    """Basic GNN layer (Eq. 1 & 4)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Neighborhood aggregation, then linear transform + nonlinearity:
        # H^(l+1) = sigma(adj @ H^(l) @ W^(l))
        h = torch.matmul(adj, x)
        return F.relu(self.linear(h))
```
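
The AKIS fusion described above (\( H = \alpha F' + (1 - \alpha) K \), modulated by \( A = softmax(W_c C) \)) is not part of the excerpt either; here is a minimal sketch, where the learnable scalar mixing weight and the context projection are illustrative choices:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AKISFusion(nn.Module):
    """Sketch: fuse data embeddings with domain knowledge, then modulate by context attention."""
    def __init__(self, dim, ctx_dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.0))  # learnable mixing weight (sigmoid -> [0, 1])
        self.w_c = nn.Linear(ctx_dim, dim)            # context projection W_c

    def forward(self, f_data, k_knowledge, context):
        # f_data, k_knowledge: (nodes, dim); context: (nodes, ctx_dim)
        a = torch.sigmoid(self.alpha)
        h = a * f_data + (1 - a) * k_knowledge        # H = alpha F' + (1 - alpha) K
        attn = F.softmax(self.w_c(context), dim=-1)   # A = softmax(W_c C)
        return attn * h                               # H' = A ⊙ H
```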

EVCAN-AVPAM

# EVCAN-AVPAM: Enhanced Visual Convolutional Attention Network for Art Style Recognition

This repository provides a PyTorch implementation of the **Enhanced Visual Convolutional Attention Network (EVCAN)** combined with the **Adaptive Visual Perception and Attention Mechanism (AVPAM)**, proposed in the paper:

> **Design of Art Style Recognition Method Based on Visual Convolution Perception and Attention Enhancement Mechanism**
> *Authors: Xiang Shi, Feng He (2025)*

---

## Overview

Art style recognition is a complex and multifaceted challenge due to stylistic diversity, abstraction, and subtle variations in brushstrokes, color, and texture. EVCAN integrates **multi-scale CNN feature extraction** with **hierarchical attention**, while AVPAM adds a **domain-specific adaptive perception and attention strategy** to highlight salient regions and improve robustness to stylistic variability.

### 🔍 Framework Components

- **EVCAN (Enhanced Visual Convolutional Attention Network):**
  - Multi-scale feature extraction across various kernel sizes.
  - Adaptive feature fusion with learnable weights \( \alpha_k \) (Eq. 9–10).
  - Hierarchical attention layers refine local and global stylistic features (Eq. 11–12).
  - End-to-end classification optimized with cross-entropy loss (Eq. 13–14).
- **AVPAM (Adaptive Visual Perception and Attention Mechanism):**
  - Domain-aware convolutional encoder with hierarchical residual blocks.
  - Multi-scale attention fusion \( F_{multi} = \sum_s w_s F^{(s)} \) (Eq. 18).
  - Graphical propagation and attention refinement (Figure 3, p.7).
  - Style regularization \( L_{style} = \|F_i - F_j\|_2^2 \) (Eq. 20) to enforce stylistic coherence.

Figures 1–4 in the paper (pp.5–8) illustrate the multimodal feature fusion, attention hierarchy, and adaptive visual perception architecture with residual and pooling pathways.

---

## 🚀 Key Features

- **Hierarchical visual perception** across multiple convolutional layers.
- **Multi-scale adaptive fusion** balancing fine-grained and global style features.
- **Hierarchical attention** emphasizing stylistically relevant regions.
- **Domain-specific regularization** for robust and interpretable classification.
- **PyTorch-based modular architecture** for easy extension and training.

---

## 🧩 Model Architecture

```text
Input Image
   ↓
[Convolution Blocks] → [Multi-Scale Feature Fusion (α_k softmax weighting)]
   ↓
[Hierarchical Attention Layers (QK^T / √d)]
   ↓
[Adaptive Visual Perception Module + Graphical Propagation]
   ↓
[Domain-Specific Regularization + Classification Head]
   ↓
Softmax Output (Art Style Category)
```
```python
# model.py
# Implementation of EVCAN + AVPAM for Art Style Recognition
# Based on equations and architecture in Shi & He (2025)

import torch
import torch.nn as nn
import torch.nn.functional as F
import math

class MultiScaleConv(nn.Module):
    """Multi-scale feature extraction (Eq. 7–10)."""
    def __init__(self, in_ch, out_ch, scales=(3,5,7)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=s, padding=s // 2)
            for s in scales
        ])
        # Learnable per-scale fusion weights alpha_k, normalized with softmax (Eq. 9-10)
        self.alpha = nn.Parameter(torch.ones(len(scales)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(w_k * branch(x) for w_k, branch in zip(w, self.branches))
```
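
The style regularization term described above (Eq. 20, \( L_{style} = \|F_i - F_j\|_2^2 \)) is not part of the excerpt; here is a minimal sketch of how it could be combined with the cross-entropy objective, where the weighting factor `lambda_style` is an assumption:

```python
import torch
import torch.nn.functional as F


def evcan_loss(logits, labels, feat_i, feat_j, lambda_style=0.1):
    """Cross-entropy classification loss (Eq. 13-14) plus a pairwise style-coherence penalty (Eq. 20)."""
    ce = F.cross_entropy(logits, labels)
    l_style = ((feat_i - feat_j) ** 2).sum(dim=-1).mean()  # ||F_i - F_j||_2^2, averaged over the batch
    return ce + lambda_style * l_style
```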

TSLM-AKIS

# TSLM-AKIS: Transformer-Driven Student Behavior Prediction

A PyTorch implementation of the **Transformer-Driven Sequential Learning Model (TSLM)** with an **Adaptive Knowledge Integration Strategy (AKIS)** for analyzing online education behavior sequences and predicting student learning behaviors.

> Paper: *Transformer-Driven Online Education Behavior Sequence Analysis for Student Learning Behavior Prediction* (Qing Wang, Henan Open University). Key ideas, terminology, and equations follow the paper's Sections 3.2–3.4. See the architecture diagrams (e.g., Figure 1 on page 5 and Figure 2 on page 6) for a visual overview of the encoder and attention workflow.

---

## Features

- **TSLM encoder**: Multi-head self-attention over behavior-event embeddings with positional encodings, residual connections, and FFN blocks.
- **Multimodal-ready inputs**: Action/resource/timestamp embeddings are concatenated to form an event vector (per Eq. (3) in the paper).
- **AKIS module**: Lightweight domain-knowledge attention over a learnable concept bank + gating to adaptively fuse knowledge with sequence states (Eqs. (18)–(22)).
- **Prediction heads**: Classification (softmax) or next-event prediction.
- **Clean, minimal training loop hooks** so you can plug in your own datasets and losses.

---

## Quick Start

### 1) Install

```bash
pip install torch torchvision torchaudio  # pick versions compatible with your CUDA
```
```python
# model.py
# TSLM + AKIS (minimal PyTorch implementation)
# References to equations and sections follow the paper.

from typing import Optional, Tuple
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class SinusoidalPositionalEncoding(nn.Module):
    """
    Classic sinusoidal positional encodings (paper Eq. (9)-(10)).
    """
    def __init__(self, d_model: int, max_len: int = 5000):
        super().__init__()
        pe = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() *
                             (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe.unsqueeze(0))  # (1, max_len, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        return x + self.pe[:, : x.size(1)]
```
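
The AKIS concept-bank attention and gating (Eqs. (18)–(22)) sit beyond the truncated excerpt; here is a minimal sketch, in which the concept-bank size and the sigmoid gate are illustrative choices rather than the paper's exact formulation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AKIS(nn.Module):
    """Sketch: attend over a learnable concept bank, then gate it into the sequence states."""
    def __init__(self, d_model: int, n_concepts: int = 64):
        super().__init__()
        self.concepts = nn.Parameter(torch.randn(n_concepts, d_model))  # learnable concept bank
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, d_model) encoder states
        attn = F.softmax(h @ self.concepts.t() / h.size(-1) ** 0.5, dim=-1)  # (B, T, n_concepts)
        knowledge = attn @ self.concepts                                     # (B, T, d_model)
        g = torch.sigmoid(self.gate(torch.cat([h, knowledge], dim=-1)))      # adaptive fusion gate
        return g * h + (1 - g) * knowledge
```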