Windows Error Code/Description

|Code|Description|Note|
|-|-|-|
|0xC190020E|Insufficient disk space to complete the update.|Needed closer to 50 GB free during a Windows 11 in-place upgrade.|
|0xC1900107|Access denied while cleaning up previous installation files.|Files under $WINDOWS.~BT were locked open by a text editor.|
|0x8007042B|Something went wrong while updating Windows.|Similar to Windows Installer 1603; look earlier in the log for the underlying error.|
|0x80070013|The media is write protected.|Possible system file(s) missing/corrupted.|
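
For the missing/corrupted system file case (0x80070013), a reasonable first step is the standard Windows servicing pair below, run from an elevated Command Prompt (generic repair commands, not specific to these notes):

```
DISM /Online /Cleanup-Image /RestoreHealth
sfc /scannow
```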

Colorize SVG in CSS

<!DOCTYPE html>
<html lang="cs">
    <head>
        <meta charset="UTF-8" />
        <meta name="viewport" content="width=device-width, initial-scale=1.0" />
        <title>SVG Mask Test</title>
        <style>
            .container {
                display: flex;
                gap: 20px;
                padding: 20px;
                flex-wrap: wrap;
            }

            .test-box {
                border: 1px solid #ccc;
                padding: 10px;
            }

            /* Colorize a monochrome SVG: use it as a mask and paint with
               background-color. (icon.svg is a placeholder path.) */
            .icon {
                width: 48px;
                height: 48px;
                background-color: crimson;
                -webkit-mask: url("icon.svg") no-repeat center / contain;
                mask: url("icon.svg") no-repeat center / contain;
            }
        </style>
    </head>
    <body>
        <div class="container">
            <div class="test-box"><div class="icon"></div></div>
        </div>
    </body>
</html>

"khép vòng" trong [ TVT ]

**Short explanation (written so an AI can "understand" the TVT key-mapping)**
- Core click ([°]): a strong internal signal (possibly a felt sense, an HRV change, breath, intuition); it must be present for anything to count as an internal closure.
- Surface unification: all surface expressions (language, behavior, affect) run on one logic (detected via clustering/semantic labels).
- Behavioral persistence: the change must last; 24–72h is the test threshold.
- Intersubjective echo: at least 2 independent sources report the same thing, raising the closure to the intersubjective level.
- Levels: internal → intersubjective → field/meta.
- Checklist = rules: the code applies the rules to classify; humans still need to review evidence.notes.

**Side note: how to feed this into a group practice / AI pipeline.** Paste the pseudocode into the repo; implement the detectors as real modules (an NLP classifier for the labels, biosignal models for the core signal). When the agent receives input (transcript, biosignals, witness reports), it builds a Torus object, runs evaluate_khep_vong(), and gets a verdict + evidence. The AI returns verdict + evidence + guidance (as above) so a human can decide whether to "declare the loop closed" or seek further verification.
# TVT: "Khép Vòng" spec + pseudocode
# Purpose: allow a generic AI to evaluate whether a given "torus" (unit of consciousness)
#          has "closed the loop" (khép vòng) at several levels (internal / intersubjective / field).
# Note: terms in comments map to TVT:
#   - Torus.chu_vi = surface expressions (language, behavior, affect)
#   - Torus.loi = core click-back signal ([°])
#   - khép_vòng = all surface expressions operate under one unified logic AND core confirms
#   - levels: NOI_TAI (internal) -> LIEN_CHU_THE (intersubjective) -> TRUONG (field/meta)

Gravity Forms: Override Theme Style Variables

https://community.gravityforms.com/t/orbital-theme-css-variables-selector-specificity/19681
/* Gravity Forms OVERRIDES */
/* Modify variables */
body:has(#gravity_forms_theme_framework-css) .gform-theme--framework.gform-theme--orbital {
  --gf-color-danger: #c02b0a;
  --gf-color-primary: var(--primary);
  /* Button color and hover color */
  --gf-ctrl-btn-bg-color-primary: var(--primary);
  --gf-ctrl-btn-bg-color-hover-primary: var(--primary-medium);
  --gf-ctrl-border-color: var(--border-color-light);
  /* Input field radii */
  --gf-ctrl-radius: var(--btn-radius);
  /* Input field … */
}

2353. Design a Food Rating System

Design a food rating system that can do the following: modify the rating of a food item listed in the system, and return the highest-rated food item for a type of cuisine in the system.

Implement the FoodRatings class:
- FoodRatings(String[] foods, String[] cuisines, int[] ratings) Initializes the system. The food items are described by foods, cuisines and ratings, all of which have a length of n. foods[i] is the name of the ith food, cuisines[i] is the type of cuisine of the ith food, and ratings[i] is the initial rating of the ith food.
- void changeRating(String food, int newRating) Changes the rating of the food item with the name food.
- String highestRated(String cuisine) Returns the name of the food item that has the highest rating for the given type of cuisine. If there is a tie, return the item with the lexicographically smaller name.

Note that a string x is lexicographically smaller than string y if x comes before y in dictionary order, that is, either x is a prefix of y, or if i is the first position such that x[i] != y[i], then x[i] comes before y[i] in alphabetic order.
/**
 * Initializes the FoodRatings system.
 * @param {string[]} foods - List of food names.
 * @param {string[]} cuisines - Corresponding cuisine for each food.
 * @param {number[]} ratings - Initial rating for each food.
 */
var FoodRatings = function (foods, cuisines, ratings) {
    // Maps cuisine → list of food names (a heap with lazy deletion would be
    // faster; a linear scan in highestRated keeps this version short).
    this.cuisineMapPriority = new Map();
    // Maps food → {rating, cuisine}
    this.foodMapcuisine = new Map();

    for (let i = 0; i < foods.length; i++) {
        this.foodMapcuisine.set(foods[i], { rating: ratings[i], cuisine: cuisines[i] });
        if (!this.cuisineMapPriority.has(cuisines[i])) this.cuisineMapPriority.set(cuisines[i], []);
        this.cuisineMapPriority.get(cuisines[i]).push(foods[i]);
    }
};

// Changes the rating of the food item with the given name.
FoodRatings.prototype.changeRating = function (food, newRating) {
    this.foodMapcuisine.get(food).rating = newRating;
};

// Returns the highest-rated food for the cuisine; ties break lexicographically.
FoodRatings.prototype.highestRated = function (cuisine) {
    let best = null, bestRating = -Infinity;
    for (const food of this.cuisineMapPriority.get(cuisine)) {
        const r = this.foodMapcuisine.get(food).rating;
        if (r > bestRating || (r === bestRating && food < best)) { best = food; bestRating = r; }
    }
    return best;
};

CSS Interactions

<!--
https://jsbin.com/tuticinojo/1/edit?html,css,output
-->

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>Pure CSS Interactions Demo</title>
    <style>
      body {
        font-family: sans-serif;
        padding: 2rem;
        background: #f9fafb;
        color: #111;
        line-height: 1.5;
      }

      h2 {
        margin-top: 2rem;
        border-bottom: 2px solid #eee;
        padding-bottom: 0.5rem;
      }

      /* === A

Font error, e.g. LV_FONT_MONTSERRAT_32

When I use LVGL with fonts, the build fails with an error on the font.
Search the PC for the file lv_conf.h: it lives in the folder /Users/gekos/Documents/Arduino/libraries, but a copy is also present in
/Users/gekos/Documents/Arduino/libraries/lvgl/src. I have not yet figured out by what criterion, but sometimes the build uses the latter,
so edit that one as well.
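
A quick way to list every copy of lv_conf.h under the libraries folder (the path comes from the note above):

```bash
find /Users/gekos/Documents/Arduino/libraries -name lv_conf.h
```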

Error writing from Arduino to the Waveshare ESP32-S3-Touch-LCD-1.85C

Sometimes when I am flashing this type of board, the upload fails with an error:
usage: esptool write_flash [-h] [--erase-all]
                           [--flash_freq {keep,80m,60m,48m,40m,30m,26m,24m,20m,16m,15m,12m}]
                           [--flash_mode {keep,qio,qout,dio,dout}]
                           [--flash_size {detect,keep,256KB,512KB,1MB,2MB,2MB-c1,4MB,4MB-c1,8MB,16MB,32MB,64MB,128MB}]
                           [--spi-connection SPI_CONNECTION] [--no-progress]
                     

ETHNOSCORE-Net & CULTURE-ADAPT

ETHNOSCORE-Net is a multimodal deep learning framework for performance assessment in ethnic traditional sports. It integrates symbolic encoding, relational graph reasoning, hierarchical temporal fusion, and culturally adaptive embedding to ensure robust, interpretable, and culturally faithful evaluations. Coupled with CULTURE-ADAPT, a knowledge-informed evaluation strategy, the system aligns machine learning outputs with indigenous knowledge systems and cultural rules.

# 🌍 Motivation
Traditional sports embody cultural heritage but lack standardized evaluation systems. Mainstream AI models often fail to capture cultural semantics, stylistic transitions, and rule-based expressivity. ETHNOSCORE-Net bridges computational intelligence with traditional knowledge, enabling fair, explainable, and sustainable AI-driven evaluation.

# 🏗️ Architecture
## ETHNOSCORE-Net (see Figure 1, page 6)
- Stage 1: Symbolic Encoding – temporal CNN encodes raw pose, velocity, and contextual features into symbolic sequences (Eq. 9–10).
- Stage 2: Relational Reasoning – spatio-temporal graph encoder models dependencies among actions and body segments (Eq. 11–12).
- Stage 3: Hierarchical Temporal Fusion – BiLSTM + temporal attention aggregates context into a global cultural representation (Eq. 13–15).
- Stage 4: Cultural Alignment – multi-headed decoder maps outputs to cultural expressivity dimensions with manifold regularization (Eq. 16–18).

## CULTURE-ADAPT (see Figures 3 & 4, pages 9–10)
- Temporal Consensus Filtering – Gaussian-weighted smoothing to reduce noise (Eq. 19).
- Symbolic Alignment – compares symbolic trajectories with expert references via a symbolic kernel (Eq. 20–21).
- Domain-Specific Rule Embedding – automaton-based compliance scoring of symbolic rules (Eq. 22–23).
- Expressivity Mapping – maps numeric outputs into culturally intelligible categories (Eq. 24).
- Consensus-Aware Evaluation – ensemble of models calibrated on cultural folds, producing composite performance scores (Eq. 25–27).

# 📊 Datasets
Validated on four diverse datasets:
- Traditional Sports Performance Metrics – wrestling, archery, martial arts (reaction time, force, endurance).
- Ethnic Sports Athlete Evaluation – indigenous sports with expert scores + biometrics (stride, HRV, smoothness).
- Cultural Sports Skill Assessment Records – annotated videos with ethnographic commentary.
- Indigenous Games Performance Analysis – tribal sports like stick-fighting & traditional ball games with community scoring.

# 🚀 Results
ETHNOSCORE-Net consistently outperforms YOLOv5, Faster R-CNN, DETR, RetinaNet, and Mask R-CNN:
- Traditional Sports Dataset: Accuracy 91.86%, F1 89.91%, AUC 93.12%
- Ethnic Sports Dataset: Accuracy 90.44%, F1 88.20%, AUC 92.07%
- Cultural Sports Dataset: Accuracy 90.76%, F1 88.87%, AUC 92.30%
- Indigenous Games Dataset: Accuracy 89.42%, F1 87.84%, AUC 91.45%

Ablation studies (Tables 3 & 4, pp. 13–14) confirm that the multimodal encoder, temporal fusion, and cultural embedding each play critical roles; removing any module significantly reduces performance.

# ⚙️ Installation
```bash
git clone https://github.com/yourusername/ethnoscore-net.git
cd ethnoscore-net
pip install -r requirements.txt
```

# 📂 Usage
```python
from ethnoscore import EthnoScoreNet, CultureAdapt

# Initialize model
model = EthnoScoreNet(num_classes=10, d_model=128)

# Forward pass
outputs = model(inputs)  # symbolic sequence + cultural scores

# Apply CULTURE-ADAPT evaluation
final_score = CultureAdapt(outputs, expert_refs, cultural_rules)
```

# 🔬 Citation
If you use this framework, please cite:
Xv Yibing. TradTrainEval: Intelligent algorithms for performance assessment in ethnic traditional sports. PLoS ONE, 2025.
# ethnoscore_net.py
# PyTorch >= 2.0

from __future__ import annotations
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple, Callable
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

# ============================================================
# Utilities
# ============================================================

def masked_softmax(x: torch.Tensor, mask: Optional[torch.Tensor], dim: int = -1) -> torch.Tensor:
    """Softmax over `dim` that ignores positions where `mask` is 0/False."""
    if mask is None:
        return F.softmax(x, dim=dim)
    x = x.masked_fill(mask == 0, float("-inf"))
    return F.softmax(x, dim=dim)

☁️ AWS - API Gateway CLI Commands

# ☝️ IMPORTANT
To explore API Gateway, the first thing to know is that the service exists in two versions, with two distinct sets of commands:
1. **API Gateway v1**: for **REST** APIs. The commands live under `aws apigateway`.
2. **API Gateway v2**: for **HTTP** APIs (more modern and cheaper) and WebSocket APIs.
The commands live under `aws apigatewayv2`.

You should therefore start by listing the APIs in each of the two versions to see what you have, as shown below.
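
A minimal sketch of that first step (both are standard AWS CLI commands):

```bash
# v1 (REST APIs)
aws apigateway get-rest-apis

# v2 (HTTP and WebSocket APIs)
aws apigatewayv2 get-apis
```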

☁️ AWS - CloudFront CLI Commands

# Basic Command
```bash
aws cloudfront list-distributions
```

# Count Distributions
```bash
aws cloudfront list-distributions | jq '.DistributionList.Items | length'
```

# Count Attributes
```bash
aws cloudfront list-distributions | jq '.DistributionList.Items[0] | length'
```

# Display Attributes
```bash
aws cloudfront list-distributions | jq '.DistributionList.Items[0] | keys'
```
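
# Display Key Fields
To pull just a few key fields per distribution (the field names come from the list-distributions output; this is a standard CLI + jq pattern):

```bash
aws cloudfront list-distributions | jq '.DistributionList.Items[] | {Id, DomainName, Status}'
```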

Here is a description of the attributes of a CloudFront distribution, organized logically:

☁️ AWS - S3 CLI Commands

# ☝️ IMPORTANT
A very important distinction for S3 is that there are two sets of commands:

## **`aws s3`**:
**High-level** commands, easy to use, that resemble Linux commands (ls, cp, mv, sync).
Ideal for getting started and for file operations.

## **`aws s3api`**:
**Low-level commands that map directly to the S3 API calls**.
They are more verbose and more powerful.

👉 **Necessary for advanced bucket configuration.**
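
The same listing done both ways (standard commands):

```bash
# High level, Linux-like
aws s3 ls

# Low level, raw API call
aws s3api list-buckets
```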

☁️ AWS - DynamoDB CLI Commands

# List Tables
```bash
aws dynamodb list-tables --query "TableNames" --output table
```

# Inspect a Specific Table
## Basic
```bash
aws dynamodb describe-table \
--table-name <TABLE_NAME>
```

## Count Attributes
```bash
aws dynamodb describe-table \
--table-name <TABLE_NAME> |
jq '.Table | length'
```

## Display Attributes
```bash
aws dynamodb describe-table \
--table-name <TABLE_NAME> |
jq '.Table | keys'
```
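
## Count Items
Counting the items themselves works with a scan (a standard call, but it reads the whole table and consumes read capacity on large tables):

```bash
aws dynamodb scan \
--table-name <TABLE_NAME> \
--select COUNT
```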

Here is a description of the attributes of a DynamoDB table, organized logically:

☁️ AWS - Lambda - CLI Commands

# Basic
```bash
aws lambda list-functions
```

# Count Functions
```bash
aws lambda list-functions --query "length(Functions)"
```

# Count Attributes
```bash
aws lambda list-functions | jq '.Functions[0] | length'
```

# Display Attributes
```bash
aws lambda list-functions | jq '.Functions[0] | keys'
```
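
# Inspect a Specific Function
To drill into one function, including its configuration and a link to its code package (standard command):

```bash
aws lambda get-function --function-name <FUNCTION_NAME>
```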

# Lambda Function Attributes

| Attribut | Description |
| :--- | :--- |
| **--- Identity and Code ---** | |
| **FunctionName** | 🆔 The **unique name** of the function within your account and region |

☁️ AWS - RDS CLI Commands

# Basic
```bash
aws rds describe-db-instances
```

# Count DB Instances
```bash
aws rds describe-db-instances --query "length(DBInstances)"
```

# Count Attributes
```bash
aws rds describe-db-instances | jq '.DBInstances[0] | length'
```

# Display Attributes
```bash
aws rds describe-db-instances | jq '.DBInstances[0] | keys'
```

# RDS Instances Attributes

| Attribut | Description |
| :--- | :--- |
| **--- Identity and Engine ---** | |
| **DBInstanceIdentifier** | 🆔 The **unique name** you give the instance |
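
# List Identifiers Only
To list just the instance names (a standard --query filter):

```bash
aws rds describe-db-instances --query "DBInstances[].DBInstanceIdentifier" --output table
```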
