Go to:
admin/config/bdi/settings --> site Tools
and add the files that need to be made public, e.g.:
/en/header-footer.html
/en/header-footer.js
Then go to:
.com/ca/en/__include-links and check the resources that will be exported.
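As a quick sanity check once the files are published, they should respond publicly. A minimal sketch with `curl` (the host is not given above, so `<site-domain>` is a placeholder, and the exact public paths may differ by locale):

```bash
# Hypothetical check: replace <site-domain> with the real host
curl -I "https://<site-domain>/en/header-footer.html"
curl -I "https://<site-domain>/en/header-footer.js"

# The export list itself can be inspected the same way
curl -s "https://<site-domain>/ca/en/__include-links"
```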
# VPCs
## Counting VPCs
```bash
aws ec2 describe-vpcs --query "length(Vpcs)"
```
## Displaying Attributes
```bash
aws ec2 describe-vpcs | jq '.Vpcs[0] | keys'
```
***
## AWS VPC Attributes
| Attribute | Description |
| :--- | :--- |
| **VpcId** | 🆔 The unique, immutable identifier of the VPC; effectively its "license plate" (e.g., `vpc-0123456789abcdef0`). |
| **CidrBlock** | 🌐 The **primary** IPv4 address range of the VPC, written in CIDR notation (e.g., `10.0.0.0/16`). This is the VPC's address space. |
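Both attributes can be read straight from the same `describe-vpcs` call used above; a minimal sketch:

```bash
# Show each VPC's ID and primary CIDR block as a table
aws ec2 describe-vpcs \
  --query "Vpcs[].{Id: VpcId, Cidr: CidrBlock}" \
  --output table
```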
# Guide to organizing .bashrc and AWS scripts
## Fundamental principle: keep .bashrc light and fast
The `.bashrc` file is executed every time a terminal is opened. It must stay fast and contain only the essentials. Think of it as the entryway of your house: you keep the essentials there (coat rack, keys), not your entire workbench.
## Decision table: where does what go?
| Type of code | Location | Reason | Example |
|-------------|-------------|---------|---------|
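Whatever ends up in the table above, the underlying pattern is that `.bashrc` only *sources* helpers and never does heavy work at load time. A minimal sketch, assuming the AWS helpers live in a separate file (`~/.aws_functions.sh` is an illustrative name, not prescribed by this guide):

```bash
# ~/.bashrc -- stays light: just wire up the helper file if it exists
if [ -f "$HOME/.aws_functions.sh" ]; then
  # The sourced file should only define functions/aliases;
  # it must not run slow commands (no AWS calls) at shell startup.
  source "$HOME/.aws_functions.sh"
fi

# ~/.aws_functions.sh -- heavier AWS helpers live here, loaded as functions
vpc_count() {
  aws ec2 describe-vpcs --query "length(Vpcs)"
}
```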
```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphEncoder(nn.Module):
    """Multimodal Encoder + Graph message passing (IntentGraphNet, Sec. 3.3)."""
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden_dim)
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.update = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, x, adj):
        """
        x:   [N, in_dim] node features
        adj: [N, N] adjacency matrix (nonzero where an edge exists)
        NOTE: the original snippet is truncated inside this docstring; the body
        below is a generic attention-weighted message-passing completion.
        """
        h = self.embed(x)                                   # [N, hidden_dim]
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = self.attn(pair).squeeze(-1)                # [N, N] edge scores
        logits = logits.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(logits, dim=-1).nan_to_num()  # attention over neighbors
        msg = alpha @ h                                     # aggregate neighbor messages
        return F.relu(self.update(msg) + h)                 # residual node update
```
```python
# model.py
from __future__ import annotations
from dataclasses import dataclass, field
from typing import Dict, List, Tuple, Optional, Any
import math
import random

# -----------------------------
# Data structures
# -----------------------------
@dataclass
class WorldObject:
    obj_id: str
    cls: str
    affordances: List[str]
    linguistic_tags: List[str]
    semantic_anchors: List[str]
    state: Dict[str, Any] = field(default_factory=dict)

# (the original file continues with further dataclasses; the snippet is truncated here)
```
```python
# model.py
# AeroPerceptNet + GeoNarrative Alignment (simplified PyTorch version)
# Author: your-name
# License: MIT
import torch
import torch.nn as nn
import torch.nn.functional as F

# ----------------------------
# Multimodal Fusion Encoding
# ----------------------------
class FusionEncoder(nn.Module):
    def __init__(self, input_dim=64, latent_dim=64):
        super().__init__()
        self.vis_proj = nn.Linear(input_dim, latent_dim)
        self.sem_proj = nn.Linear(input_dim, latent_dim)
        # (the original snippet is truncated after this line; the rest of the module is omitted)
```
```python
# model.py
# MoLENet + KAST minimal PyTorch implementation
# Author: your-name
# License: MIT
import torch
import torch.nn as nn
import torch.nn.functional as F

# ----------------------------
# Tokenized Dual-Branch Encoder
# ----------------------------
class DualBranchEncoder(nn.Module):
    def __init__(self, input_dim, latent_dim):
        super().__init__()
        # static branch
        self.static = nn.Sequential(
            nn.Linear(input_dim, latent_dim),
            # (truncated in the original; remaining static-branch layers omitted)
        )
        # (the dynamic branch and the rest of the module are also omitted)
```
```python
# model.py
# Minimal PBEN + CAAS implementation in PyTorch
# Author: your-name
# License: MIT
import torch
import torch.nn as nn
import torch.nn.functional as F

# ----------------------------
# PBEN: Polycentric Behavioral Embedding Network
# ----------------------------
class PBEN(nn.Module):
    def __init__(self, input_dim=64, latent_dim=32, hidden_dim=64):
        super().__init__()
        self.latent_dim = latent_dim
        # Linear projections
        # (the original snippet is truncated here; the projection layers and the
        #  rest of the module are omitted)
```
```python
# model.py
# Minimal PRIT + RRA + inter-phase contrastive loss + REDIP policy hooks
# Author: your-name
# License: MIT
import torch
import torch.nn as nn
import torch.nn.functional as F

# ----------------------------
# Phase-aware Reflective Interaction Transformer (PRIT)
# ----------------------------
class PhaseAwareAttention(nn.Module):
    def __init__(self, embed_dim, num_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads)
        # (the original snippet is truncated here; the rest of the module is omitted)
```
```python
# model.py
# Minimal GlypticNet + 3CL-style losses in PyTorch
# Author: your-name
# License: MIT
from typing import Optional, Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F

# ----------------------------
# Utility: simple GCN layer
# ----------------------------
class GCNLayer(nn.Module):
    """
    Basic GCN layer with A_hat = A + I and symmetric normalization.
    X_{l+1} = ReLU( D^{-1/2} A_hat D^{-1/2} X_l W )
    """
    def __init__(self, in_dim, out_dim):
        # NOTE: the original snippet is truncated at __init__; the body below is a
        # standard completion matching the formula in the docstring.
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)         # weight matrix W

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)   # A_hat = A + I
        d_inv_sqrt = a_hat.sum(dim=-1).pow(-0.5)                  # D^{-1/2}
        norm = d_inv_sqrt.unsqueeze(-1) * a_hat * d_inv_sqrt.unsqueeze(0)
        return F.relu(norm @ self.lin(x))                         # X_{l+1}
```
```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

# -------------------------------
# Modality-Specific Encoders
# -------------------------------
class ModalityEncoder(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.ReLU(),
            nn.LayerNorm(out_dim)
        )

    def forward(self, x):
        return self.net(x)
```
```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

# ----------------------------
# Patch Embedding
# ----------------------------
class PatchEmbed(nn.Module):
    def __init__(self, in_ch=3, embed_dim=128, patch_size=16):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        x = self.proj(x)                   # (B, D, H/ps, W/ps)
        x = x.flatten(2).transpose(1, 2)   # (B, N, D) patch tokens
        return x
```
```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

# -------------------------------
# Basic Convolutional Encoder
# -------------------------------
class ConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, k=3, s=1, p=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, k, s, p),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        return self.net(x)
```
```python
# model.py
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchEmbed(nn.Module):
    """Flatten MRI patch into token embeddings"""
    def __init__(self, in_channels=1, embed_dim=128, patch_size=16):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        x = self.proj(x)                   # (B, embed_dim, H/ps, W/ps)
        x = x.flatten(2).transpose(1, 2)   # (B, num_patches, embed_dim)
        return x
```
```python
# model.py
# RecurAlignNet + T-GEP (reference implementation)
# Author: Legend Co., Ltd. (implementation based on the uploaded paper)
# License: MIT (adjust as needed)
from typing import Dict, List, Optional, Tuple
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

# ----------------------------
# Utilities
# ----------------------------
class TimeEncoding(nn.Module):
    """Sinusoidal or learned time encoding; here we use learned embeddings."""
    # NOTE: the original snippet is truncated inside the docstring above; the
    # minimal learned-embedding body below is a stand-in completion.
    def __init__(self, num_steps, dim):
        super().__init__()
        self.emb = nn.Embedding(num_steps, dim)

    def forward(self, t):
        # t: LongTensor of time-step indices
        return self.emb(t)
```
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchdiffeq import odeint

class TemporalEncoder(nn.Module):
    """BiGRU-based encoder for multimodal temporal data."""
    def __init__(self, input_dim, hidden_dim, latent_dim):
        super().__init__()
        self.bigru = nn.GRU(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden_dim, latent_dim)

    def forward(self, x):
        h, _ = self.bigru(x)   # (B, T, 2*hidden_dim)
        # NOTE: truncated in the original; projecting every timestep is a minimal completion.
        return self.fc(h)      # (B, T, latent_dim)
```