GTRM

# GTRM: Genomic-Driven Treatment Response Model for AI-Assisted Drug Screening

This repository provides a clean PyTorch reference implementation of **GTRM** — an AI-assisted drug screening model that integrates **genomic, transcriptomic, and proteomic** data; models **tumor heterogeneity**; and scores **mutation–drug** interactions to enable personalized therapy ranking for urinary-system cancers.

> Core ideas implemented here are derived from the paper "Construction and validation of an artificial intelligence-assisted drug screening model for urinary system cancer," including:
> (1) heterogeneity-aware mutation encoding with attention and pairwise interactions,
> (2) cross-omics fusion with modality attention and outer-product interaction tensors, and
> (3) a mutation–drug scoring head for ranking candidate drugs.

---

## Features

- **Heterogeneity-Aware Encoding (HAE)**
  Learns per-mutation embeddings, attends over them, and augments patient vectors with pairwise interaction signals.
- **Cross-Omics Fusion (COFN)**
  Omics-specific encoders + learned modality weights, plus an outer-product interaction tensor to capture cross-modal effects.
- **Mutation–Drug Scoring (MDS)**
  Bilinear compatibility between patient and drug embeddings refined by a mutation-attention vector; outputs per-drug response scores.
- **(Optional) Heterogeneity-Aware Learning (HAL)**
  Weighted aggregation across subclonal representations (if you have subclone-level inputs).

---

## Repository Structure
# model.py
# PyTorch reference implementation of GTRM
# - Heterogeneity-Aware Encoding (HAE)
# - Cross-Omics Fusion Network (COFN)
# - Mutation–Drug Scoring (MDS)
#
# This code is framework-faithful to the paper while remaining concise and easy to extend.

from dataclasses import dataclass
from typing import Dict, Optional, Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F


# -----------------------------
# Utilities
# -----------------------------

# Assumed utility (illustrative, not from the original file): masked mean pooling
# over per-mutation embeddings for patients with variable numbers of mutations.
def masked_mean(x: torch.Tensor, mask: Optional[torch.Tensor] = None, dim: int = 1) -> torch.Tensor:
    if mask is None:
        return x.mean(dim=dim)
    mask = mask.unsqueeze(-1).to(x.dtype)
    return (x * mask).sum(dim=dim) / mask.sum(dim=dim).clamp_min(1e-8)
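
The feature list above describes a Mutation–Drug Scoring (MDS) head; the following is a minimal, illustrative sketch of that idea (the class name, dimensions, and sigmoid gate are assumptions, not the paper's exact formulation): a bilinear compatibility between a patient embedding and each candidate drug embedding, with the patient vector first refined by a mutation-attention gate.

```python
import torch
import torch.nn as nn

class MutationDrugScorer(nn.Module):
    """Illustrative bilinear mutation-drug scoring head (names and dims assumed)."""
    def __init__(self, patient_dim: int, drug_dim: int):
        super().__init__()
        self.bilinear = nn.Bilinear(patient_dim, drug_dim, 1)   # patient-drug compatibility
        self.mut_gate = nn.Linear(patient_dim, patient_dim)     # mutation-attention gate

    def forward(self, patient: torch.Tensor, drugs: torch.Tensor) -> torch.Tensor:
        # patient: (B, P), drugs: (B, K, D) -> per-drug response scores (B, K)
        gated = patient * torch.sigmoid(self.mut_gate(patient))
        gated = gated.unsqueeze(1).expand(-1, drugs.size(1), -1).contiguous()
        return self.bilinear(gated, drugs).squeeze(-1)

# Example: rank 5 candidate drugs for a batch of 2 patients
scores = MutationDrugScorer(64, 32)(torch.randn(2, 64), torch.randn(2, 5, 32))
```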

PlasmoNet

# PlasmoNet: Species-Level Malaria Parasite Classification with Domain Adaptation

PlasmoNet is a PyTorch implementation of a lightweight, field-ready model for species-level malaria parasite classification. It combines:

- **Hierarchical Visual Encoding** (multi-stage Conv encoder) for robust local features
- **Dual-Branch Attention** to fuse token self-attention with morphology-aware cues
- **Graph-Based Reasoning** to propagate evidence across spatially distant but semantically similar regions
- **PlasmoAdapt**: prototype-guided domain alignment + consistency & spectral matching regularizers for cross-domain robustness

---

## Highlights

- **Morphology-aware attention**: boosts parasite-relevant regions even under noise.
- **Graph reasoning**: GCN-like message passing over token nodes improves long-range consistency.
- **Prototype alignment**: class prototypes guide unlabeled target features during adaptation.
- **Consistency & spectral matching**: enforces invariance to augmentations and aligns channel statistics across domains.

---

## Repo Structure
# model.py
# PlasmoNet: Hierarchical encoder + Dual-Branch Attention + Graph-Based Reasoning
# with PlasmoAdapt utilities for domain adaptation.
#
# This implementation follows the design described in the original article:
# - Hierarchical Visual Encoding (Sec. 3.3; Fig. 1–2)
# - Dual-Branch Attention (Eq. 13–17)
# - Graph-Based Inference (Eq. 18–21)
# - PlasmoAdapt (prototype alignment, consistency & spectral matching; Eq. 23–34)
#
# Minimal dependencies: torch

from typing import Dict, Optional, Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F
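
To make the graph-based reasoning component named above concrete, here is a minimal, illustrative sketch (the layer name, cosine-similarity adjacency, and residual connection are assumptions rather than the article's exact equations): image tokens are treated as graph nodes, an adjacency matrix is built from pairwise feature similarity, row-normalized, and used for one step of GCN-like message passing.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TokenGraphReasoning(nn.Module):
    """Illustrative GCN-like message passing over token features (B, N, C)."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # Adjacency from cosine similarity between tokens, softmax-normalized per row
        normed = F.normalize(tokens, dim=-1)
        adj = (normed @ normed.transpose(1, 2)).softmax(dim=-1)   # (B, N, N)
        # One propagation step with a residual connection
        return tokens + self.proj(adj @ tokens)

out = TokenGraphReasoning(128)(torch.randn(2, 49, 128))   # e.g. a 7x7 token grid
```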

SDGN-ACR

# SDGN-ACR: Sparse Deviational Graph Network with Adaptive Confidence Routing

**Goal:** real-time, context-aware anomaly detection for surgical/neurosurgical data modelled as a temporally evolving graph. Nodes are time steps (or clips/patches); edges encode temporal and semantic proximity. The model amplifies *meaningful* deviations while remaining structurally coherent, then refines scores via confidence routing over the graph.

---

## Highlights

- **Deviation-aware attention**: neighbors that *differ* more get higher weight.
- **Sparse focused propagation**: trainable thresholding keeps only salient edges.
- **Multi-view anomaly modeling**: reconstruction error + density (mixture) likelihood.
- **Adaptive Confidence Routing (ACR)**: iterative refinement with gated routing and consensus regularization.

---

## Repository Structure
# model.py
# Reference PyTorch implementation of SDGN + ACR
# Derived from the original article's method section and equations.
# Minimal dependencies: torch, numpy

from __future__ import annotations
import math
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


# ---------------------------
# Utilities
# ---------------------------

def pairwise_sqdist(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """
    Compute squared Euclidean distances between all rows of `a` and `b`.

    a: (N, D), b: (M, D) -> (N, M) matrix of squared distances.
    """
    a2 = (a * a).sum(dim=-1, keepdim=True)        # (N, 1)
    b2 = (b * b).sum(dim=-1, keepdim=True).t()    # (1, M)
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, clamped for numerical safety
    return (a2 + b2 - 2.0 * a @ b.t()).clamp_min(0.0)
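
As a concrete reading of the deviation-aware attention highlight above, the sketch below (temperature, residual, and self-loop handling are assumptions, not the article's exact equations) weights each neighbor by how much its features deviate from the query node, so that more-different neighbors contribute more to the update.

```python
import torch

def deviation_attention(h: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Illustrative deviation-aware attention over node features h of shape (N, D)."""
    dev = torch.cdist(h, h).pow(2)           # (N, N) squared feature deviations
    dev.fill_diagonal_(float("-inf"))        # exclude self-loops from the softmax
    w = torch.softmax(dev / tau, dim=-1)     # larger deviation -> larger weight
    return h + w @ h                         # deviation-weighted aggregation + residual

updated = deviation_attention(torch.randn(16, 32))   # 16 time-step nodes, 32-d features
```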

CGINet-SARM

# CGINet-SARM: Interpretable, Uncertainty-Aware Medical Image Segmentation

This repository contains a **PyTorch** reference implementation inspired by the paper _"Deep learning uncertainty quantification method for image segmentation and treatment decision-making in radiation oncology."_ It focuses on two core ideas:

- **CGINet (Concept-Guided Interpretability Network):** projects features into an **interpretable concept space**, models **cross-concept interactions**, and aligns representations **across layers**.
- **SARM (Semantic Attribution Refinement Mechanism):** refines gradient-based attributions within the concept space, promotes **cross-layer consistency**, and enforces **stability** under small input perturbations.

> At a glance, the **architecture overview** is depicted in the diagram on **page 5, Figure 1**; **layer & sequence alignment** appears on **page 7, Figure 2**; and **SARM** is illustrated on **page 9, Figure 3**. Datasets and training details are summarized in **Section 4.1–4.2 (pages 12–13)**.

---

## Why this repo

Clinical segmentation must be **accurate** _and_ **trustworthy**. We provide:

- **Voxel-wise segmentation logits** with **uncertainty maps** (aleatoric head + epistemic via MC Dropout).
- **Concept activations** and **refined attributions** (SARM) for interpretability.
- Pluggable backbone and modular losses so you can integrate into your pipeline.

---

## Features

- **Semantic Projection Mechanism (SPM)**: linear projection to concept space with sparsity; rebuilds an interpretable representation (Eq. 10–13).
- **Cross-Concept Interaction Modeling (CCIM)**: input-conditioned adjacency over concepts; graph-like diffusion in concept space (Eq. 14–17).
- **Layer & Sequence Alignment (LSA)**: soft projection at intermediate layers with reconstruction bounds (Eq. 18–19).
- **SARM**:
  - **Relevance-aware filtering** of attributions in concept space (Eq. 23–27).
  - **Cross-layer attribution consistency** (Eq. 28–31).
  - **Stability under perturbations** and **prior alignment hooks** (Eq. 32–35).

References to equations and components follow the paper's notation.

---

## Quick start

### 1) Install

```bash
python -m venv .venv
source .venv/bin/activate   # Windows: .venv\Scripts\activate
pip install torch torchvision
```

```python
# model.py
# A compact PyTorch implementation inspired by CGINet + SARM for medical image segmentation
# from: "Deep learning uncertainty quantification method for image segmentation and treatment decision-making in radiation oncology"
# Key components: Semantic Projection Mechanism (SPM), Cross-Concept Interaction Modeling (CCIM),
# Layer & Sequence Alignment (LSA), and SARM attribution refinement / consistency.
#
# NOTE:
# - This is a minimal, readable reference, not a hyper-optimized production implementation.
```
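
As a concrete reading of the Semantic Projection Mechanism bullet above, here is a minimal, illustrative sketch (module name, dimensions, ReLU activation, and the exact form of the sparsity/reconstruction penalties are assumptions, not the paper's formulation): features are linearly projected into a concept space, an L1 penalty encourages sparse concept activations, and a decoder rebuilds the feature for a reconstruction term.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticProjection(nn.Module):
    """Illustrative concept-space projection with sparsity and reconstruction terms."""
    def __init__(self, feat_dim: int, n_concepts: int):
        super().__init__()
        self.to_concepts = nn.Linear(feat_dim, n_concepts)
        self.from_concepts = nn.Linear(n_concepts, feat_dim)

    def forward(self, feats: torch.Tensor):
        # feats: (B, feat_dim) pooled voxel/patch features
        concepts = torch.relu(self.to_concepts(feats))    # concept activations
        recon = self.from_concepts(concepts)              # rebuilt interpretable representation
        aux_loss = concepts.abs().mean() + F.mse_loss(recon, feats)
        return concepts, recon, aux_loss

concepts, recon, aux_loss = SemanticProjection(256, 32)(torch.randn(4, 256))
```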

Enhancing Image Classification Reliability in Radiation Oncology via Uncertainty Quantification

# Enhancing Image Classification Reliability in Radiation Oncology via Uncertainty Quantification

This repository provides a clean, **reference PyTorch implementation** of the methods proposed in the paper *"Enhancing Image Classification Reliability in Radiation Oncology via Uncertainty Quantification."* It implements:

- **MedStructFormer** — a transformer-style encoder with anatomical positional encodings and anatomically-constrained attention.
- **KIDL (Knowledge-Infused Dynamic Learning)** — a lightweight knowledge-gated feature modulation block with consistency losses.
- **Uncertainty Quantification** — Monte Carlo Dropout inference utilities that return predictive mean and variance.

---

## Key Features

- **Anatomical Positional Encoding (APE):** Combines normalized coordinates with optional atlas probability maps.
- **Anatomically-Constrained Attention (ACA):** Attention scores penalized by anatomical distances.
- **Multi-Scale Consistency:** Auxiliary heads for intermediate supervision.
- **Knowledge-Gated Modulation (KIDL):** Fuses knowledge vectors with latent features to improve reliability.
- **Uncertainty Estimation:** Predictive variance computed via MC-Dropout.

---

## Installation

```bash
python -m venv .venv && source .venv/bin/activate
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
pip install numpy
```
# model.py
# Reference PyTorch implementation of MedStructFormer + KIDL ideas (2D classification)
# Paper reference: "Enhancing Image Classification Reliability in Radiation Oncology via Uncertainty Quantification"
# Key elements: Anatomical Positional Encoding (Eq. 15), Anatomically-Constrained Attention (Eqs. 16–18),
# Multi-scale outputs (Eqs. 20–24), Knowledge-Gated Modulation (Eq. 26), MC-Dropout uncertainty (Eqs. 36–37).

from typing import List, Optional, Tuple
import math
import torch
import torch.nn as nn
import torch.nn.functional as F
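
The MC-Dropout utility mentioned in the header comments can be sketched as follows (the function name and sample count are assumptions, not the repository's actual API): dropout layers are kept active at inference time, and the predictive mean and variance are estimated over repeated stochastic forward passes.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Predictive class-probability mean/variance via MC-Dropout (illustrative sketch)."""
    model.train()   # keep dropout active; note this also affects BatchNorm if present
    probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)], dim=0)
    model.eval()
    return probs.mean(dim=0), probs.var(dim=0)   # (B, C) mean and variance per class

# Usage with a hypothetical classifier:
# mean, var = mc_dropout_predict(med_struct_former, batch_of_images)
```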

Linux useful commands

### Replaces all newline characters in input.txt with spaces, outputting a single-line result.

```sh
sed ':a;N;$!ba;s/\n/ /g' input.txt
```

### Change ownership of current directory (recursive)
```sh
# Run with sudo to change ownership
# Set both user and group to the current logged-in user
# Apply recursively (-R) to all files and subdirectories
sudo chown -R "$USER:$USER" "$PWD"
```


### Convert images to PDF and extract images from PDF

```sh
# Install tools
sudo apt install img2pdf poppler-utils

# Convert images to a single PDF (example filenames)
img2pdf page1.jpg page2.jpg -o output.pdf

# Extract embedded images from a PDF (pdfimages is part of poppler-utils)
pdfimages -all input.pdf img
```

Git useful commands


### Reset branch to a specific commit
```sh
git reset --hard <commit_id>       # Move local branch to the given commit, discarding changes
git push --force <remote> <branch> # Overwrite remote branch history with local state
```

### Git configuration behind a proxy
```sh
git config user.name "Your Name"
git config user.email "your.email@example.com"

# Proxy via cntlm
git config http.proxy 'http://127.0.0.1:3128'
git config https.proxy 'http://127.0.0.1:3128'

# Cache credentials for 30 days
git config credential.helper 'cache --timeout=2592000'
```

Close a modal inside another modal

apex.theme.closeRegion("ALTERA_RECEBEDOR");

Hide the banner when the footer becomes visible

document.addEventListener("DOMContentLoaded", () => {
  const footer = document.querySelector(".l-footer");
  const banner = document.querySelector(".floating-banner");

  if (!footer || !banner) return;

  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (entry.isIntersecting) {
        // Hide the banner when the footer comes into view
        banner.classList.add("is-hidden");
      } else {
        // Show the banner again when the footer scrolls out of view
        banner.classList.remove("is-hidden");
      }
    });
  });

  observer.observe(footer);
});

Floating banner display control

// Floating banner display control
document.addEventListener('DOMContentLoaded', function () {
  const fixedEl = document.querySelector('.-fixed');
  const kvEl = document.querySelector('.sectionKv_ttl');
  const ctaEl = document.querySelector('.ctaContainerBottom_block');

  if (!fixedEl || !kvEl || !ctaEl) return;

  const kvHeight = kvEl.offsetHeight;

  function getTriggerPoint() {
    if (window.innerWidth <= 768) {
      return kvHeight * 0.25; // SP (mobile)
    } else {
      return kvHeight * 0.1; // PC (desktop)
    }
  }

  // Assumed completion: show the floating banner once the page has scrolled past
  // the trigger point and hide it while the bottom CTA block is in view.
  // The 'is-show' class name is a placeholder.
  window.addEventListener('scroll', function () {
    const scrolled = window.scrollY > getTriggerPoint();
    const ctaVisible = ctaEl.getBoundingClientRect().top < window.innerHeight;
    fixedEl.classList.toggle('is-show', scrolled && !ctaVisible);
  });
});

Smart Card Certificate support on domain controllers

# Collect the names of all domain controllers in the current domain
$DCs = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().DomainControllers.Name

# List the Enhanced Key Usages of every certificate in each DC's local machine store
$ScriptBlock = { Get-ChildItem -Path Cert:\LocalMachine\My | Select-Object -ExpandProperty EnhancedKeyUsageList }

# Query all DCs remotely and keep only certificates whose EKU relates to smart card logon
Invoke-Command -ScriptBlock $ScriptBlock -ComputerName $DCs -ErrorAction SilentlyContinue | Select-Object -Property * -ExcludeProperty RunspaceId | Sort-Object -Property PSComputerName | Where-Object FriendlyName -match 'smart card' | Format-Table -AutoSize

SHELL

When we open a terminal on a Unix machine, the default shell is usually Bash.
To check, type:
which $SHELL
#! - the "shebang", the first line of a script that names its interpreter

Bash - the Bourne Again Shell

To start Bash from Kali, type:
bash

The Bash prompt shows a $ sign for a regular user.

Bash is also a dynamic, interpreted command language that can be used to write scripts.
To create a custom script for a project, create a file that ends with .sh (or has no extension):
touch filename.sh    # or: touch filename

Write a command in it, for example:
echo "my script"

Create variables:
GREET="Rita"

To access a variable, prefix its name with a dollar sign:
echo "$GREET"

3025. Find the Number of Ways to Place People I

You are given a 2D array points of size n x 2 representing integer coordinates of some points on a 2D plane, where points[i] = [xi, yi]. Count the number of pairs of points (A, B), where A is on the upper left side of B, and there are no other points in the rectangle (or line) they make (including the border). Return the count.
/**
 * @param {number[][]} points
 * @return {number}
 */
var numberOfPairs = function(points) {
    let count = 0;

    // Step 1: Sort points by x ascending, then y descending
    // This ensures that for any point[i], all point[j] with j < i have x <= x[i]
    points.sort((a, b) => {
        if (a[0] === b[0]) return b[1] - a[1]; // If x is equal, sort by y descending
        return a[0] - b[0]; // Otherwise, sort by x ascending
    });

    // Step 2: Iterate over each point[i] as the potential upper-left point A
    for (let i = 0; i < points.length; i++) {
        // Track the highest y seen among valid lower-right points so far;
        // a new point B counts only if it sits strictly above all of them,
        // otherwise an earlier B would lie inside the rectangle (A, B).
        let maxY = -Infinity;

        for (let j = i + 1; j < points.length; j++) {
            // B must be at or below A (A is upper-left), and above every
            // previously counted B for this A
            if (points[j][1] <= points[i][1] && points[j][1] > maxY) {
                count++;
                maxY = points[j][1];
            }
        }
    }

    return count;
};
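
For example, `numberOfPairs([[0, 1], [1, 0]])` returns `1`: `[0, 1]` lies on the upper-left side of `[1, 0]` and their rectangle contains no other point, while no other ordered pair satisfies the upper-left condition.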

HydroOpt

# HydroOpt: Deep Learning and Optimization for Water Resource Management

**HydroOpt** is a research-oriented framework that integrates **time series forecasting** and **resource allocation optimization** to tackle challenges in water resource management. It combines modern **deep learning models** (LSTM, CNN, Transformer, HydroCortex) with **convex optimization** and the **Adaptive Intertemporal Allocation (AIA)** strategy to enable robust planning under uncertainty.

This repository provides a complete, modular codebase with preprocessing, modeling, optimization, evaluation, and visualization components. It is designed for **reproducibility**, **scalable experiments**, and **practical deployment**.

---

## ✨ Key Highlights

- **Forecasting Models**
  - LSTM, CNN, Transformer for temporal prediction
  - HydroCortex: graph-based spatiotemporal learning framework
- **Optimization Framework**
  - Convex allocation solver with demand, capacity, and ecological constraints
  - Adaptive Intertemporal Allocation (AIA) for multi-period planning
- **Feature Engineering**
  - Seasonal, lagged, and rolling statistics
- **Evaluation**
  - MAE, RMSE, MAPE, R² metrics
- **Visualization**
  - Time series plots, allocation heatmaps
- **Testing**
  - Extensive `pytest` suite for models, optimization, and utilities

---

## 📂 Repository Structure

```
HydroOpt/
├── README.md
├── LICENSE
├── requirements.txt
├── setup.py
│
├── data/                    # Example or linked datasets
│   ├── README.md
│   └── samples/
│
├── src/                     # Core modules (15 Python files)
│   ├── __init__.py
│   ├── preprocessing.py
│   ├── feature_engineering.py
│   ├── model_lstm.py
│   ├── model_cnn.py
│   ├── model_transformer.py
│   ├── model_hydrocortex.py
│   ├── optimization_core.py
│   ├── optimization_aia.py
│   ├── evaluation.py
│   ├── visualization.py
│   ├── utils.py
│   ├── config.py
│   ├── train.py
│   └── run_experiment.py
│
├── experiments/             # Configs & results
│   ├── configs/
│   └── results/
│
└── tests/                   # Unit tests (pytest)
    ├── test_models.py
    ├── test_optimization.py
    └── test_utils.py
```

---

## ⚙️ Installation

Clone the repository and set up the environment:

```bash
git clone https://github.com/yourusername/HydroOpt.git
cd HydroOpt
pip install -r requirements.txt
```

### Requirements

Key dependencies include:

- torch ≥ 2.0
- scikit-learn
- cvxpy
- xarray, netCDF4 (for climate/hydrology data)
- matplotlib, seaborn, plotly

Full list: see `requirements.txt`, reproduced below:
# Core scientific stack
numpy>=1.24.0
pandas>=2.0.0
scipy>=1.11.0

# Deep learning framework
torch>=2.0.0
torchvision>=0.15.0
torchaudio>=2.0.0

# Machine learning & utilities
scikit-learn>=1.3.0
networkx>=3.1
tqdm>=4.65.0

# Optimization
cvxpy>=1.3.2

# Data handling
xarray>=2023.1.0
netCDF4>=1.6.4
pyyaml>=6.0
hydra-core>=1.3.2

# Visualization
matplotlib>=3.7.0
seaborn>=0.12.2
plotly>=5.14.0

# Optional: GPU accelerated optimization (only if CUDA is available)
cupy    # or a CUDA-specific wheel such as cupy-cuda12x
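
To make the convex allocation component described above concrete, here is a minimal, illustrative sketch using `cvxpy` (the function name, log utility, and ecological-reserve constraint form are assumptions, not HydroOpt's actual `optimization_core` API): allocate a single period's supply across users to maximize a concave utility, subject to per-user demand caps and a reserved ecological flow.

```python
import cvxpy as cp
import numpy as np

def allocate_water(supply: float, demand: np.ndarray, eco_reserve: float,
                   weights: np.ndarray) -> np.ndarray:
    """Single-period convex allocation sketch (illustrative only)."""
    n = demand.shape[0]
    x = cp.Variable(n, nonneg=True)                       # allocation per user
    utility = cp.sum(cp.multiply(weights, cp.log1p(x)))   # concave, diminishing returns
    constraints = [cp.sum(x) <= supply - eco_reserve,     # keep the ecological reserve
                   x <= demand]                           # per-user demand caps
    cp.Problem(cp.Maximize(utility), constraints).solve()
    return x.value

alloc = allocate_water(100.0, np.array([40.0, 60.0, 30.0]), 20.0, np.array([1.0, 2.0, 1.5]))
```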

Epicor BPM - Prevent Deletion

// Check if any QuoteHed row is marked for deletion
var deletedQuote = ds.QuoteHed.FirstOrDefault(r => r.RowMod == "D");

if (deletedQuote != null)
{
    // Throw a business logic exception to prevent deletion
    // This ensures the user cannot delete sales quotes through the UI or service
    throw new BLException("Sales Quote deletion is not allowed.");
}

Hydroponic Smart System

BPVP Samarinda - this is the description
#include <WiFi.h>
#include <Wire.h>
#include <LiquidCrystal_I2C.h>
#include <DHT.h>
#include <BlynkSimpleEsp32.h>
#include <NTPClient.h>
#include <WiFiUdp.h>