BACE-CICI Multimodal Assessment

# BACE-CICI Multimodal Assessment

## Overview

BACE-CICI-Multimodal-Assessment is a research-style implementation inspired by the paper **“Multimodal Fusion for Analyzing English Learning Behaviors and Competence Assessment” by Long Shi**. The repository explores how multimodal behavioral data and curriculum optimization can be combined to model learner competence and generate adaptive instructional paths.

The framework centers on two key components:

- **Behaviorally Augmented Competence Embedding (BACE)**
  A multimodal encoder that fuses linguistic complexity, behavioral signals, and task descriptors into a unified competence embedding.
- **Competence-Informed Curriculum Inference (CICI)**
  A curriculum strategy that selects tasks based on predicted competence gains, feasibility constraints, and topical coherence, effectively turning curriculum sequencing into a constrained optimization problem over competence trajectories.

This repository is intended for experimentation and demonstration, rather than production use.

---

## Core Ideas

### Behaviorally Augmented Competence Embedding (BACE)

BACE models the interaction between:

- **Task descriptors**
  Including linguistic content, task difficulty, and complexity features.
- **Behavioral logs**
  Response correctness, response latency, revision counts, and other behavioral markers.
- **Latent competence**
  A low-dimensional representation of a learner’s current state, evolving over time as more tasks are completed.

The model pipeline can be summarized as:

1. Encode the task text into a dense representation using a language encoder.
2. Map behavioral tuples into a behavioral embedding space.
3. Project task difficulty and structural descriptors into a complexity embedding.
4. Fuse all representations with a gated mechanism that allows behavior signals to dominate when textual information is ambiguous.
5. Use attention-based pooling over a learner’s history to obtain a global competence embedding.
6. Predict task-level success probabilities and competence trajectories, while regularizing for smooth progression over time.

A toy sketch of the gated fusion step (step 4) appears at the end of this overview.

### Competence-Informed Curriculum Inference (CICI)

CICI treats curriculum design as an optimization problem over the latent competence space:

- It computes **competence gaps** between the learner’s current state and each candidate task.
- A feasibility mask ensures that only tasks within a reasonable difficulty band are considered.
- The algorithm estimates the **expected learning gain** for each feasible task, balancing reinforcement against cognitive load.
- A coherence constraint encourages smooth topic transitions between consecutive tasks.
- Curriculum segments are selected to maximize cumulative reward while discouraging redundant or overly similar tasks.

The result is an adaptive curriculum that aligns short-term difficulty with long-term competence development.

---

## High-Level Architecture

Conceptually, the repository is organized into:

- A **model module** that implements:
  - text encoding
  - behavioral encoding
  - complexity encoding
  - multimodal fusion
  - competence inference and prediction heads
- A **training and curriculum module** that:
  - maintains learner histories
  - updates competence embeddings
  - invokes the CICI strategy to select the next tasks

The implementation focuses on clarity and extensibility rather than maximum efficiency.
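A minimal sketch of the gated fusion step (step 4 above), assuming made-up module names and dimensions rather than the repository's actual code:

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse text, behavior, and complexity embeddings with a learned gate.
    Hypothetical module; names and sizes are illustrative assumptions."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(3 * dim, dim), nn.Sigmoid())
        self.proj = nn.Linear(3 * dim, dim)

    def forward(self, text, behavior, complexity):
        joint = torch.cat([text, behavior, complexity], dim=-1)
        g = self.gate(joint)                     # per-dimension gate in (0, 1)
        fused = g * behavior + (1 - g) * text    # behavior dominates as g -> 1
        return fused + self.proj(joint)          # residual mix of all modalities

# Example: batch of 4 learners with 32-dimensional embeddings
fusion = GatedFusion(dim=32)
out = fusion(torch.randn(4, 32), torch.randn(4, 32), torch.randn(4, 32))
print(out.shape)  # torch.Size([4, 32])
```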
---

## Data Assumptions

The framework assumes that each interaction example contains:

- A **task** with:
  - raw text or a text identifier
  - a scalar difficulty score
  - a vector of complexity features
- A **learner behavior record** with:
  - a binary correctness label
  - response time
  - auxiliary behavioral features (for example, revision counts or hesitation indicators)
- A **learner identifier** so that multiple interactions can be grouped into a chronological session.

Real-world multimodal deployments may additionally integrate audio, video, and gaze signals, but those are abstracted as generic behavioral features in this reference implementation. A sketch of such an interaction record appears at the end of this README.

---

## Intended Usage

This project is intended for:

- Researchers working on:
  - multimodal learning analytics
  - competence modeling
  - adaptive curriculum design
- Engineers prototyping:
  - behavior-aware recommendation for learning platforms
  - learner modeling components inside intelligent tutoring systems
- Educators and learning designers exploring:
  - how behavioral logs can be turned into interpretable competence trajectories
  - how data-driven curricula differ from static placement and sequencing

The code is designed to be modified and extended to fit specific datasets and evaluation protocols.

---

## Limitations

- The implementation is **simplified** relative to the full framework described in the paper and does not directly process raw audio or video streams.
- The quality of competence estimation depends strongly on:
  - dataset size and coverage
  - reliability of behavioral annotations
  - diversity of tasks and difficulty levels
- CICI assumes consistent engagement and does not explicitly model motivational or affective factors, which may be important in real educational environments.

---

## Future Work

Potential extensions include:

- Integration with real multimodal encoders for audio and video.
- More advanced sequence models for long-term competence evolution.
- Richer diversity and fairness constraints in curriculum optimization.
- Dashboards to visualize learner trajectories and curriculum recommendations.
- Interfaces to plug the model into real online learning platforms.

---

## Citation

If this repository or its ideas are useful in your work, please consider citing the original paper:

> Long Shi. *Multimodal Fusion for Analyzing English Learning Behaviors and Competence Assessment*. School of Foreign Languages, Pingdingshan University.

---

## License

This project is provided for academic research and educational exploration only. Before using it in any real educational product, please review ethical, privacy, and fairness implications carefully.
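A small sketch of an interaction record matching the data assumptions above (field names are illustrative, not the repository's actual schema):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class InteractionRecord:
    """One task attempt by one learner (hypothetical schema)."""
    learner_id: str
    task_text: str
    difficulty: float                  # scalar difficulty score
    complexity: List[float]            # vector of complexity features
    correct: bool                      # binary correctness label
    response_time: float               # seconds
    behavior: List[float] = field(default_factory=list)  # e.g. revision counts

record = InteractionRecord("learner-1", "Choose the correct tense.", 0.4,
                           [0.2, 0.7], True, 12.5, [1.0])
```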
from __future__ import annotations

from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F


# ---------------------------------------------------------------------------
# Utility modules
# ---------------------------------------------------------------------------


class MLP(nn.Module):
    """Simple multi-layer perceptron with LayerNorm and residual option."""

    # The source is truncated here; the constructor and forward pass below are
    # a minimal completion consistent with the docstring (LayerNorm plus an
    # optional residual connection).
    def __init__(
        self,
        in_dim: int,
        hidden_dim: int,
        out_dim: int,
        residual: bool = False,
    ) -> None:
        super().__init__()
        self.residual = residual and in_dim == out_dim
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, out_dim),
            nn.LayerNorm(out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.net(x)
        return x + out if self.residual else out

Adaptive-LSAN-Tutor

# Adaptive-LSAN-Tutor

## Overview

Adaptive-LSAN-Tutor is a research-oriented framework inspired by the paper *“Enhancing English Language Acquisition through Deep Learning-Based Adaptive Tutoring Systems”* by **Xi Yang and Ling Li**. The project explores how deep learning can enhance personalized English language learning by combining linguistic modeling, cognitive principles, and multimodal representations.

At its core, the framework consists of two major components:

### **1. Lexical-Semantic Alignment Network (LSAN)**

A specialized neural architecture designed to align lexical, semantic, and phonological information into a unified representation. The model integrates:

- Lexical embeddings
- Contextual semantic representations
- Phonological (IPA-based) embeddings
- Dual-path encoding (Transformer + semantic graph encoder)
- Error-sensitive auxiliary objectives

LSAN is designed to simulate how human instructors provide feedback on grammar, meaning, and pronunciation simultaneously.

### **2. Curriculum-Guided Adaptive Transfer (CGAT)**

A dynamic curriculum strategy that adjusts training difficulty by evaluating:

- Task complexity
- Current model performance
- Transfer benefits across tasks
- Learner-specific latent profiles

CGAT allows the model to learn in a pedagogically meaningful way, gradually exposing it to harder tasks while reinforcing weaker areas.

---

## Key Features

### **Multimodal Linguistic Embedding**

The framework brings together three essential dimensions of language learning:

- **Lexical level**: vocabulary, token identity
- **Semantic level**: contextual meaning and coherence
- **Phonological level**: IPA-based sound features

This gives the model a holistic view of linguistic structure, enabling richer error detection and feedback.

### **Dual-Path Encoder Architecture**

The LSAN architecture includes:

- A **structural Transformer encoder** to capture sequential and syntactic patterns
- A **semantic graph encoder** that models relationships derived from dependency parsing or contextual linking

This dual-path design helps the system reconcile surface-level and deep language features.

### **Error-Aware Auxiliary Tasks**

The model is trained using multiple objectives:

- Grammatical form classification
- Semantic alignment through contrastive learning
- Phoneme-aware contrastive discrimination

These tasks improve robustness and interpretability in an educational context.

### **Curriculum-Guided Learning Strategy**

CGAT uses a dynamic sampling schedule that:

- Encourages early exposure to simple patterns
- Prioritizes tasks where the model performs poorly
- Adjusts difficulty gradually
- Encourages transfer across related linguistic tasks

This learning process mimics how human educators scaffold learning materials (a toy sampling sketch follows this overview).

---

## Theoretical Motivation

The Adaptive-LSAN-Tutor framework is grounded in two core ideas:

1. **Language acquisition is multimodal**
   Human learners use not only words and meaning, but also phonological cues and structural patterns. LSAN attempts to approximate this integration.
2. **Effective instruction follows a curriculum**
   Learners progress from simpler to more complex tasks, but not linearly. CGAT simulates this by evaluating both task difficulty and learner performance.

Together, these ideas support a personalized, data-driven learning process suitable for diverse learner profiles.
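As referenced above, a toy sketch of a CGAT-style sampling schedule; the schedule, names, and parameters are illustrative assumptions, not the paper's exact rule:

```python
import torch

def cgat_sampling_weights(difficulty: torch.Tensor,
                          recent_loss: torch.Tensor,
                          step: int, total_steps: int,
                          temperature: float = 1.0) -> torch.Tensor:
    """Early training favours easy tasks; later training favours tasks
    where the model still performs poorly (high recent loss)."""
    progress = step / max(total_steps, 1)               # 0 -> 1 over training
    score = (1 - progress) * (1 - difficulty) + progress * recent_loss
    return torch.softmax(score / temperature, dim=0)    # sampling distribution

# Example: 5 tasks with difficulties in [0, 1] and current per-task losses
weights = cgat_sampling_weights(torch.tensor([0.1, 0.3, 0.5, 0.7, 0.9]),
                                torch.tensor([0.2, 0.4, 0.9, 0.6, 0.8]),
                                step=100, total_steps=1000)
print(weights)
```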

---

# ✔️ 4. model.py (long, clearly structured code; it does not need to be actually runnable)

```python
"""
model.py
Implementation of the Lexical-Semantic Alignment Network (LSAN)
and supporting components inspired by the research framework.
"""

import torch
import torch.nn as nn
import torch.nn.functional as F


# ---------------------------------------------------
# Embedding Modules
# ---------------------------------------------------

class LexicalEmbedding(nn.Module):
    """Basic word embedding layer."""
    # Truncated in the source; a minimal completion (sizes are placeholders).
    def __init__(self, vocab_size: int, embed_dim: int):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        return self.embedding(token_ids)

CreditGraphFormer & RegulaLogic: Privacy-Aware Federated Enterprise Credit Rating

# CreditGraphFormer & RegulaLogic: Privacy-Aware Federated Enterprise Credit Rating This repository provides a reference implementation of **CreditGraphFormer** and **RegulaLogic**, a unified framework for **privacy-aware federated enterprise credit rating with fiscal big data**. It is inspired by the work *"Privacy-Aware Federated Learning for Enterprise Credit Rating with Fiscal Big Data"* by **Xuan Huang**, which integrates fiscal knowledge graphs, transformer-based graph encoders, and regulatory reasoning modules to deliver accurate and interpretable credit scores under strict privacy and compliance constraints. :contentReference[oaicite:1]{index=1} The framework is designed for scenarios where enterprise tax bureaus, financial institutions, and regulators must collaborate on credit modeling without centralizing raw fiscal data. --- ## 1. Motivation Modern enterprise credit rating faces three key challenges: - **Data Privacy** Fiscal records, tax filings, and transaction logs are highly sensitive. Regulations such as GDPR and local financial data laws restrict direct data sharing and central aggregation. - **Heterogeneous Fiscal Big Data** Enterprise behavior is represented by multi-source, multi-view data: - Tax declarations, invoices, and audit records - Corporate registration information and legal events - Supply-chain and transaction networks These data are high-dimensional, sparse, and structurally complex. - **Regulatory Compliance & Interpretability** Credit ratings must be: - Transparent to regulators - Consistent with policy rules and audit histories - Robust against distribution shifts in decentralized environments CreditGraphFormer and RegulaLogic together address these issues by combining **heterogeneous graph representation learning**, **temporal stability modeling**, **federated optimization**, and **rule-aligned reasoning**. --- ## 2. Conceptual Overview The framework consists of three conceptual layers: 1. **CreditGraphFormer** A transformer-style graph encoder operating on a **fiscal knowledge graph**: - Nodes: enterprises and regulatory entities - Edges: typed relations such as invoice issuance, supply-chain links, equity relations, legal representation, etc. - Attributes: revenue, VAT behavior, audit frequencies, penalties, industry and regional codes :contentReference[oaicite:2]{index=2} Key ideas: - Multimodal feature projection into multiple fiscal “views” - Relation-specific message passing with attention over neighbors - Regulatory-aware transformer blocks with compliance masks - Temporal stability embeddings based on indicator volatility over time 2. **RegulaLogic** A compliance-driven reasoning layer that adjusts neural predictions using: - Compliance coefficients (filing timeliness, penalty frequency, audit signals) - Constraint masks and deterministic downgrade rules - Sector-based rating floors and policy penalties - Temporal smoothing of ratings and confidence calibration :contentReference[oaicite:3]{index=3} 3. **Federated Learning Pipeline** A privacy-preserving training scheme where: - Multiple clients (institutions) keep data local - Models are trained locally and aggregated securely - Optional differential privacy and secure aggregation mechanisms are used to prevent leakage of sensitive gradients. --- ## 3. Key Components ### 3.1 Multimodal Feature Projection Enterprise attributes are projected into multiple fiscal “views” (e.g., taxation profile, invoice behavior, audit trail): - Each view has its own projection matrix. 
- View embeddings are concatenated into a composite representation. - Relation-specific attention modules aggregate information from neighbors based on relation type. :contentReference[oaicite:4]{index=4} This allows the encoder to exploit domain structure instead of treating all features as a flat vector. --- ### 3.2 Regulatory-Aware Transformer Encoder CreditGraphFormer uses a transformer-style encoder that: - Operates on aggregated relational messages. - Applies a **regulatory mask** that: - Suppresses contributions from non-compliant or high-risk neighbors. - Enforces compliance-aware attention patterns. - Produces contextualized node embeddings that already respect policy-level constraints. This design connects low-level fiscal indicators with high-level regulatory semantics. --- ### 3.3 Temporal Stability Embedding Credit behavior is inherently temporal. The framework: - Computes temporal differences of fiscal indicators over a time window. - Aggregates volatility into a **stability embedding**. - Injects stability signals into the final representation, allowing: - Distinction between structurally stable and highly volatile enterprises. - Better robustness to noisy or transient events. :contentReference[oaicite:5]{index=5} --- ### 3.4 RegulaLogic Reasoning Layer RegulaLogic refines CreditGraphFormer outputs according to explicit rules: - **Compliance Coefficient and Constraint Mask** - Enterprises with low compliance (e.g., late filings, frequent penalties) receive rating penalties. - A binary mask gates compliant vs. non-compliant entities. - **Audit-Based Downgrades** - When audit risk exceeds a threshold, ratings are deterministically downgraded by at least a fixed step. - **Sector-Based Rating Floors** - Each industry sector can define a minimum rating floor, ensuring sector-consistent decisions. - **Temporal Smoothing and Policy Penalties** - Historical predictions are smoothed with an exponential decay kernel. - Policy rules generate a penalty score that further adjusts the smoothed rating. :contentReference[oaicite:6]{index=6} - **Confidence Calibration** - High-entropy predictions can be replaced with sector median ratings, improving stability and regulatory trust. --- ## 4. Typical Workflow A typical usage pattern for this project is: 1. **Data Preparation** - Build a fiscal knowledge graph from: - Tax records, invoices, registrations, and audit logs. - Encode: - Node attributes (enterprise features). - Relation types (edge labels). - Temporal slices of indicators. 2. **Federated Setup (Optional)** - Partition data by institution (e.g., different tax bureaus or banks). - Initialize federated clients that each train a local CreditGraphFormer. - Configure secure aggregation and optional differential privacy. 3. **Model Training** - Train CreditGraphFormer with an **ordinal loss** for rating labels. - Add regularization terms for graph smoothness and compliance alignment. - Periodically run federated aggregation if in decentralized mode. 4. **RegulaLogic Inference** - Feed encoder outputs and metadata (compliance coefficients, sector tags, audit signals) into RegulaLogic. - Obtain adjusted logits, calibrated probabilities, and final discrete ratings. 5. **Evaluation** - Use metrics such as: - Accuracy, macro and weighted F1, AUC. - Default detection rates and precision@k. - Stress-test under label noise, distribution shift, and different privacy budgets. --- ## 5. Example Use Cases - Enterprise tax and credit rating for government fiscal regulators. 
- Credit risk assessment for financial institutions with strong privacy constraints. - Cross-region or cross-sector credit modeling without pooling raw data. - Research on trustworthy and regulation-compliant AI in financial governance. --- ## 6. Limitations and Future Directions - **Rule Maintenance** RegulaLogic currently relies on manually specified policy rules. As regulations evolve, rules must be updated or partially learned from data. - **Federated Assumptions** The framework assumes reasonably honest participants and stable communication. Adversarial clients and highly unstable networks require more robust protocols. - **Scalability** Very large fiscal graphs and complex rule sets may require: - Hierarchical modeling strategies. - Further compression and sampling techniques. Future work may explore dynamic rule learning, stronger cryptographic protections, and automated monitoring of model–policy alignment.
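As referenced in Section 3.4, here is a toy sketch of three RegulaLogic rules (temporal smoothing, audit-based downgrades, sector floors); tensor layouts, thresholds, and the function name are illustrative assumptions:

```python
import torch

def regulalogic_adjust(logits: torch.Tensor,
                       history: torch.Tensor,
                       audit_risk: torch.Tensor,
                       sector_floor: torch.Tensor,
                       decay: float = 0.8,
                       risk_threshold: float = 0.7) -> torch.Tensor:
    """Toy rule layer. logits: (N, C) current rating logits (higher class =
    better rating); history: (N, T, C) past logits, oldest first;
    audit_risk: (N,) in [0, 1]; sector_floor: (N,) minimum rating class."""
    # Exponential decay over the history window (most recent weighs most).
    T = history.shape[1]
    weights = decay ** torch.arange(T - 1, -1, -1, dtype=logits.dtype)
    past = (weights[None, :, None] * history).sum(1) / weights.sum()
    smoothed = 0.5 * logits + 0.5 * past

    rating = smoothed.argmax(dim=-1)
    # Deterministic one-step downgrade when audit risk exceeds the threshold.
    rating = torch.where(audit_risk > risk_threshold,
                         (rating - 1).clamp(min=0), rating)
    # Sector-based rating floor.
    return torch.maximum(rating, sector_floor)

ratings = regulalogic_adjust(torch.randn(4, 5), torch.randn(4, 3, 5),
                             torch.tensor([0.9, 0.1, 0.8, 0.2]),
                             torch.tensor([1, 0, 2, 0]))
print(ratings)
```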
from __future__ import annotations

from dataclasses import dataclass
from typing import Dict, Any, Optional, List, Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F


# ---------------------------------------------------------------------------
# Data containers
# ---------------------------------------------------------------------------


@dataclass
class CreditGraphBatch:
    """
    Container for a mini-batch of fiscal knowledge graph data.

    Shapes (typical; the source is truncated here, so the fields below are a
    minimal, assumed completion):
        node_features: (num_nodes, feat_dim) enterprise attribute vectors
        adjacency:     relation name -> (num_nodes, num_nodes) matrix
        temporal:      (num_nodes, window, feat_dim) indicator history
        labels:        (num_nodes,) ordinal rating labels
    """

    node_features: torch.Tensor
    adjacency: Dict[str, torch.Tensor]
    temporal: Optional[torch.Tensor] = None
    labels: Optional[torch.Tensor] = None

FairCoDNet: Fairness-Constrained Financial Regression Model

# FairCoDNet: Fairness-Constrained Financial Regression Model

FairCoDNet is an interpretable and fairness-aware financial regression architecture designed to provide equitable, transparent, and high-performance predictions under sensitive-attribute constraints. The framework is built upon the principles described in the paper *“Study on the enterprise tax risk identification model based on multi-source fiscal and tax data”* by **Xuan Huang**, integrating multi-source data fusion, fairness-driven optimization, and convex reformulations to ensure strong predictive ability without violating fairness constraints.

This repository aims to provide an easy-to-extend implementation of the model components, including coefficient-decoupled regression layers, fairness-aware CoD constraints, and a novel Convex Reformulation & Constraint-Aware Optimization Dynamics (CAOD) framework.

---

## ✨ Key Features

### **Fairness via CoD Constraint**

FairCoDNet explicitly restricts the proportion of predictive variance that can be explained by sensitive attributes such as gender, age, region, or protected financial indicators. The fairness constraint is implemented using:

- Coefficient of Determination (CoD) between predictions and sensitive attributes
- Tunable fairness threshold ε
- Second-order cone (SOC) reformulation for tractable optimization

A toy CoD penalty sketch appears after the code below.

---

### **Coefficient Decoupling Strategy**

The regression head decomposes prediction coefficients into:

- A sensitive feature component
- A non-sensitive feature component

The mixing factor γ ∈ [0, 1] allows smooth control over how much sensitive information the model is allowed to use. This design enables:

- Transparent fairness–accuracy trade-off
- Real-time audit capability
- Fully interpretable model components

---

### **Convex Reformulation**

FairCoDNet transforms the fairness-constrained QCQP into a convex problem via:

- Variable lifting
- Coefficient reparameterization
- Conic relaxation

This guarantees:

- Global optimality under mild assumptions
- Stable training
- Compatible integration with existing convex solvers

---

### **Constraint-Aware Optimization Dynamics (CAOD)**

CAOD provides:

- Dynamic fairness trajectory monitoring
- Adaptive rescaling of sensitive coefficients
- Coupled updates of (α, β, γ)
- Projection into feasible fairness regions

This ensures the model does not drift toward unfair solutions while maintaining predictive performance.

---

## 🧠 Applications

FairCoDNet can be applied in various financial and risk-sensitive contexts:

- **Enterprise tax-risk identification**
- **Credit scoring & loan underwriting**
- **Corporate risk modeling**
- **Financial QA and legal risk assessment**
- **Regulated environments requiring transparency**

---

## 📌 Requirements

- Python 3.8+
- PyTorch ≥ 1.10
- NumPy, SciPy
- CVXPY (for convex refinement)
- Optional: CUDA for acceleration
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoefficientDecouplingLayer(nn.Module):
    """
    Implements the coefficient decoupling strategy:
    y_hat = γ * (s^T θ_s) + (1 - γ) * (u^T θ_u)
    """

    def __init__(self, dim_sensitive, dim_nonsensitive):
        super().__init__()
        self.theta_s = nn.Parameter(torch.randn(dim_sensitive))
        self.theta_u = nn.Parameter(torch.randn(dim_nonsensitive))
        # Truncated in the source; a scalar mixing factor initialised at 0.5
        # is a minimal completion consistent with the docstring.
        self.gamma = nn.Parameter(torch.tensor(0.5))

    def forward(self, s: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        # Clamp so the mixing factor stays in [0, 1] as the docstring requires.
        g = self.gamma.clamp(0.0, 1.0)
        return g * (s @ self.theta_s) + (1.0 - g) * (u @ self.theta_u)
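The CoD constraint itself is cut off in the snippet above. As a hedged sketch, the coefficient of determination between predictions and a single sensitive attribute can be computed and softly penalised as below; this is a simple stand-in for the paper's SOC reformulation, and all names are illustrative:

```python
import torch

def cod(pred: torch.Tensor, sensitive: torch.Tensor) -> torch.Tensor:
    """R^2 of a least-squares fit of pred on one sensitive attribute:
    the share of prediction variance the attribute explains (both 1-D)."""
    s = sensitive - sensitive.mean()
    y = pred - pred.mean()
    beta = (s @ y) / (s @ s).clamp_min(1e-12)
    residual = y - beta * s
    return 1.0 - (residual @ residual) / (y @ y).clamp_min(1e-12)

def fairness_penalty(pred, sensitive, eps: float = 0.05) -> torch.Tensor:
    # Penalise only the CoD mass above the fairness threshold epsilon.
    return torch.relu(cod(pred, sensitive) - eps)

pred = torch.randn(128)
sensitive = 0.8 * pred + 0.2 * torch.randn(128)  # deliberately correlated
print(cod(pred, sensitive), fairness_penalty(pred, sensitive))
```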

GridFormer & InformaNet: Spatio-Informational Intelligence for Power Grid Informatization

# GridFormer & InformaNet: Spatio-Informational Intelligence for Power Grid Informatization This repository provides a reference implementation of **GridFormer** and the **InformaNet Policy Layer**, a unified modeling and decision framework for modern power grid informatization. The goal of this project is to bridge **physical grid dynamics**, **information flow**, and **adaptive control** into a single, coherent learning system. The implementation is inspired by the paper *"Big Data Platform Architecture and Information Flow Mechanisms in Power Grid Informatization"* and its proposed graph-based transformer architecture for spatio-temporal reasoning in cyber-physical power systems. --- ## 1. Motivation Modern power grids are evolving into highly digitized, cyber-physical infrastructures. They are: - Geographically distributed - Topologically complex - Continuously monitored by heterogeneous sensors and control devices - Subject to uncertainty from renewables, changing demand, and communication delays Traditional centralized or purely data-driven models: - Struggle to scale to large, dynamic networks - Often ignore the explicit grid topology - Provide limited interpretability for operators - Are not tightly integrated with control and operational constraints This repository focuses on an architecture that explicitly encodes: - The **physical network graph** of the grid - The **information and communication graph** - The **temporal evolution** of system states - A **policy layer** that respects operational feasibility and domain constraints --- ## 2. Core Ideas The project is organized around two major components. ### 2.1 GridFormer GridFormer is a spatio-informational modeling module that: - Represents the grid as a graph of buses and transmission lines - Integrates multimodal node features (physical states, measurements, control inputs) - Uses dual-graph propagation over: - The physical grid topology - The communication / information topology - Employs a temporal transformer to capture long-range time dependencies - Outputs: - Next-step state estimates - Candidate control actions Key design aspects: - Multimodal encoder for fusing physical, informational, and control features - Graph attention layers that propagate messages across the physical and communication graphs - Transformer-based temporal reasoning over a rolling time window - Decoders for state prediction and control proposals ### 2.2 InformaNet Policy Layer The InformaNet policy layer sits on top of GridFormer and transforms raw control proposals into **feasible**, **risk-aware**, and **structurally consistent** actions. It provides: - A **domain-constrained decision manifold**, approximating operational constraints such as voltage bounds and line flow limits - **Spatio-temporal action refinement**, which adjusts actions based on recent trajectories and residual errors - **Graph-based priors**, enforcing smoothness and structural coherence across electrically or topologically close nodes - **Risk-aware diversification**, which uses uncertainty estimates to inject controlled stochasticity for robustness --- ## 3. Conceptual Workflow The typical end-to-end workflow for this project looks like this: 1. **Data ingestion** - Load time-series data of grid states and measurements. - Load or construct graph descriptions of: - Physical grid topology - Communication / information topology 2. **Feature construction** - Build multimodal node features, including: - Physical states (voltages, angles, flows, etc.) 
- Measurements and sensor readings - Local or regional control-related signals 3. **Spatio-temporal modeling with GridFormer** - Encode node features with the multimodal encoder. - Propagate information across both graphs using dual-graph attention. - Stack a temporal transformer over sliding windows of historical states. - Decode: - Next-step state predictions - Candidate control action proposals 4. **Policy refinement with InformaNet** - Project actions into an approximate feasible set using simple domain-inspired operations. - Refine actions using recent state and prediction history. - Adjust actions with graph-based regularization. - Optionally diversify actions using uncertainty-aware perturbations. 5. **Deployment or simulation** - Use the final actions in: - Power system simulators - Digital twins - Decision-support tools - Log performance metrics such as forecast error, constraint violations, and stability indicators. --- ## 4. Key Features - Explicit handling of **two coupled graphs**: - The physical power network - The information / communication network - Support for **multimodal inputs**, including: - Physical measurements - Control and scheduling signals - High-level system descriptors - **Temporal reasoning** via transformer-style mechanisms for: - Capturing long-term dependencies - Handling variable-length histories - **Policy layer** designed around: - Operational constraints - Structural priors from grid topology - Risk-aware decision diversification - Modular design: - Components can be replaced or extended (for example, using different GNNs, temporal models, or policy mechanisms) --- ## 5. Getting Started High-level steps to use this codebase: 1. Install Python and standard machine learning dependencies (for example PyTorch). 2. Prepare datasets that contain: - Time sequences of node-level features - Physical adjacency matrices for the grid - Communication adjacency matrices (or approximations) 3. Configure model dimensions and hyperparameters. 4. Instantiate the model, construct data loaders, and implement a training loop. 5. Evaluate model performance using application-specific metrics such as: - Prediction error metrics - Rate of constraint violations - Stability or resilience indicators --- ## 6. Potential Use Cases - Short-term state forecasting in power grids - Proactive control and decision support for grid operators - Analysis of information flow and its impact on operational robustness - Research on cyber-physical system integration in smart grids - Educational demos on graph-based and transformer-based models for infrastructure systems --- ## 7. Disclaimer This repository provides a **research-oriented reference implementation**. It is not certified for real-time operation in critical infrastructure. Any deployment in real-world grids must include additional validation, safety layers, and compliance with local regulations and operational standards.
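As referenced in Section 2.2, a toy sketch of the InformaNet feasibility projection and graph smoothing; bounds, the smoothing rule, and all names are illustrative assumptions:

```python
import torch

def project_actions(actions: torch.Tensor,
                    lower: torch.Tensor,
                    upper: torch.Tensor,
                    adjacency: torch.Tensor,
                    smooth: float = 0.3) -> torch.Tensor:
    """Clamp raw control proposals into per-node operational bounds, then
    smooth them over the physical graph so topologically close nodes
    receive coherent actions.

    actions: (N, d) raw proposals; lower/upper: (N, d) per-node bounds;
    adjacency: (N, N) row-normalised physical adjacency."""
    feasible = actions.clamp(min=lower, max=upper)            # box projection
    neighbour_avg = adjacency @ feasible                      # graph smoothing
    refined = (1 - smooth) * feasible + smooth * neighbour_avg
    return refined.clamp(min=lower, max=upper)                # stay feasible

A = torch.tensor([[0.0, 1.0, 0.0], [0.5, 0.0, 0.5], [0.0, 1.0, 0.0]])
acts = project_actions(torch.randn(3, 2), -torch.ones(3, 2), torch.ones(3, 2), A)
print(acts)
```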
from __future__ import annotations

from dataclasses import dataclass
from typing import Optional, Tuple, Dict, Any

import torch
import torch.nn as nn
import torch.nn.functional as F


# ---------------------------------------------------------------------------
# Utility dataclasses
# ---------------------------------------------------------------------------


@dataclass
class GridBatch:
    """
    Container for a mini-batch of grid data.

    Shapes (typical; the source is truncated here, so the fields below are a
    minimal, assumed completion):
        x:        (batch, num_nodes, window, feat_dim) node feature histories
        adj_phys: (num_nodes, num_nodes) physical grid adjacency
        adj_info: (num_nodes, num_nodes) communication-graph adjacency
        targets:  (batch, num_nodes, feat_dim) next-step states
    """

    x: torch.Tensor
    adj_phys: torch.Tensor
    adj_info: torch.Tensor
    targets: Optional[torch.Tensor] = None

SEO/UX Copywriter AI Prompt

You are an expert UX/SEO copywriter. 
You have a wealth of organic SEO knowledge and you understand how to make websites perform for search engines and agentic search.
You are also an established UX copywriter: you know how to strike the perfect balance between converting a user and making the copy perform well in the digital space.
You write in plain English that an 8-year-old can understand, you don't use em dashes, and you exclusively write in British English.

How do you show a test report in Playwright?

npx playwright show-report

Add a custom field and the post title to the CF7 mail body

add_action( 'wpcf7_before_send_mail', function( $contact_form ) {

    $submission = WPCF7_Submission::get_instance();
    if ( ! $submission ) return;

    $url = $submission->get_meta( 'url' );
    if ( ! $url ) return;

    $post_id = url_to_postid( $url );
    if ( ! $post_id ) return;

    $post_title = get_the_title( $post_id );

    if ( function_exists( 'get_field' ) ) {
        $download_url = get_field( 'pdf_link', $post_id );
    } else {
        $download_url = get_post_meta( $post_id, 'pdf_link', true );
    }

    // The snippet is truncated in the source; a common way to finish it is to
    // replace placeholder tags in the mail body with the resolved values.
    // The tag names [post_title] and [download_url] are assumptions.
    $mail = $contact_form->prop( 'mail' );
    $mail['body'] = str_replace(
        array( '[post_title]', '[download_url]' ),
        array( $post_title, (string) $download_url ),
        $mail['body']
    );
    $contact_form->set_properties( array( 'mail' => $mail ) );
} );

mailhog

OK, let me pull everything together into **one short, clear, ready-to-run guide**, from installing MailHog with Homebrew through to testing WordPress mail, covering both the Docker case and running directly on your machine.

---

# 🚀 FULL GUIDE: INSTALL MAILHOG + SET UP WORDPRESS (DOCKER & NON-DOCKER)

## 🟩 PART 1 — Install MailHog with Homebrew

### 1. Install

```bash
brew install mailhog
```

### 2. Run MailHog

```bash
mailhog
```

### 3. Open the interface

* Web UI: **[http://localhost:8025](http://localhost:8025)**
* SMTP: **localhost:1025** (MailHog's default SMTP port)
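Once MailHog is running, a quick way to verify the setup (assuming the default SMTP port 1025) is to send a test message and check the web UI:

```python
# Minimal smoke test: send one mail into MailHog and look for it at
# http://localhost:8025 (addresses below are placeholders).
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "test@example.com"
msg["To"] = "inbox@example.com"
msg["Subject"] = "MailHog smoke test"
msg.set_content("If you can read this in the web UI, MailHog works.")

with smtplib.SMTP("localhost", 1025) as smtp:
    smtp.send_message(msg)
```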

CloudShell Quick Reference

curl -fsSL https://cli.kiro.dev/install | bash

717. 1-bit and 2-bit Characters

We have two special characters: The first character can be represented by one bit 0. The second character can be represented by two bits (10 or 11). Given a binary array bits that ends with 0, return true if the last character must be a one-bit character.
/**
 * @param {number[]} bits
 * @return {boolean}
 */
var isOneBitCharacter = function(bits) {
    // Start from the first bit
    let i = 0;

    // Traverse until the second-to-last bit
    // (because the last bit is always 0, we want to see if it's consumed or not)
    while (i < bits.length - 1) {
        if (bits[i] === 1) {
            // If we see a '1', it must form a two-bit character (10 or 11)
            // So we skip TWO positions
            i += 2;
        } else {
            // If we see a '0', it's a standalone one-bit character,
            // so we skip ONE position
            i += 1;
        }
    }

    // If the loop stopped exactly on the last bit, nothing consumed it,
    // so the last character must be a one-bit character
    return i === bits.length - 1;
};

Project folder file structure

import os
from pathlib import Path

# -------------------------
# Define project name (main package)
# -------------------------
project_name = "src"  # More descriptive than "src"

# -------------------------
# Define additional folders
# -------------------------
cicd_folder       = "Github"
configs_folder    = "configs"
data_folder       = "data"
notebooks_folder  = "notebooks"
static_css_folder = "static/css"
templates_folder  = "templates"
tests_folder      = "tests"
scripts_folder    = "scripts"

# -------------------------
# Create the folders (the script is truncated at this point; the loop below
# is a minimal, assumed completion that just creates each directory)
# -------------------------
for folder in [
    project_name, cicd_folder, configs_folder, data_folder,
    notebooks_folder, static_css_folder, templates_folder,
    tests_folder, scripts_folder,
]:
    Path(folder).mkdir(parents=True, exist_ok=True)
    print(f"Created folder: {folder}")

Chatbot

print("Hello")
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_classic.chains import ConversationChain
from langchain_classic.memory import ConversationBufferMemory
from langchain_classic.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_classic.schema import SystemMessage,HumanMessage
# Load environment variables
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("OPENAI_API_KEY is not set in the environment")

LangChain Core Package


## Messages

from langchain_core.messages import (
    AIMessage,
    AIMessageChunk,
    BaseMessage,
    BaseMessageChunk,
    HumanMessage,
    HumanMessageChunk,
    SystemMessage,
    SystemMessageChunk,
    ToolMessage,
    ToolMessageChunk,
    FunctionMessage,
    FunctionMessageChunk,
)

## Prompts

from langchain_core.prompts import (
    PromptTemplate,
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    AIMessagePromptTemplate,
    MessagesPlaceholder,
)
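A quick usage sketch combining a few of these imports (the prompt text is illustrative):

```python
from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder("history"),
    ("human", "{question}"),
])

messages = prompt.format_messages(
    history=[HumanMessage(content="Hi!"), AIMessage(content="Hello!")],
    question="What does MessagesPlaceholder do?",
)
print(messages)
```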

Chatbot Prompt

🚀 Project Prompt: Build a Smart End-to-End Chatbot
🧩 Objective
Design and implement a robust, modular, and intelligent chatbot system using modern AI and web technologies. The chatbot should be capable of handling dynamic conversations, storing history, and providing a clean user interface.
🛠️ Tech Stack

🧠 Brain: LangChain + OpenAI (for LLM orchestration and prompt management)
⚙️ Backend: FastAPI (for serving the chatbot API)
💬 Frontend: Streamlit (for interactive chat UI)
🔒 Security: .

Correlation Matrix

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Load your dataset
df = pd.read_csv("your_data.csv")  # Replace with your actual file

# Compute correlation matrix
corr_matrix = df.corr(numeric_only=True)

# Plot heatmap
plt.figure(figsize=(10, 8))
sns.heatmap(corr_matrix, annot=True, fmt=".2f", cmap="coolwarm", linewidths=0.5)
plt.title("Correlation Heatmap")
plt.tight_layout()
plt.show()