CreditGraphFormer & RegulaLogic: Privacy-Aware Federated Enterprise Credit Rating

# CreditGraphFormer & RegulaLogic: Privacy-Aware Federated Enterprise Credit Rating

This repository provides a reference implementation of **CreditGraphFormer** and **RegulaLogic**, a unified framework for **privacy-aware federated enterprise credit rating with fiscal big data**. It is inspired by the work *"Privacy-Aware Federated Learning for Enterprise Credit Rating with Fiscal Big Data"* by **Xuan Huang**, which integrates fiscal knowledge graphs, transformer-based graph encoders, and regulatory reasoning modules to deliver accurate and interpretable credit scores under strict privacy and compliance constraints.

The framework is designed for scenarios where enterprise tax bureaus, financial institutions, and regulators must collaborate on credit modeling without centralizing raw fiscal data.

---

## 1. Motivation

Modern enterprise credit rating faces three key challenges:

- **Data Privacy**
  Fiscal records, tax filings, and transaction logs are highly sensitive. Regulations such as GDPR and local financial data laws restrict direct data sharing and central aggregation.
- **Heterogeneous Fiscal Big Data**
  Enterprise behavior is represented by multi-source, multi-view data:
  - Tax declarations, invoices, and audit records
  - Corporate registration information and legal events
  - Supply-chain and transaction networks

  These data are high-dimensional, sparse, and structurally complex.
- **Regulatory Compliance & Interpretability**
  Credit ratings must be:
  - Transparent to regulators
  - Consistent with policy rules and audit histories
  - Robust against distribution shifts in decentralized environments

CreditGraphFormer and RegulaLogic together address these issues by combining **heterogeneous graph representation learning**, **temporal stability modeling**, **federated optimization**, and **rule-aligned reasoning**.

---

## 2. Conceptual Overview

The framework consists of three conceptual layers:

1. **CreditGraphFormer**
   A transformer-style graph encoder operating on a **fiscal knowledge graph**:
   - Nodes: enterprises and regulatory entities
   - Edges: typed relations such as invoice issuance, supply-chain links, equity relations, legal representation, etc.
   - Attributes: revenue, VAT behavior, audit frequencies, penalties, industry and regional codes

   Key ideas:
   - Multimodal feature projection into multiple fiscal "views"
   - Relation-specific message passing with attention over neighbors
   - Regulatory-aware transformer blocks with compliance masks
   - Temporal stability embeddings based on indicator volatility over time

2. **RegulaLogic**
   A compliance-driven reasoning layer that adjusts neural predictions using:
   - Compliance coefficients (filing timeliness, penalty frequency, audit signals)
   - Constraint masks and deterministic downgrade rules
   - Sector-based rating floors and policy penalties
   - Temporal smoothing of ratings and confidence calibration

3. **Federated Learning Pipeline**
   A privacy-preserving training scheme where:
   - Multiple clients (institutions) keep data local
   - Models are trained locally and aggregated securely
   - Optional differential privacy and secure aggregation mechanisms are used to prevent leakage of sensitive gradients

---

## 3. Key Components

### 3.1 Multimodal Feature Projection

Enterprise attributes are projected into multiple fiscal "views" (e.g., taxation profile, invoice behavior, audit trail):

- Each view has its own projection matrix.
- View embeddings are concatenated into a composite representation.
- Relation-specific attention modules aggregate information from neighbors based on relation type.

This allows the encoder to exploit domain structure instead of treating all features as a flat vector.
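A minimal sketch of these two ideas, with illustrative module names and toy dimensions rather than the paper's exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiViewProjection(nn.Module):
    """Project raw enterprise features into several fiscal 'views'."""

    def __init__(self, in_dim: int, view_dim: int, num_views: int):
        super().__init__()
        # One projection matrix per view (e.g., taxation, invoices, audits).
        self.views = nn.ModuleList([nn.Linear(in_dim, view_dim) for _ in range(num_views)])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate per-view embeddings into a composite representation.
        return torch.cat([F.relu(v(x)) for v in self.views], dim=-1)


class RelationAttention(nn.Module):
    """Attention over neighbors, with separate scoring weights per relation type."""

    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        self.score = nn.Parameter(torch.randn(num_relations, dim) * 0.01)

    def forward(self, h: torch.Tensor, neighbors: torch.Tensor, rel: int) -> torch.Tensor:
        # neighbors: (num_neighbors, dim); scores come from the relation's weights.
        att = F.softmax(neighbors @ self.score[rel], dim=0)  # (num_neighbors,)
        return h + att @ neighbors                           # aggregated message
```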
---

### 3.2 Regulatory-Aware Transformer Encoder

CreditGraphFormer uses a transformer-style encoder that:

- Operates on aggregated relational messages.
- Applies a **regulatory mask** that:
  - Suppresses contributions from non-compliant or high-risk neighbors.
  - Enforces compliance-aware attention patterns.
- Produces contextualized node embeddings that already respect policy-level constraints.

This design connects low-level fiscal indicators with high-level regulatory semantics.

---

### 3.3 Temporal Stability Embedding

Credit behavior is inherently temporal. The framework:

- Computes temporal differences of fiscal indicators over a time window.
- Aggregates volatility into a **stability embedding**.
- Injects stability signals into the final representation, allowing:
  - Distinction between structurally stable and highly volatile enterprises.
  - Better robustness to noisy or transient events.

---

### 3.4 RegulaLogic Reasoning Layer

RegulaLogic refines CreditGraphFormer outputs according to explicit rules (sketched after this list):

- **Compliance Coefficient and Constraint Mask**
  - Enterprises with low compliance (e.g., late filings, frequent penalties) receive rating penalties.
  - A binary mask gates compliant vs. non-compliant entities.
- **Audit-Based Downgrades**
  - When audit risk exceeds a threshold, ratings are deterministically downgraded by at least a fixed step.
- **Sector-Based Rating Floors**
  - Each industry sector can define a minimum rating floor, ensuring sector-consistent decisions.
- **Temporal Smoothing and Policy Penalties**
  - Historical predictions are smoothed with an exponential decay kernel.
  - Policy rules generate a penalty score that further adjusts the smoothed rating.
- **Confidence Calibration**
  - High-entropy predictions can be replaced with sector median ratings, improving stability and regulatory trust.
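A minimal, hypothetical sketch of how such rule adjustments could be applied to model ratings; the thresholds, the "higher is better" rating scale, and all names here are assumptions rather than the paper's exact specification:

```python
import torch


def regulalogic_adjust(ratings: torch.Tensor,
                       compliance: torch.Tensor,
                       audit_risk: torch.Tensor,
                       sector_floor: torch.Tensor,
                       prev_ratings: torch.Tensor,
                       audit_threshold: float = 0.8,
                       downgrade_step: float = 1.0,
                       decay: float = 0.7) -> torch.Tensor:
    """Apply compliance masks, audit downgrades, sector floors, and smoothing."""
    adjusted = ratings.clone().float()

    # Constraint mask: non-compliant enterprises take a rating penalty.
    adjusted = torch.where(compliance < 0.5, adjusted - 1.0, adjusted)

    # Deterministic audit-based downgrade above a risk threshold.
    adjusted = torch.where(audit_risk > audit_threshold,
                           adjusted - downgrade_step, adjusted)

    # Sector-based rating floors keep decisions sector-consistent.
    adjusted = torch.maximum(adjusted, sector_floor.float())

    # Exponential temporal smoothing against the previous rating.
    adjusted = decay * adjusted + (1.0 - decay) * prev_ratings.float()
    return adjusted.clamp(min=0.0)
```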
---

## 4. Typical Workflow

A typical usage pattern for this project is:

1. **Data Preparation**
   - Build a fiscal knowledge graph from:
     - Tax records, invoices, registrations, and audit logs.
   - Encode:
     - Node attributes (enterprise features).
     - Relation types (edge labels).
     - Temporal slices of indicators.
2. **Federated Setup (Optional)**
   - Partition data by institution (e.g., different tax bureaus or banks).
   - Initialize federated clients that each train a local CreditGraphFormer.
   - Configure secure aggregation and optional differential privacy.
3. **Model Training**
   - Train CreditGraphFormer with an **ordinal loss** for rating labels.
   - Add regularization terms for graph smoothness and compliance alignment.
   - Periodically run federated aggregation if in decentralized mode (see the sketch after this section).
4. **RegulaLogic Inference**
   - Feed encoder outputs and metadata (compliance coefficients, sector tags, audit signals) into RegulaLogic.
   - Obtain adjusted logits, calibrated probabilities, and final discrete ratings.
5. **Evaluation**
   - Use metrics such as:
     - Accuracy, macro and weighted F1, AUC.
     - Default detection rates and precision@k.
   - Stress-test under label noise, distribution shift, and different privacy budgets.

---

## 5. Example Use Cases

- Enterprise tax and credit rating for government fiscal regulators.
- Credit risk assessment for financial institutions with strong privacy constraints.
- Cross-region or cross-sector credit modeling without pooling raw data.
- Research on trustworthy and regulation-compliant AI in financial governance.

---

## 6. Limitations and Future Directions

- **Rule Maintenance**
  RegulaLogic currently relies on manually specified policy rules. As regulations evolve, rules must be updated or partially learned from data.
- **Federated Assumptions**
  The framework assumes reasonably honest participants and stable communication. Adversarial clients and highly unstable networks require more robust protocols.
- **Scalability**
  Very large fiscal graphs and complex rule sets may require:
  - Hierarchical modeling strategies.
  - Further compression and sampling techniques.

Future work may explore dynamic rule learning, stronger cryptographic protections, and automated monitoring of model–policy alignment.
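For the federated aggregation mentioned in workflow step 3, a minimal FedAvg-style sketch; the client `state_dicts` interface and sample-count weighting are assumptions, and secure aggregation and differential privacy are deliberately omitted:

```python
from typing import Dict, List

import torch


def fedavg(state_dicts: List[Dict[str, torch.Tensor]],
           num_samples: List[int]) -> Dict[str, torch.Tensor]:
    """Weighted average of client model parameters (plain FedAvg)."""
    total = float(sum(num_samples))
    global_state: Dict[str, torch.Tensor] = {}
    for key in state_dicts[0]:
        # Each client's contribution is weighted by its local sample count.
        global_state[key] = sum(
            (n / total) * sd[key].float()
            for sd, n in zip(state_dicts, num_samples)
        )
    return global_state
```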
from __future__ import annotations

from dataclasses import dataclass
from typing import Dict, Any, Optional, List, Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F


# ---------------------------------------------------------------------------
# Data containers
# ---------------------------------------------------------------------------


@dataclass
class CreditGraphBatch:
    """
    Container for a mini-batch of fiscal knowledge graph data.

    Shapes (typical):
        x: (num_nodes, feature_dim) node features (assumed layout)
    """

    x: torch.Tensor

FairCoDNet: Fairness-Constrained Financial Regression Model

# FairCoDNet: Fairness-Constrained Financial Regression Model

FairCoDNet is an interpretable and fairness-aware financial regression architecture designed to provide equitable, transparent, and high-performance predictions under sensitive-attribute constraints. The framework is built upon the principles described in the paper *"Study on the enterprise tax risk identification model based on multi-source fiscal and tax data"* by **Xuan Huang**, integrating multi-source data fusion, fairness-driven optimization, and convex reformulations to ensure strong predictive ability without violating fairness constraints.

This repository aims to provide an easy-to-extend implementation of the model components, including coefficient-decoupled regression layers, fairness-aware CoD constraints, and a novel Convex Reformulation & Constraint-Aware Optimization Dynamics (CAOD) framework.

---

## ✨ Key Features

### **Fairness via CoD Constraint**

FairCoDNet explicitly restricts the proportion of predictive variance that can be explained by sensitive attributes such as gender, age, region, or protected financial indicators. The fairness constraint is implemented using:

- Coefficient of Determination (CoD) between predictions and sensitive attributes
- Tunable fairness threshold ε
- Second-order cone (SOC) reformulation for tractable optimization

---

### **Coefficient Decoupling Strategy**

The regression head decomposes prediction coefficients into:

- A sensitive feature component
- A non-sensitive feature component

The mixing factor γ ∈ [0, 1] allows smooth control over how much sensitive information the model is allowed to use. This design enables:

- Transparent fairness–accuracy trade-off
- Real-time audit capability
- Fully interpretable model components

---

### **Convex Reformulation**

FairCoDNet transforms the fairness-constrained QCQP into a convex problem via:

- Variable lifting
- Coefficient reparameterization
- Conic relaxation

This guarantees:

- Global optimality under mild assumptions
- Stable training
- Compatible integration with existing convex solvers

---

### **Constraint-Aware Optimization Dynamics (CAOD)**

CAOD provides:

- Dynamic fairness trajectory monitoring
- Adaptive rescaling of sensitive coefficients
- Coupled updates of (α, β, γ)
- Projection into feasible fairness regions

This ensures the model does not drift toward unfair solutions while maintaining predictive performance.

---

## 🧠 Applications

FairCoDNet can be applied in various financial and risk-sensitive contexts:

- **Enterprise tax-risk identification**
- **Credit scoring & loan underwriting**
- **Corporate risk modeling**
- **Financial QA and legal risk assessment**
- **Regulated environments requiring transparency**

---

## 📌 Requirements

- Python 3.8+
- PyTorch ≥ 1.10
- NumPy, SciPy
- CVXPY (for convex refinement)
- Optional: CUDA for acceleration
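A minimal sketch of the CoD idea as a soft training penalty: regress predictions on the sensitive attributes by least squares and penalize any R² above the budget ε. This is an illustrative surrogate, not the paper's SOC reformulation, and the function name is an assumption:

```python
import torch


def cod_penalty(y_hat: torch.Tensor, s: torch.Tensor, eps: float = 0.05) -> torch.Tensor:
    """Penalty on the R^2 of y_hat regressed on sensitive attributes s.

    y_hat: (n,) predictions; s: (n, k) sensitive attributes; eps: fairness budget.
    """
    y = y_hat - y_hat.mean()
    sc = s - s.mean(dim=0, keepdim=True)

    # Least-squares fit of centered predictions on centered sensitive attributes.
    beta = torch.linalg.lstsq(sc, y.unsqueeze(1)).solution   # (k, 1)
    resid = y.unsqueeze(1) - sc @ beta

    # Share of prediction variance explained by the sensitive attributes.
    r2 = 1.0 - resid.pow(2).sum() / y.pow(2).sum().clamp_min(1e-12)

    # Only the excess over the allowed budget eps is penalized.
    return torch.relu(r2 - eps)
```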
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoefficientDecouplingLayer(nn.Module):
    """
    Implements the coefficient decoupling strategy:
    y_hat = γ * (s^T θ_s) + (1 - γ) * (u^T θ_u)
    """

    def __init__(self, dim_sensitive, dim_nonsensitive):
        super().__init__()
        self.theta_s = nn.Parameter(torch.randn(dim_sensitive))
        self.theta_u = nn.Parameter(torch.randn(dim_nonsensitive))
        self.gamma = nn.Parameter(torch.tensor(0.5))  # mixing factor; the 0.5 init is an assumed default

    def forward(self, s: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
        # Clamp gamma so the mixing weight stays in [0, 1] as the docstring requires.
        gamma = self.gamma.clamp(0.0, 1.0)
        return gamma * (s @ self.theta_s) + (1.0 - gamma) * (u @ self.theta_u)
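# Quick usage check (batch size and dimensions are arbitrary, illustrative only):
layer = CoefficientDecouplingLayer(dim_sensitive=3, dim_nonsensitive=8)
s = torch.randn(16, 3)   # sensitive features
u = torch.randn(16, 8)   # non-sensitive features
y_hat = layer(s, u)      # predictions, shape (16,)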

GridFormer & InformaNet: Spatio-Informational Intelligence for Power Grid Informatization

# GridFormer & InformaNet: Spatio-Informational Intelligence for Power Grid Informatization

This repository provides a reference implementation of **GridFormer** and the **InformaNet Policy Layer**, a unified modeling and decision framework for modern power grid informatization. The goal of this project is to bridge **physical grid dynamics**, **information flow**, and **adaptive control** into a single, coherent learning system.

The implementation is inspired by the paper *"Big Data Platform Architecture and Information Flow Mechanisms in Power Grid Informatization"* and its proposed graph-based transformer architecture for spatio-temporal reasoning in cyber-physical power systems.

---

## 1. Motivation

Modern power grids are evolving into highly digitized, cyber-physical infrastructures. They are:

- Geographically distributed
- Topologically complex
- Continuously monitored by heterogeneous sensors and control devices
- Subject to uncertainty from renewables, changing demand, and communication delays

Traditional centralized or purely data-driven models:

- Struggle to scale to large, dynamic networks
- Often ignore the explicit grid topology
- Provide limited interpretability for operators
- Are not tightly integrated with control and operational constraints

This repository focuses on an architecture that explicitly encodes:

- The **physical network graph** of the grid
- The **information and communication graph**
- The **temporal evolution** of system states
- A **policy layer** that respects operational feasibility and domain constraints

---

## 2. Core Ideas

The project is organized around two major components.

### 2.1 GridFormer

GridFormer is a spatio-informational modeling module that:

- Represents the grid as a graph of buses and transmission lines
- Integrates multimodal node features (physical states, measurements, control inputs)
- Uses dual-graph propagation over:
  - The physical grid topology
  - The communication / information topology
- Employs a temporal transformer to capture long-range time dependencies
- Outputs:
  - Next-step state estimates
  - Candidate control actions

Key design aspects:

- Multimodal encoder for fusing physical, informational, and control features
- Graph attention layers that propagate messages across the physical and communication graphs (sketched below)
- Transformer-based temporal reasoning over a rolling time window
- Decoders for state prediction and control proposals
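A minimal sketch of the dual-graph propagation idea, assuming dense 0/1 adjacency matrices that include self-loops; the module name, single attention head, and fusion-by-sum are illustrative assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualGraphAttention(nn.Module):
    """One round of masked attention over two adjacency structures."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def _attend(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Scores only flow along existing edges; others are masked out.
        # Assumes adj contains self-loops so every row has at least one edge.
        scores = (self.q(x) @ self.k(x).transpose(-1, -2)) / x.size(-1) ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))
        return F.softmax(scores, dim=-1) @ self.v(x)

    def forward(self, x: torch.Tensor, phys_adj: torch.Tensor,
                comm_adj: torch.Tensor) -> torch.Tensor:
        # Fuse messages from the physical and the communication graph.
        return x + self._attend(x, phys_adj) + self._attend(x, comm_adj)
```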
### 2.2 InformaNet Policy Layer

The InformaNet policy layer sits on top of GridFormer and transforms raw control proposals into **feasible**, **risk-aware**, and **structurally consistent** actions. It provides:

- A **domain-constrained decision manifold**, approximating operational constraints such as voltage bounds and line flow limits
- **Spatio-temporal action refinement**, which adjusts actions based on recent trajectories and residual errors
- **Graph-based priors**, enforcing smoothness and structural coherence across electrically or topologically close nodes
- **Risk-aware diversification**, which uses uncertainty estimates to inject controlled stochasticity for robustness

---

## 3. Conceptual Workflow

The typical end-to-end workflow for this project looks like this:

1. **Data ingestion**
   - Load time-series data of grid states and measurements.
   - Load or construct graph descriptions of:
     - Physical grid topology
     - Communication / information topology
2. **Feature construction**
   - Build multimodal node features, including:
     - Physical states (voltages, angles, flows, etc.)
     - Measurements and sensor readings
     - Local or regional control-related signals
3. **Spatio-temporal modeling with GridFormer**
   - Encode node features with the multimodal encoder.
   - Propagate information across both graphs using dual-graph attention.
   - Stack a temporal transformer over sliding windows of historical states.
   - Decode:
     - Next-step state predictions
     - Candidate control action proposals
4. **Policy refinement with InformaNet**
   - Project actions into an approximate feasible set using simple domain-inspired operations (see the sketch at the end of this README).
   - Refine actions using recent state and prediction history.
   - Adjust actions with graph-based regularization.
   - Optionally diversify actions using uncertainty-aware perturbations.
5. **Deployment or simulation**
   - Use the final actions in:
     - Power system simulators
     - Digital twins
     - Decision-support tools
   - Log performance metrics such as forecast error, constraint violations, and stability indicators.

---

## 4. Key Features

- Explicit handling of **two coupled graphs**:
  - The physical power network
  - The information / communication network
- Support for **multimodal inputs**, including:
  - Physical measurements
  - Control and scheduling signals
  - High-level system descriptors
- **Temporal reasoning** via transformer-style mechanisms for:
  - Capturing long-term dependencies
  - Handling variable-length histories
- **Policy layer** designed around:
  - Operational constraints
  - Structural priors from grid topology
  - Risk-aware decision diversification
- Modular design:
  - Components can be replaced or extended (for example, using different GNNs, temporal models, or policy mechanisms)

---

## 5. Getting Started

High-level steps to use this codebase:

1. Install Python and standard machine learning dependencies (for example PyTorch).
2. Prepare datasets that contain:
   - Time sequences of node-level features
   - Physical adjacency matrices for the grid
   - Communication adjacency matrices (or approximations)
3. Configure model dimensions and hyperparameters.
4. Instantiate the model, construct data loaders, and implement a training loop.
5. Evaluate model performance using application-specific metrics such as:
   - Prediction error metrics
   - Rate of constraint violations
   - Stability or resilience indicators

---

## 6. Potential Use Cases

- Short-term state forecasting in power grids
- Proactive control and decision support for grid operators
- Analysis of information flow and its impact on operational robustness
- Research on cyber-physical system integration in smart grids
- Educational demos on graph-based and transformer-based models for infrastructure systems

---

## 7. Disclaimer

This repository provides a **research-oriented reference implementation**. It is not certified for real-time operation in critical infrastructure. Any deployment in real-world grids must include additional validation, safety layers, and compliance with local regulations and operational standards.
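To make the policy layer's projection step (workflow step 4) concrete, here is a minimal sketch of box-constraint clamping plus a graph smoothness prior; the bounds, smoothing weight, and function name are hypothetical:

```python
import torch


def refine_actions(actions: torch.Tensor, adj: torch.Tensor,
                   lower: float = -1.0, upper: float = 1.0,
                   smooth: float = 0.2) -> torch.Tensor:
    """Clamp actions into [lower, upper] and smooth them over graph neighbors.

    actions: (num_nodes, action_dim); adj: (num_nodes, num_nodes) 0/1 adjacency.
    """
    # Hard projection onto simple box constraints (e.g., actuator limits).
    projected = actions.clamp(lower, upper)

    # Graph prior: pull each node's action toward its neighbors' mean.
    deg = adj.sum(dim=1, keepdim=True).clamp_min(1.0)
    neighbor_mean = (adj @ projected) / deg
    return (1.0 - smooth) * projected + smooth * neighbor_mean
```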
from __future__ import annotations

from dataclasses import dataclass
from typing import Optional, Tuple, Dict, Any

import torch
import torch.nn as nn
import torch.nn.functional as F


# ---------------------------------------------------------------------------
# Utility dataclasses
# ---------------------------------------------------------------------------


@dataclass
class GridBatch:
    """
    Container for a mini-batch of grid data.

    Shapes (typical):
        x: (num_nodes, feature_dim) node feature tensor (assumed layout)
    """

    x: torch.Tensor

SEO/UX Copywriter AI Prompt

You are an expert UX/SEO copywriter. 
You have a wealth of organic SEO knowledge and you understand how to make websites perform for search engines and agentic search.
You are also an established copywriter for UX: you know how to strike the perfect balance between converting a user and making copy perform well in the digital space.
You write in plain English that an 8-year-old can understand. You don't use em dashes, and you exclusively write in British English.

How to show the report in Playwright?

npx playwright show-report
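By default this opens the most recent HTML report (the `playwright-report` folder). You can also pass a report folder explicitly, for example `npx playwright show-report my-report`.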

add custom field and post title to cf7 mail body

add_action( 'wpcf7_before_send_mail', function( $contact_form ) {

    $submission = WPCF7_Submission::get_instance();
    if ( ! $submission ) return;

    $url = $submission->get_meta( 'url' );
    if ( ! $url ) return;

    $post_id = url_to_postid( $url );
    if ( ! $post_id ) return;

    $post_title = get_the_title( $post_id );

    if ( function_exists( 'get_field' ) ) {
        // ACF available: read the custom field via get_field().
        $download_url = get_field( 'pdf_link', $post_id );
    } else {
        // Fallback without ACF: read the raw post meta.
        $download_url = get_post_meta( $post_id, 'pdf_link', true );
    }

    // Swap placeholders in the mail body for the resolved values.
    // The [post_title] and [download_url] tag names are illustrative;
    // the same tags must appear in the form's mail template for the
    // replacement to have any effect.
    $mail = $contact_form->prop( 'mail' );
    $mail['body'] = str_replace(
        array( '[post_title]', '[download_url]' ),
        array( $post_title, (string) $download_url ),
        $mail['body']
    );
    $contact_form->set_properties( array( 'mail' => $mail ) );
} );

mailhog

OK, let me pull everything together into **one concise, clear, ready-to-run guide**, from installing MailHog with Homebrew through to testing WordPress mail, covering both the Docker case and running natively on your machine.

---

# 🚀 FULL GUIDE: INSTALL MAILHOG + SET UP WORDPRESS (DOCKER & NON-DOCKER)

## 🟩 PART 1: Install MailHog with Homebrew

### 1. Install

```bash
brew install mailhog
```

### 2. Run MailHog

```bash
mailhog
```

### 3. Open the web interface

* Web UI: **[http://localhost:8025](http://localhost:8025)**
* SMTP: **localhost:1025**

CloudShell Quick Reference

curl -fsSL https://cli.kiro.dev/install | bash

717. 1-bit and 2-bit Characters

We have two special characters: The first character can be represented by one bit 0. The second character can be represented by two bits (10 or 11). Given a binary array bits that ends with 0, return true if the last character must be a one-bit character.
/**
 * @param {number[]} bits
 * @return {boolean}
 */
var isOneBitCharacter = function(bits) {
    // Start from the first bit
    let i = 0;

    // Traverse until the second-to-last bit
    // (because the last bit is always 0, we want to see if it's consumed or not)
    while (i < bits.length - 1) {
        if (bits[i] === 1) {
            // If we see a '1', it must form a two-bit character (10 or 11)
            // So we skip TWO positions
            i += 2;
        } else {
            // If we see a '0', it's a one-bit character; skip ONE position
            i += 1;
        }
    }

    // If the loop stopped exactly on the last bit, it was not consumed by a
    // two-bit character, so the last character must be the one-bit '0'.
    // Runs in O(n) time and O(1) space.
    return i === bits.length - 1;
};

Project folder file structure

import os
from pathlib import Path

# -------------------------
# Define project name (main package)
# -------------------------
project_name = "src"  # More descriptive than "src"

# -------------------------
# Define additional folders
# -------------------------
cicd_folder       = "Github"
configs_folder    = "configs"
data_folder       = "data"
notebooks_folder  = "notebooks"
static_css_folder = "static/css"
templates_folder  = "templates"
tests_folder      = "tests"
scripts_folder    = "scripts"
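# -------------------------
# Create the folders (illustrative completion; the variable list below is an
# assumption about which folders were intended)
# -------------------------
for folder in [
    project_name, cicd_folder, configs_folder, data_folder,
    notebooks_folder, static_css_folder, templates_folder,
    tests_folder, scripts_folder,
]:
    Path(folder).mkdir(parents=True, exist_ok=True)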

Chatbot

print("Hello")
import os
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_classic.chains import ConversationChain
from langchain_classic.memory import ConversationBufferMemory
from langchain_classic.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_classic.schema import SystemMessage, HumanMessage
# Load environment variables
load_dotenv()
openai_api_key = os.getenv("OPENAI_API_KEY")
if not openai_api_key:
    raise ValueError("OPENAI_API_KEY is not set")

LangChain Core Package


## Messages

from langchain_core.messages import (
    AIMessage,
    AIMessageChunk,
    BaseMessage,
    BaseMessageChunk,
    HumanMessage,
    HumanMessageChunk,
    SystemMessage,
    SystemMessageChunk,
    ToolMessage,
    ToolMessageChunk,
    FunctionMessage,
    FunctionMessageChunk,
)

## Prompts

from langchain_core.prompts import (
    PromptTemplate,
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
    AIMessagePromptTemplate,
)

Chatbot Prompt

🚀 Project Prompt: Build a Smart End-to-End Chatbot
🧩 Objective
Design and implement a robust, modular, and intelligent chatbot system using modern AI and web technologies. The chatbot should be capable of handling dynamic conversations, storing history, and providing a clean user interface.
🛠️ Tech Stack

🧠 Brain: LangChain + OpenAI (for LLM orchestration and prompt management)
⚙️ Backend: FastAPI (for serving the chatbot API)
💬 Frontend: Streamlit (for interactive chat UI)
🔒 Security: .env (environment variables for API keys and secrets)

Correlation Matrix

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Load your dataset
df = pd.read_csv("your_data.csv")  # Replace with your actual file

# Compute correlation matrix
corr_matrix = df.corr(numeric_only=True)

# Plot heatmap
plt.figure(figsize=(10, 8))
sns.heatmap(corr_matrix, annot=True, fmt=".2f", cmap="coolwarm", linewidths=0.5)
plt.title("Correlation Heatmap")
plt.tight_layout()
plt.show()
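# Optional follow-up (assumes corr_matrix from above): list the strongest
# pairwise correlations instead of reading them off the heatmap.
import numpy as np

# Keep only the strictly upper triangle so each pair appears once.
upper = corr_matrix.where(~np.tril(np.ones(corr_matrix.shape, dtype=bool)))
pairs = upper.stack().sort_values(key=abs, ascending=False)
print(pairs.head(10))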

Pandas

final_df = pd.concat([
    temperature_humidity[['time', 'day_temperature_C', 'day_humidity_percent',
                          'dayofweek_sin', 'dayofweek_cos',
                          'dayofmonth_sin', 'dayofmonth_cos',
                          'dayofyear_sin', 'dayofyear_cos']],
    daily_counts[['COUNT']].rename(columns={'COUNT': 'complaint_count'})
], axis=1)
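Note: `pd.concat(..., axis=1)` aligns rows by index, not by position, so `temperature_humidity` and `daily_counts` must share the same index (for example a common date index, or `reset_index(drop=True)` on each) or the result will contain misaligned rows and NaN values.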

remove_outliers_iqr

import pandas as pd

# Sample data
data = {'temperature': [22, 23, 21, 24, 100, 22, 23, 25, 20, 21]}
df = pd.DataFrame(data)

# Function to remove outliers using IQR
def remove_outliers_iqr(df, column):
    Q1 = df[column].quantile(0.25)
    Q3 = df[column].quantile(0.75)
    IQR = Q3 - Q1
    lower_bound = Q1 - 1.5 * IQR
    upper_bound = Q3 + 1.5 * IQR
    return df[(df[column] >= lower_bound) & (df[column] <= upper_bound)]

# Call the function
df_clean = remove_outliers_iqr(df, 'temperature')
print(df_clean)