2598. Smallest Missing Non-negative Integer After Operations

You are given a 0-indexed integer array nums and an integer value. In one operation, you can add or subtract value from any element of nums. For example, if nums = [1,2,3] and value = 2, you can choose to subtract value from nums[0] to make nums = [-1,2,3]. The MEX (minimum excluded) of an array is the smallest missing non-negative integer in it. For example, the MEX of [-1,2,3] is 0 while the MEX of [1,0,3] is 2. Return the maximum MEX of nums after applying the mentioned operation any number of times.
/**
 * @param {number[]} nums
 * @param {number} value
 * @return {number}
 */
var findSmallestInteger = function(nums, value) {
    // Step 1: Create a frequency map to count how many times each remainder appears
    const freq = new Map();

    for (let num of nums) {
        // Normalize the remainder to always be non-negative
        let mod = ((num % value) + value) % value;

        // Count how many times each remainder appears
        freq.set(mod, (freq.get(mod) || 0) + 1);
    }

    // Step 2: Greedily build 0, 1, 2, ... by "spending" one element whose
    // remainder matches i % value. The first i that cannot be covered is the
    // maximum MEX (it is at most nums.length).
    let mex = 0;
    while ((freq.get(mex % value) || 0) > 0) {
        const mod = mex % value;
        freq.set(mod, freq.get(mod) - 1);
        mex++;
    }

    return mex;
};

// Example: findSmallestInteger([1, 2, 3], 2) === 2
// Remainders are {1: 2, 0: 1}: 0 and 1 can be formed, but no second element
// congruent to 0 (needed for 2) is available.

Layout resources

PLUGINS

• https://www.kadencewp.com/kadence-blocks/

ChildLangNet: Interaction Pattern Recognition for Early Childhood Language

## Overview
Understanding early childhood language development is critical for supporting cognitive, emotional, and social growth. Traditional observation methods often lack scalability and objectivity. ChildLangNet addresses these challenges by:
- Capturing complex multimodal interactions using advanced neural architectures.
- Modeling temporal and hierarchical structures in preschool interactions.
- Incorporating an Adaptive Interaction Strategy (AIS) to enhance interpretability.
- Providing a scalable and efficient computational framework for educational research.

## ✨ Key Features
- 🧠 **Multimodal Data Fusion**: Combines audio, visual, and contextual metadata to represent interaction dynamics.
- 🕒 **Temporal & Hierarchical Modeling**: Leverages RNN- and attention-based mechanisms to capture both short- and long-term dependencies.
- 🌐 **Adaptive Interaction Strategy (AIS)**: Introduces dynamic-static fusion through cross-attention and domain-specific constraints.
- 📊 **Strong Performance**: Outperforms baseline models such as ResNet, ViT, and I3D on multiple benchmark datasets.
- ⚡ **Scalable & Efficient**: Designed for real-world preschool environments with lightweight components.

## 🧱 Model Architecture
The ChildLangNet architecture consists of the following main components:
1. **Multimodal Encoder** – Extracts and fuses audio, visual, and contextual signals; implements weak/strong augmentations, backbone encoders, and teacher-student EMA mechanisms. (See Figure 2 in the paper for a schematic view.)
2. **Graphical Propagation Layer** – Uses an RNN with attention to model temporal dependencies and aggregates segment-level representations hierarchically.
3. **Adaptive Interaction Strategy (AIS)** – Integrates static and dynamic feature streams via Neighborhood Cross Attention (NCA) and Dynamic-Static Interaction (DSI); enhances interpretability and domain alignment (see Figure 3 on page 10).
4. **Hierarchical Classification Framework** – Aggregates features across segments and predicts interaction pattern categories.

## 📊 Experimental Results
ChildLangNet achieved strong results on multiple benchmark datasets:

| Dataset | Accuracy | Recall | F1 Score | AUC |
|---|---|---|---|---|
| Preschool Language Interaction | 89.34% | 88.72% | 88.15% | 88.42% |
| Early Childhood Communication Patterns | 90.12% | 89.63% | 89.08% | 89.35% |
| Child Speech and Gesture Analysis | 89.23% | 88.67% | 88.12% | 88.39% |
| Preschool Social Language Dynamics | 91.02% | 90.56% | 90.01% | 90.28% |

(Detailed tables are available on pages 14–15 of the paper.)

## 🧪 Datasets
- **Preschool Language Interaction Dataset** – multimodal audio and transcripts of preschool interactions.
- **Early Childhood Communication Patterns Dataset** – verbal and non-verbal communication data.
- **Child Speech and Gesture Analysis Dataset** – gesture-speech integration for language understanding.
- **Preschool Social Language Dynamics Dataset** – longitudinal recordings of social language use.

## ⚙️ Installation
```bash
# Clone the repository
git clone https://github.com/yourusername/childlangnet.git
cd childlangnet

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # (Windows: venv\Scripts\activate)

# Install dependencies
pip install -r requirements.txt
```

## 🚀 Usage
```bash
# Train the model
python train.py --config configs/config.yaml

# Evaluate the model
python evaluate.py --checkpoint checkpoints/model_best.pth

# Inference on new data
python inference.py --input your_data/
```

## 📚 Citation
If you use ChildLangNet in your research, please cite:
```bibtex
@article{ChildLangNet2025,
  title={ChildLangNet: Interaction Pattern Recognition for Early Childhood Language in Preschool Settings},
  author={Yang, Xiaolan},
  journal={PLOS},
  year={2025}
}
```

## 🧠 Future Work
- Improving model interpretability for non-specialists through explainable AI tools.
- Developing data augmentation and semi-supervised learning strategies for low-resource environments.
- Extending AIS with cross-linguistic generalization for multilingual preschool settings.

## 🤝 Contributing
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/new-feature`)
3. Commit your changes and open a Pull Request

## 🛡️ License
This project is licensed under the MIT License.

## 🙏 Acknowledgments
This work was supported by the Humanities and Social Science Youth Fund of the Ministry of Education under the project "Research on Interactive Language of Chinese Children Aged 4–6 Based on a Tracking Corpus" (Grant No. 21YJC740073).
# model.py
# ChildLangNet: Interaction Pattern Recognition for Early Childhood Language
# PyTorch >= 1.10

from typing import Optional, Tuple, Dict
import torch
import torch.nn as nn
import torch.nn.functional as F


# ---------------------------
# Small building blocks
# ---------------------------

class MLP(nn.Module):
    def __init__(self, in_dim: int, hidden: int, out_dim: int, p: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)
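
# --- Illustrative sketch (assumed, not the paper's released code) -----------
# The README above describes a "Graphical Propagation Layer" that models
# temporal dependencies with an RNN plus attention and aggregates segment-level
# representations. A minimal version of that idea, assuming a GRU encoder
# followed by additive attention pooling, could look like this:

class TemporalAttentionEncoder(nn.Module):
    def __init__(self, in_dim: int, hidden: int):
        super().__init__()
        self.rnn = nn.GRU(in_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, segments, in_dim) -> pooled representation (batch, 2 * hidden)
        h, _ = self.rnn(x)                            # (B, T, 2H)
        scores = torch.softmax(self.attn(h), dim=1)   # (B, T, 1)
        return (scores * h).sum(dim=1)                # attention-weighted pooling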

MatchEduNet: Deep Learning-Enhanced Alignment of Vocational Curriculum Content with Enterprise Needs

## Overview
MatchEduNet is a deep learning-based framework designed to address the challenge of aligning vocational curriculum content with enterprise needs. The system dynamically aligns curriculum topics with industry skill demands using neural network architectures including convolutional, recurrent, and attention-based models. Additionally, the framework utilizes a Vocational Curriculum Alignment Network (VCAN) and an Adaptive Alignment Strategy to capture intricate relationships between educational content and industry-specific requirements, ensuring curriculum relevance and workforce readiness.

## ✨ Key Features
- **Deep Learning-based Alignment**: Leverages deep learning architectures to automatically model the relationships between curriculum topics and enterprise requirements.
- **VCAN (Vocational Curriculum Alignment Network)**: Integrates CNNs, RNNs, and attention mechanisms to dynamically capture complex curriculum-to-enterprise mappings.
- **Adaptive Alignment Strategy**: Employs real-time feedback and optimization strategies to ensure ongoing adaptability to industry changes.
- **Scalable and Contextual**: Designed to handle large-scale datasets and varying industry contexts, ensuring broad applicability across different sectors.
- **Interpretability**: Focuses on model transparency to foster trust and adoption by stakeholders, including educators and policymakers.

## 🧠 Model Architecture
The framework consists of the following components:
- **Vocational Curriculum Alignment Network (VCAN)**: A neural network model integrating convolutional and recurrent layers with attention mechanisms to process curriculum content and enterprise requirements.
- **Adaptive Alignment Strategy**: A dynamic, feedback-driven mechanism to adjust curriculum-to-enterprise mappings based on evolving industry demands.
- **Real-Time Feedback Mechanism**: Incorporates enterprise feedback continuously to refine embeddings and optimize curriculum alignment in real time.
- **Dynamic Relevance Optimization**: A deep learning-based method that maximizes the relevance between curriculum topics and enterprise skill requirements using a scoring function.

For detailed architecture diagrams, refer to Figures 1–4 in the paper.

## 🧪 Datasets
MatchEduNet was evaluated using several datasets:
- **Vocational Curriculum Content Dataset**: Includes course descriptions, syllabi, and learning objectives from vocational training institutions.
- **Enterprise Skill Requirements Dataset**: Compiled from job postings, industry reports, and employer surveys to capture the skills and competencies required in the workforce.
- **Curriculum to Industry Alignment Dataset**: Provides a mapping of curriculum content to industry skill demands.
- **Deep Learning Job Skills Mapping Dataset**: A specialized dataset analyzing deep learning job roles and related skill requirements.

These datasets enable a holistic modeling approach for curriculum-enterprise alignment.

## 📊 Experimental Results
MatchEduNet outperforms traditional and state-of-the-art methods such as ResNet, ViT, I3D, and DenseNet across multiple alignment tasks:
- **Vocational Curriculum Content Dataset**: 89.34% accuracy, 88.79% recall, 88.42% F1 score.
- **Enterprise Skill Requirements Dataset**: 90.56% accuracy, 90.12% recall, 89.68% F1 score.

These results highlight the robustness and scalability of MatchEduNet for real-world applications.

## ⚙️ Installation
```bash
# Clone the repository
git clone https://github.com/yourusername/MatchEduNet.git
cd MatchEduNet

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

## 🚀 Usage
Training the model:
```bash
python train.py --config configs/config.yaml
```
Evaluating the model:
```bash
python evaluate.py --model checkpoints/model.pth
```
You can modify configuration parameters in `configs/config.yaml` to adjust hyperparameters, datasets, or model components.

## 📚 Citation
If you use this work in your research, please cite:
```bibtex
@article{MatchEduNet2025,
  title={MatchEduNet: Deep Learning-Enhanced Alignment of Vocational Curriculum Content with Enterprise Needs},
  author={Jian Wu},
  journal={Journal of Vocational Education and Workforce Development},
  year={2025}
}
```

## 🤝 Contributing
Contributions are welcome! Please fork the repository and submit a pull request. For major changes, open an issue first to discuss what you would like to change.

## 🛡️ License
This project is licensed under the MIT License.

## 🙏 Acknowledgments
This research was supported by the Sichuan Provincial Education Science Planning Project, Grant No. SCJG24C266, focusing on the integration of vocational education with emerging high-quality productivity.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Define the Vocational Curriculum Alignment Network (VCAN)
class VCAN(nn.Module):
    def __init__(self, curriculum_dim, enterprise_dim, hidden_dim, output_dim):
        super(VCAN, self).__init__()

        # Convolutional layer to process curriculum and enterprise data
        self.curriculum_conv = nn.Conv1d(curriculum_dim, hidden_dim, kernel_size=3, padding=1)
        self.enterprise_conv = nn.Conv1d(enterprise_dim, hidden_dim, kernel_size=3, padding=1)

        # NOTE: the original snippet is truncated here; the layers below are a
        # minimal, assumed completion consistent with the description above
        # (recurrent encoding, attention-based alignment, and a scoring head).
        self.curriculum_rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.enterprise_rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.attention = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, output_dim)
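
    # Illustrative forward pass (assumed, not from the paper): encode both
    # sequences, align curriculum topics to enterprise requirements with
    # attention, and score the alignment. Inputs are (batch, length, dim).
    def forward(self, curriculum, enterprise):
        c = F.relu(self.curriculum_conv(curriculum.transpose(1, 2))).transpose(1, 2)
        e = F.relu(self.enterprise_conv(enterprise.transpose(1, 2))).transpose(1, 2)
        c, _ = self.curriculum_rnn(c)
        e, _ = self.enterprise_rnn(e)
        aligned, _ = self.attention(query=c, key=e, value=e)
        return self.classifier(aligned.mean(dim=1))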

OperaSkillNet: Temporal Learning Path Analysis for Skill Advancement in Opera Performance

## Overview
OperaSkillNet is a novel framework designed for analyzing the temporal learning paths of skill advancement in opera performance. The system leverages hierarchical modeling, recurrent neural architectures, and attention mechanisms to capture long-term dependencies in opera training. By decomposing skills into multi-level sub-skills, OperaSkillNet offers a comprehensive and interpretable framework to model skill acquisition in opera and provides actionable insights for personalized learning strategies. Additionally, OperaSkillNet incorporates the Skill Advancement Strategy (SAS), a domain-specific optimization method that refines learning paths through feedback and hierarchical representations.

## Key Features
- **Temporal Skill Progression Modeling**: Models the long-term evolution of skills through recurrent neural networks (LSTM) to capture the sequential nature of skill development.
- **Hierarchical Skill Representation**: Breaks down complex skills into multi-level sub-skills, allowing for finer-grained analysis of progression.
- **Attention Mechanism**: Focuses on the most relevant features during skill progression, enhancing both predictive accuracy and model interpretability.
- **Skill Advancement Strategy (SAS)**: Optimizes learning paths by integrating learning activities, feedback mechanisms, and hierarchical skill representations.
- **Explainable AI**: Incorporates explainable AI techniques to improve the transparency and interpretability of the model, making it useful for educators and performers alike.

## Installation
```bash
# Clone the repository
git clone https://github.com/yourusername/OperaSkillNet.git
cd OperaSkillNet

# Install dependencies
pip install -r requirements.txt
```

## Usage
### Model Training
To train the model, run the following command:
```bash
python train.py --data_path /path/to/your/dataset --epochs 100 --batch_size 64
```

### Model Inference
Once the model is trained, you can use it for skill progression prediction:
```python
from operaskillnet import OperaSkillNet

# Initialize the model
model = OperaSkillNet()

# Load trained weights
model.load_state_dict(torch.load('model.pth'))

# Make a prediction
predicted_skill_state = model.predict(input_data)
```

## Datasets
OperaSkillNet uses several datasets for training and evaluation:
- **Opera Performance Skill Progression Dataset**: Contains high-resolution video, audio, and metadata on opera performances, tracking vocal range, pitch accuracy, stage presence, and emotional expression.
- **Temporal Learning Patterns in Vocal Training Dataset**: Includes detailed recordings of vocal exercises, capturing the evolution of vocal techniques such as breath control and resonance.
- **Opera Performer Gesture Dynamics Dataset**: Focuses on physical gestures in opera performances, including motion capture and video recordings, to study non-verbal communication in opera.
- **Skill Development Trajectories in Opera Singing Dataset**: Tracks the long-term development of opera singing skills through multi-modal data, including audio and physiological measurements.

## Model Architecture
### Temporal Sequence Modeling with Hierarchical Skill Representation
The core of OperaSkillNet is a temporal sequence modeling framework that encodes learning events over time using LSTM to predict skill states at each time step. It incorporates multi-level skill representations and ensures smooth transitions between skill states.

### Skill Advancement Strategy (SAS)
The SAS enhances the model by integrating learning activities and feedback, using domain-specific knowledge to refine learning paths and optimize skill advancement.

### Attention Mechanism
An attention mechanism is used to focus on the most relevant features during training, dynamically adjusting the weights based on feature importance for skill progression.

## Experimental Results
The framework has demonstrated superior performance in capturing the temporal dynamics of skill advancement in opera:
- **Opera Performance Skill Progression Dataset**: 89.24% accuracy, 88.56% precision, 88.12% recall, 88.34% F1 score.
- **Vocal Training Dataset**: 90.15% accuracy, 89.47% precision, 89.02% recall, 89.24% F1 score.

These results highlight OperaSkillNet's ability to model and optimize learning trajectories effectively.

## Acknowledgments
We would like to thank the contributors who provided datasets for this work, as well as the researchers working in the field of opera training and performance analysis.

## Contributing
We welcome contributions to improve OperaSkillNet. To contribute:
1. Fork the repository.
2. Create a new branch.
3. Make your changes and submit a pull request.

## License
This project is licensed under the MIT License - see the LICENSE file for details.

## References
Jiang, Y. (2025). OperaSkillNet: Temporal Learning Path Analysis for Skill Advancement in Opera Performance. Journal of Computational Methods in Performing Arts.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Define the Temporal Skill Progression Network (TSPNet)
class TSPNet(nn.Module):
    def __init__(self, skill_dim, hidden_dim, output_dim):
        super(TSPNet, self).__init__()
        
        # Recurrent neural network (LSTM) to model temporal dependencies in skill progression
        self.lstm = nn.LSTM(skill_dim, hidden_dim, batch_first=True)
        
        # Attention mechanism to focus on the most relevant time steps of the
        # skill-progression sequence (minimal completion; the original snippet
        # is truncated here)
        self.attention = nn.Linear(hidden_dim, 1)

        # Output head mapping the attended representation to skill states
        self.fc = nn.Linear(hidden_dim, output_dim)
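
    # Illustrative forward pass (assumed, not from the paper): attention-weighted
    # pooling over the LSTM states, followed by the prediction head.
    def forward(self, x):
        # x: (batch, time, skill_dim)
        h, _ = self.lstm(x)                                # (B, T, hidden_dim)
        weights = torch.softmax(self.attention(h), dim=1)  # (B, T, 1)
        context = (weights * h).sum(dim=1)                 # (B, hidden_dim)
        return self.fc(context)                            # (B, output_dim)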

TourFusionNet: Multi-Source Data Fusion for Sports Tourism Preference Prediction

## Overview
Sports tourism is rapidly expanding, but predicting user preferences remains challenging due to the heterogeneity and sparsity of data. TourFusionNet addresses this by:
- Fusing multiple data modalities (e.g., user profiles, destination attributes, interactions, and contextual variables).
- Dynamically adjusting the significance of each data source through the Adaptive Preference Integration Strategy (APIS).
- Achieving state-of-the-art accuracy for preference prediction and recommendation tasks.

## ✨ Key Features
- **Multi-source data fusion**: Combines structured and unstructured data seamlessly.
- **Hierarchical attention-based architecture**: Captures both intra-source and inter-source relationships.
- **Graph propagation layer**: Models complex dependencies across users, destinations, and contexts.
- **Adaptive weighting (APIS)**: Dynamically adjusts data source importance over time.
- **High accuracy & scalability**: Validated through extensive experiments on multiple datasets.

## 🧠 Model Architecture
The framework consists of:
1. **Multimodal Encoder** – Extracts latent representations from different data sources.
2. **Hierarchical Fusion Mechanism** – Integrates intra-source and inter-source dependencies.
3. **Graph Propagation Layer** – Refines predictions through structural relationships.
4. **Adaptive Preference Integration Strategy** – Enhances robustness and interpretability.

(See Figures 1–4 in the paper for detailed architecture diagrams.)

## 🧪 Datasets
TourFusionNet was evaluated on several tourism datasets, including:
- Sports Tourism Behavior Dataset
- Multi-Source Travel Preferences Dataset
- Regional Sports Tourism Trends Dataset
- Tourist Activity Fusion Dataset

These datasets include user behavior, event information, geospatial data, and user-generated content, enabling holistic modeling.

## 📊 Experimental Results
- Outperformed baseline methods such as ResNet, ViT, I3D, BLIP, DenseNet, and MobileNet.
- Achieved up to 4.2% improvement in accuracy compared to state-of-the-art models.
- Demonstrated strong generalization and efficiency on large-scale datasets.

Refer to Tables 1–4 in the paper for detailed results and ablation studies.

## ⚙️ Installation
```bash
# Clone the repository
git clone https://github.com/yourusername/tourfusionnet.git
cd tourfusionnet

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

## 🚀 Usage
```bash
# Training the model
python train.py --config configs/config.yaml

# Evaluating the model
python evaluate.py --model checkpoints/model.pth
```
You can modify configuration parameters in `configs/config.yaml` to adjust hyperparameters, datasets, or model components.

## 📚 Citation
If you use this work in your research, please cite:
```bibtex
@article{TourFusionNet2025,
  title={TourFusionNet: Multi-Source Data Fusion for Sports Tourism Preference Prediction},
  author={Zhang, Feng},
  journal={Journal of Tourism Analytics},
  year={2025}
}
```

## 🤝 Contributing
Contributions are welcome! Please fork the repository and submit a pull request. For major changes, open an issue first to discuss what you would like to change.

## 🛡️ License
This project is licensed under the MIT License.

## 🙏 Acknowledgments
This research was supported by the National Social Science Fund of China and the Blue Project for Colleges and Universities in Jiangsu Province.
# model.py
# TourFusionNet: Multi-Source Data Fusion for Sports Tourism Preference Prediction
# Implements: modality encoders, hierarchical fusion (self & cross attention),
#             graph propagation, and Adaptive Preference Integration Strategy (APIS).
# PyTorch >= 1.10 recommended.

from typing import Dict, Optional, List, Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F


# -----------------------------
# Utility blocks
# ----------------------------
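
# --- Illustrative sketch (assumed, not the paper's released code) -----------
# The README describes an Adaptive Preference Integration Strategy (APIS) that
# dynamically re-weights data sources. A minimal version: learn a gating score
# per source from its pooled embedding and fuse with a softmax-weighted sum.

class AdaptiveSourceWeighting(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(dim, 1)

    def forward(self, sources: List[torch.Tensor]) -> torch.Tensor:
        # sources: list of (batch, dim) embeddings, one per data source
        stacked = torch.stack(sources, dim=1)                # (B, S, D)
        weights = torch.softmax(self.gate(stacked), dim=1)   # (B, S, 1)
        return (weights * stacked).sum(dim=1)                # fused (B, D)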

MusicSceneNet: Content-Driven Scenario Recognition and Preference Prediction for Cultural Tourism Integration

## Overview
MusicSceneNet is an advanced framework designed for content-driven scenario recognition and preference prediction within the domain of cultural tourism integration. This system combines multimodal data, including music, scene, and user interaction features, to capture the complex interplay between cultural context and individual preferences. MusicSceneNet utilizes the Harmonic Scene Integration Network (HSIN) and the Content-Driven Scenario Optimization (CDSO) strategy to offer a robust solution for personalized cultural tourism experiences. The core components of this framework, including cross-modal attention, cultural knowledge integration, and dynamic optimization, allow for the precise recognition of cultural scenarios and the prediction of user preferences.

## Key Features
- **Harmonic Scene Integration Network (HSIN)**: A multi-modal encoder and cross-modal attention mechanism that aligns music and scene data to create a unified representation for scenario recognition and preference prediction.
- **Content-Driven Scenario Optimization (CDSO)**: Enhances alignment between content features and user expectations, using domain-specific knowledge to refine the predictions.
- **Cross-modal Attention**: Ensures robust fusion of music and scene features by learning the interactions between these modalities.
- **Cultural Contextualization**: Enriches representations with cultural knowledge to provide deeper contextual relevance for predictions.
- **Scalable and Adaptive**: Applicable to a wide range of cultural tourism contexts, ensuring adaptability to diverse user preferences and cultural environments.

## Installation
```bash
# Clone the repository
git clone https://github.com/yourusername/MusicSceneNet.git
cd MusicSceneNet

# Install dependencies
pip install -r requirements.txt
```

## Usage
### Model Training
To train the model, use the following command:
```bash
python train.py --data_path /path/to/your/dataset --epochs 100 --batch_size 64
```

### Model Inference
Once the model is trained, you can use it to predict preferences or recognize cultural tourism scenarios:
```python
from musicscenenet import MusicSceneNet

# Initialize the model
model = MusicSceneNet()

# Load the trained model
model.load_state_dict(torch.load('model.pth'))

# Make a prediction
predicted_preference = model.predict(input_data)
```

## Datasets
The system utilizes several datasets to train and evaluate the model:
- **Music Scene Recognition Dataset**: Includes audio-visual recordings from music-related environments such as concerts and festivals, annotated for scenario classification.
- **Cultural Tourism Behavior Dataset**: Contains multimodal data from cultural tourism sites, including text, images, and geolocation metadata.
- **Scenario-Based Music Preference Dataset**: Captures user-generated data such as playlists and listening histories, annotated with contextual information like time, activity, and mood.
- **Integrated Tourism and Music Dataset**: Combines music and tourism data to explore the intersection of music and cultural tourism experiences.

## Model Architecture
### Harmonic Scene Integration Network (HSIN)
HSIN integrates multimodal data through:
- A multi-modal encoder that extracts music and scene features.
- A harmonic alignment module using cross-modal attention to align music and scene data.
- A scenario-preference decoder that predicts user preferences based on the aligned features.

### Content-Driven Scenario Optimization (CDSO)
CDSO optimizes the alignment between content-driven features and user preferences by:
- Creating scenario-specific embeddings to capture the semantic relationships between scenarios.
- Using a content-driven attention mechanism to dynamically weigh the importance of different content modalities.
- Incorporating domain-specific knowledge for better contextual predictions.

### Multimodal Encoder Architecture
The encoder uses a combination of convolutional and transformer networks to process both music and scene features, ensuring that both structural and semantic properties are preserved.

## Experimental Results
MusicSceneNet demonstrates strong performance across several benchmark datasets, surpassing state-of-the-art methods in both scenario recognition and preference prediction:
- **Music Scene Recognition Dataset**: 89.74% accuracy, 89.21% recall, and 89.05% F1 score.
- **Cultural Tourism Behavior Dataset**: 91.02% accuracy, 90.48% recall, and 90.30% AUC.
- **Scenario-Based Music Preference Dataset**: 89.34% accuracy, 88.79% recall, and 88.62% AUC.

## Acknowledgments
We would like to acknowledge the contributors and institutions that provided the datasets used in this work, as well as the researchers who advanced the field of cultural tourism integration.

## Contributing
We welcome contributions to enhance MusicSceneNet. To contribute:
1. Fork the repository.
2. Create a new branch.
3. Make your changes and submit a pull request.

## License
This project is licensed under the MIT License - see the LICENSE file for details.

## References
Xie, Z., & Chen, S. (2025). MusicSceneNet: Content-Driven Scenario Recognition and Preference Prediction for Cultural Tourism Integration. Frontiers in Tourism and Technology.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Define the Harmonic Scene Integration Network (HSIN)
class HSIN(nn.Module):
    def __init__(self, music_dim, scene_dim, embedding_dim, output_dim):
        super(HSIN, self).__init__()
        
        # Define the multi-modal encoder for music and scene data
        self.music_encoder = nn.Sequential(
            nn.Linear(music_dim, embedding_dim),
            nn.ReLU(),
            nn.Dropout(0.5)
        )

VIM Notes

### Commands / Description
Command | Description
---------|-----------
**Search** |
`/{search_term}` | searches forward for `{search_term}` in the document
`?{search_term}` | searches backward for `{search_term}` in the document
`CTRL-D` | resets the cursor
**Substitute** | place cursor on the line
`:s/old/new/g` | on the current line, replaces every instance of `old` with `new` (`g` = all occurrences, not just the first)
`:%s/old/new/gc` | `%` applies the substitution to every line in the file; `c` prompts for confirmation before each replacement
**Execute Terminal Commands** |
`:!{terminal command}` | runs `{terminal command}` in the shell without leaving Vim
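
For example, `:%s/foo/bar/gc` walks through every `foo` in the file and asks for confirmation before replacing it with `bar`, and `:!ls` shows the current directory listing, returning to the buffer after a keypress.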

TeachEvalNet: Behavioral Recognition-Driven Intelligent Quality Assessment for Physical Education Classes

## Overview
TeachEvalNet is an advanced framework designed to provide intelligent quality assessment for physical education (PE) classes using behavioral recognition. Traditional assessment methods in PE rely on subjective observations and limited quantitative metrics, failing to capture the complex dynamics between students and instructors. TeachEvalNet introduces a novel Behavioral Recognition-Driven Quality Assessment Network (BRQAN) and a Behavioral-Driven Optimization Strategy (BDOS), leveraging machine learning algorithms to analyze multimodal data (e.g., video, motion, and sensor data). This approach enhances teaching effectiveness evaluation by recognizing intricate behavioral patterns and aggregating them into meaningful features to predict class quality.

## Key Features
- **Multimodal Data Analysis**: Integrates video, motion, and sensor data to provide a comprehensive understanding of student and instructor interactions.
- **BRQAN**: A model designed for analyzing complex behavioral patterns and predicting quality scores.
- **BDOS**: Enhances model accuracy and adaptability by integrating domain-specific knowledge, optimizing multi-objective criteria, and using dynamic feedback mechanisms.
- **Scalable and Adaptable**: Suitable for a wide range of PE activities and educational contexts, offering scalability for real-world applications.
- **High Efficiency**: Optimized architecture for robust performance with improved predictive accuracy, as demonstrated in experimental results.

## Installation
```bash
# Clone the repository
git clone https://github.com/yourusername/TeachEvalNet.git
cd TeachEvalNet

# Install dependencies
pip install -r requirements.txt
```

## Usage
### Model Training
To train the model, use the following command with your dataset:
```bash
python train.py --data_path /path/to/your/dataset --epochs 100 --batch_size 64
```

### Model Inference
Once trained, you can use the model to assess PE class quality:
```python
from teachevalnet import TeachEvalNet

# Initialize the model
model = TeachEvalNet()

# Load the trained model
model.load_state_dict(torch.load('model.pth'))

# Make a prediction
quality_score = model.predict(input_data)
```

## Datasets
TeachEvalNet uses the following datasets to train and evaluate the model:
- **Physical Education Behavior Dataset**: Captures various physical activities and student interactions in PE classes.
- **Classroom Activity Recognition Dataset**: Includes audiovisual recordings of classroom activities for recognizing interactions and behaviors.
- **Student Engagement Assessment Dataset**: Measures student engagement through facial expressions, body language, and interaction levels.
- **Teacher Performance Observation Dataset**: Focuses on evaluating teacher performance, including instructional clarity and classroom management skills.

## Experimental Results
TeachEvalNet has shown superior performance in predicting class quality scores compared to state-of-the-art methods:
- **Physical Education Behavior Dataset**: 89.73% accuracy, 89.20% F1-score.
- **Student Engagement Assessment Dataset**: 90.12% accuracy, 89.56% F1-score.
- **Teacher Performance Observation Dataset**: 89.94% accuracy, 89.56% F1-score.

## Architecture
TeachEvalNet uses a hybrid framework, combining pre-trained models for feature extraction with task-specific modules for behavioral analysis. The Behavioral Recognition-Driven Quality Assessment Network (BRQAN) and Behavioral-Driven Optimization Strategy (BDOS) modules work together to:
- Extract meaningful features from multimodal data using CNNs and GNNs.
- Aggregate features with temporal convolutional networks (TCNs) and multi-head attention mechanisms.
- Optimize class quality prediction through adaptive learning, domain-specific insights, and feedback integration.

### Diagram Overview
Figure 1: Schematic diagram of the BRQAN. The model extracts hierarchical feature maps from multimodal data, capturing both spatial and relational patterns.

## Contributing
We welcome contributions to improve TeachEvalNet. If you want to contribute:
1. Fork the repository.
2. Create a new branch for your changes.
3. Make your changes and submit a pull request.

## License
This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments
We would like to thank the researchers and institutions that contributed to the datasets used in this project, as well as the developers of the deep learning frameworks that made this work possible.

## References
Li, C. (2025). TeachEvalNet: Behavioral Recognition-Driven Intelligent Quality Assessment for Physical Education Classes. Frontiers in Education.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Define the Behavioral Recognition-Driven Quality Assessment Network (BRQAN)
class BRQAN(nn.Module):
    def __init__(self, input_dim, feature_dim, output_dim):
        super(BRQAN, self).__init__()
        
        # Define the multimodal encoder for feature extraction
        self.conv1 = nn.Conv2d(input_dim, 64, kernel_size=3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1)

        # The original snippet is truncated here; the layers below are a minimal,
        # assumed completion: pool the feature maps and map them to quality scores.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Linear(128, feature_dim)
        self.fc2 = nn.Linear(feature_dim, output_dim)
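
    # Illustrative forward pass (assumed): convolutional feature extraction,
    # global pooling, and a small prediction head for the quality score.
    def forward(self, x):
        # x: (batch, input_dim, H, W)
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = self.pool(x).flatten(1)   # (B, 128)
        x = F.relu(self.fc1(x))
        return self.fc2(x)            # (B, output_dim)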

DeepHeritageNet: Modeling the Transmission of Intangible Musical Heritage

## Overview
DeepHeritageNet is a deep learning framework designed to address the challenges in preserving and transmitting intangible musical heritage. This model integrates various computational techniques to simulate the transmission of musical motifs across generations. By leveraging hierarchical architectures, cultural memory embeddings, and context-aware mechanisms, DeepHeritageNet ensures both the fidelity and creative evolution of musical traditions, making it a significant tool for cultural heritage preservation.

## Key Features
- **Cultural Embedding Transmission Network (CETNet)**: A novel model that integrates symbolic and neural representations for cultural continuity and stylistic innovation in musical heritage transmission.
- **Context-Attuned Modulated Inheritance (CAMI)**: A strategy for adapting transmission processes to dynamic cultural and performative contexts.
- **Multimodal Learning**: Incorporates diverse input types (e.g., audio, symbolic notations, ethnographic descriptions) for a comprehensive understanding of musical heritage.
- **Generative Modeling**: Balances historical preservation with stylistic evolution, enabling the modeling of cultural diffusion and stylistic change.
- **Enhanced Accuracy**: Demonstrated superior results in motif recognition and stylistic classification compared to existing models, ensuring high-quality preservation and revitalization of endangered musical traditions.

## Installation
```bash
# Clone the repository
git clone https://github.com/yourusername/DeepHeritageNet.git
cd DeepHeritageNet

# Install dependencies
pip install -r requirements.txt
```

## Usage
### Model Training
To train the model, use the following script:
```bash
python train.py --data_path /path/to/your/dataset --epochs 100 --batch_size 64
```

### Model Inference
Once trained, you can use the model to generate new motifs or analyze cultural transmission:
```python
from deep_heritage_net import DeepHeritageNet

# Initialize model
model = DeepHeritageNet()

# Load trained weights
model.load_state_dict(torch.load('model.pth'))

# Generate new motif
new_motif = model.generate_motif(context_data)
```

## Datasets
The model supports multiple datasets, including:
- **Intangible Musical Heritage Audio Dataset**: A collection of audio recordings from various regional and ethnic traditions.
- **Traditional Music Style Classification Dataset**: For genre-level style classification, including labeled audio samples.
- **Cultural Music Transmission Network Dataset**: Models the dissemination of musical practices and stylistic evolution across geographical and social networks.
- **Folk Song Feature Extraction Dataset**: Focused on computational analysis of folk music, including pitch contours, harmonic annotations, and lyrical content.

## Experimental Results
DeepHeritageNet has been evaluated on multiple cultural corpora, including:
- **Intangible Musical Heritage Audio Dataset**: 91.76% accuracy, 90.33% F1-score.
- **Traditional Music Style Classification Dataset**: 93.02% accuracy, 91.45% F1-score.
- **Cultural Music Transmission Network Dataset**: 91.29% accuracy, 93.04% AUC.
- **Folk Song Feature Extraction Dataset**: 92.58% accuracy, 93.89% AUC.

## Research & Development
DeepHeritageNet incorporates cutting-edge methods in deep learning, including:
- **Transformer-based Sequence Modeling**: For capturing long-term dependencies in musical motifs.
- **Cultural Memory Embedding**: To preserve historical context while allowing for stylistic drift.
- **Contextual Dynamics and Modulation**: Adapts the model's output to fit cultural and performative contexts, ensuring that outputs remain authentic while allowing for creative variations.

## Contributing
We welcome contributions to improve the system! To contribute:
1. Fork the repository.
2. Create a new branch.
3. Make your changes and submit a pull request.

## License
This project is licensed under the MIT License - see the LICENSE file for details.

## Acknowledgments
We would like to acknowledge the contribution of the ethnomusicological community and all the researchers who have worked on the datasets used in this project.

## References
Zhang, W. et al. (2025). DeepHeritageNet: Applying Deep Learning Models in the Transmission Network of Intangible Musical Heritage. Frontiers in Arts and Humanities.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Define the Cultural Embedding Transmission Network (CETNet)
class CETNet(nn.Module):
    def __init__(self, input_dim, hidden_dim, output_dim):
        super(CETNet, self).__init__()
        
        # Define layers for encoding the motif and cultural memory
        self.encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.memory_embedding = nn.Linear(hidden_dim, hidden_dim)  # Cultural memory update

        # The original snippet is truncated here; the decoder below is a minimal,
        # assumed completion that maps the memory-conditioned state to motif outputs.
        self.decoder = nn.Linear(hidden_dim, output_dim)
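
    # Illustrative forward pass (assumed): encode the motif sequence, refresh
    # the cultural-memory embedding from the final hidden state, and decode.
    def forward(self, motif_seq):
        # motif_seq: (batch, time, input_dim)
        outputs, (h_n, _) = self.encoder(motif_seq)           # h_n: (1, B, hidden_dim)
        memory = torch.tanh(self.memory_embedding(h_n[-1]))   # (B, hidden_dim)
        return self.decoder(memory)                           # (B, output_dim)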

CultureGraphNet

## Overview
CultureGraphNet introduces a graph-based structural attention learning framework designed to capture and analyze implicit cultural propagation across corporate networks. Traditional models often overlook the dynamic and heterogeneous nature of organizational interactions. CultureGraphNet addresses this by integrating structural attention mechanisms, graph-based learning, and temporal propagation dynamics, providing both predictive power and interpretability.

## 🧠 Key Components
### 1. Structural Attention Graph Network (SAGN)
A hierarchical attention-driven graph neural network that dynamically assigns importance to nodes and edges.
- Learns latent cultural influences.
- Captures both local and global dependencies in corporate structures.
- Integrates multimodal encoders and graphical propagation layers.

### 2. Adaptive Cultural Diffusion Strategy (ACDS)
A tailored propagation strategy that simulates reinforcement, resistance, and decay of cultural traits.
- Models time-dependent cultural evolution.
- Adapts to changing corporate structures and relationships.

### 3. Interpretability Framework
Provides insight into how cultural traits propagate by identifying influential nodes and relationships within the network.

## ⚙️ Architecture
The schematic diagram on page 6 of the paper illustrates the model's workflow:
- **Multimodal Encoder Architecture**: Extracts hierarchical features using a Swin Transformer backbone with patch merging and attention refinement.
- **Graphical Propagation Layer**: Aggregates node-level representations through attention-weighted adjacency matrices.
- **Structural Attention Mechanism** (page 10, Figure 4): Combines multimodal features from visual and textual inputs to learn hierarchical dependencies.
- **Temporal Cultural Dynamics**: Simulates evolution of node-level cultural states over time using dynamic attention scores.

## 🧩 Model Formulation
CultureGraphNet models a corporate network as a directed graph:
- **Nodes (V)**: corporate entities (employees, teams, departments)
- **Edges (E)**: relationships or interactions
- **Node Features**: cultural attributes, contextual data
- **Adjacency Matrix (A)**: influence weights among entities

Core equations:
```text
X(t+1)   = A · X(t) + F(X(t))                        # Cultural propagation
α_ij     = softmax(LeakyReLU(aᵀ [W₁ hᵢ || W₂ hⱼ]))   # Structural attention
h'_u     = σ( Σ_v α_uv W₃ h_v )                      # Node updates
c_i(t+1) = φ( c_i(t), Σ_j α_ij c_j(t), z_i )         # Temporal dynamics
```

## 📊 Datasets
CultureGraphNet is evaluated on four major datasets:

| Dataset | Focus | Description |
|---|---|---|
| Corporate Network Interaction Dataset | Communication Patterns | Logs of emails, chats, and meetings capturing information flow. |
| Organizational Culture Propagation Dataset | Cultural Dynamics | Longitudinal survey and observational data on cultural norms. |
| Employee Relationship Graph Dataset | Social Structures | Mapping of formal and informal relationships within teams. |
| Workplace Structural Dynamics Dataset | Temporal Changes | Data on restructuring, turnover, and adaptability metrics. |

## 🧪 Experimental Results
According to Tables 1–4 (pages 12–13):
- Achieved up to 91.5% accuracy and 90.7% AUC across datasets.
- Outperformed baselines like ResNet, ViT, DenseNet, and BLIP by 2–4%.
- Showed strong robustness, efficiency, and interpretability.
- Ablation studies confirm the critical impact of structural attention, temporal modeling, and interpretability features.

## ⚙️ Implementation Details

| Parameter | Value |
|---|---|
| Framework | PyTorch |
| GPU | NVIDIA A100 (40 GB) |
| Optimizer | Adam (lr = 1e-3, cosine decay) |
| Batch Size | 64 |
| Epochs | 100 |
| Dropout | 0.5 |
| Weight Decay | 1e-4 |
| Data Augmentation | MixUp, CutMix, random crop, jitter |
| Metrics | Accuracy, Recall, F1, AUC, mAP |

## 🚀 How to Use
### Installation
```bash
git clone https://github.com/<your-username>/CultureGraphNet.git
cd CultureGraphNet
pip install -r requirements.txt
```
### Training
```bash
python train.py --config configs/culturegraphnet.yaml
```
### Evaluation
```bash
python evaluate.py --checkpoint checkpoints/best_model.pth
```
### Visualization
```bash
python visualize_attention.py --graph data/sample_graph.json
```

## 🧩 Folder Structure
```text
├── data/        # Example datasets or loaders
├── models/      # CultureGraphNet model files
├── configs/     # Experiment configurations
├── utils/       # Helper scripts (metrics, visualization)
├── results/     # Logs and output metrics
└── README.md    # Project documentation
```

## 📚 Citation
If you use this framework, please cite:
```bibtex
@article{xu2025culturegraphnet,
  title={CultureGraphNet: Graph-Based Structural Attention Learning for Implicit Culture Propagation in Corporate Networks},
  author={Gaofan Xu},
  journal={China Three Gorges University},
  year={2025}
}
```

## 📜 License
This repository is licensed under the MIT License. See LICENSE for details.
# models/structural_attention.py
from __future__ import annotations

from typing import Optional, Tuple
import torch
from torch import nn, Tensor
import torch.nn.functional as F


class StructuralAttentionConv(nn.Module):
    r"""
    Structural Attention message passing (single head) inspired by the paper's
    formulations (attention over neighbors with separate W1/W2 and vector 'a').

    For a directed graph G=(V,E), we compute for edge u->v:

        e_uv     = LeakyReLU(a^T [W1 h_u || W2 h_v])
        alpha_uv = softmax_v(e_uv)            (normalized over the neighbors v of u)
        h'_u     = sigma( sum_v alpha_uv * W3 h_v )
    """

    # Minimal completion of the truncated snippet (assumed parameterization).
    def __init__(self, in_dim: int, out_dim: int, negative_slope: float = 0.2):
        super().__init__()
        self.W1 = nn.Linear(in_dim, out_dim, bias=False)
        self.W2 = nn.Linear(in_dim, out_dim, bias=False)
        self.W3 = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Parameter(torch.randn(2 * out_dim) * 0.1)  # attention vector 'a'
        self.negative_slope = negative_slope
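
    # Illustrative forward pass (assumed, not the released implementation).
    # edge_index is a (2, E) tensor of [source, target] node indices; the
    # softmax is taken per source node over its outgoing neighbors.
    def forward(self, h: Tensor, edge_index: Tensor) -> Tensor:
        src, dst = edge_index                                     # (E,), (E,)
        z1, z2, z3 = self.W1(h), self.W2(h), self.W3(h)           # (N, out_dim) each
        scores = F.leaky_relu(
            torch.cat([z1[src], z2[dst]], dim=-1) @ self.a,
            negative_slope=self.negative_slope,
        )                                                         # (E,)
        # Softmax per source node over its outgoing edges.
        num = torch.exp(scores - scores.max())
        denom = torch.zeros(h.size(0), device=h.device).index_add_(0, src, num)
        alpha = num / (denom[src] + 1e-12)                        # (E,)
        out = torch.zeros_like(z3).index_add_(0, src, alpha.unsqueeze(-1) * z3[dst])
        return torch.sigmoid(out)                                 # h'_u = sigma(sum_v alpha_uv W3 h_v)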

M3DecideNet: Multi-Modal Attention-Driven Fusion for Enterprise Management Decision Support

## Overview
M3DecideNet is a cutting-edge multi-modal attention-driven fusion framework designed to enhance enterprise management decision-making by integrating diverse data sources, such as financial metrics, operational statistics, and external market indicators. This system utilizes a dynamic attention mechanism to optimize predictive accuracy and interpretability in real-time decision-making.

## Key Features
- **Multi-Modal Fusion**: Integrates various data types (text, numerical, visual) using attention mechanisms.
- **Adaptive Decision Strategy**: Context-aware fusion strategy to adapt to different enterprise environments.
- **State-of-the-art Performance**: Empirical results show superior predictive performance across enterprise management scenarios.
- **Scalability & Flexibility**: Suitable for real-time business applications and adaptable to different enterprise contexts.

## Installation
```bash
# Clone the repository
git clone https://github.com/yourusername/M3DecideNet.git
cd M3DecideNet

# Install dependencies
pip install -r requirements.txt
```

## Usage
```python
from m3decidenet import M3DecideNet

# Initialize the model
model = M3DecideNet()

# Train the model with your enterprise data
model.train(training_data)

# Make predictions
predictions = model.predict(new_data)
```

## Documentation
Components:
- **Preliminaries**: Defines the mathematical foundation for enterprise decision-making, addressing multi-modal data fusion challenges.
- **M3FusionNet**: The core model that uses attention mechanisms for dynamic data integration.
- **Adaptive Decision Fusion Strategy (ADFS)**: A robust strategy to optimize decision-making through attention modulation and regularization.

For a deeper dive into the methodology, refer to the research paper linked in the References section.

## Experiments
The framework has been evaluated using multiple datasets, including Enterprise Decision Support Data, Multi-Modal Management Insights, and others. Performance metrics such as Accuracy, Precision, Recall, and AUC show that M3DecideNet outperforms leading models.

### Evaluation Results (Multi-Modal Management Insights Dataset)
- Accuracy: 90.45%
- Precision: 89.78%
- Recall: 89.23%
- AUC: 90.12%

## Contributing
We welcome contributions to improve and expand M3DecideNet. Please follow these steps to contribute:
1. Fork the repository.
2. Create a new branch for your feature or fix.
3. Submit a pull request with a clear description of your changes.

## License
This project is licensed under the MIT License - see the LICENSE file for details.

## References
- [Bi et al., 2022] Enterprise strategic management using multi-modal emotion recognition. Frontiers in Psychology.
- [Ren et al., 2024] Multi-modal fusion for review helpfulness prediction. Information Processing & Management.
- [Wang et al., 2023] Attentive statement fraud detection with multi-modal financial data. Decision Support Systems.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalAttention(nn.Module):
    def __init__(self, input_dims, attention_dims):
        super(MultiModalAttention, self).__init__()
        
        self.attention_layers = nn.ModuleList([
            nn.Linear(input_dim, attention_dims) for input_dim in input_dims
        ])
        self.attention_weights = nn.Parameter(torch.ones(len(input_dims)))

    def forward(self, inputs):
        # Project each modality into the shared attention space.
        # inputs: list of tensors, one per modality, shaped (batch, input_dims[i])
        projected = [layer(x) for layer, x in zip(self.attention_layers, inputs)]

        # The original snippet is truncated here; the fusion below is a minimal,
        # assumed completion: softmax over the learnable per-modality weights.
        attention_scores = F.softmax(self.attention_weights, dim=0)      # (num_modalities,)
        fused = sum(w * p for w, p in zip(attention_scores, projected))  # (batch, attention_dims)
        return fused, attention_scores
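
# Illustrative usage (assumed shapes, not from the paper): fuse three modalities
# (e.g., financial metrics, operational statistics, market indicators) into a
# single 128-dimensional decision embedding.
if __name__ == "__main__":
    fusion = MultiModalAttention(input_dims=[32, 64, 16], attention_dims=128)
    batch = [torch.randn(8, 32), torch.randn(8, 64), torch.randn(8, 16)]
    fused, weights = fusion(batch)
    print(fused.shape, weights)   # torch.Size([8, 128]) and 3 modality weights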

Multimodal Data Fusion for Evaluating the Effectiveness of Cadre Training

## Overview
This repository implements the research presented in "Multimodal Data Fusion for Evaluating the Effectiveness of Cadre Training." The project proposes a unified framework that integrates multimodal data (text, audio, video, and physiological signals) to assess the effectiveness of cadre (leadership) training. It emphasizes interpretability, adaptability, and policy alignment in performance evaluation.

## 🧠 Key Components
- **Hierarchically Attentive Progression Encoder (HAPE)**: A temporal encoder that captures cross-time and cross-unit dependencies in performance data using hierarchical attention and GRU-based modeling. (See architecture diagram in Figure 1, page 8 of the paper.)
- **Policy-Aligned Knowledge-Guided Adaptation (PAKGA)**: A reinforcement learning strategy that integrates institutional policies and domain knowledge into adaptive decision-making. (See schematic on page 11 for workflow illustration.)
- **Multimodal Fusion Framework**: Combines heterogeneous data modalities (video, text, audio, sensor data) into interpretable embeddings, improving robustness and accuracy.

## 📊 Experimental Highlights
- Achieved ~91% accuracy and ~92% AUC on benchmark datasets:
  - Cadre Training Performance Dataset
  - Multimodal Leadership Assessment Dataset
  - Training Effectiveness Metrics Dataset
  - Behavioral Insights Fusion Dataset
- Outperforms baseline models like OC-SVM, TranAD, and MSCRED by 4–6%.

## ⚙️ Implementation Details
- Frameworks: PyTorch, Hugging Face Transformers
- Optimizers: AdamW with cosine annealing
- Learning Rate: 3e-4 (with warmup and decay)
- Batch Size: 64 per GPU
- Evaluation Metrics: Accuracy, F1-Score, AUC, MSE, Pearson Correlation

## 🧩 Folder Structure
```text
├── data/          # Sample datasets or data loading scripts
├── models/        # HAPE and PAKGA model definitions
├── utils/         # Helper functions (training, evaluation, visualization)
├── experiments/   # Configuration files and logs
├── figures/       # Architecture diagrams and result visualizations
└── README.md      # Project documentation
```

## 🚀 Usage
Install dependencies:
```bash
pip install -r requirements.txt
```
Train the model:
```bash
python train.py --config configs/hape_pakga.yaml
```
Evaluate:
```bash
python evaluate.py --checkpoint checkpoints/best_model.pth
```

## 🧩 Citation
If you use this work, please cite:
```bibtex
@article{wang2025multimodal,
  title={Multimodal Data Fusion for Evaluating the Effectiveness of Cadre Training},
  author={Wang, Ke},
  journal={Shengli Oilfield Party School (Training Center)},
  year={2025}
}
```

## 📜 License
This project is released under the MIT License. See LICENSE for details.
# models/hape.py
from __future__ import annotations

import math
from typing import Optional, Tuple, Dict

import torch
from torch import nn, Tensor
import torch.nn.functional as F


class SinusoidalPositionalEncoding(nn.Module):
    """
    Classic transformer-style fixed positional encoding.

    Args:
        dim: feature dimension
        max_len: maximum sequence length supported
    """
    def __init__(self, dim: int, max_len: int = 10_000):
        super().__init__()
        pe = torch.zeros(max_len, dim)
        position = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)
        div_term = torch.exp(
            torch.arange(0, dim, 2, dtype=torch.float32) * (-math.log(10000.0) / dim)
        )
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe.unsqueeze(0))  # (1, max_len, dim)

    def forward(self, x: Tensor) -> Tensor:
        # x: (batch, seq_len, dim) -> x with positional information added
        return x + self.pe[:, : x.size(1)]
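
# --- Illustrative sketch (assumed, not the paper's full HAPE module) ---------
# The README describes HAPE as a GRU-based temporal encoder with hierarchical
# attention over time steps. A minimal version combining the positional
# encoding above with a GRU and additive attention pooling:

class TemporalGRUEncoder(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.pos = SinusoidalPositionalEncoding(dim)
        self.gru = nn.GRU(dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)

    def forward(self, x: Tensor) -> Tensor:
        # x: (batch, time, dim) -> pooled representation (batch, hidden)
        h, _ = self.gru(self.pos(x))
        w = torch.softmax(self.attn(h), dim=1)
        return (w * h).sum(dim=1)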

Check invalid objects

DECLARE
    v_total_pkg     NUMBER := 0;
    v_success_pkg   NUMBER := 0;
    v_error_pkg     NUMBER := 0;
    v_total_prc     NUMBER := 0;
    v_success_prc   NUMBER := 0;
    v_error_prc     NUMBER := 0;
    v_total_fnc     NUMBER := 0;
    v_success_fnc   NUMBER := 0;
    v_error_fnc     NUMBER := 0;
    v_total_trg     NUMBER := 0;
    v_success_trg   NUMBER := 0;
    v_error_trg     NUMBER := 0;
BEGIN
    -- PACKAGES
    FOR pkg IN (SELECT DISTINCT object_name FROM user_objects
                 WHERE object_type = 'PACKAGE' AND status = 'INVALID') LOOP
        v_total_pkg := v_total_pkg + 1;
        BEGIN
            EXECUTE IMMEDIATE 'ALTER PACKAGE ' || pkg.object_name || ' COMPILE';
            v_success_pkg := v_success_pkg + 1;
        EXCEPTION
            WHEN OTHERS THEN v_error_pkg := v_error_pkg + 1;
        END;
    END LOOP;

    -- The original snippet is truncated here: the PROCEDURE, FUNCTION and
    -- TRIGGER sections repeat the same compile-and-count pattern, followed by
    -- a DBMS_OUTPUT summary of the v_total/v_success/v_error counters.
END;
/

shareX audio recording parameters

# shareX audio recording parameters

![](https://cdn.cacher.io/attachments/u/3fx93fy4dqwj6/wKxhyPsYdB5zhLVZ92KkCxnxrgaaM702/b2jobpocb.png)

Sample showing that front-end-only validation is meaningless

<form id="purchase-form">
  <label>
    Quantity (max 2):
    <input type="number" id="quantity" name="quantity" min="1" max="2" required />
  </label>
  <button type="submit">Purchase</button>
</form>

<script>
  document.getElementById("purchase-form").addEventListener("submit", async (e) => {
    e.preventDefault();

    const quantity = parseInt(document.getElementById("quantity").value, 10);

    // Front-end validation
    if (quantity > 2) {
      alert("You can only purchase up to 2.");
      return;
    }

    // Send to the API. The server must validate the quantity again, since a
    // client can bypass this form entirely. "/api/purchase" is a placeholder endpoint.
    await fetch("/api/purchase", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ quantity }),
    });
  });
</script>