AI & Dynamical Systems

About Me: Kaichen Ouyang

I graduated with a Bachelor's degree from the University of Science and Technology of China (USTC) in June 2024 and am now a Master's student in the Department of Physics at USTC. My research interests lie at the intersection of artificial intelligence and dynamical systems, focusing on three interconnected directions:

  • Dynamical Systems of AI: understanding the training dynamics of deep learning from the perspective of dynamical systems. This includes studying the dynamics of optimizers such as SGD and Adam, as well as understanding training dynamics through the lenses of statistical physics and neural evolution. (Theory)
  • Dynamical Systems for AI: designing novel AI algorithms inspired by real-world dynamical systems. This encompasses metaheuristic algorithms such as evolutionary computation and swarm intelligence, deep learning architectures like diffusion models and flow matching, as well as the development of new deep learning optimizers. (Algorithm)
  • AI for Dynamical Systems: leveraging AI to solve various complex dynamical system problems. This includes using AI to accelerate the simulation of dynamical systems, as well as prediction and optimization tasks based on dynamical systems. (Application)
Google Scholar: View Profile
CV: Download PDF

Education Background

2020.09 – 2024.06

University of Science and Technology of China

Bachelor of Mathematics and Applied Mathematics

2024.09 – 2027.06 (Expected)

University of Science and Technology of China

Master of Physics

IELTS: 7.0/9.0

Certificate: Strengthening Foundation Plan in Mathematics (national program for strengthening basic academic disciplines)

Conference Experience

The 40th AAAI Conference on Artificial Intelligence (AAAI) 2026, Poster Presentation

IEEE Congress on Evolutionary Computation (CEC) 2025, Oral Presentation

5th Amorphous Physics and Materials Symposium 2024, Attendee

3rd International Conference on Applied Mathematics, Modelling, and Intelligent Computing (CAMMIC 2023), Attendee

2023 4th International Conference on Computer Vision, Image and Deep Learning (CVIDL 2023), Attendee

2023 8th International Conference on Intelligent Computing and Signal Processing, Attendee

School Experience

2022: Mathematical Analysis B1, Teaching Assistant

2024: Mathematical Modeling, Teaching Assistant

Research Experience

2025.05 – Present

Surrogate-Driven Multi-Objective Evolutionary Design of Battery Phase Change Materials, Cooperation Project at USTC
Advisor: Prof. Qiangling Duan

2024.2 – 2024.9

Discovery of Metallic Glasses Driven by Large Language Models and Graph Neural Networks, University Innovation Project at Songshan Lake Materials Laboratory
Advisor: Prof. Yuanchao Hu

2023.3 – 2024.6

Deep Neural Network-Based Control of Quantum Uncertain Systems, University Innovation Project at USTC
Advisor: Prof. Sen Kuang

2023.05 – Present

Statistical Physics & Artificial Intelligence, Master's Student at USTC
Advisor: Prof. Hua Tong

2021.9 – Present

Evolutionary Algorithms & Machine Learning, Research Assistant at Wenzhou University
Advisor: Prof. Huiling Chen

2021.3 – 2022.9

Non-equilibrium Statistical Physics & Complex Networks, Research Assistant at USTC
Advisor: Prof. Binghong Wang

Software Copyright

KC-optimizer: An Integrated Interactive Platform for Metaheuristic Algorithms for Function Optimization V1.0 (Software Copyright, Registration No. 2024SR0164438)

Reviewer Experience

2024: Swarm and Evolutionary Computation (JCR Q1, IF: 8.2), Reviewer

2025: Knowledge-Based Systems (JCR Q1, IF: 7.2), Reviewer

2025: International Joint Conference on Neural Networks (IJCNN), Reviewer

2025: International Conference on Intelligent Computing (ICIC), Reviewer

2025: AAAI 2026, Reviewer

2025: Computers and Electrical Engineering (JCR Q1, IF: 4.9), Reviewer

2025: International Journal of Computational Intelligence Systems (JCR Q2, IF: 3.116), Reviewer

2025: Information Sciences (JCR Q1, IF: 6.8), Reviewer

2025: Neurocomputing (JCR Q1, IF: 6.5), Reviewer

2025: Scientific Reports (JCR Q1, IF: 3.9), Reviewer

2025: Engineering Reports (JCR Q2, IF: 2.0), Reviewer

2025: Biomedical Signal Processing and Control (JCR Q2, IF: 4.9), Reviewer

2025: Engineering Computations (JCR Q2, IF: 1.9), Reviewer

2025: Cluster Computing (JCR Q1, IF: 4.1), Reviewer

2025: Control and Decision (Chinese Core Journals, IF: 3.012), Reviewer

2025: ISPRS Journal of Photogrammetry and Remote Sensing (JCR Q1, IF: 12.2), Reviewer

2025: Results in Engineering (JCR Q1, IF: 7.9), Reviewer

2025: Smart Agricultural Technology (JCR Q1, IF: 5.7), Reviewer

2025: Optik, Reviewer

2025: Advanced Engineering Informatics (JCR Q1, IF: 9.9), Reviewer

2026: IEEE WCCI 2026, Reviewer

2026: Mathematics (JCR Q1, IF: 2.2), Reviewer

2026: Discover Informatics, Reviewer

Skills

Language skills: Chinese (Native), English (Fluent)

Computer Skills: Microsoft Office 365, Python, MATLAB, MySQL, Java, C/C++, LAMMPS

Contact

Email: oykc@mail.ustc.edu.cn

Tel: +86 15888787619

Discussions on Optimization

Emergence & Optimization

1. Unified Optimization Perspective

Despite their different origins, Deep Learning (DL), Evolutionary Computation (EC), and Statistical Physics (SP) share a common mathematical essence: optimization over complex landscapes. All three fields aim to find optimal states in high-dimensional spaces, where the objective function can be viewed as an energy landscape.

Deep Learning

Function: Loss minimization L(θ)

Method: Gradient descent in parameter space

Analogy: Water flowing downhill in a complex terrain

Evolutionary Computation

Function: Fitness maximization F(x)

Method: Population-based stochastic search

Analogy: Species adapting to changing environments

Statistical Physics

Function: Free energy minimization F(s)

Method: Statistical mechanics & Monte Carlo

Analogy: Physical system reaching thermal equilibrium

The fundamental challenge across all three domains is navigating these high-dimensional landscapes efficiently, avoiding local minima, and understanding their topological properties. Statistical Physics provides the unifying framework through concepts of energy landscapes, phase transitions, and entropy.
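This shared structure can be made concrete on a toy problem. The sketch below is purely illustrative (the one-dimensional multimodal function and all hyperparameters are arbitrary choices, not taken from any publication above): it minimizes the same landscape three ways, with gradient descent for the DL picture, an elitist evolution strategy for the EC picture, and Metropolis sampling with a cooling schedule for the SP picture.

```python
import math
import random

def f(x):
    """A simple 1-D multimodal 'energy landscape' (illustrative choice)."""
    return x * x + 3.0 * math.sin(3.0 * x)

def grad_f(x):
    """Analytic gradient, standing in for backpropagation."""
    return 2.0 * x + 9.0 * math.cos(3.0 * x)

def gradient_descent(x=4.0, lr=0.01, steps=1000):
    """DL-style: follow the local slope downhill to a stationary point."""
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

def evolution_strategy(pop=20, gens=200, sigma=0.5, lo=-5.0, hi=5.0):
    """EC-style: elitist population search with one random immigrant per generation."""
    xs = [random.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        xs.sort(key=f)
        parents = xs[:5]  # elitist truncation selection
        children = [p + random.gauss(0.0, sigma) for p in parents for _ in range(3)]
        xs = parents + children + [random.uniform(lo, hi)]
    return min(xs, key=f)

def simulated_annealing(x=4.0, steps=10000, t0=3.0):
    """SP-style: Metropolis sampling with a cooling schedule; return best state seen."""
    best = x
    for i in range(steps):
        t = t0 * (1.0 - i / steps) + 1e-3  # linear cooling, small floor
        y = x + random.gauss(0.0, 0.5)
        if f(y) < f(x) or random.random() < math.exp((f(x) - f(y)) / t):
            x = y
        if f(x) < f(best):
            best = x
    return best
```

All three reach low-lying regions of the same landscape; they differ only in how they trade local slope information against global exploration.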

2. Bidirectional Interactions in the Optimization Triangle

A. Deep Learning ↔ Evolutionary Computation

EC → DL (Forward): Evolutionary algorithms automate the design of neural architectures, loss functions, and hyperparameters. They explore discrete, non-differentiable spaces that gradient methods cannot handle, enabling architecture search and meta-learning.

DL → EC (Reverse): Deep learning provides surrogate models to accelerate fitness evaluation, learns representations to guide evolutionary search, and enables neural-based variation operators. Trained models can predict which evolutionary strategies will be most effective.
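The surrogate loop can be sketched in a few lines. In this illustrative example a k-nearest-neighbour average stands in for a learned deep surrogate, and `expensive_fitness` is a hypothetical stand-in for a costly simulation; candidates are pre-screened by the surrogate so that only the most promising two per generation receive a true evaluation. All names and settings are assumptions made for the sketch.

```python
import random

def expensive_fitness(x):
    """Stands in for a costly simulation; here just a shifted quadratic."""
    expensive_fitness.calls += 1
    return (x - 2.0) ** 2

expensive_fitness.calls = 0

def surrogate_predict(x, archive, k=3):
    """k-nearest-neighbour surrogate: average fitness of nearby evaluated points."""
    nearest = sorted(archive, key=lambda p: abs(p[0] - x))[:k]
    return sum(fit for _, fit in nearest) / len(nearest)

def surrogate_assisted_search(gens=30, pop=10, sigma=0.5):
    xs = [random.uniform(-5.0, 5.0) for _ in range(pop)]
    archive = [(x, expensive_fitness(x)) for x in xs]
    for _ in range(gens):
        best_x = min(archive, key=lambda p: p[1])[0]
        candidates = [best_x + random.gauss(0.0, sigma) for _ in range(pop)]
        # pre-screen cheaply; spend true evaluations only on the top two
        candidates.sort(key=lambda c: surrogate_predict(c, archive))
        for c in candidates[:2]:
            archive.append((c, expensive_fitness(c)))
    return min(archive, key=lambda p: p[1])
```

With 10 initial samples and 2 true evaluations per generation, the search spends 70 expensive calls in total, instead of the 310 a naive loop evaluating every candidate would use.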

B. Deep Learning ↔ Statistical Physics

SP → DL (Forward): Statistical physics provides theoretical foundations for understanding loss landscapes (curvature, saddle points), inspires optimization strategies (simulated annealing, parallel tempering), and explains generalization through concepts like phase transitions and the flattening of minima.

DL → SP (Reverse): Deep learning serves as a computational tool for solving statistical physics problems: calculating partition functions, detecting phase transitions, learning order parameters, and designing materials with desired properties (AI for Science). Neural networks provide new ways to simulate and understand complex physical systems.

C. Evolutionary Computation ↔ Statistical Physics

SP → EC (Forward): Statistical physics enhances EC by providing theoretical frameworks to understand and improve evolutionary dynamics, such as analyzing selection pressure as "temperature" and diversity as "entropy," and designing adaptive strategies based on thermodynamic principles.

EC → SP (Reverse): Evolutionary computation solves SP optimization problems for complex systems with rugged energy landscapes, and serves as a computational tool for studying non-equilibrium statistical mechanics and testing adaptation theories.
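A concrete instance of this direction, sketched under simplifying assumptions: a small memetic genetic algorithm searching for the ground state of a toy Ising spin glass with random Gaussian couplings. The instance is kept tiny (12 spins) so the result can be checked by brute force; the population sizes and operators are illustrative choices, not a tuned method.

```python
import random

N = 12
rng = random.Random(0)
# a tiny spin-glass instance: random symmetric Gaussian couplings (illustrative)
J = [[0.0] * N for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        J[i][j] = J[j][i] = rng.gauss(0.0, 1.0)

def energy(s):
    """E(s) = -sum_{i<j} J_ij s_i s_j  (Ising spin-glass Hamiltonian)."""
    return -sum(J[i][j] * s[i] * s[j] for i in range(N) for j in range(i + 1, N))

def local_search(s):
    """Greedy single-spin-flip descent to a one-flip-stable configuration."""
    improved = True
    while improved:
        improved = False
        for i in range(N):
            h = sum(J[i][j] * s[j] for j in range(N) if j != i)
            if s[i] * h < 0:  # flipping spin i changes E by 2*s_i*h_i < 0
                s[i] = -s[i]
                improved = True
    return s

def ga_ground_state(pop=30, gens=60):
    P = [local_search([rng.choice((-1, 1)) for _ in range(N)]) for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=energy)
        elite = P[:10]
        children = []
        for _ in range(pop - 10):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, N)
            child = a[:cut] + b[cut:]      # one-point crossover
            child[rng.randrange(N)] *= -1  # single spin-flip mutation
            children.append(local_search(child))
        P = elite + children
    return energy(min(P, key=energy))

def brute_force_ground_energy():
    best = float("inf")
    for bits in range(1 << N):
        s = [1 if bits >> k & 1 else -1 for k in range(N)]
        best = min(best, energy(s))
    return best
```

The memetic step (local search inside the GA) is what makes the rugged landscape tractable: crossover recombines basins, while greedy descent handles the fine structure.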

3. Closed-Loop Optimization System

My research integrates these interactions into a self-improving optimization engine that operates through three complementary mechanisms:

1. EC as Global Explorer: Evolutionary algorithms perform broad exploration in complex, multimodal spaces, identifying promising regions for detailed investigation.

2. DL as Local Refiner: Deep learning performs gradient-based optimization within identified regions, efficiently converging to high-quality solutions.

3. SP as Diagnostic Analyst: Statistical physics analyzes the optimization landscape and process dynamics, providing insights that guide adaptive control of both EC and DL parameters.

The three components form a virtuous cycle: EC explores and proposes candidate solutions; DL refines these solutions through gradient optimization; SP analyzes the resulting landscapes and dynamics; this analysis then informs the adaptive control of both EC and DL, creating a self-improving optimization system.
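The cycle above can be caricatured in a few lines. In this illustrative sketch, the evolutionary population proposes start points, gradient descent refines them, and a crude diversity statistic (standing in for the SP diagnostic) adapts the mutation strength; the toy objective and every constant are arbitrary assumptions made for the example.

```python
import math
import random

def f(x):
    """Toy 1-D multimodal objective (illustrative choice)."""
    return x * x + 3.0 * math.sin(3.0 * x)

def grad_f(x):
    return 2.0 * x + 9.0 * math.cos(3.0 * x)

def refine(x, lr=0.01, steps=50):
    """DL role: local gradient descent from a proposed start point."""
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

def diversity(xs):
    """SP role: population spread as a crude 'entropy'/temperature diagnostic."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

def closed_loop(pop=12, gens=40):
    xs = [random.uniform(-5.0, 5.0) for _ in range(pop)]
    for _ in range(gens):
        xs = [refine(x) for x in xs]  # DL: local refinement of every proposal
        xs.sort(key=f)
        parents = xs[:4]              # EC: elitist selection
        # SP diagnostic controls EC: widen the search when the population collapses
        sigma = 1.5 if diversity(parents) < 0.1 else 0.5
        xs = parents + [random.choice(parents) + random.gauss(0.0, sigma)
                        for _ in range(pop - 4)]
    return min(xs, key=f)
```

The diagnostic closes the loop: once the refined parents have collapsed onto one basin, the measured diversity drops and the mutation strength is raised, pushing exploration back out of the basin.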

Research Implications:

  • For Deep Learning: More robust architectures and training procedures informed by evolutionary principles and physical insights about energy landscapes
  • For Evolutionary Computation: More efficient search strategies guided by gradient information and landscape analysis from statistical physics
  • For Statistical Physics: New computational tools and testbeds for studying complex systems and non-equilibrium dynamics
  • Cross-disciplinary: A unified framework for understanding optimization across artificial and natural systems, bridging AI with physics and biology

Matter, Life and Consciousness: The Hierarchical Containment

"Consciousness emerges from life, which emerges from matter. Understanding this hierarchy is key to understanding intelligence in both natural and artificial systems."

The Fundamental Containment: SP→EC→DL as Matter→Life→Consciousness

This optimization framework reflects a deeper ontological truth about the nature of reality. The progression Matter → Life → Consciousness establishes a strict containment relationship that directly corresponds to our mathematical framework of Statistical Physics (SP) → Evolutionary Computation (EC) → Deep Learning (DL).

Matter → Statistical Physics (SP)
Fundamental physical laws & constraints
↓ emerges
Life → Evolutionary Computation (EC)
Biological adaptation within physical constraints
↓ emerges
Consciousness → Deep Learning (DL)
Intelligent processing in living systems
SP ⊃ EC ⊃ DL
DL ⊂ EC ⊂ SP (mathematically and ontologically)

Statistical Physics (Matter)

All life and consciousness must obey physical laws. Statistical physics describes how order emerges from disorder through phase transitions, providing the fundamental constraints within which life can evolve.

Key Concepts: Thermodynamics, entropy, non-equilibrium systems, self-organization

Evolutionary Computation (Life)

Life represents matter organized in a way that allows adaptation and evolution. EC models how genetic information evolves within physical constraints, creating complex adaptive systems.

Key Concepts: Natural selection, genetic algorithms, population dynamics, adaptation

Deep Learning (Consciousness)

Consciousness emerges in living systems capable of sophisticated information processing. DL models how neural networks can learn and make decisions, representing the computational aspect of consciousness.

Key Concepts: Neural networks, learning algorithms, pattern recognition, decision-making

Mathematical Correspondence: Why This Hierarchy Matters

The containment relationship is mathematically precise and reflects fundamental constraints:

SP Search Space ⊃ EC Search Space ⊃ DL Search Space
(All physical states) ⊃ (All possible genotypes) ⊃ (All neural network parameters)

This means:

  • DL problems are special cases of EC problems
  • EC problems are special cases of SP problems
  • Higher-level optimization must occur within lower-level constraints
  • All artificial intelligence systems inherit constraints from both evolutionary principles and physical laws

Implications for Artificial Intelligence Research

For AI Development

  • Understand DL's physical limits (energy consumption, thermodynamics)
  • Incorporate evolutionary principles for more robust architectures
  • Use SP to analyze neural network phase transitions
  • Design AI systems that respect biological constraints

For Understanding Natural Intelligence

  • Consciousness emerges from evolved biological systems
  • Learning algorithms have evolutionary origins
  • All cognition is physically instantiated
  • Biological systems operate within energy and resource constraints

For Theoretical Foundations

  • Life as non-equilibrium statistical physics
  • Consciousness as information processing in complex systems
  • Unified theory of optimization across scales
  • Bridging physics, biology, and computer science

Core Proposition: The Physical Basis of All Intelligence

All forms of intelligence—biological, artificial, individual, or collective—must obey physical laws (SP), are shaped by evolutionary processes (EC), and manifest through learning mechanisms (DL). The SP→EC→DL hierarchy is not just a technical framework but reflects the fundamental layered structure through which intelligence emerges in nature.

The long-term vision arising from this framework is to establish a statistical mechanics of AI, develop an evolutionary theory of AI, and ultimately advance our understanding of the nature of consciousness in AI.

To truly understand and develop intelligent systems, we must consider all three layers simultaneously: their physical basis, evolutionary history, and computational mechanisms.

Publications and Preprints

K Ouyang, S Fu. Learn from Global Correlations: Enhancing Evolutionary Algorithm via Spectral GNN. 40th AAAI Conference on Artificial Intelligence (AAAI 2026) arXiv:2412.17629, 2024. First Author https://doi.org/10.48550/arXiv.2412.17629

K Ouyang, S Fu, Y Chen, H Chen. Dynamic Graph Neural Evolution: An Evolutionary Framework Integrating Graph Neural Networks with Adaptive Filtering. 2025 IEEE Congress on Evolutionary Computation (Oral). First Author https://ieeexplore.ieee.org/document/11042917

K Ouyang, S Fu, Y Chen, Q Cai, AA Heidari, H Chen. Escape: an optimization method based on crowd evacuation behaviors. Artificial Intelligence Review 58(1), 2024. First Author https://doi.org/10.1007/s10462-024-11008-6

K Ouyang, D Wei, X Sha, J Yu, Y Zhao, M Qiu, S Fu, AA Heidari, H Chen. Beaver Behavior Optimizer: A Novel Metaheuristic Algorithm for Solar PV Parameter Identification and Engineering Problems. Journal of Advanced Research. First Author https://doi.org/10.1016/j.jare.2025.09.001

K Ouyang, D Wei, S Fu, S Gu, X Sha, J Yu, J Yu, AA Heidari, Z Cai, H Chen. Multi-objective Red-billed Blue Magpie Optimizer: A Novel Algorithm for Multi-objective UAV Path Planning. Results in Engineering. First Author https://doi.org/10.1016/j.rineng.2025.106785

K Ouyang‡, S Zhang‡, S Liu‡, J Tian, Y Li, H Tong, H Bai, YC Hu, WH Wang. Graph Learning Metallic Glass Discovery from Wikipedia. AI for Science. First Author https://iopscience.iop.org/article/10.1088/3050-287X/ae1b20

W Xiao, JJ Lian, K Ouyang, S Gu, Z Ke, D Wei, X Sha, J Wang, S Fu, M Qiu, C Xu. Newton Downhill Optimizer for Global Optimization with Application to Breast Cancer Feature Selection. Biomedical Signal Processing and Control. Corresponding Author https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5312159

D Wei, Z Wang, M Qiu, J Yu, J Yu, Y Jin, X Sha, K Ouyang. Multiple Objectives Escaping Bird Search Optimization and Its Application in Stock Market Prediction Based on Transformer Model. Scientific Reports 15(1), 5730, 2025. Corresponding Author https://doi.org/10.1038/s41598-025-88883-8

K Ouyang, et al. A Comprehensive Analysis of Digital Inclusive Finance's Influence on High Quality Enterprise Development through Fixed Effects and Deep Learning Frameworks. Scientific Reports. Corresponding Author https://doi.org/10.1038/s41598-025-14610-y

S Qiu, Y Wang, Z Ke, Q Shen, Z Li, R Zhang, K Ouyang. A Generative Adversarial Network-Based Investor Sentiment Indicator. Mathematics 13(9), 1476, 2025. Corresponding Author https://doi.org/10.3390/math13091476

JB Lian, K Ouyang, R Zhong, Y Zhang, S Luo, L Ma, X Wu, H Chen. Trend-Aware Mechanism for metaheuristic algorithms. Applied Soft Computing, 2025. Second Author https://doi.org/10.1016/j.asoc.2025.113505

S Fu, M Yu, K Ouyang, Q Fan, H Huang. MLLMs-MR: Multi-modal Recognition based on Multi-modal Large Language Models. Knowledge-Based Systems, 2025. Second Author https://doi.org/10.1016/j.knosys.2025.114717

JJ Lian, H Chen, K Ouyang, Y Zhang, R Zhong, H Chen. Twisted Convolutional Networks (TCNs): Enhancing Feature Interactions for Non-Spatial Data Classification. Neural Networks, 2025. doi: 10.1016/j.neunet.2025.108451. Co-Author

YQ Wang, C Xu, ML Fang, TZ Li, LW Zhang, DS Wei, K Ouyang, et al. Study of nonequilibrium phase transitions mechanisms in exclusive network and node model of heterogeneous assignment based on real experimental data of KIF3AC and KIF3CC motors. The European Physical Journal Plus 137(10), 1-22, 2022. Co-Author https://doi.org/10.1140/epjp/s13360-022-03372-5

YQ Wang, DS Wei, LW Zhang, TY Zhang, TZ Li, ML Fang, KC Ouyang, et al. Physical mechanisms of exit dynamics in microchannels. International Journal of Modern Physics B 38(15), 2450193. Co-Author https://doi.org/10.1142/S0217979224501935

S Gu, ..., K Ouyang, et al. Wave Optics Optimizer: A novel meta-heuristic algorithm for engineering optimization. Communications in Nonlinear Science and Numerical Simulation. Co-Author https://doi.org/10.1016/j.cnsns.2025.109337

K Ouyang, T Hou, JJ Lian, S Fu, Z Ke, D Wei, M Qiu, J Ouyang. Stochastic Gradient-guided Adaptive Differential Evolution: Algorithm and Its Application in the Diagnosis of COVID-19, Influenza, and Bacterial Pneumonia. Artificial Intelligence in Medicine (Under Review). First Author

K Ouyang. Rethinking Over-Smoothing in Graph Neural Networks: A Perspective from Anderson Localization. arXiv preprint arXiv:2507.05263, 2025. Sole First Author https://doi.org/10.48550/arXiv.2507.05263

K Ouyang. Consciousness as a Jamming Phase. arXiv preprint arXiv:2507.08197, 2025. Sole First Author https://arxiv.org/abs/2507.08197

K Ouyang. Why Flow Matching is Particle Swarm Optimization? arXiv preprint arXiv:2507.20810, 2025. Sole First Author https://arxiv.org/abs/2507.20810

J Yu, J Yu, D Wei, X Sha, S Fu, M Qiu, Y Jin, K Ouyang. Multi-Objective Mobile Damped Wave Algorithm (MOMDWA): A Novel Approach For Quantum System Control. arXiv preprint arXiv:2502.05228, 2025. Corresponding Author https://arxiv.org/abs/2502.05228

D Wei, K Ouyang, Z Wang, X Sha, M Qiu, Z Yi, AA Heidari, H Chen. Multi-strategy boosted dung beetle algorithm and its application for bankruptcy prediction. Neural Networks (Under Review). Corresponding Author

JJ Lian, K Ouyang, et al. IKUN: A mean-field game theoretic KD-tree density guided mechanism for swarm optimization. Swarm and Evolutionary Computation (Under Review). Co-Author

Honors and Awards

2024

Second Prize (Honorable Mention), MCM/ICM

2023

First Prize (Meritorious), Huashu Cup International Mathematical Contest in Modeling

2023

First Prize, National College Students' Mathematics Competition

2022

International Second Prize, Asia-Pacific Mathematical Modeling Competition

2020-2021

Outstanding Student Gold Award, University of Science and Technology of China

Original Algorithms

Metaheuristic Algorithms

Escape Algorithm (ESC)

Code: Download ESC Code

DOI Link: https://doi.org/10.1007/s10462-024-11008-6

Beaver Behavior Optimizer (BBO)

Code: Download BBO Code

DOI Link: https://doi.org/10.1016/j.jare.2025.09.001

Wave Optics Optimizer (WOO)

Code: Download WOO Code

DOI Link: https://doi.org/10.1016/j.cnsns.2025.109337

Newton Downhill Optimizer (NDO)

Code: Download NDO Code

DOI Link: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5312159

Graph Neural Evolution (GNE)

Code: Download GNE Code

DOI Link: https://doi.org/10.48550/arXiv.2412.17629