Research
Research Vision & Mission
I aim to develop safe, interpretable, and adaptive AI systems for real-world cyber-physical environments that operate under uncertainty, constraints, and adversarial conditions. My research bridges the domains of machine learning, optimization, and control theory, with a strong emphasis on safety, robustness, and generalization.
My work centers on the following pillars:
- Safe & Trustworthy Reinforcement Learning: Designing agents that are robust to adversarial attacks, resilient to distributional shifts, and capable of safe exploration.
- Physics-informed Deep Reinforcement Learning (DRL): Embedding physical laws and constraints into learning frameworks for stability, interpretability, and faster convergence.
- Probabilistic & Bayesian Modeling: Capturing both epistemic and aleatoric uncertainties for reliable control in high-stakes, partially observable systems.
- Large Language Models (LLMs) for Autonomous Reasoning: Leveraging LLMs to enhance planning, explainability, and human-AI collaboration in control systems.
- Vision-based simulation environments: Using platforms like CARLA and CityLearn to train agents in multimodal, visually rich, and interactive worlds.
By tightly integrating domain knowledge into learning frameworks, I aim to enable resilient, generalizable, and safe AI for critical applications including smart grids, autonomous systems, and intelligent infrastructure.
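As a concrete illustration of the physics-informed pillar above, the sketch below folds a voltage-limit penalty into the reward of a gym-style Volt-VAR environment. It is a minimal sketch under stated assumptions: the gymnasium-style five-value step signature, the `bus_voltages` info key, and the per-unit limits are illustrative choices, not an implementation from my papers.

```python
import numpy as np


class VoltageConstraintWrapper:
    """Illustrative wrapper that adds a physics-based voltage-limit
    penalty to the reward of a gym-style Volt-VAR environment."""

    def __init__(self, env, v_min=0.95, v_max=1.05, penalty_weight=10.0):
        self.env = env
        # Per-unit voltage limits; the values here are illustrative.
        self.v_min, self.v_max = v_min, v_max
        self.penalty_weight = penalty_weight

    def reset(self, **kwargs):
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        # Hypothetical convention: the environment reports per-bus voltage
        # magnitudes (in p.u.) through info["bus_voltages"].
        voltages = np.asarray(info.get("bus_voltages", obs))
        violation = (np.maximum(voltages - self.v_max, 0.0)
                     + np.maximum(self.v_min - voltages, 0.0))
        shaped_reward = reward - self.penalty_weight * float(violation.sum())
        return obs, shaped_reward, terminated, truncated, info
```

Because the penalty grows with the size of the violation, the shaped reward gives the agent a graded signal that steers exploration back inside the feasible voltage band instead of only flagging hard failures.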
My Research Focus Areas
Application Domains
| Domain | Description |
|---|---|
| Smart Energy Systems | Volt-VAR control, DER coordination, and federated DRL for power grid stability |
| Autonomous Systems | Safe navigation, adaptive planning, and control in simulation and real-world environments |
| Secure AI for Infrastructure | Resilience against cyber-attacks and adversarial scenarios in safety-critical systems |
Publications
Arif Hussian, Kundan Kumar, Gelli Ravikumar
"Bayesian-Optimized Bidirectional Long Short-Term Memory Network for Wind Power Forecasting With Uncertainty Quantification," Electric Power Systems Research, 2026
Paper | Code | Poster

Kundan Kumar, Gelli Ravikumar
"Physics-based Deep Reinforcement Learning for Grid-Resilient Volt-VAR Control" (Under Review), IEEE Transactions on Smart Grid, 2025
Paper | Code | Poster

Kundan Kumar, Kumar Utkarsh, Jiyu Wang, Harsha Padullaparti
"Advanced Semi-Supervised Learning With Uncertainty Estimation for Phase Identification in Distribution Systems," in Proceedings of the IEEE PES General Meeting, 2025
Paper | Code | Poster

Kundan Kumar, Gelli Ravikumar
"Transfer Learning Enhanced Deep Reinforcement Learning for Volt-Var Control in Smart Grids," in Proceedings of the IEEE PES Grid Edge Technologies Conference & Exposition, 2025
Paper | Code | Poster

Kundan Kumar, Aditya Akilesh Mantha, Gelli Ravikumar
"Bayesian Optimization for Deep Reinforcement Learning in Robust Volt-Var Control," in Proceedings of the IEEE PES General Meeting, 2024
Paper | Code | Poster
Ongoing Projects
Federated DRL for Cyber-Resilient Volt-VAR Optimization
Decentralized, communication-efficient control using LSTM-enhanced PPO agents across distributed DERs (see the policy sketch below).

One-Shot Policy Transfer with Physics Priors
Training agents on small topologies and adapting them to the IEEE 123-bus and 8500-node networks within a few iterations.

LLM-Guided Autonomous Planning for Smart Buildings
Converting user prompts into interpretable control policies using LLMs (OpenAI, Claude) in CityLearn environments.
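The federated Volt-VAR project above builds on LSTM-enhanced PPO agents; the sketch below shows one way such a recurrent actor could look in PyTorch. The layer sizes, the Gaussian action head, and the assumption that each DER agent observes a short sequence of local measurements are illustrative choices, not the project's actual architecture.

```python
import torch
import torch.nn as nn


class RecurrentActor(nn.Module):
    """Minimal LSTM policy head of the kind an LSTM-enhanced PPO agent
    could use; dimensions and the Gaussian head are illustrative."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Linear(obs_dim, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, act_dim)        # mean of the action distribution
        self.log_std = nn.Parameter(torch.zeros(act_dim))

    def forward(self, obs_seq, hidden_state=None):
        # obs_seq: (batch, time, obs_dim) sequence of local DER measurements.
        x = torch.tanh(self.encoder(obs_seq))
        x, hidden_state = self.lstm(x, hidden_state)
        dist = torch.distributions.Normal(self.mu(x), self.log_std.exp())
        return dist, hidden_state
```

In a federated setup, only the weights of a network like this would be exchanged and aggregated across agents, keeping raw grid measurements local to each DER.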
- DRL for Volt-VAR
- Sim-to-Real Transfer
- Robust & Stable Learning
- Uncertainty-Aware Policies
- Domain Adaptation
- Meta-RL for Efficiency
- Perception-Control Fusion
- Multi-modal Representations
- LLM-Guided Control