Research
I develop safe, interpretable, and adaptive AI systems for real-world cyber-physical environments that must operate under uncertainty, strict physical constraints, and adversarial conditions. My work sits at the intersection of safe reinforcement learning, physics-informed AI, probabilistic modeling, and AI safety evaluation, with a core focus on building systems that are robust, verifiable, and deployable at scale in safety-critical domains.
Vision
My research centers on three core thrusts:
- Safe & Trustworthy Reinforcement Learning: Designing agents that remain reliable under sensor noise, hardware faults, non-stationarity, and adversarial perturbations, through constraint-aware learning, certified robustness, and safe exploration in safety-critical settings.
- Physics-Informed AI & Probabilistic Modeling: Embedding physical laws, invariants, and feasibility constraints directly into model architectures; quantifying epistemic and aleatoric uncertainty for risk-aware planning in partially observable environments.
- AI Safety, Alignment & LLM Evaluation: Designing behavioral and mechanistic evaluations to detect deceptive alignment, reward hacking, and eval-awareness in LLM agents; building scalable oversight frameworks for agentic systems in safety-critical deployments.
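The uncertainty-quantification thrust above can be illustrated with a minimal sketch. The snippet below is a hypothetical example (not taken from any of the papers listed here): it assumes a deep-ensemble setup in which each ensemble member predicts a mean and a variance for the same input, and decomposes the total predictive variance into epistemic and aleatoric parts via the law of total variance.

```python
import numpy as np

def decompose_uncertainty(means, variances):
    """Split total predictive variance into epistemic and aleatoric parts.

    means:     per-member predicted means, shape (n_members,)
    variances: per-member predicted variances, shape (n_members,)
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    epistemic = means.var()       # disagreement between ensemble members
    aleatoric = variances.mean()  # average intrinsic-noise estimate
    total = epistemic + aleatoric # law of total variance
    return epistemic, aleatoric, total

# Toy example: three ensemble members predicting the same quantity.
ep, al, tot = decompose_uncertainty([1.0, 1.2, 0.8], [0.05, 0.04, 0.06])
```

In risk-aware planning, the epistemic term is the part that shrinks with more data or better models, while the aleatoric term reflects irreducible noise; separating them tells a planner whether to gather information or to act conservatively.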
Mission
By integrating physics-guided structure, probabilistic reasoning, and safe reinforcement learning, my goal is to build the next generation of AI systems that are:
- Reliable: dependable under uncertainty, noise, and adversarial conditions
- Generalizable: across tasks, scales, and distribution shifts
- Interpretable: to operators, engineers, and decision-makers
- Deployable: in large-scale, real-world cyber-physical environments
Research Focus
These focus areas organize my ongoing and recent projects that bridge fundamental methods and deployable systems.
- Physics-informed deep RL, safe and uncertainty-aware control, LLM-guided planning, and sim-to-real transfer for smart grid and DER-integrated energy systems.
- Behavioral and mechanistic evaluations for deceptive alignment, adversarial robustness, reward hacking detection, and scalable oversight for agentic LLM systems.
- End-to-end perception-control pipelines, multimodal sensor fusion, and sim-to-real transfer using CARLA, AirSim, and OpenDSS for autonomous and cyber-physical systems.
Publications
Journal Papers (Total: 2)
- Arif Hussian, , Gelli Ravikumar. Bayesian-optimized bidirectional long short-term memory network for wind power forecasting with uncertainty quantification. Electric Power Systems Research, 2026. Paper · Code · Poster
- , Gelli Ravikumar. Physics-based Deep Reinforcement Learning for Grid-Resilient Volt-VAR Control. IEEE Transactions on Smart Grid, 2025 (under review). Paper · Code · Poster
Conference Papers (Total: 7)
- , Gelli Ravikumar. A Multi-Objective Optimization Framework for Carbon-Aware Smart Energy Management. IEEE North American Power Symposium (NAPS), 2025. Paper · Presentation
- , Kumar Utkarsh, Wang Jiyu, Padullaparti Harsha. Advanced Semi-Supervised Learning With Uncertainty Estimation for Phase Identification in Distribution Systems. IEEE PES General Meeting, 2025. Paper · Presentation · Poster
- , Gelli Ravikumar. Transfer Learning Enhanced Deep Reinforcement Learning for Volt-Var Control in Smart Grids. IEEE PES Grid Edge Technologies Conference & Exposition, 2025. Paper · Poster