Building Safer AI: Alignment and Robust Cyber-Physical Systems
Why the next generation of AI must be predictable, aligned, and physically grounded
AI Safety Research Fellow @ Algoverse | PhD Researcher | Safe Reinforcement Learning, Multi-Agent Systems & LLM Agents
Hi! I’m Kundan Kumar, a Ph.D. candidate in Computer Science with a minor in Statistics at Iowa State University, and currently an AI Safety Research Fellow at Algoverse. My research centers on building safe, reliable, and adaptable AI systems for next-generation cyber-physical infrastructure, including smart grids, autonomous systems, and multi-agent environments. I focus in particular on evaluations, adversarial robustness, and scalable oversight for agentic systems.
I design safety-critical deep reinforcement learning (DRL) systems that integrate domain knowledge, uncertainty, and constraints for robust decision-making under distribution shift and partial observability, with an emphasis on adversarial robustness, transfer learning, and reliability in high-stakes environments. Recently, I’ve developed LLM-integrated frameworks that connect perception, planning, and language reasoning, linking low-level control with interpretable decision-making. I am particularly interested in AI safety, alignment, and evaluation at the intersection of foundation models and physical systems.
Beyond research, I enjoy sharing my insights through educational content on Substack and YouTube. Outside of work, I love cooking and ice skating ⛸️.