PROBABILISTIC SAFE RL: MODELLING AND MANAGING RISK IN DYNAMIC ENVIRONMENTS


Dr. N Subbukrishna Sastry, Mr. Arunachalam T
1. Professor, School of Management, CMR University, Bangalore, Karnataka, India; 2. Computer Science & Engineering Student, Presidency University, Bangalore, Karnataka, India
Abstract
Reinforcement learning (RL) systems are increasingly deployed in high-stakes and safety-critical real-world applications, such as autonomous driving, robotics, healthcare, and finance. However, standard RL methods often do not account for risk and uncertainty, which can lead to unsafe or suboptimal decisions in dynamic and uncertain conditions. This research presents a new framework, Probabilistic Safe Reinforcement Learning (Safe RL), designed to model, measure, and manage risk in complex and non-stationary environments. The study's objective is to enable learning agents to achieve high performance while minimizing the likelihood of unsafe actions or catastrophic failures during exploration and deployment. The researchers aim to bridge the gap between theoretical safety guarantees and practical real-world implementation by evaluating the framework on benchmark environments and real-time simulation tasks. This research contributes to the advancement of trustworthy and deployable AI systems capable of robust decision-making under uncertainty.
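The abstract does not detail the framework's algorithm, but a common pattern in probabilistic Safe RL is chance-constrained action selection: the agent estimates each action's probability of leading to an unsafe outcome and only maximizes value over actions whose estimated risk stays below a threshold. The sketch below is purely illustrative and is not taken from the paper; the function name, the `delta` threshold, and the fallback rule are assumptions.

```python
def select_safe_action(q_values, failure_probs, delta=0.1):
    """Illustrative chance-constrained action selection (not the paper's method).

    q_values: estimated return for each action.
    failure_probs: estimated probability that each action leads to an
        unsafe outcome (e.g. from sampled model rollouts).
    delta: maximum tolerated failure probability per step.
    """
    # Keep only actions that satisfy the chance constraint P(unsafe) < delta.
    safe = [a for a, p in enumerate(failure_probs) if p < delta]
    if not safe:
        # No action meets the constraint: fall back to the least risky one.
        return min(range(len(failure_probs)), key=lambda a: failure_probs[a])
    # Among constraint-satisfying actions, act greedily on value.
    return max(safe, key=lambda a: q_values[a])
```

For example, with `q_values = [1.0, 2.0, 0.5]` and `failure_probs = [0.05, 0.3, 0.01]`, action 1 has the highest value but violates the 10% risk threshold, so the agent picks action 0 instead, trading some expected return for safety.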
Keywords:
Journal Name :
EPRA International Journal of Research & Development (IJRD)

Published on : 2025-09-12

Vol : 10
Issue : 9
Month : September
Year : 2025