Deep RL Racecar
Navigating the Future: My Autonomous Racecar Project 🏁
Welcome to my exciting journey into the world of autonomous racecar navigation! I'm on a mission to train a racecar to conquer complex racetracks autonomously, using cutting-edge techniques in deep reinforcement learning. Join me as I guide this machine to stay on the midline while respecting track boundaries, all while maximizing rewards.
Unleashing the Power of Reinforcement Learning
I've taken the helm in this project, employing deep reinforcement learning to turn this ambitious goal into reality. My weapon of choice? Proximal Policy Optimization (PPO), an on-policy policy-gradient algorithm whose clipped surrogate objective keeps policy updates stable — a reliable choice for continuous-control tasks like steering a racecar.
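The full training loop isn't shown here, but the core idea behind PPO — the clipped surrogate objective — fits in a few lines. This is an illustrative sketch (function name and the 0.2 clipping default are my own choices, not code from this project):

```python
def ppo_clipped_objective(ratio, advantage, clip_eps=0.2):
    """Clipped surrogate objective for a single sample.

    ratio: pi_new(a|s) / pi_old(a|s), the probability ratio between
           the updated policy and the policy that collected the data.
    advantage: estimated advantage A(s, a) for that action.
    clip_eps: PPO clipping parameter (0.2 is a common default).
    """
    # Clamp the ratio into [1 - eps, 1 + eps].
    clipped = max(min(ratio, 1.0 + clip_eps), 1.0 - clip_eps)
    # PPO maximizes the minimum of the clipped and unclipped terms,
    # which removes the incentive for overly large policy updates.
    return min(ratio * advantage, clipped * advantage)

# With a positive advantage, the benefit of pushing the ratio past
# 1 + clip_eps is capped:
print(ppo_clipped_objective(1.5, 1.0))  # 1.2, not 1.5
```

In practice a library such as Stable Baselines3 handles this objective (plus value-function and entropy terms) internally; the sketch just shows why PPO updates stay conservative.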
Rewriting the Rules with Reward Maximization
Training an autonomous racecar isn't just about steering and accelerating; it's about designing incentives that make the desired behavior worth striving for. That's why I've meticulously crafted reward functions that motivate the racecar to drive the way we want:
Proximity-Based Reward: I reward the racecar for hugging the track's center, promoting precision driving.
Progress-Based Reward: The racecar earns its keep by advancing along the racetrack, accumulating rewards for each step forward.
Combined Reward: Why choose between precision and progress? I've combined both rewards to strike the perfect balance, ensuring the racecar learns to navigate with finesse while making steady progress.
Code Variations for Exploration
In my quest for the ultimate racing agent, I've explored multiple variations of the reward functions. Check out these code variants:
Proximity-Based Reward: Get up close and personal with the track center.
Progress-Based Reward: Race against the clock and claim rewards for each step of forward progress along the track.
Combined Reward: The best of both worlds—precision and progress harmoniously blended.
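The three variants above can be sketched as simple reward functions. These are illustrative implementations under my own assumptions (linear falloff from the midline, a 50/50 weighting for the combined variant, and the parameter names), not the project's actual code:

```python
def proximity_reward(dist_to_center, half_width):
    """Proximity-based: maximal on the midline, falling off
    linearly to zero at the track boundary."""
    return max(0.0, 1.0 - dist_to_center / half_width)

def progress_reward(prev_progress, curr_progress):
    """Progress-based: proportional to the distance advanced
    along the track since the previous step."""
    return curr_progress - prev_progress

def combined_reward(dist_to_center, half_width,
                    prev_progress, curr_progress, w=0.5):
    """Combined: a weighted blend of precision and progress,
    with w trading off between the two terms."""
    return (w * proximity_reward(dist_to_center, half_width)
            + (1.0 - w) * progress_reward(prev_progress, curr_progress))

# On the midline and moving forward, both terms contribute:
print(combined_reward(0.0, 2.0, 10.0, 10.4))  # ~0.7
```

Tuning the weight `w` is where the "perfect balance" gets decided: a high `w` produces cautious, centered driving, while a low `w` rewards raw forward progress even at the cost of precision.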