Reinforcement Learning · Unity · ML-Agents · 2022

LeMAC Project

An exploration of Reinforcement Learning in Unity using ML-Agents — training a camel character (Lem) to navigate environments using both Ray Perception (LiDAR-like sensing) and Vision (camera-based input).

Year: 2022
Category: Reinforcement Learning
Engine: Unity
Framework: ML-Agents

Overview

This project explored Reinforcement Learning in a 3D game environment using Unity's ML-Agents package. The agent — a camel character named Lem — was trained to navigate and interact with its environment using two distinct perception methods.

The primary goal was to gain hands-on experience with the Unity game engine while developing an engaging, visual application of RL concepts. Comparing Ray Perception (similar to LiDAR) with Vision (camera input) provided insight into how different observation spaces affect agent learning.
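The gap between the two observation spaces can be made concrete with a quick size comparison. The numbers below are illustrative assumptions (a typical ML-Agents ray sensor setup and an 84×84 RGB camera), not the project's actual settings:

```python
# Ray Perception: with rays_per_direction = 5, ML-Agents casts
# 2 * 5 + 1 = 11 rays. Each ray reports a one-hot over the detectable
# tags plus a "no hit" flag and a normalized distance.
rays = 2 * 5 + 1                              # 11 rays (assumed setting)
detectable_tags = 2                           # e.g. "obstacle", "reward" (assumed)
ray_obs_size = rays * (detectable_tags + 2)   # per-ray: tags + flag + distance

# Vision: an 84x84 RGB camera, flattened to a raw pixel count.
cam_obs_size = 84 * 84 * 3

print(ray_obs_size)                  # 44
print(cam_obs_size)                  # 21168
print(cam_obs_size // ray_obs_size)  # camera feeds ~481x more inputs
```

The two orders of magnitude between the input sizes is a large part of why the ray-based agent trains faster: the policy network sees a small, already-structured vector instead of having to learn features from raw pixels.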

Perception methods

Ray Perception: LiDAR-like sensing using raycasts to detect nearby objects — fast training, reliable obstacle avoidance.
Vision (camera input): Camera-based observations feeding pixel data to the agent — more complex but closer to real-world perception.
3D environment: Custom Unity scene with obstacles, rewards, and navigation challenges designed for agent training.
Training visualisation: Real-time visualisation of the agent learning, showing training progress and behaviour evolution.
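The ray method above can be sketched as a toy 2D version: march each ray forward in small steps and report the normalized distance at which it first hits an obstacle, or 1.0 for a clear ray. This is a hypothetical stand-in for what Unity's ray sensor reports, not the project's code; `ray_distances` and the circle-obstacle representation are assumptions for illustration.

```python
import math

def ray_distances(origin, angles, obstacles, max_dist=10.0, step=0.05):
    """Toy LiDAR: for each ray angle, return the normalized distance
    (0..1) at which the ray first enters an obstacle, or 1.0 if nothing
    is hit within max_dist. obstacles is a list of (cx, cy, radius)
    circles. Sketch only; Unity resolves this with physics raycasts."""
    ox, oy = origin
    readings = []
    for angle in angles:
        dx, dy = math.cos(angle), math.sin(angle)
        reading = 1.0  # default: clear ray
        for i in range(int(max_dist / step) + 1):
            d = i * step
            x, y = ox + dx * d, oy + dy * d
            if any((x - cx) ** 2 + (y - cy) ** 2 <= r * r
                   for cx, cy, r in obstacles):
                reading = d / max_dist  # normalized hit distance
                break
        readings.append(reading)
    return readings

# Agent at the origin, one obstacle of radius 1 centred 5 units along +x.
# The forward ray hits at distance ~4 (reading ~0.4); the +y ray is clear.
obs = ray_distances((0.0, 0.0), [0.0, math.pi / 2], [(5.0, 0.0, 1.0)])
print(obs)
```

Feeding a vector like `obs` to the policy gives the agent a compact summary of nearby geometry, which is what makes obstacle avoidance comparatively easy to learn with this sensor.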

Built with

Unity · ML-Agents · Reinforcement Learning · C# · Python