
Team RiDE
Subteams
Computer Vision
The Computer Vision sub-team uses data from the robot's camera and lidar sensors to perform object detection and tracking. The processed data is sent to the Navigation sub-team's ROS nodes to update the robot's navigation path.
Tasks:
- Creation of a YOLOv8 Node for basic object detection
- Creation of a DeepSORT Node for tracking objects across image frames
- Creation of Nodes for preprocessing data coming from stereo vision and lidar
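One common lidar preprocessing step is voxel-grid downsampling, which thins a dense point cloud before it is passed to detection or mapping nodes. A minimal sketch of the idea in plain Python (the function name and parameters are illustrative, not the team's actual node code):

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size=0.1):
    """Reduce a lidar point cloud by averaging points within each voxel.

    points: iterable of (x, y, z) tuples in meters.
    Returns one averaged point per occupied voxel.
    """
    buckets = defaultdict(list)
    for x, y, z in points:
        # Bucket each point by the voxel it falls into.
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        buckets[key].append((x, y, z))
    downsampled = []
    for pts in buckets.values():
        n = len(pts)
        # Replace all points in a voxel with their centroid.
        downsampled.append((sum(p[0] for p in pts) / n,
                            sum(p[1] for p in pts) / n,
                            sum(p[2] for p in pts) / n))
    return downsampled
```

In a real node this logic would typically be handled by a library such as PCL or Open3D; the sketch only shows the operation itself.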
The CV team is currently testing its completed nodes on a Uni-Robot System, as most of its capabilities will not be affected by the project's shift to Multi-Robot Systems.
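At the core of trackers like DeepSORT is associating each new frame's detections with existing tracks. A simplified sketch of that association using bounding-box IoU only (real DeepSORT also uses appearance features and Kalman-filtered motion; all names here are illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match existing track boxes to new detections by IoU.

    tracks: dict of track_id -> last known box.
    detections: list of boxes from the current frame.
    Returns dict of track_id -> detection index.
    """
    matches = {}
    used = set()
    for tid, tbox in tracks.items():
        best, best_iou = None, iou_threshold
        for di, dbox in enumerate(detections):
            if di in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = di, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches
```

Unmatched detections would spawn new tracks, and tracks unmatched for several frames would be dropped.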
Navigation
The Navigation sub-team is responsible for developing path planning and decision-making capabilities using Multi-Agent Reinforcement Learning (MARL). It receives processed data from the computer vision and localization sub-teams to determine optimal robot movement in dynamic environments. The team's focus is on decentralized navigation strategies that allow each robot to act autonomously while coordinating with others.
Tasks:
- Creation of a MARL-based navigation node for decentralized decision-making
- Implementation of reward functions for adaptive path optimization in SAR scenarios
- Integration with ROS to receive input from CV and localization nodes and issue movement commands
The Navigation team is currently building and testing MARL algorithms in simulation environments. These will be ported to a Uni-Robot System before scaling up to Multi-Robot Systems later in the project timeline.
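A shaped reward for SAR navigation typically combines progress toward a target with penalties for collisions and wasted steps. A minimal sketch of one such function (the weights, terms, and names are illustrative assumptions, not the team's actual reward design):

```python
import math

def sar_reward(pos, goal, prev_pos, collided, newly_explored_cells,
               w_progress=1.0, w_explore=0.1,
               collision_penalty=-10.0, step_cost=-0.01):
    """Per-step reward for a SAR navigation agent.

    Combines progress toward the goal, a bonus for newly explored
    map cells, a small per-step cost, and a large collision penalty.
    """
    if collided:
        return collision_penalty
    # Positive when the agent moved closer to the goal this step.
    progress = math.dist(prev_pos, goal) - math.dist(pos, goal)
    return w_progress * progress + w_explore * newly_explored_cells + step_cost
```

The exploration bonus encourages coverage of unvisited areas even when no victim location is known yet, while the step cost discourages dithering.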
Mechanical
The Mechanical sub-team is responsible for designing, assembling, modifying, and maintaining the physical infrastructure of the robots. Their work ensures the platform is robust, adaptable to search-and-rescue environments, and optimized for sensor integration, mobility, and durability in rough or collapsed terrain.
Tasks:
- Get the two MIT Racecar robots running and fully functional
- Modify the robots to handle harsh terrain and rugged environments
- Help the cognitive sub-teams integrate features onto the robots
The Mechanical sub-team is currently working on getting the MIT Racecars functional while collaborating with the cognitive sub-teams to integrate features into the robots.
Localization and Mapping
The Localization and Mapping sub-team is responsible for enabling each robot to perceive its position in space, build maps of the environment, and contribute to a shared situational awareness across the team. Their work ensures accurate navigation, obstacle avoidance, and coordinated search coverage.
Tasks:
- Integrate sensor data (e.g., LiDAR, IMU, and camera) to improve localization accuracy and environmental mapping
- Develop and optimize Simultaneous Localization and Mapping (SLAM) algorithms
- Maintain a consistent global map by fusing data from multiple robots during exploration
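Fusing occupancy maps from multiple robots is commonly done by summing per-cell log-odds, which treats each robot's estimate as independent evidence. A minimal sketch under that assumption (the grid representation and function names are illustrative):

```python
import math

def prob_to_log_odds(p):
    """Convert occupancy probability to log-odds."""
    return math.log(p / (1.0 - p))

def log_odds_to_prob(l):
    """Convert log-odds back to occupancy probability."""
    return 1.0 / (1.0 + math.exp(-l))

def fuse_grids(grids):
    """Fuse aligned occupancy grids (each a dict of cell -> P(occupied)).

    Cells reported by several robots have their log-odds summed, so
    agreeing observations sharpen the estimate toward 0 or 1.
    """
    fused = {}
    for grid in grids:
        for cell, p in grid.items():
            fused[cell] = fused.get(cell, 0.0) + prob_to_log_odds(p)
    return {cell: log_odds_to_prob(l) for cell, l in fused.items()}
```

For example, two robots each reporting a cell as 80% occupied yields a fused estimate of about 94%, which is why the method is popular for cooperative exploration. The sketch assumes the grids are already aligned in a common frame; in practice that alignment is itself a SLAM output.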
The Localization and Mapping sub-team is currently working on developing a sensor fusion pipeline to integrate data from sources such as LiDAR, IMU, and wheel encoders, forming the foundation for accurate localization and mapping.
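The core of such a fusion pipeline can be illustrated with a one-dimensional Kalman filter that predicts from wheel-encoder odometry and corrects with an absolute position measurement (e.g., from lidar scan matching). A minimal sketch, with noise values chosen only for illustration:

```python
def kalman_1d(x, p, u, z, q=0.05, r=0.2):
    """One predict/update cycle of a 1-D Kalman filter.

    x, p : prior position estimate and its variance
    u    : motion reported by wheel-encoder odometry (predict step)
    z    : position measurement, e.g. from lidar scan matching (update step)
    q, r : process and measurement noise variances (illustrative values)
    """
    # Predict: apply the odometry motion; uncertainty grows by q.
    x_pred = x + u
    p_pred = p + q
    # Update: blend prediction and measurement by the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

The same predict/correct structure generalizes to the multi-sensor, multi-dimensional case (e.g., an EKF over pose), which is what production stacks like robot_localization implement.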