The left shows our Python simulations of reinforcement learning (top) and supervised learning (bottom) for multi-agent control. The middle shows a multi-agent communication graph in ROS. The right shows mid-fidelity simulations in Gazebo/RViz.
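For context, the communication graph amounts to each agent running as a ROS node that publishes its own state and subscribes to its peers. The sketch below is a minimal, hypothetical version of that pattern; the node name, topic names, message type, and loop rate are assumptions, not our exact code.

```python
# Minimal sketch of one agent's node in a multi-agent ROS communication graph.
# Node/topic names, message types, and the loop rate are illustrative assumptions.
import rospy
from geometry_msgs.msg import PoseStamped

def make_peer_callback(peer_id, peer_poses):
    def callback(msg):
        peer_poses[peer_id] = msg  # cache the latest pose received from this peer
    return callback

def main():
    rospy.init_node("agent_1")
    peer_poses = {}

    # Publish this agent's pose; subscribe to every peer's pose topic.
    pose_pub = rospy.Publisher("/agent_1/pose", PoseStamped, queue_size=1)
    for peer_id in ["agent_2", "agent_3"]:
        rospy.Subscriber("/%s/pose" % peer_id, PoseStamped,
                         make_peer_callback(peer_id, peer_poses))

    rate = rospy.Rate(50)  # 50 Hz communication/control loop
    while not rospy.is_shutdown():
        pose_pub.publish(PoseStamped())  # placeholder: fill with the agent's real state
        rate.sleep()

if __name__ == "__main__":
    main()
```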
Here we see five drones flying in coordination using our multi-agent control system. The same system was also used for distributed-vision research with MNIST digits placed on the floor.
This was a multi-agent targeting project. This slide shows initial work in Gazebo simulation. Agent 1 goes down mid-flight, so Agents 2 and 3 reassign to the highest-priority targets.
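The reassignment step can be thought of roughly as a greedy priority match over the surviving agents. The sketch below illustrates that idea only; the data structures, the distance tie-break, and all names are hypothetical, not the project's actual implementation.

```python
# Hedged sketch of priority-based target reassignment after an agent drops out.
# Priorities, the distance tie-break, and every name here are illustrative.
import math

def reassign_targets(active_agents, targets):
    """Greedily give the highest-priority unclaimed targets to the active agents.

    active_agents: {agent_id: (x, y)} positions of agents still flying
    targets:       {target_id: {"pos": (x, y), "priority": int}}
    Returns {agent_id: target_id}.
    """
    assignments = {}
    unclaimed = set(targets)
    for agent_id, agent_pos in active_agents.items():
        if not unclaimed:
            break
        # Highest priority first; break ties by distance to the agent.
        best = min(
            unclaimed,
            key=lambda t: (-targets[t]["priority"],
                           math.dist(agent_pos, targets[t]["pos"])),
        )
        assignments[agent_id] = best
        unclaimed.remove(best)
    return assignments

# Example: Agent 1 has failed, so only Agents 2 and 3 are reassigned.
agents = {"agent_2": (0.0, 1.0), "agent_3": (2.0, 0.5)}
targets = {"t1": {"pos": (1.0, 1.0), "priority": 3},
           "t2": {"pos": (3.0, 0.0), "priority": 2},
           "t3": {"pos": (0.0, 3.0), "priority": 1}}
print(reassign_targets(agents, targets))
```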
This video shows the results on hardware (sped up 16x).
This slide shows the motion capture view of the same experiment.
This is a custom drone we built to study smaller vehicles for use in multi-agent settings.
We eventually moved to COTS multi-agent drones, the Crazyflies. The left shows our simulation environment. The right shows five drones flying to their targets, which are sensed in real time.
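For a sense of how a Crazyflie is commanded toward a sensed target, here is a minimal sketch using the standard cflib Python library: stream position setpoints toward whatever position the sensing pipeline reports. The radio URI, the `get_target_position()` stub, and the timing are assumptions, not our actual pipeline.

```python
# Minimal sketch: stream position setpoints to one Crazyflie with cflib.
# The URI, the get_target_position() stub, and the timing are assumptions.
import time
import cflib.crtp
from cflib.crazyflie import Crazyflie
from cflib.crazyflie.syncCrazyflie import SyncCrazyflie

URI = "radio://0/80/2M/E7E7E7E7E7"  # example radio address

def get_target_position():
    # Placeholder for real-time target sensing (e.g., camera or motion capture).
    return 1.0, 0.5, 0.8  # x, y, z in meters

def main():
    cflib.crtp.init_drivers()
    with SyncCrazyflie(URI, cf=Crazyflie(rw_cache="./cache")) as scf:
        cf = scf.cf
        for _ in range(200):  # roughly 10 s at 20 Hz
            x, y, z = get_target_position()
            cf.commander.send_position_setpoint(x, y, z, 0.0)
            time.sleep(0.05)
        cf.commander.send_stop_setpoint()

if __name__ == "__main__":
    main()
```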
We also implemented Model Predictive Control (MPC) for 16 Crazyflies in simulation (left), though we only tested it on a single vehicle in hardware (right).
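To illustrate the kind of per-vehicle MPC involved, below is a rough receding-horizon sketch for a 2D double-integrator point-mass model, solved with cvxpy. The model, horizon, weights, and input limits are illustrative assumptions, not the formulation we actually flew.

```python
# Hedged sketch: receding-horizon linear MPC for one vehicle modeled as a
# 2D double integrator. Dynamics, horizon, and weights are illustrative.
import numpy as np
import cvxpy as cp

dt, N = 0.1, 20                      # timestep and horizon length
A = np.block([[np.eye(2), dt * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])            # state: [px, py, vx, vy]
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)]) # input: acceleration
Q = np.diag([10, 10, 1, 1])          # state tracking cost
R = 0.1 * np.eye(2)                  # input effort cost
u_max = 2.0                          # acceleration limit (m/s^2)

def mpc_step(x0, x_ref):
    """Solve one horizon and return the first control input."""
    x = cp.Variable((4, N + 1))
    u = cp.Variable((2, N))
    cost, constraints = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k] - x_ref, Q) + cp.quad_form(u[:, k], R)
        constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                        cp.norm(u[:, k], "inf") <= u_max]
    cp.Problem(cp.Minimize(cost), constraints).solve()
    return u[:, 0].value

# Example: drive from the origin toward a target 1 m away in x.
x = np.zeros(4)
target = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(50):
    x = A @ x + B @ mpc_step(x, target)   # apply the first input, then re-solve
print(np.round(x[:2], 2))
```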