Our work extends research currently being done in the Robotic Embedded Systems Laboratory (RESL) at the University of Southern California. We leverage RESL's quadrotor simulation environment, which is compatible with OpenAI Gym. The environment simulates a quadrotor in an X configuration, as shown in Figure 1, and thoroughly models its dynamics.
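Because the environment follows the standard Gym interface, an agent interacts with it through the usual reset and step calls. The sketch below illustrates that loop with random actions; the environment ID "QuadrotorEnv-v0" is a placeholder for illustration, not necessarily the name registered in RESL's code.

```python
import gym

# Placeholder environment ID; the actual registration name in RESL's
# codebase may differ.
env = gym.make("QuadrotorEnv-v0")

obs = env.reset()
done = False
while not done:
    # Random motor commands stand in for a learned policy here.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
env.close()
```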
Figure 1: X configuration of quadrotor.
Figure 2: Quadrotor visualization tool. The quadrotor appears in the bottom left along with 2 goal points.
We have extended the visualization tool created by RESL, which shows the quadrotor's position in its environment and the goal points it is attempting to reach. Together with TensorBoard, this visualization software allows us to analyze and better understand the quadrotor's performance. Figure 2 shows the visualization environment.
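As an illustration of how TensorBoard fits into this workflow, the sketch below logs per-epoch training scalars with torch.utils.tensorboard's SummaryWriter. The metric names, logging backend, and numeric values are illustrative assumptions, not results or code from our experiments.

```python
from torch.utils.tensorboard import SummaryWriter

# Dummy per-epoch statistics (mean reward, mean distance to goal),
# for illustration only.
training_stats = [(12.5, 3.2), (18.1, 2.4), (25.7, 1.1)]

writer = SummaryWriter(log_dir="runs/quadrotor")
for epoch, (mean_reward, goal_distance) in enumerate(training_stats):
    # Each scalar tag appears as a separate curve in TensorBoard.
    writer.add_scalar("reward/mean_episode_reward", mean_reward, epoch)
    writer.add_scalar("tracking/mean_distance_to_goal", goal_distance, epoch)
writer.close()
```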
Molchanov et al. from RESL learned a unified quadrotor control network that stabilizes a quadrotor hovering at a specified point. Our project extends this work by focusing on path following: the quadrotor learns to navigate a given trajectory autonomously.
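One way to frame path following as a reinforcement learning problem is to reward the quadrotor for staying close to the current waypoint on the trajectory. The sketch below is a minimal example of such a reward; the function, its weights, and the velocity penalty are assumptions for illustration, not the reward used by Molchanov et al. or in our experiments.

```python
import numpy as np

def tracking_reward(position, waypoint, velocity,
                    pos_weight=1.0, vel_weight=0.05):
    """Illustrative trajectory-tracking reward: penalize distance to the
    current waypoint and excessive velocity. The weights are placeholders."""
    pos_error = np.linalg.norm(position - waypoint)
    vel_penalty = np.linalg.norm(velocity)
    return -(pos_weight * pos_error + vel_weight * vel_penalty)
```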
The final control policy output by our software is a neural network that both stabilizes the quadrotor and guides it efficiently along specified trajectories. We have not yet attempted to run the control policy on a real quadrotor, but it performs well in simulation.
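For concreteness, the sketch below shows the general shape of such a policy network: a small multilayer perceptron, written here in PyTorch, that maps a state observation to four normalized motor commands. The observation dimension, layer sizes, and output squashing are assumptions for illustration, not the exact architecture our software produces.

```python
import torch.nn as nn

class QuadrotorPolicy(nn.Module):
    """Sketch of a quadrotor control policy: an MLP mapping the state
    observation to four motor commands. Sizes are assumed, not taken
    from our trained networks."""

    def __init__(self, obs_dim=18, hidden_dim=64, num_motors=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, num_motors),
            nn.Sigmoid(),  # motor commands normalized to [0, 1]
        )

    def forward(self, obs):
        return self.net(obs)
```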