Many (mobile) robotic systems greatly benefit from computer vision. However, most vision systems require significant computing resources and introduce system latency due to their computing time. Furthermore, on fast-moving or jittery robots, regular video cameras suffer from motion blur.
In this project, we have equipped a legged robot (Unitree A1, ref 1) with event cameras (ref 2) in its robotic head. During normal walking gait, the stereo event cameras provide visual information about the robot's environment. We are interested in implementing vision processing algorithms to detect the distance to objects and possibly geometrical properties of objects, such as the step size of stairs. The robot shall then adjust its walking gait when approaching stairs.
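As a first illustrative sketch (not the project's chosen algorithm), one common event-camera representation is a decaying "time surface" per camera, on which classical stereo block matching can recover disparity and hence depth. All function names and parameter values below (focal length, baseline, decay constant) are assumptions for illustration only:

```python
import numpy as np

def events_to_time_surface(events, shape, tau=0.05):
    """Accumulate (x, y, t, polarity) events into an exponentially
    decaying time surface. `events` is an (N, 4) array of per-pixel
    events; `shape` is (height, width). Recent events map to values
    near 1, old events decay toward 0."""
    surface = np.zeros(shape)
    t_ref = events[:, 2].max()  # decay relative to the newest event
    for x, y, t, _ in events:
        surface[int(y), int(x)] = np.exp(-(t_ref - t) / tau)
    return surface

def disparity_block_match(left, right, y, x, block=5, max_disp=20):
    """Naive stereo block matching along one epipolar line: slide a
    window over the right surface and return the disparity that
    minimizes the sum of absolute differences (SAD)."""
    h = block // 2
    patch = left[y - h:y + h + 1, x - h:x + h + 1]
    best_d, best_cost = 0, np.inf
    for d in range(0, min(max_disp, x - h) + 1):
        cand = right[y - h:y + h + 1, x - d - h:x - d + h + 1]
        cost = np.abs(patch - cand).sum()
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d

def depth_from_disparity(d, f_px=200.0, baseline_m=0.06):
    """Standard pinhole stereo relation: Z = f * B / d, with focal
    length f in pixels, baseline B in meters, disparity d in pixels."""
    return np.inf if d == 0 else f_px * baseline_m / d
```

On the real robot, the literature surveyed in ref 2 covers much more efficient event-native stereo methods; this dense-frame sketch only illustrates the geometry involved.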
Tasks
- Understand the (processing) differences between video and event cameras in a walking robot task; select and implement a useful vision processing algorithm for legged robot motion.
- Evaluate the performance of the vision task with respect to robot requirements (latency vs. precision).
Process
- Familiarize yourself with the existing robotic setup (Unitree A1 with an added on-board i7/GPU computer).
- Read existing literature on event cameras and processing algorithms; identify a useful stereo vision task and an existing algorithm.
- Implement the computer vision algorithm and evaluate its performance in a closed-loop setting.
- Write report, prepare live demonstration / video.
Expected Outcomes
The Unitree A1 legged robot is set up and waiting for caregivers in the lab at KTH. We will first perform data recordings while operating the robot. Students then select event-based vision processing algorithms for implementation and evaluate them on the collected data. When successful, students can run the algorithms on board the robot in a closed-loop system and show / record live performance. An evaluation of the latency and precision of the vision algorithms will conclude the project.
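The concluding latency-vs-precision evaluation could be structured as in the following sketch. The harness and metric names (`profile_pipeline`, `depth_error`) and the idea of laser-rangefinder ground truth are assumptions, not part of the project specification:

```python
import time
import numpy as np

def profile_pipeline(process, frames):
    """Run `process` on each recorded frame/event slice, recording
    per-frame wall-clock latency, and return the outputs together
    with summary statistics relevant for gait planning."""
    latencies, outputs = [], []
    for frame in frames:
        t0 = time.perf_counter()
        outputs.append(process(frame))
        latencies.append(time.perf_counter() - t0)
    lat_ms = np.array(latencies) * 1e3
    return outputs, {
        "mean_ms": float(lat_ms.mean()),
        "p95_ms": float(np.percentile(lat_ms, 95)),
        "max_ms": float(lat_ms.max()),
    }

def depth_error(estimates, ground_truth):
    """Precision metric: mean absolute depth error in meters against
    ground-truth distances (e.g. from an external rangefinder
    measured during the data recordings)."""
    est, gt = np.asarray(estimates), np.asarray(ground_truth)
    return float(np.abs(est - gt).mean())
```

Reporting tail latency (p95/max) in addition to the mean matters here because a single slow vision update during stair approach can destabilize the gait, even if the average latency is acceptable.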
Prerequisites
ML course, programming in C (for high-speed code) and Python, some experience with (mobile) robots / engineering.
Experience in computer vision is recommended, but not required.
Supervisor
Jörg Conradt, conr@kth.se
Relevant reading
- https://www.unitree.com/a1/
- G. Gallego et al., "Event-Based Vision: A Survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 1, pp. 154-180, 2022. doi: 10.1109/TPAMI.2020.3008413. https://www.computer.org/csdl/journal/tp/2022/01/09138762/1llK3L5znva