How a roboticist turned a global pandemic into a chance to learn ROS
For today's spotlight, we invited Aditya Kamath, a robotics systems engineer, to talk to us about his latest project – a robot platform to help the next generation of roboticists learn ROS. As someone who's spent years tinkering with prototyping tools, computers, and electronics, Aditya had an obvious solution to counter his lockdown boredom – building (multiple!) robots from scratch. We discuss the exciting milestones he's reached, and his future plans to help aspiring roboticists follow in his footsteps.
I am Aditya Kamath, an Embedded / Robotics Systems Engineer based in Eindhoven, the Netherlands. I currently work as an Embedded Systems Consultant, designing embedded software for complex machines and mechatronic components. I keep my passion for robotics alive by tinkering around with robotic builds, microcontrollers / single-board computers, and rapid prototyping tools like 3D printing.
I earned my Bachelor’s in Electronics and Communication Engineering from MIT Manipal in India (2014). Then, I received a Master of Science (MSc) in Systems and Control and a Professional Doctorate in Engineering (PDEng) in Mechatronic Systems Design from TU Eindhoven (2019). Despite studying many different things, the common denominator in my education and extracurricular experiences has always been the field of autonomous robotics.
With all my spare time during the COVID-19 lockdown last year, I decided that I wanted to build some robots to experiment with ROS. I also saw this as an opportunity to learn more about 3D SLAM and depth and point cloud processing. For the first several months, I focused on building a few Jetson Nano-based ROS robots using some rapid prototyping tools. For one, I used the NVIDIA JetRacer – an autonomous AI racecar with Ackermann steering – with custom add-ons. For another, I used the NVIDIA JetBot, an open-source differential-drive robot geared towards education, again with custom add-ons.
Then, at the start of this year, I was able to get a LIDAR and an OAK-D camera. With these exciting new supplies, I decided to use everything I had learned over the previous months to design and implement my own ROS 1 robot platform from scratch. I named the project AKROS – a working name that combines my initials and ROS – until I come up with something more original.
The robot with the new LIDAR and OAK-D camera attached.
All the software for AKROS (ROS Noetic) runs on a Raspberry Pi 4. The robot has a mobile base with four-wheel drive and omni wheels, each driven by an encoder-equipped motor; the encoder readings are used to compute the robot's odometry. The mounted RPLIDAR A2 provides a 2D laser scan, while the OAK-D camera provides RGB images, stereo depth images, and spatial object detection outputs using the on-board DepthAI module. The Intel RealSense T265 tracking camera provides simultaneous localization and mapping.
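In code, these sensor streams are just ROS topics. As a minimal sketch – the topic names below are common driver defaults and stand-ins, not necessarily the exact remappings on AKROS – a rospy node can listen to all three:

```python
#!/usr/bin/env python3
# Minimal sketch: listen to the robot's main sensor streams with rospy.
# Topic names are common driver defaults / assumptions, not confirmed
# against the AKROS sources.
import rospy
from nav_msgs.msg import Odometry
from sensor_msgs.msg import Image, LaserScan

def on_scan(msg):
    rospy.loginfo("RPLIDAR: %d ranges", len(msg.ranges))

def on_depth(msg):
    rospy.loginfo("OAK-D depth: %dx%d", msg.width, msg.height)

def on_odom(msg):
    p = msg.pose.pose.position
    rospy.loginfo("T265 odom: x=%.2f y=%.2f", p.x, p.y)

rospy.init_node("akros_sensor_check")
rospy.Subscriber("/scan", LaserScan, on_scan)               # RPLIDAR A2
rospy.Subscriber("/stereo/depth", Image, on_depth)          # OAK-D (assumed name)
rospy.Subscriber("/camera/odom/sample", Odometry, on_odom)  # RealSense T265
rospy.spin()
```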
I do all my development using JupyterLab – the server runs on the Raspberry Pi, and I open my workspace from any browser on the network. However, this method limits me to the terminal, so for tools like RViz I have to install and configure ROS on my Windows laptop. Even then, launching RViz takes another three or four steps each time, since it can't simply be started directly on Windows. On top of all this, it's easy to forget to set the environment variables needed to connect to the remote ROS master. The whole process is manual and very annoying.
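For context, connecting anything on the laptop to the Pi's ROS master hinges on two environment variables. Here's a minimal sketch – the hostname and IP are placeholders for your own network:

```python
# The two environment variables the remote-master setup hinges on.
# Hostname and IP below are placeholders for your own network.
import os

# Where the ROS master is running (the Raspberry Pi).
os.environ["ROS_MASTER_URI"] = "http://raspberrypi.local:11311"
# How remote nodes should reach this machine (the Windows laptop).
os.environ["ROS_IP"] = "192.168.1.50"

import rospy

rospy.init_node("remote_master_check")
# If the variables are right, this returns topics from the Pi's master.
print(rospy.get_published_topics())
```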
Once I started playing around with Foxglove, I thought it was a really convenient way to visualize ROS topics and messages from a remote desktop. I love that it works on Windows, because that means I don’t have to go through the process of launching ROS from Windows, setting environment variables, and finally launching RViz. Foxglove takes care of all this for me!
A demo of AKROS's SLAM and object detection capabilities, visualized in Foxglove.
Using Foxglove on Windows is as easy as using RViz on an Ubuntu laptop. Now, I can run my ROS launch files from the JupyterLab terminal in my browser, then open up Foxglove to start visualizing everything I want – no more complex setup process.
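The launch step itself can even stay inside Jupyter. As a rough sketch using roslaunch's Python API – the launch file path below is a placeholder – a notebook cell can bring up the robot, after which Foxglove connects over the network:

```python
# Sketch: start a launch file programmatically from a JupyterLab cell
# using the roslaunch Python API. The launch file path is a placeholder.
import roslaunch

uuid = roslaunch.rlutil.get_or_generate_uuid(None, False)
roslaunch.configure_logging(uuid)

# Equivalent to running `roslaunch` on this file in a terminal.
launch = roslaunch.parent.ROSLaunchParent(
    uuid, ["/home/pi/catkin_ws/src/akros/launch/bringup.launch"]
)
launch.start()

# ... later, to stop all launched nodes:
# launch.shutdown()
```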
While this robot fulfills my personal learning goals, I want to take AKROS and develop it into a product that could be used to teach robotics or to perform research in applications like control systems, computer vision, and AI. In fact, I am currently in talks with university students and professors to understand their requirements for a ROS robot platform. Eventually, I want to design a polished product for these end-users, instead of a hacky prototype for just myself.
At this point in time, I am still in the development process. I have two main short-term goals: to set up the ROS navigation stack with the RPLIDAR's laser scans and the Intel RealSense tracking camera's odometry, and to set up 3D mapping with the OAK-D camera. It's been an iterative process – I recently had to completely redesign my navigation module to accommodate my tracking camera.
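Once the navigation stack is configured, driving the robot to a pose comes down to sending a goal to move_base. Here's a minimal sketch of the client side – it assumes move_base is already running with the laser and odometry sources above, and the frame and coordinates are placeholder values:

```python
#!/usr/bin/env python3
# Sketch: send a single navigation goal to move_base via actionlib.
# Assumes move_base is already running with the laser scan and T265
# odometry configured; frame and coordinates are placeholder values.
import actionlib
import rospy
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node("akros_nav_goal")

client = actionlib.SimpleActionClient("move_base", MoveBaseAction)
client.wait_for_server()

goal = MoveBaseGoal()
goal.target_pose.header.frame_id = "map"
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0     # 1 m along the map's x-axis
goal.target_pose.pose.orientation.w = 1.0  # no rotation

client.send_goal(goal)
client.wait_for_result()
rospy.loginfo("move_base finished with state %d", client.get_state())
```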
Aditya spent quite some time iterating on his navigation module – adding components, fine-tuning its assembly, redesigning the frame – before mounting it on the rest of his robot.
The best experience you can get is by building your own robot – it’s also much more fun than doing an online course or working with a simulation. There are lots of (free and open source) resources out there that will teach you the fundamentals step by step, and cheap robot kits are abundant online. By building your own physical robot, you learn a lot more about robotics than just how to program.
To keep up with Aditya's latest work, check out his project updates on his Twitter, Instagram, and blog.
If you're a university professor or student interested in collaborating with Aditya for his AKROS project, you can get in touch with him directly on Twitter.
This interview has been edited and condensed for clarity.