Best Practices for Recording and Uploading Robotics Data

Get data off your robots and into the cloud more efficiently
Adrian Macneil · 5 min read

Image courtesy of Blue River Technology.

To get robots to market, it's not enough to have a working prototype – to successfully ramp it up into a reliable (and profitable) fleet, your team must also understand how that prototype senses, thinks, and acts at scale.

Observability-driven development is critical to transitioning your prototypes to production. Deciding how you record and upload data is the first step in building a robust observability stack that will bring your robots to market successfully.


When thinking about logging data at scale, there are some best practices that you can follow to set your team up for success.

Set up a standardized logging pipeline

MCAP is the standard for recording multimodal robotics data. In addition to being the default storage format for ROS 2, MCAP supports other encoding formats like Protobuf, FlatBuffers, and more. Whether your robot is recording images or videos, text logs or point clouds, this container file format can store heterogeneous data streams in a single file for easy organization.

In short, avoid the “junk drawer” approach – i.e. recording different types of data in different encoding formats to a single folder on your robot. This leaves the question of how to interact with this data up to each teammate who comes across it. It encourages duplicate parallel work, and can further fragment development processes.

Use self-contained files

Files should contain all the information needed to faithfully reproduce the robot’s state at that point in time in the future. This allows you to simply open the relevant file to review an incident from, say, three months ago. There is no need to build an older version of the code, track down supplementary information, or install a laundry list of dependencies that may be outdated or deprecated.

This includes latched topics like ROS’s /tf_static – they should be re-published at the start of each file so that the rest of the file can be interpreted and visualized correctly. For ROS users, rosbag record provides a --repeat-latched flag that helps with this.

Self-contained files also simplify post-processing workflows. You can process incoming files in parallel, without worrying about whether they’re in a particular order or with a particular group of related files. Everything needed to reproduce a certain state is already included in the file itself.
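The parallel-processing point can be sketched in a few lines. `summarize` below is a hypothetical stand-in for a real per-file pipeline step (indexing, metrics extraction, and so on); because each file is self-contained, no worker needs to see any other file:

```python
# Sketch: self-contained files can be post-processed independently,
# in any order, with no grouping constraints.
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def summarize(path):
    # Stand-in for real work — here we just report the file's size.
    return (path.name, path.stat().st_size)

def process_all(recording_dir):
    files = sorted(Path(recording_dir).glob("*.mcap"))
    # Each worker takes one file; results don't depend on order.
    with ThreadPoolExecutor() as pool:
        return dict(pool.map(summarize, files))
```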

Split files by time or topic

Split data files by recording time – one minute per file is a common standard. To decide on your own time limit, we recommend reverse-engineering it from your ideal file size. Uploading a 1 GB file can be quite simple, but doing the same for a 500 GB file is much more cumbersome and error-prone.
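A minimal time-based rotation sketch follows; the 60-second default and the file-naming scheme are assumptions to tune for your own fleet:

```python
# Sketch: start a new recording file whenever the current one spans
# more than `max_seconds` of wall-clock time.
import time
from pathlib import Path
from typing import Optional

class TimeSplitRecorder:
    def __init__(self, out_dir, max_seconds=60.0):
        self.out_dir = Path(out_dir)
        self.max_seconds = max_seconds
        self._file = None
        self._opened_at = 0.0

    def write(self, data: bytes, now: Optional[float] = None):
        now = time.time() if now is None else now
        if self._file is None or now - self._opened_at >= self.max_seconds:
            self._rotate(now)
        self._file.write(data)

    def _rotate(self, now: float):
        if self._file is not None:
            self._file.close()
        self._opened_at = now
        # Hypothetical naming scheme: timestamp of the first message.
        self._file = open(self.out_dir / f"recording_{int(now)}.mcap", "wb")

    def close(self):
        if self._file is not None:
            self._file.close()
```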

You can also split files by topic for easy management – this can look like recording lightweight telemetry data to one file, and heavyweight sensor data to another, via a separate process.

In addition to making files easy to organize and share, splitting files makes it easy to atomically upload and delete them as on-robot disk space becomes limited. Set up a ROS node or cron job that periodically checks available disk space to selectively delete unimportant data and free up recording space.
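Such a cleanup job might look like the sketch below; the free-space floor, the `.mcap` glob, and the oldest-first deletion order are all assumptions to adjust for your robots:

```python
# Sketch: delete the oldest recordings until free disk space rises
# above a threshold. Run this from a cron job or a ROS node timer.
import shutil
from pathlib import Path

def free_up_space(recording_dir, min_free_bytes=10 * 1024**3):
    """Delete oldest .mcap files until free space exceeds min_free_bytes."""
    recording_dir = Path(recording_dir)
    deleted = []
    # Oldest first — assumes older data is the least important.
    files = sorted(recording_dir.glob("*.mcap"),
                   key=lambda p: p.stat().st_mtime)
    for path in files:
        if shutil.disk_usage(recording_dir).free >= min_free_bytes:
            break
        path.unlink()
        deleted.append(path)
    return deleted
```

Because files are split and self-contained, each deletion is atomic: you lose one bounded slice of time, never a partial or corrupted recording.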

Uploading data to Foxglove affords many of these optimizations out of the box. Data uploaded to the platform is indexed and partitioned for optimal access and streaming – by device, timestamp, and even individual topic.

Use compression to conserve space

Compress files to maximize disk space on your robots during recording. As with any data compression, beware of the tradeoffs – it can save disk space on your robot, but also increase overhead like CPU and memory usage.

Be sure to use chunk compression (also available as a rosbag record flag), as it preserves the index that lets you extract summary data from recordings efficiently. Foxglove provides this out-of-the-box for any data uploaded to the platform. Putting a file through a wholesale compression tool like gzip, on the other hand, will obliterate that index and potential performance gains.
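To see why chunked compression preserves random access while whole-file gzip does not, here is a toy sketch using zlib: each chunk compresses and decompresses independently, so an index into the chunks stays useful without touching the rest of the file.

```python
# Sketch: compress messages in independent chunks and keep an index,
# so any one chunk can be read without decompressing the others.
import zlib

def compress_chunks(messages, chunk_size=2):
    """Return (compressed_blobs, index of first message per chunk)."""
    blobs, index = [], []
    for i in range(0, len(messages), chunk_size):
        raw = b"".join(messages[i : i + chunk_size])
        blobs.append(zlib.compress(raw))
        index.append(i)
    return blobs, index

def read_chunk(blobs, chunk_no):
    # Only this chunk is decompressed — the key property that
    # whole-file gzip destroys.
    return zlib.decompress(blobs[chunk_no])
```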

Compressing data within individual messages is another option. If you don’t need raw images for a particular camera, for example, saving the messages as JPEG images or H.264 videos can save valuable recording space. Foxglove supports storing and visualizing these and other compressed data types.


After recording data, the next problem to tackle is uploading it to a central repository for your team to collaborate on.

As any roboticist can tell you, connectivity is always a constraint when getting data off your robots and into the cloud. Robots may record over 1 GB of data per second, but a site may have only 10 to 100 Mbps of upload bandwidth. Other sites may be lucky if they have internet at all.

Wherever your robots operate, we have some recommendations for offloading their data strategically.

Understand your bandwidth constraints

Are your bandwidth limits specific to a site, a robot, or certain parts of the day? Do you have to deal with data caps, or are you paying per GB for your LTE? Knowing the answers to these questions can help you understand how much bandwidth you can work with, and help you reserve overhead for operations like remote assistance or teleoperation.
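The arithmetic is worth making explicit. A quick sketch under the figures mentioned earlier, assuming decimal gigabytes and an ideal link with no protocol overhead:

```python
# Sketch: back-of-the-envelope upload budget. A robot recording
# ~1 GB/s over a 100 Mbps link generates data roughly 80x faster
# than it can leave the site — hence selective upload and retention.
def upload_seconds(size_gb, link_mbps):
    bits = size_gb * 1e9 * 8          # decimal GB -> bits
    return bits / (link_mbps * 1e6)   # Mbps -> bits per second

# One minute of raw recording (60 GB at 1 GB/s) over a 100 Mbps link
# takes 4800 seconds — 80 minutes of upload per minute of recording.
```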

While you can use hard drive swaps to overcome bandwidth limitations, this can add its own complexity. Whether you’re shipping the drives off-site, or swapping them with team members in one location, you may still have to figure out ways to eventually get that data into the cloud.

Be selective about what to import and save

Get clear on what data your team needs access to – in both the short and long term – and set up corresponding data retention policies early.

It costs money to store data. It may be feasible to hold onto everything your robots record when working with 1 or 2 prototypes, but discerning between data you must keep and data you can throw out will help you manage growing data storage costs once you get to production. Figure out if you can upload specific segments or topics to cut down on the volume of data you’re sending over your network – Foxglove allows you to selectively import data from your robots and the edge.

Also, set custom data retention policies for any site housing your uploaded recordings. Storing lightweight telemetry data indefinitely may be fine, but make opinionated calls on more heavyweight data. If team members only access data for a couple of weeks after it was recorded, retaining it past a month is hurting your bottom line.

Stay tuned

As robots continue to move out of labs and into our daily lives, it’s becoming clearer than ever that observability is absolutely critical to getting robots to market. Robotics teams that continue building robots without a comprehensive observability strategy risk getting left behind.

To successfully deploy your robots for real-world applications at scale, your team must prioritize deliberate and strategic decisions about how your robots capture data – and how your team imports it for analysis. Foxglove was built to help standardize these processes for scaling companies – you can download the app to start exploring our observability platform, or schedule a demo with the Foxglove team for questions and support.

Read more:

- Best Practices for Processing and Analyzing Robotics Data – Manipulate and understand the data your robots collect (Adrian Macneil, 6 min read)
- Why Building a Working Robot Doesn't Guarantee Commercial Success – The critical role of observability in robotics (Adrian Macneil, 7 min read)
