Best Practices for Processing and Analyzing Robotics Data

Manipulate and understand the data your robots collect
Adrian Macneil ·
6 min read

Image courtesy of Dexterity.

In the rapidly evolving world of robotics, observability-driven development is the only way to equip your robots for production. It’s no longer enough to have a working prototype – robotics teams must also understand how their robots sense, think, and act at scale in order to get them to market successfully.

Recording and uploading data strategically is the first step in preparing your robots for the real world. Knowing how to best process and analyze that data throughout the rest of development – with observability in mind – will have a huge impact on how quickly you can iterate on and ultimately deploy your fleet.

Process your data

After your robots have collected data, you may want to manipulate those recordings before any team member interacts with them. If your team is using proprietary messages to store sensor data, for example, you could transform the data into standardized schemas for visualization. You may also want to calculate some summary statistics to help teammates understand whether a particular time range or recording is worth investigating in detail. Foxglove provides a variety of ways to transform and manipulate data for visualization – like user scripts, message converters, and topic aliases.
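Foxglove's message converters are typically registered through its TypeScript extension API, but the heart of any converter is a pure mapping function. Here's a minimal Python sketch of the idea; the proprietary input fields (`lat_e7`, `lon_e7`, `alt_mm`) are invented, not a real vendor format:

```python
# Sketch: convert a hypothetical proprietary fixed-point GPS message
# into a standardized LocationFix-style dictionary for visualization.

def convert_gps(msg: dict) -> dict:
    """Map a proprietary fixed-point GPS message to a standard schema."""
    return {
        "latitude": msg["lat_e7"] / 1e7,     # degrees
        "longitude": msg["lon_e7"] / 1e7,    # degrees
        "altitude": msg["alt_mm"] / 1000.0,  # meters
    }

raw = {"lat_e7": 374_220_000, "lon_e7": -1_220_844_000, "alt_mm": 12_500}
fix = convert_gps(raw)
```

Because the transform is a pure function, the same code can run in a visualization layer or in a cloud post-processing step.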

Whatever the use case, we recommend some best practices for processing your robots’ data once it’s been uploaded.

Post-process in the cloud

Post-processing data in the cloud – instead of directly on the robot – provides several valuable benefits. For one, moving tasks off your device saves valuable space and bandwidth for recording and uploading data. If you have some purely stateless nodes on your robot – i.e. nodes that take in one message and spit out another – you can free up on-robot space by running this same code in the cloud to generate the same deterministic data.

Cloud processing also allows you to unlock ETL (Extract, Transform, Load) workflows for your team. Not only can you gather data from multiple sources and send it to third-party systems like a data warehouse or time series database, you can also prepare it for downstream workflows like AI and machine learning training.
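A minimal ETL sketch, with `sqlite3` standing in for a warehouse like BigQuery or Snowflake; the recording fields and table schema are illustrative assumptions:

```python
import sqlite3

# Sketch of a cloud ETL step: extract per-recording summary rows,
# then load them into a SQL store for downstream analytics.

def extract(recordings):
    """Pull the fields downstream consumers care about."""
    for rec in recordings:
        yield rec["device_id"], rec["duration_s"], rec["task_success"]

def load(rows, conn):
    """Load summary rows into the warehouse stand-in."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS runs "
        "(device_id TEXT, duration_s REAL, success INTEGER)"
    )
    conn.executemany("INSERT INTO runs VALUES (?, ?, ?)", rows)

recordings = [
    {"device_id": "robot-1", "duration_s": 311.0, "task_success": True},
    {"device_id": "robot-2", "duration_s": 187.5, "task_success": False},
]
conn = sqlite3.connect(":memory:")
load(extract(recordings), conn)
success_rate = conn.execute("SELECT AVG(success) FROM runs").fetchone()[0]
```

Once summary rows land in SQL, business metrics like overall success rate become a one-line query.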

Separate data into categories

If you are generating synthetic topics – whether that’s in a browser with user scripts, or in the cloud with a post-processing step – use a consistent prefix to keep these topics separate and differentiated from source data. This same idea can apply to other data categories, like production data vs. simulation data.
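One way to enforce this convention is a pair of tiny helpers; the `/synthetic` prefix itself is an assumed convention here, not something Foxglove prescribes:

```python
# Sketch: keep synthetic topics clearly separated from source data
# by deriving their names through one shared helper.

SYNTHETIC_PREFIX = "/synthetic"

def synthetic_topic(source_topic: str) -> str:
    """Derive a clearly-labeled topic name for derived data."""
    return f"{SYNTHETIC_PREFIX}{source_topic}"

def is_synthetic(topic: str) -> bool:
    """Check whether a topic holds derived rather than source data."""
    return topic.startswith(SYNTHETIC_PREFIX + "/")

derived = synthetic_topic("/camera/depth_estimate")
```

Routing every synthetic topic name through one helper means the convention can't drift between your user scripts and your cloud post-processing code.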

Use parallel processing

Recording self-contained files in the standard MCAP file format makes processing data much easier to scale. With self-contained files, you can simply set up a worker pool to scale post-processing tasks in parallel. There’s no need to worry about processing files in a particular order or mixing and matching individual files into particular groups – data files contain all the necessary information to interpret them.
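Because each file stands alone, fan-out is simple. A sketch using Python's `concurrent.futures`, with a stub in place of real MCAP decoding (a CPU-bound task would use a process pool rather than threads):

```python
from concurrent.futures import ThreadPoolExecutor

# Sketch: self-contained MCAP files can be post-processed by a worker
# pool in any order, with no cross-file dependencies to coordinate.

def process_file(path: str) -> tuple[str, int]:
    # Stub standing in for real work, e.g. decoding an MCAP file
    # and writing summary statistics somewhere durable.
    return path, len(path)

files = [f"recordings/run_{i}.mcap" for i in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process_file, files))
```

Scaling up is then just a matter of adding workers, since no worker ever needs another file's output.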

Schema evolution

A common problem in robotics is handling message schemas that evolve over time.

Versioning your robot code and post-processing code together guarantees that matching versions always work seamlessly. For example, you can make sure to run the v1.3 robot code in the cloud when running the v1.3 post-processing step.

But if your team doesn't deploy the robot very often, versioning the robot and post-processing code together can cause its own set of issues. In this case, versioning them individually – and tracking which versions are compatible – may be worth the extra effort.

There's no silver bullet. Whatever your team's situation, the priority should always be keeping your post-processing code backwards compatible with previously recorded data.
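One lightweight pattern for keeping post-processing backwards compatible is a chain of per-version upgrade functions, so downstream code only ever sees the latest schema. A sketch with invented versions and field names:

```python
# Sketch: migrate old message schemas forward one version at a time,
# so post-processing code only handles the latest schema.

LATEST = 3

def v1_to_v2(msg: dict) -> dict:
    msg = dict(msg, version=2)
    msg["speed_mps"] = msg.pop("speed")  # v2 renamed the field with units
    return msg

def v2_to_v3(msg: dict) -> dict:
    msg = dict(msg, version=3)
    msg.setdefault("frame_id", "base_link")  # v3 added a frame field
    return msg

UPGRADES = {1: v1_to_v2, 2: v2_to_v3}

def upgrade(msg: dict) -> dict:
    """Apply upgrade steps until the message reaches the latest schema."""
    while msg["version"] < LATEST:
        msg = UPGRADES[msg["version"]](msg)
    return msg

new = upgrade({"version": 1, "speed": 1.5})
```

Adding a v4 later means writing one new function, not touching every consumer of old recordings.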

Analyze your data

To iterate on your robots’ performance, you must be able to effectively replay the multimodal data they record. This includes analyzing data from both a high-level view (potentially across days of recording) and at the micro level (for frame-by-frame debugging).

Use an observability platform

Without relying on observability tooling, your robotics team will have to split their time between their expertise – building cutting-edge robots for production – and an unfortunate necessity – building and maintaining developer tools that support their daily workflows.

Using web-based tools like Foxglove can help you offload the latter responsibility. It can also save your engineers an incredible amount of time, since they can stream data directly from the cloud instead of downloading individual files to work with them. Web-based tooling also empowers non-technical team members across your organization to find the information they need themselves, since reviewing data in Foxglove doesn't require setting up a complicated ROS environment.

Discover points of interest

Foxglove allows users to highlight points of interest in their data – e.g. shutdowns and errors, hard brakes, the start and end of various tasks – using events. Events help engineers navigate large volumes of data efficiently and streamline incident triage workflows.

Foxglove offers two ways to annotate data with events. You could load production data for visualization, then manually step through it to add events directly on the playback bar. If your team has a predetermined list of noteworthy events to track (e.g. robot crashes, shutdowns, etc.), you could also set up post-processing rules that use the Foxglove API to programmatically add events. Either way, you can use events to populate a “root cause” bucket that QA engineers can then go through to debug or escalate. By helping the team agree on which issues to prioritize, this organized process can dramatically streamline analysis and accelerate development.
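A predetermined rule list can be expressed as a small table of predicates scanned over each message. The rules and message shape below are illustrative assumptions; in practice, each detected event would then be posted through the Foxglove API to annotate the recording:

```python
# Sketch: scan a message stream against a list of noteworthy
# conditions and emit event records for each match.

RULES = {
    "hard_brake": lambda m: m.get("decel_mps2", 0.0) > 6.0,
    "e_stop": lambda m: m.get("e_stop", False),
}

def detect_events(messages):
    """Return an event record for every rule a message matches."""
    events = []
    for msg in messages:
        for name, matches in RULES.items():
            if matches(msg):
                events.append({"type": name, "timestamp": msg["t"]})
    return events

stream = [
    {"t": 0.0, "decel_mps2": 1.2},
    {"t": 1.0, "decel_mps2": 7.5},
    {"t": 2.0, "e_stop": True},
]
events = detect_events(stream)
```

Keeping the rules in a plain table makes it easy for the team to review and extend the list of conditions worth flagging.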

Plan for different types of analytics

Analyzing robotics data isn’t all about visualization – it can also include business insights, time series aggregations, and text logs.

Business insights include information like overall rates of task completion or success, and are often stored in SQL data warehouses like BigQuery or Snowflake. Time series aggregations include means, medians, and/or percentiles for key metrics. They’re designed to help uncover meaningful outliers, and are often stored in time series databases like Prometheus or InfluxDB. And finally, text logs provide a quick way to store unstructured data and track errors as they arise. They’re often stored in databases like ElasticSearch to enable full-text search.

To fully harness the power of your robots' data, you must proactively plan for these pipelines from Day One. Foxglove users can use webhooks to receive notifications when data resources are added to their organization, making it easy to trigger downstream workflows when recordings are imported, events are added, or devices are created.
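On the receiving end, a webhook handler usually boils down to routing notification types to workflows. The event type strings below mirror the resources mentioned above, but the exact payload shape of Foxglove's webhooks is an assumption in this sketch:

```python
# Sketch: route incoming webhook notifications to downstream workflows.

HANDLERS = {}

def on(event_type):
    """Register a workflow function for a notification type."""
    def register(fn):
        HANDLERS[event_type] = fn
        return fn
    return register

@on("recording.imported")
def start_post_processing(payload):
    # Hypothetical workflow kick-off; recording_id is an assumed field.
    return f"queued post-processing for {payload['recording_id']}"

def dispatch(notification):
    """Invoke the registered handler, or ignore unknown types."""
    handler = HANDLERS.get(notification["type"])
    return handler(notification["payload"]) if handler else None

result = dispatch(
    {"type": "recording.imported", "payload": {"recording_id": "rec_123"}}
)
```

Unknown notification types fall through harmlessly, so new resource types can be added to the webhook before the handler learns about them.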

Try observability-driven development with Foxglove

We need observability-driven development to help robots move out of labs and into our daily lives. Whether recording, uploading, processing, or analyzing our robots' data, we need to think strategically about how we can best help our prototypes integrate safely with the real world.

Create a free Foxglove account to see how observability-driven development can accelerate every phase of your team's workflows. Check out our docs and tutorials for resources on how to get started, or join our Slack community to ask questions and submit feedback.
