Announcing: The Audio panel.

Hear (and see) what your robot hears.


Foxglove now supports real-time synchronized audio playback, alongside waveform rendering.

Whether you’re debugging microphone input, monitoring audio topics, or building audio-reactive features, you can now inspect live or recorded audio streams directly in your layout—both visually and audibly. Just publish audio using the RawAudio schema and select the topic in the new Audio panel.

To start using live audio visualization, publish messages that conform to the foxglove.RawAudio schema. This message type carries decoded audio as PCM samples, along with metadata like the sample rate and channel count. Then, in Foxglove, add the new Audio panel and select the topic—you’ll instantly see the waveform updating in sync with what you hear.
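As a concrete illustration, here is a minimal sketch of assembling a PCM payload of the kind a RawAudio message carries—one second of a 440 Hz mono tone as signed 16-bit little-endian samples. The field names in the dict mirror the foxglove.RawAudio schema as described above (data, sample rate, channel count); consult the schema docs for the authoritative definition and use the Foxglove SDK to actually publish.

```python
import math
import struct

SAMPLE_RATE = 16_000  # Hz
DURATION_S = 1.0
FREQ_HZ = 440.0

def make_pcm_s16(freq_hz: float, duration_s: float, sample_rate: int) -> bytes:
    """Generate mono 16-bit little-endian PCM samples for a sine tone."""
    n = int(duration_s * sample_rate)
    samples = [
        int(32767 * 0.5 * math.sin(2 * math.pi * freq_hz * i / sample_rate))
        for i in range(n)
    ]
    return struct.pack(f"<{n}h", *samples)

pcm = make_pcm_s16(FREQ_HZ, DURATION_S, SAMPLE_RATE)

# Payload fields mirroring foxglove.RawAudio (names assumed from the schema
# description above; check the schema docs for the authoritative definition).
raw_audio_msg = {
    "data": pcm,                 # raw PCM bytes
    "format": "pcm-s16",         # signed 16-bit little-endian PCM
    "sample_rate": SAMPLE_RATE,
    "number_of_channels": 1,     # mono
}

print(len(pcm))  # 16,000 samples x 2 bytes each -> 32000
```

Publishing one such message per audio chunk on a topic, then selecting that topic in the Audio panel, is all the panel needs to render and play the stream.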

Read the RawAudio schema docs and check out this Foxglove SDK example to get started.

When you’re replaying data, the waveform view reflects the full range of available audio. You can zoom, scroll, and inspect exactly what was captured at any point in time. This makes it easy to correlate audio events with sensor data, logs, and other telemetry.

Monitoring live microphone input.

Let’s say you’re working on a voice-controlled robot. You need to verify that audio is being captured, that it sounds correct, and that the signal is coming through cleanly.

Now, you can see the waveform move in real time while hearing the actual audio. If the mic is dead, too quiet, clipped, or noisy, it becomes obvious right away. No extra tools needed. Just publish the messages and watch (and listen) directly in Foxglove.

This is equally useful for debugging remote systems, validating audio pipelines, or inspecting audio topics in bag files.
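The same signal-health questions the panel makes visible at a glance—dead, too quiet, clipped—can also be answered in a quick script when you’re debugging headlessly. This is a minimal sketch of a level check on a chunk of 16-bit mono PCM; the helper name and thresholds are my own, not part of Foxglove.

```python
import math
import struct

CLIP = 32767  # full-scale magnitude for signed 16-bit PCM

def level_check(pcm: bytes) -> dict:
    """Rough health check for a chunk of mono 16-bit little-endian PCM audio."""
    n = len(pcm) // 2
    samples = struct.unpack(f"<{n}h", pcm[: 2 * n])
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / n)
    return {
        "peak": peak,
        "rms": rms,
        "dead": peak == 0,            # mic unplugged or muted
        "clipped": peak >= CLIP,      # signal hitting full scale
        "quiet": rms < 0.01 * CLIP,   # well below typical speech levels
    }

# Silence reads as "dead"; a full-scale square wave reads as "clipped".
silence = struct.pack("<4h", 0, 0, 0, 0)
loud = struct.pack("<4h", CLIP, -CLIP, CLIP, -CLIP)
print(level_check(silence)["dead"], level_check(loud)["clipped"])  # True True
```

In practice you’d run a check like this over each incoming RawAudio chunk, while the panel gives you the same verdict visually and audibly.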

Fast, high-resolution rendering.

The panel is designed to stay smooth and responsive, even with large or high-frequency audio streams. Whether you’re viewing short clips or exploring multi-minute recordings, interaction stays fast. There’s no need to downsample or preprocess data—just publish raw audio and Foxglove handles the rest.

Try it out.

To try the feature without setting up your own audio stack, drag and drop this test MCAP file onto the Audio panel in your Foxglove org, then join our community to let us know what you think.

Real-time audio introspection is just one part of Foxglove’s mission: to provide the most performant and comprehensive platform for developing Physical AI. Whether you’re working on perception, control, simulation, or full-system integration, Foxglove helps you understand your robots more deeply, debug faster, and build reliable autonomy with confidence.

Read more

Start building with Foxglove.

Get started