Announcing: Maps in the 3D panel.


You can now render 2D raster map tiles from external tile servers directly beneath your 3D scene elements, aligned in a top-down planar view using global location fixes tied to your local frame transforms.

Until now, working with both local frame transforms and global location fixes meant using the 2D Map panel alongside the 3D panel. That split made it difficult to relate a robot's camera feeds, point clouds, and detections to their true positions on a real-world map. The new tiled map layer in the 3D panel bridges that gap by letting you visualize your 3D scene on top of street or satellite imagery.

Anchoring 3D data to the real world.

Robotic systems often fuse global position estimates, such as GNSS fixes or fused SLAM outputs, with local transform trees. The 3D panel already supports rendering frame-transformed data like sensor outputs, meshes, and markers positioned according to the transform tree. What it lacked was the ability to anchor these local frames within a globally recognizable environment.
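To make that link concrete: a global fix can be mapped into a local ENU frame with a local-tangent-plane approximation. The sketch below is illustrative only, with hypothetical names, and is not Foxglove's internal alignment code:

```typescript
// Illustrative local-tangent-plane approximation: the east/north offset
// (in meters) of a GPS fix relative to an ENU origin. Valid for small
// distances; real systems typically use a proper geodetic library.
const EARTH_RADIUS_M = 6378137; // WGS84 equatorial radius

interface LatLon {
  latitude: number; // degrees
  longitude: number; // degrees
}

function enuOffset(origin: LatLon, fix: LatLon): { east: number; north: number } {
  const toRad = Math.PI / 180;
  return {
    east:
      (fix.longitude - origin.longitude) *
      toRad *
      Math.cos(origin.latitude * toRad) *
      EARTH_RADIUS_M,
    north: (fix.latitude - origin.latitude) * toRad * EARTH_RADIUS_M,
  };
}

// A fix 0.001° north of the origin sits ~111 m north in the local frame.
console.log(enuOffset({ latitude: 37, longitude: -122 }, { latitude: 37.001, longitude: -122 }));
```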

The 3D panel can now fetch and display raster map tiles, aligning them with frame-transformed data using the appropriate world frame, such as map, utm, or any ENU (East-North-Up)-aligned fixed frame. This makes it easier to visualize spatial relationships between 3D data and real-world infrastructure, especially for outdoor autonomy stacks, unlocking several workflows that were previously difficult or impossible.
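For context, raster tile servers use the standard Web Mercator "slippy map" scheme, in which a latitude/longitude pair maps to integer tile indices at each zoom level. The formula below is that standard scheme, shown for illustration rather than as Foxglove's internal code:

```typescript
// Standard Web Mercator (slippy map) tiling: convert a lat/lon in degrees
// to the integer tile indices that cover it at zoom level z.
function latLonToTile(latDeg: number, lonDeg: number, z: number): { x: number; y: number } {
  const latRad = (latDeg * Math.PI) / 180;
  const n = 2 ** z; // number of tiles along each axis at this zoom
  const x = Math.floor(((lonDeg + 180) / 360) * n);
  const y = Math.floor(
    ((1 - Math.log(Math.tan(latRad) + 1 / Math.cos(latRad)) / Math.PI) / 2) * n,
  );
  return { x, y };
}

// Example: the tile covering central London at zoom 15.
console.log(latLonToTile(51.5074, -0.1278, 15)); // ~{ x: 16372, y: 10895 }
```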

For instance, you can now debug GPS-located 3D models overlaid on real-world roads, view live point cloud or detection data over satellite imagery during field tests, or inspect SLAM and localization outputs in complex environments. This feature is particularly helpful for teams working on autonomous ground vehicles, aerial robots, and other systems that rely on accurate spatial context in outdoor deployments.

Map layer config and frame alignment.

To enable the tiled map layer, open the 3D panel settings and add a “Map Layer” under the “Custom layers” section. By default, you’ll have access to street maps from OpenStreetMap and satellite imagery from Esri. Teams on Foxglove’s Team or Enterprise plans can also configure custom tile servers, making it possible to integrate private mapping infrastructure or pre-rendered semantic layers.
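If you're configuring a custom server, tile endpoints conventionally follow the XYZ URL template, with {z}, {x}, and {y} substituted per tile. Here's a quick illustration of how such a template expands (the OpenStreetMap URL is its real public endpoint; the private-server host is hypothetical):

```typescript
// Standard XYZ tile URL template: {z}/{x}/{y} are replaced per tile request.
// A private server would follow the same pattern, e.g.
// "https://tiles.example.com/{z}/{x}/{y}.png" (hypothetical host).
const template = "https://tile.openstreetmap.org/{z}/{x}/{y}.png";

function tileUrl(template: string, z: number, x: number, y: number): string {
  return template
    .replace("{z}", String(z))
    .replace("{x}", String(x))
    .replace("{y}", String(y));
}

console.log(tileUrl(template, 15, 16372, 10895));
// -> https://tile.openstreetmap.org/15/16372/10895.png
```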

Make sure your root (fixed) frame is ENU-aligned, and that you have a LocationFix or NavSatFix topic whose messages reference a valid frame ID in the TF tree. If your data isn't set up this way, you may need to restructure the tree or otherwise adjust frames using a User Script, as in the sketch below. See the 3D panel docs for more details.
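For example, if your NavSatFix messages arrive with an empty or mismatched frame_id, a User Script can republish them retagged to a frame that exists in your transform tree. Here's a minimal sketch, assuming an input topic of /gps/fix and a target frame of base_link (both hypothetical; adjust to your setup):

```typescript
import { Input } from "./types";

// Republish NavSatFix messages retagged to a frame that exists in the TF tree.
// Topic names and the target frame are hypothetical; adjust to your setup.
export const inputs = ["/gps/fix"];
export const output = "/studio_script/gps_fix_retagged";

type NavSatFixLike = {
  header: { frame_id: string; stamp: { sec: number; nsec: number } };
  latitude: number;
  longitude: number;
  altitude: number;
};

export default function script(event: Input<"/gps/fix">): NavSatFixLike {
  const fix = event.message as unknown as NavSatFixLike;
  // Keep every field as-is, overriding only the frame ID.
  return { ...fix, header: { ...fix.header, frame_id: "base_link" } };
}
```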

Once enabled, your robot’s local 3D data will appear grounded on a real-world map, improving situational awareness and simplifying debugging across field deployments.

Powering Physical AI with visual context.

This is another step toward our goal of enabling the most advanced, scalable, and geospatially aware visualizations in Foxglove, as we work to build the most performant and comprehensive platform for developing Physical AI.

Whether you’re focused on perception, control, simulation, or full-system integration, Foxglove helps you understand your robots more deeply, debug faster, and build reliable autonomy with confidence.

Sign up here to get started for free today.
