The future of Physical AI is already in motion.
“There will be more robots than humans by 2040.” -Elon Musk
Adrian Macneil, CEO and Co-founder of Foxglove, kicked off Actuate—the developer conference built for the doers, builders, and visionaries shaping the future of Physical AI. He agreed with Elon Musk that there will indeed be more robots than humans by 2040, and argued that Foxglove sits at the center of it all, fueling robotics development like never before.
From hospital hallways to construction sites, and from household chores to defense autonomy, the talks at Actuate spanned the vast and rapidly evolving landscape of Physical AI. This recap covers the key takeaways from each talk, providing a snapshot of where robotics is today and where it’s heading.
Jim Fan, Senior Research Scientist, NVIDIA
Jim Fan framed the central challenge of general-purpose robotics as passing the “Physical Turing Test”—a system’s ability to interact with the world so effectively that it’s indistinguishable from a human.
He walked through NVIDIA’s efforts in scalable simulation, synthetic data generation, and massively accelerated training. Together, these components are enabling a new class of generalist agents that can adapt to previously unseen physical tasks, making progress toward robots that are not just task-specific, but world-adaptive.
Vivian Chu, Co-Founder & Chief Innovation Officer, Diligent Robotics
In high-stakes, human-dense environments like hospitals, small failures can cascade into big problems. Vivian Chu unpacked how Diligent Robotics is addressing real-world navigation challenges for humanoid robots.
Her talk detailed sensor and perception design for reading elevator buttons and swipe access panels, as well as how Diligent’s robots navigate highly dynamic, unstructured environments with human-safe behavior modeling and planning strategies.
Kyle Vogt, CEO, The Bot Company
Kyle Vogt, formerly of Cruise and Twitch, shared lessons from the frontlines of commercial robotics. From scaling fully autonomous vehicles to founding a new company focused on affordable household robots, he emphasized the importance of fast iteration, real-world feedback loops, and cost-effective design in achieving sustainable robot deployment at scale.
Jason Ma, Co-Founder, Dyna Robotics
Jason Ma introduced a new layer to the robotics stack: foundation reward models. These large-scale models enable reusable, general-purpose reward shaping, which is critical for accelerating reinforcement learning in physical domains.
He highlighted work on scalable policy learning and discussed how Dyna Robotics is using foundation models to generalize across tasks, agents, and environments—reducing the brittleness that plagues many RL-based systems today.
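The talk stayed at the architectural level, but the basic mechanic of plugging a reusable reward model into a reinforcement-learning loop can be sketched in a few lines. The sketch below is a toy illustration, not Dyna’s actual system: the FoundationRewardModel class, its score interface, and the blending weight are hypothetical stand-ins.

```python
import numpy as np

class FoundationRewardModel:
    """Hypothetical stand-in for a large pretrained reward model.

    In practice this would be a neural network scoring how well an
    observation (e.g. camera frames) matches a natural-language task
    prompt. Here it is a toy distance-based score so the example runs.
    """

    def score(self, observation: np.ndarray, task_prompt: str) -> float:
        goal = np.ones_like(observation)  # pretend the prompt encodes this goal state
        return -float(np.linalg.norm(observation - goal))

def shaped_reward(env_reward: float, model: FoundationRewardModel,
                  obs: np.ndarray, prompt: str, weight: float = 0.1) -> float:
    """Blend the environment's sparse reward with the model's dense score."""
    return env_reward + weight * model.score(obs, prompt)

# Minimal rollout loop: the dense, task-conditioned signal gives the policy
# something to climb long before the sparse environment reward ever fires.
rng = np.random.default_rng(0)
model = FoundationRewardModel()
obs = rng.random(4)
for step in range(3):
    action = rng.random(4) * 0.1                 # placeholder policy
    obs = np.clip(obs + action, 0.0, 1.0)        # placeholder dynamics
    env_reward = 1.0 if np.allclose(obs, 1.0, atol=0.05) else 0.0
    r = shaped_reward(env_reward, model, obs, "stack the red block on the blue block")
    print(f"step {step}: shaped reward = {r:.3f}")
```

The appeal of a foundation reward model is that the same scoring interface can be reused across tasks by changing the prompt, rather than hand-crafting a reward function for every new behavior.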
Vivek Bagaria, Autonomy Lead, Matic Robots
Rust is sweeping across robotics, and Matic is proving why. Vivek Bagaria shared how adopting Rust across Matic’s entire software stack has paid dividends in performance, safety, and maintainability.
He detailed how the team built drivers, planning, control, and testing infrastructure in Rust, and offered insights into the tooling, language features, and migration strategies that made it work in a high-throughput consumer robotics environment.
Daniel Fullmer, Principal Software Engineer, Anduril Industries
Daniel dove deep into how Anduril uses Nix to tame the complexity of deeply embedded systems. With fully declarative builds and reproducible infrastructure, Nix enables their team to version-control everything from toolchains to firmware.
Daniel shared practical lessons from managing NixOS-based builds across multiple hardware platforms and how that’s led to better debugging, fewer integration surprises, and tighter control of deployment artifacts.
Felipe Polido & Ethan Keller, Reframe Systems
Reframe is redefining the boundary between design and manufacturing. Felipe and Ethan showcased their direct-to-robot workflow, which eliminates intermediate translation layers between CAD and robotic execution.
From magnetic fixturing to vision-based adaptability, they walked through how their system has built over 1,000 feet of structural walls—blurring the line between software, hardware, and production-scale deployment in modular construction.
Hosted by Roman Shtylman, CTO and Co-founder at Foxglove
This panel tackled one of robotics’ most debated questions: should you adopt ROS or build your own stack?
The discussion—featuring Guillaume Binet (Copper Robotics), Austin Schuh (Blue River), Emerson Knapp (Polymath), and MacCallister Higgins (Pipedream)—highlighted practical tradeoffs in system complexity, ecosystem leverage, and long-term maintainability. The consensus? There’s no universal answer—only context-driven decisions.
Vinny Senthil, Senior Software Engineer, Chef Robotics
Coordinating multiple robots to plate ingredients sounds simple—until you require five nines of accuracy. Vinny Senthil detailed how Chef Robotics built a production-ready multi-agent system for high-throughput food assembly.
The system blends classical CV, Siamese neural networks, mesh networking, SLAM, and KNN-based tracking to deliver robust performance. So far, their robots have served millions of bowls—demonstrating that robotic orchestration is not just possible, but scalable.
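As a rough illustration of the KNN-based tracking piece, the sketch below greedily associates each new detection with its nearest existing track. It is a minimal example under assumed inputs, not Chef Robotics’ implementation: the knn_associate function, the pixel-distance threshold, and the toy coordinates are all made up.

```python
import numpy as np

def knn_associate(tracks: np.ndarray, detections: np.ndarray,
                  max_dist: float = 30.0) -> list[tuple[int, int]]:
    """Greedy nearest-neighbor association between tracks and detections.

    tracks:     (T, 2) array of last known pixel positions of tracked items
    detections: (D, 2) array of detected item positions in the current frame
    Returns (track_index, detection_index) pairs within max_dist pixels.
    """
    if len(tracks) == 0 or len(detections) == 0:
        return []
    # Pairwise Euclidean distances between every track and every detection.
    dists = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    matches: list[tuple[int, int]] = []
    used_tracks: set[int] = set()
    used_dets: set[int] = set()
    # Assign cheapest pairs first so one detection never claims two tracks.
    for t, d in sorted(np.ndindex(dists.shape), key=lambda td: dists[td]):
        if t in used_tracks or d in used_dets or dists[t, d] > max_dist:
            continue
        matches.append((t, d))
        used_tracks.add(t)
        used_dets.add(d)
    return sorted(matches)

# Example: two tracked bowls and three detections in the next camera frame.
tracks = np.array([[100.0, 200.0], [400.0, 180.0]])
detections = np.array([[105.0, 203.0], [398.0, 176.0], [650.0, 90.0]])
print(knn_associate(tracks, detections))  # -> [(0, 0), (1, 1)]
```

In a production system, plain pixel distance would typically be augmented with appearance features (for example, Siamese-network embeddings) and a proper assignment solver, but the data-association skeleton looks much like this.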
Karthik Lakshmanan, Head of Perception, Zipline
Zipline’s drone delivery platform has flown 100+ million autonomous miles. But the hardest part is often the last 100 meters.
Karthik shared how Zipline tackles precision delivery to suburban backyards using simulation-fueled ML, real-time perception, and redundant safety systems. He emphasized how local autonomy must adapt to the wildly variable edge cases posed by power lines, tree cover, and customer preferences.
Heidi Schubert & Ankita Joshi, Cobot
Autonomy doesn’t scale without safety. Heidi and Ankita introduced the concept of “co-piloting”: embedding safety engineering into every stage of autonomy development.
They outlined how operational data informs both safety and feature roadmaps, and how pairing engineers with safety teams, rather than separating them, builds more resilient, deployable systems. Their approach puts human oversight at the core of Physical AI development.
Dennis Siedlak, CTO, Slip Robotics
Dennis unpacked how Slip Robotics closes the autonomy feedback loop quickly and at scale. With fleets operating in logistics environments, Slip needs fast root cause analysis, performance tracking, and regression detection.
He showcased how Slip combines Foxglove visualizations, internal pipelines, and real-time observability to continuously improve autonomy in production environments, without breaking SLA requirements for uptime.
Vibhav Ganesh, Director of Engineering, Shield AI
Shield AI’s Hivemind SDK distills a decade of experience building autonomous drones into a reusable, production-grade development kit. Vibhav demonstrated how it enables rapid iteration through tight fly-fix-fly cycles while meeting military-grade certifiability.
He showed how Foxglove helps visualize multimodal runtime assurance and how the SDK supports collaborative heterogeneous teams—pushing the boundaries of aerial autonomy in contested and communication-constrained settings.
Learn more about Hivemind and how Shield AI transformed mission-critical development with Foxglove here.
Whether you’re building generalist robots, deploying domain-specific systems, or managing fleets across the globe, one thing is clear: Physical AI is no longer theoretical. It’s operational, deployable, and accelerating fast.
Catch the Actuate 2025 day 2 recap tomorrow at the same time.