XRlabs at VivaTech Paris: Mixed Reality + Low-Compute AI for Safer, Smarter Surgery
- XRlabs
- Aug 22
- Updated: Sep 10
PARIS, 18 JUNE 2025 -- XRlabs joined Viva Technology in Paris, the annual global rendezvous for startups and leaders shaping what’s next, to share how mixed reality and on-device AI can help surgeons plan, rehearse, and guide complex procedures with greater precision. In a fireside Q&A moderated by Professor Stéphanie Debette, Executive Director of the Paris Brain Institute, XRlabs founder Ali Haddad (neurosurgeon, London) outlined a practical path from today’s 2D imaging to patient-specific 3D surgical rehearsal and intraoperative visualization.
From 2D imaging to deep surgical vision
On stage, Ali described the longstanding gap between flat scans and the 3D anatomy surgeons navigate in theatre, and why XRlabs exists: to convert CT/MRI into interactive holograms aligned to the patient for consent, rehearsal, and guidance.
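XRlabs has not published how its reconstruction works; as a rough illustration of the first step in turning a scan into a 3D model, the sketch below segments a synthetic CT-like volume with a simple intensity threshold (a stand-in for the learned models a product like this would presumably use). The threshold value and volume are invented for the example.

```python
import numpy as np

def segment_volume(volume, threshold):
    """Binary-segment a 3D scan volume by intensity threshold.

    A voxel is kept when its intensity exceeds `threshold`
    (e.g. a Hounsfield-unit cutoff in CT). Real pipelines use
    learned segmentation; this is only a toy illustration.
    """
    return volume > threshold

# Synthetic 64^3 "CT" volume: background noise plus a bright sphere.
rng = np.random.default_rng(0)
vol = rng.normal(0.0, 20.0, (64, 64, 64))
zz, yy, xx = np.mgrid[:64, :64, :64]
sphere = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 < 12 ** 2
vol[sphere] += 400.0  # bright anatomical structure

mask = segment_volume(vol, threshold=200.0)
print(mask.sum())  # roughly the sphere's voxel count
```

The resulting binary mask is what a surface-extraction step (e.g. marching cubes) would turn into the mesh a headset renders.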
“We’re stuck with 2D images,” he noted, “so we use advanced augmented reality and AI to give surgeons the visualizations they need at the point of care.”
He added that the platform continuously “watches and learns” from operations to enable guidance, and, in time, autonomous assistance.

Adapting to brain shift—live, in the OR
Professor Debette pressed on a classic challenge: how to stay accurate when tissue moves during brain surgery. Ali explained XRlabs’ approach: capturing intraoperative ultrasound sweeps, reconstructing them in 3D, and deforming the preoperative model so guidance reflects what’s actually happening. “The goal is simple: keep the surgical map honest as reality changes.”
Low-compute AI, built for access
XRlabs emphasized portability and cost: models and rendering run efficiently on lightweight hardware.
“It’s completely small… accessible, taken around the world,” Ali said, so surgical teams can benefit from deeper vision without data-center budgets or high-bandwidth infrastructure.

Early clinical use across neurosurgery
In London, XRlabs is in use across aneurysm, brain tumor, and spine workflows—starting in the most unforgiving tissue to prove millimeter-level precision before expanding to other specialties.
Why it matters
- Clarity for clinicians: Patient-specific 3D rehearsal builds a shared mental model before the first incision.
- Real-time accuracy: Live ultrasound updates the map to match intraoperative reality.
- Global reach: On-device, low-compute AI lowers cost and complexity, helping more teams access advanced guidance.
What’s next
XRlabs is partnering with clinical teams to scale extended-reality surgical rehearsal and intraoperative visualization into pathways where precision is mission-critical—while training computer-vision models that learn from real procedures to support guidance in everyday practice.
Learn more at https://xrlabs.ai.
Source: XRlabs
Press Contact: press@xrlabs.ai

