The stack outlives any single headset
The most durable decisions happen below the device layer: OpenXR, WebXR, glTF, OpenUSD, identity, and low-latency infrastructure. Hardware changes quickly; portable formats and runtimes compound.
A comprehensive reference on the hardware devices, development SDKs, 3D engines, authoring tools, and infrastructure platforms that constitute the technical ecosystem of the Augmentiverse — updated for 2026.
This page now does more than list products. It separates durable infrastructure from short-lived device hype, shows where open standards actually matter, and highlights which toolchains are realistic for different kinds of teams.
For most teams, browser-based AR, ARKit or ARCore pilots, and OpenXR-capable passthrough devices remain the fastest way to validate spatial UX before betting on lightweight all-day glasses.
Enterprise collaboration, browser-native publishing, premium spatial computing, and asset-pipeline authoring each need different tools. The next section gives concrete starting paths.
If you are deciding what to build with first, these are the most defensible starting points on the current Augmentiverse landscape.
Start with WebXR, glTF assets, and Three.js or Babylon.js. Use smartphones and headset browsers first so the experience remains URL-addressable, easy to share, and less dependent on app-store friction.
Prioritise OpenXR-capable headsets, Unity-based deployment, and shared anchoring services. HoloLens, Magic Leap, and Android XR class devices remain more realistic than consumer glasses for high-reliability field workflows.
Treat glTF and OpenUSD as the center of gravity. Author in Blender, Maya, 3ds Max, or Omniverse, validate before publishing, and keep the asset layer portable so content can move across runtimes over time.
The Augmentiverse requires a complete, interoperable stack from physical display to infrastructure. Each layer below links to the tools covered on this page.
The fastest-growing hardware segment in the Augmentiverse ecosystem. Global XR device shipments grew 44.4% year-over-year in 2025, driven almost entirely by smart glasses rather than VR or MR headsets (IDC, March 2026). Meta holds approximately 82% of smart glasses shipments (Counterpoint Research, H2 2025).
Meta's flagship consumer AR glasses and the first Ray-Ban model with a built-in display. Features a small colour display in one eye with a companion Neural Band wristband that uses EMG sensors for discreet gesture input. The best-selling smart glasses on the market, with over 7 million Ray-Ban Meta units sold in 2025.
The best true AR display glasses for spatial computing enthusiasts. The proprietary X1 chip handles all display processing and 3DoF tracking entirely on-device, with no companion software required. Creates a large virtual display floating in space, up to a 100-inch equivalent, and connects to any USB-C device, including smartphones, laptops, and gaming consoles.
Standalone AR glasses from Snap with one of the widest fields of view in the consumer category (46°). Features hand tracking, gesture recognition, and full Snapchat Lens integration. Currently developer-only via subscription. A critical platform for social and creative AR content creators, used to build and prototype next-generation AR experiences.
The first standalone AR glasses with binocular full-colour Micro-LED waveguide displays, powered by Qualcomm Snapdragon XR2. Supports real-time translation, smart assistant functions, and navigation. One of the most technically capable standalone AR glasses available to consumers, with built-in processing independent of a smartphone.
An open-source AI AR glasses platform weighing just 39g. Uses a small projector to display text and simple graphics, powered by the Noa AI assistant for real-time question answering, translation, and object identification. As an open-source platform, developers can build and publish their own apps — making Frame a creative playground for AI hardware innovation.
Apple's unreleased smart glasses, confirmed by Bloomberg's Mark Gurman as on track for a late 2026 or early 2027 debut. A display-less design at launch (camera, microphones, and AI features) positioned to rival Meta's Ray-Ban glasses directly. Apple is leaning into premium materials, including an acetate frame, with what it internally calls an "icon" design goal. Expected deep Apple Intelligence integration.
Mixed Reality headsets are the primary platform for the Augmentiverse's core use cases — persistent spatial holograms anchored to the physical world, with full environmental awareness. These devices map their physical surroundings and render digital content that stays precisely in place.
The reference device for spatial computing. Apple's second-generation Vision Pro, unveiled October 2025 with a price reduction (~€300 below the original). Runs visionOS 26, which adds AI-generated spatial scenes, 180°/360° immersive media support, and Apple Intelligence integration. A dual-chip (M5 + R1) design delivers standalone power with lag-free sensor processing. Eye, hand, and voice control with no physical controller required.
The most mature and widely deployed enterprise MR headset. Optical see-through waveguide displays with a holographic density of more than 2.5K light points per radian. Advanced hand and eye tracking, voice input, and a flip-up visor for switching between physical and holographic modes. The reference platform for industrial AR: surgery, manufacturing, field service, and training. Deeply integrated with Microsoft Azure and Dynamics 365 Remote Assist.
Magic Leap 2 is more powerful and lighter than HoloLens 2 at just 260g, with an AMD Zen 2 quad-core processor and RDNA 2 GPU. Its best-in-class 70° field of view is the widest of any enterprise AR headset. Includes a unique active dimming capability, allowing digital content to be more clearly visible in bright environments. Bundles controllers with hand and eye tracking. Runs Magic Leap OS (Android-based).
Samsung's first spatial computing headset, launched October 2025 at $1,800. Runs Android XR — Google's new headset platform built on OpenXR 1.1, making it the first major consumer headset with deep OpenXR integration at the OS level. A rich sensor stack (eye, hand, voice, six world-facing cameras, depth sensor) with Wi-Fi 7 and Bluetooth 5.4. Positioned as the most accessible sub-$2K MR headset with flagship-grade specs.
VR occupies the far virtual end of the Reality–Virtuality Continuum — the philosophical opposite of the Augmentiverse's AR-first approach. However, VR headsets remain important Augmentiverse tools for training simulation, content development workflows, and as stepping stones to MR. Modern VR platforms also increasingly support passthrough MR modes via camera-based see-through.
The market-leading standalone VR headset, now with full-colour MR passthrough enabled by depth sensors and cameras. Snapdragon XR2 Gen 3 processor. The most widely used platform for WebXR development and testing. Supports OpenXR, making it a key Augmentiverse development target for passthrough MR experiences.
Sony's second-generation VR headset, requiring a PlayStation 5. Features 4K HDR OLED displays, eye tracking for foveated rendering, and adaptive trigger haptics. Primarily a gaming platform but increasingly relevant for interactive media and simulation applications. Now also supports PC VR via adapter.
High-fidelity PC VR headset with a 130° field of view and 144Hz refresh rate. Its "Knuckles" Index Controllers track individual finger movements. The reference platform for high-end PC VR development via SteamVR. Widely used for Augmentiverse content development and prototyping before deployment to lighter platforms.
Development SDKs provide the foundational APIs for detecting physical surfaces, tracking movement, anchoring content, and delivering experiences to devices. The most important open-standard SDKs are OpenXR and WebXR — covered in the Standards section. Below are the major platform-specific and cross-platform tools.
Apple's flagship AR framework, deeply integrated with iPhone, iPad, and Vision Pro hardware. LiDAR depth sensing on Pro devices enables high-precision surface detection and environment occlusion. visionOS 26 added shared world anchors, 90Hz hand tracking, and expanded camera access APIs. Supports SceneKit, RealityKit, Unity, and Unreal Engine.
The most accessible AR platform in 2026, running on 87%+ of active Android devices. The Geospatial API uses 15 years of Google Maps and Street View data to place AR content at exact real-world coordinates — enabling true location-anchored Augmentiverse content. Terrain and rooftop anchors, Streetscape Geometry API, and AI-powered scene classification. Completely free with no per-call charges.
The cross-platform solution to AR fragmentation: a unified API that works across ARKit (iOS/visionOS), ARCore (Android), and OpenXR. AR Foundation 6.x targets Unity 6 and deploys to Apple Vision Pro via PolySpatial; QR code and marker tracking were added in version 6.4. Write once, deploy to iOS, Android, and visionOS. The most widely used AR development framework by developer count.
The only open-standard, browser-native path to AR without app installation. A W3C Candidate Recommendation shipped in Chrome, Edge, Firefox, and Samsung Internet. The WebXR AR Module enables immersive-ar sessions for smartphone and headset AR. Enables the Augmentiverse's vision of URL-addressable spatial content — no app store required. See the Standards section for full coverage.
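The core WebXR flow is small enough to sketch. Below is a minimal, hedged example of feature-detecting and requesting an `immersive-ar` session; `buildSessionInit` and `startArSession` are illustrative helper names, not part of the specification, and the `xr` object is passed in (it would be `navigator.xr` in a browser) so the logic can also be exercised outside a browser.

```javascript
// Build the options object for an immersive-ar session request.
// 'hit-test' lets the app raycast against detected real-world surfaces;
// 'anchors' and 'dom-overlay' are requested as optional so the session
// still starts on runtimes that lack them.
function buildSessionInit(overlayRoot) {
  const init = {
    requiredFeatures: ['hit-test'],
    optionalFeatures: ['anchors', 'dom-overlay'],
  };
  if (overlayRoot) init.domOverlay = { root: overlayRoot };
  return init;
}

// Feature-detect and start a session. In a real page, pass navigator.xr.
async function startArSession(xr, overlayRoot) {
  if (!xr || !(await xr.isSessionSupported('immersive-ar'))) {
    return null; // caller should fall back to a flat 3D view
  }
  return xr.requestSession('immersive-ar', buildSessionInit(overlayRoot));
}
```

The injected-`xr` shape is a deliberate design choice: it keeps the session logic unit-testable and makes the non-AR fallback path explicit rather than an afterthought.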
The gold standard for enterprise and industrial AR with market-leading marker-based and markerless tracking. Achieves millimetre-level precision for assembly guidance, maintenance, and quality inspection. Works with Unity and Unreal Engine. Particularly strong for model-target tracking — recognising and tracking 3D objects like industrial machinery. Widely deployed in manufacturing, healthcare, and training.
The most widely used social AR creation platform, enabling designers and developers to build AR filters, lenses, and experiences for Snapchat and Spectacles. Used by millions of creators globally. Features body tracking, face mesh, hand tracking, and world AR. Critical for understanding how Augmentiverse content will be consumed by mass-market audiences — Snap processes over 6 billion snaps per day.
3D engines are the creative and rendering heart of the Augmentiverse — the platforms where developers build the spatial experiences that run on AR glasses and MR headsets. Engine choice determines cross-platform reach, visual fidelity, and performance.
The dominant engine for AR and spatial computing by developer count. Unity 6 (6.3 LTS, December 2025) powers everything from mobile AR apps to Apple Vision Pro experiences via PolySpatial. AR Foundation 6.x provides a single API across ARKit, ARCore, and OpenXR. By far the most widely used engine for Augmentiverse content creation, with extensive community, documentation, and asset library support.
Epic's engine provides photorealistic rendering for high-end AR experiences, with strong C++ integration and industry-leading visual fidelity via Nanite and Lumen systems. Particularly strong for enterprise AR in architecture, manufacturing visualisation, and product design. The AR Template simplifies project setup with optimised rendering pipelines. Supports OpenXR natively.
The primary rendering libraries for WebXR-based Augmentiverse experiences. Three.js is the most widely used web 3D library; Babylon.js provides a higher-level framework with built-in physics and AR features. Both integrate with the WebXR Device API to deliver browser-native AR without installation. Essential for anyone building the web-accessible layer of the Augmentiverse.
The Augmentiverse requires high-quality 3D assets — models, scenes, animations, and textures — authored in open formats (glTF, OpenUSD) for maximum interoperability. These tools sit at the top of the content creation pipeline.
The leading open-source 3D creation suite. Native glTF 2.0 export, OpenUSD support, sculpting, rigging, animation, physically-based materials, and rendering. The most accessible professional-grade tool for Augmentiverse content creators. Blender's glTF export is one of the most compliant and feature-complete implementations available. Free, open-source, runs on all major platforms.
Industry-standard 3D modelling and animation packages, widely used in film, games, and enterprise spatial computing. Both export to glTF and USD formats. Maya is particularly strong for character rigging and animation; 3ds Max for architectural visualisation. Standard tools at large enterprises building Augmentiverse content for industrial training and product design.
The industry-standard suite for physically-based material authoring. Substance Painter for texture painting; Substance Designer for procedural material creation; Substance Sampler for photogrammetry-to-material workflows. All produce PBR materials fully compatible with glTF 2.0's PBR material model. Essential for creating visually accurate Augmentiverse assets that render consistently across devices.
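glTF 2.0's core material model is a small JSON structure, which is part of why Substance exports map to it so cleanly. As a sketch of that structure (the helper name and defaults are illustrative, not from any SDK):

```javascript
// Build a minimal glTF 2.0 material using the core pbrMetallicRoughness
// model: a base colour plus scalar metallic and roughness factors.
// Factors are clamped to [0, 1] as the specification requires.
function makePbrMaterial(name, { baseColor = [1, 1, 1, 1], metallic = 1, roughness = 1 } = {}) {
  const clamp01 = (v) => Math.min(1, Math.max(0, v));
  return {
    name,
    pbrMetallicRoughness: {
      baseColorFactor: baseColor,
      metallicFactor: clamp01(metallic),
      roughnessFactor: clamp01(roughness),
    },
  };
}
```

In production the factors are usually multiplied with textures (baseColorTexture, metallicRoughnessTexture) authored in Substance, but the scalar form above is the same parameterisation.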
NVIDIA's real-time 3D collaboration and simulation platform built entirely on OpenUSD. Enables multi-user simultaneous editing of shared 3D scenes, high-fidelity path tracing, and physics simulation. Critical for teams building large-scale Augmentiverse environments — digital twins of factories, cities, or facilities. A founding AOUSD member; Omniverse is the reference implementation for OpenUSD in production pipelines.
Apple's native authoring environment for visionOS spatial experiences. Works with USD-based scene files (USDA/USDZ) and integrates directly with Xcode. Visual scene assembly, physics, particles, and audio. The fastest path to publishing on Apple Vision Pro. Scenes are shared as USDZ, Apple's spatial content format built on OpenUSD, keeping content interoperable with the broader ecosystem.
The official Khronos reference tools for glTF asset validation and preview. The glTF Validator checks assets against the full glTF 2.0 specification and all extensions, essential quality control before deploying Augmentiverse content. The glTF Sample Viewer is the browser-based reference renderer; the Khronos glTF Viewer (iOS app) enables on-device preview in AR. All are open source under Apache 2.0.
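To give a feel for what validation catches, here is a deliberately tiny structural pre-check of a parsed glTF JSON document. It is an illustration only, not a substitute for the Khronos glTF Validator, which checks the full specification and should remain the real quality gate.

```javascript
// Lightweight sanity check for a parsed glTF JSON document. Catches two
// common authoring mistakes early: a wrong/missing asset version and
// nodes referencing mesh indices that do not exist.
function quickCheckGltf(gltf) {
  const errors = [];
  if (!gltf.asset || gltf.asset.version !== '2.0') {
    errors.push('asset.version must be "2.0"');
  }
  const meshCount = (gltf.meshes || []).length;
  for (const node of gltf.nodes || []) {
    if (node.mesh !== undefined && (node.mesh < 0 || node.mesh >= meshCount)) {
      errors.push(`node "${node.name || '?'}" references missing mesh ${node.mesh}`);
    }
  }
  return errors; // empty array means the quick check passed
}
```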
Real-time AR requires infrastructure designed to minimise latency between action and response. The Augmentiverse's persistent, spatially anchored content demands network architectures that keep compute close to the user — and platforms that manage spatial anchoring at cloud scale.
Azure Spatial Anchors enables persistent, shared spatial content that can be discovered by multiple users across HoloLens, iOS (ARKit), and Android (ARCore). Anchors are stored in the cloud and recalled across sessions — exactly what the Augmentiverse's persistent-presence principle requires. Deep integration with HoloLens 2 and Unity. Used by enterprises for shared MR collaboration.
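To make the persistence pattern concrete, here is a toy in-memory version of the concept: an anchor is an id plus a pose, saved once and recalled in a later session by any authorised client. This is purely illustrative, with invented names throughout; it is not the Azure Spatial Anchors API, which resolves anchors against cloud-side visual maps rather than a simple key-value store.

```javascript
// Toy anchor store illustrating persistent spatial anchoring:
// save a pose under a stable id, recall it in a later session.
// A real service replaces this Map with a cloud-backed spatial index.
class AnchorStore {
  constructor() {
    this.anchors = new Map();
  }
  save(id, pose, metadata = {}) {
    this.anchors.set(id, { pose, metadata, created: Date.now() });
    return id;
  }
  recall(id) {
    return this.anchors.get(id) || null; // null: anchor not yet shared/created
  }
  all() {
    return [...this.anchors.keys()];
  }
}
```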
Google's Geospatial API uses 15 years of Street View and Maps data to place AR content at precise real-world coordinates globally — without requiring manual anchor placement. Terrain anchors place content on real ground geometry; rooftop anchors target building rooftops. Geospatial depth sensing extends up to 65 metres. This is the closest existing infrastructure to the Augmentiverse's vision of globally addressable spatial content.
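Geospatial anchors are addressed by WGS84 coordinates, so a client often needs simple great-circle math to decide which anchors are worth resolving nearby. A sketch of that app-layer filtering (function names are illustrative; the Geospatial API itself handles the actual placement and tracking):

```javascript
// Great-circle (haversine) distance in metres between two WGS84 points.
function haversineMetres(lat1, lon1, lat2, lon2) {
  const R = 6371000; // mean Earth radius in metres
  const toRad = (d) => (d * Math.PI) / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Keep only anchors within `radius` metres of the user's position.
function nearbyAnchors(anchors, user, radius) {
  return anchors.filter(
    (a) => haversineMetres(a.lat, a.lng, user.lat, user.lng) <= radius
  );
}
```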
The ETSI Multi-Access Edge Computing (MEC) standard enables compute nodes at mobile network edges — physically close to the user's device. This reduces round-trip latency from 50–150ms (centralised cloud) to under 5ms — essential for real-time spatial tracking and rendering. AWS Wavelength, Azure Edge Zones, and Google Distributed Cloud all implement MEC-compatible infrastructure. The physical network layer of the Augmentiverse.
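The arithmetic behind that requirement is simple enough to write down. A minimal sketch, using the latency figures above (the function name and the 90Hz example are illustrative):

```javascript
// A display refreshing at `refreshHz` gives each frame 1000/refreshHz ms.
// A remote-rendering round trip only works if network RTT plus server-side
// processing stays inside that per-frame budget.
function fitsFrameBudget(rttMs, processingMs, refreshHz) {
  const frameBudgetMs = 1000 / refreshHz;
  return rttMs + processingMs <= frameBudgetMs;
}
```

At 90Hz the budget is roughly 11.1ms per frame, so a sub-5ms edge RTT leaves room for processing, while a 50ms-plus trip to a centralised region cannot fit even with zero processing time.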
How current and near-future devices score against the three Augmentiverse commitments: AR-first display, open standards support, and persistent spatial presence.
| Device | Category | AR-first display | OpenXR support | WebXR support | Spatial anchoring | Open standards | Price |
|---|---|---|---|---|---|---|---|
| Apple Vision Pro 2 | MR Headset | ✓ See-through | ◑ Via 3rd party | ✓ visionOS 26 | ✓ World anchors | ◑ USDZ / OpenUSD | ~$3,199 |
| Microsoft HoloLens 2 | MR Headset | ✓ Optical waveguide | ✓ Native | ◑ Via browser | ✓ Azure Spatial | ✓ OpenXR / glTF | ~$3,500 |
| Magic Leap 2 | MR Headset | ✓ Optical + dimming | ✓ Native | ◑ Limited | ✓ Spatial mapping | ✓ OpenXR | ~$3,299 |
| Samsung Galaxy XR | MR Headset | ✓ Camera passthrough | ✓ Android XR native | ✓ Android XR | ✓ ARCore anchors | ✓ OpenXR 1.1 | ~$1,800 |
| Meta Quest 3 | VR + passthrough | ◑ Camera passthrough | ✓ Native | ✓ Browser | ◑ Limited | ✓ OpenXR | $499 |
| Xreal One Pro | AR Glasses | ✓ Optical display | ◑ Via Android | ◑ Limited | ◑ Basic anchoring | ◑ Partial | ~$699 |
| Ray-Ban Meta Display | AR Glasses | ◑ Monocular display | ✗ Proprietary | ✗ No | ✗ No | ✗ Closed platform | $799 |
| Snap Spectacles Gen 5 | Developer | ✓ 46° FOV display | ✗ No | ✗ No | ◑ World tracking | ✗ Snap-specific | $99/mo |
| Smartphone (ARCore/ARKit) | Mobile | ◑ Camera overlay | ◑ Partial | ✓ WebXR in browser | ◑ Session-limited | ✓ ARKit/ARCore | Existing |
Key hardware and platform milestones expected in the near-to-medium term. The most significant inflection point remains mass-market AR glasses with comfortable all-day wearability — projected around 2028–2030.