About Forerunner AI
From a single photo
to a world you can walk through.
Forerunner AI is a frontier research lab building the full stack of generative spatial intelligence — from on-device vision models and real-time 3D capture to serverless GPU reconstruction and immersive delivery on iPhone and Apple Vision Pro.
01
Mission
Every photograph contains a world. We build the AI that reconstructs it. Point your camera at any space — a room, a building, a landscape — and our pipeline extracts geometry, depth, and lighting to produce a photorealistic 3D Gaussian splat you can orbit, explore, and walk through.
Our platform runs an on-device vision-language model for instant scene understanding, offloads heavy reconstruction to serverless GPUs, and delivers optimized 3D worlds to iPhone, Apple Vision Pro, and the web in seconds. One capture, every platform.
We believe the next computing platform is spatial. Our job is to make it as easy to create a 3D world as it is to take a photo.
02
The Pipeline
CAPTURE
AR-guided orbital capture with automatic keyframing. The app detects your subject, tracks camera pose in real time, and tells you when coverage is complete. On-device segmentation and object detection run locally via Core ML — no cloud round-trip needed for scene understanding.
RECONSTRUCT
Frames are uploaded to serverless GPUs where multiple reconstruction paths run in parallel — single-image Gaussian splat synthesis in under a second, multi-view 3D reconstruction with semantic segmentation, and dense monocular SLAM for room-scale environments. The best output is selected automatically.
DELIVER
Generated worlds are compressed and optimized for each target platform. Metal-based Gaussian splat rendering on iOS, RealityKit immersive spaces on Vision Pro, and WebGL for browser sharing. One pipeline, instant delivery everywhere.
03
Research
We publish our work openly. Forerunner research spans mixture-of-experts architectures, reinforcement learning dynamics, neural 3D reconstruction, and spatial AI — tested on models with hundreds of billions of parameters.
Our published work includes the first causal analysis of how reinforcement learning reshapes expert routing in production MoE models: gate-swap interventions across 229B- and 671B-parameter architectures reveal that routing specialization imposes a measurable tax on general-domain performance.
VIEW PUBLICATIONS
04
Technology
Gaussian Splatting
Photorealistic 3D reconstruction from ordinary photos. Our pipeline generates, trains, and renders 3D Gaussian splats optimized for real-time exploration on mobile.
On-Device Vision AI
A bundled vision-language model runs entirely on-device for instant scene understanding, object detection, and semantic segmentation — no cloud latency.
Serverless GPU Inference
Heavy reconstruction runs on serverless GPUs. Multiple reconstruction paths execute in parallel and auto-select the best result.
AR Orbital Capture
ARKit-powered guided capture with real-time pose tracking, automatic keyframing across elevation bands, and intelligent coverage detection.
Spatial Computing
Native RealityKit rendering on Vision Pro, Metal-based splat rendering on iPhone, and WebGL delivery for the browser. One pipeline, every platform.
MoE Research
Production-scale research on mixture-of-experts architectures with hundreds of billions of parameters, informing efficient, specialized AI across the pipeline.
05
Company
FOUNDED
2024
HEADQUARTERS
Mountain View, CA
CONTACT
go@forerunner.ai
Build with Forerunner AI
Whether you're interested in our research, want a demo, or are looking to partner — we'd love to hear from you.