Niantic Spatial SDK v3.15 now supports Quest 3 and Quest 3S.
The SDK brings advanced mixed reality capabilities to developers: centimeter-accurate outdoor VPS, long-range live scene meshing, and semantic segmentation.
All of this is possible because Meta opened up access to the Quest 3 and Quest 3S passthrough cameras earlier this year, allowing Niantic to take advantage of that access to run the computer vision models it has been developing for around a decade.
Niantic sells Pokémon Go to Saudi Arabia to fund the spatial AI transition
Niantic splits its spatial AI arm from Pokémon Go and sells the latter to a Saudi Arabian-owned games company.
This release comes five months after Niantic, best known as the developer of Pokémon Go, was essentially split in two. The Niantic games business, including Pokémon Go, was sold to the Saudi Arabian-owned Scopely, while the spatial technology side of the business was spun off into a new company called Niantic Spatial.
Here's a breakdown of the mixed reality capabilities the Niantic Spatial SDK offers Quest developers.
Centimeter-Accurate VPS
Everyone knows what the Global Positioning System (GPS) is. A central part of modern life, it's how we navigate the world, and its inclusion in smartphones created entire new software categories, such as on-demand transport and delivery. However, GPS is typically accurate only to around 1 meter under ideal conditions, and in urban environments where buildings block the signal, this can degrade to dozens of meters as you hopelessly watch the little blue dot on your screen bounce around the neighborhood.
Google's illustration of VPS.
A visual positioning system (VPS), on the other hand, is a software system that uses computer vision to determine its location, identifying unique visual patterns in a live camera view and comparing them against existing high-fidelity 3D maps of the world.
VPS therefore only works in areas whose sufficiently persistent, dense physical geometry has been 3D mapped. Within those regions, however, it can localize position with centimeter accuracy.
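Niantic's actual localization pipeline is proprietary, but conceptually VPS reduces to estimating the camera pose that best aligns observed geometry with a prior 3D map. A minimal numpy sketch using the classic Kabsch algorithm, assuming idealized 3D-to-3D correspondences are already known (a real system matches 2D image features against the map and solves a perspective-n-point problem instead):

```python
import numpy as np

def estimate_pose(map_pts, observed_pts):
    """Find rotation R and translation t such that observed ≈ R @ map + t,
    given N matched 3D points (Kabsch algorithm via SVD)."""
    cm = map_pts.mean(axis=0)            # centroid of the prior-map points
    co = observed_pts.mean(axis=0)       # centroid of the observed points
    H = (map_pts - cm).T @ (observed_pts - co)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ cm
    return R, t
```

With enough well-matched points, the recovered pose pins the device to the map far more tightly than a GPS fix ever could, which is where the centimeter-level claim comes from.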
Google Maps has had a VPS-powered walking navigation feature for six years, leveraging Google's Street View data, and Google offers VPS to smartphone app developers as part of ARCore.
The Niantic Spatial SDK on Quest 3
VPS becomes far more interesting, though, when used with mixed reality headsets and AR glasses. Niantic's VPS also runs on smartphones, but it now uniquely supports Meta's Quest 3 and Quest 3S, as well as Magic Leap 2.
Niantic's VPS maps cover over 1 million locations, built using scans from players of games such as Pokémon Go and from its Scaniverse scanning app. Niantic claims VPS offers "industry-leading accuracy" and also provides 3D meshes of scanned public locations.
The first 10,000 VPS API calls each month are free; beyond that, pricing is around $0.01 per call.
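Assuming a simple free-tier-then-per-call billing model (Niantic's exact terms aren't spelled out here), the monthly VPS cost works out as:

```python
def vps_monthly_cost(calls, free_calls=10_000, price_per_call=0.01):
    """Estimate monthly VPS cost in dollars: the first 10,000 calls
    are free, then roughly $0.01 per call after that."""
    return max(0, calls - free_calls) * price_per_call
```

So an app making 25,000 localization calls in a month would pay for the 15,000 calls above the free tier, roughly $150.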
Live Scene Meshing
Quest 3 and Quest 3S scan your room to generate a 3D scene mesh, which mixed reality apps can use to interact with physical geometry and reskin your environment. However, there are two major issues with Meta's current system.
The first problem is that you have to run the scan in the first place. Instead of launching straight into the app, users must look around or walk around their space for anywhere from about 20 seconds to a few minutes, depending on the size and shape of the room, which adds significant friction.
The other problem is that these scene mesh scans only represent the moment the scan was performed. If furniture is moved, or objects are added to or removed from the room afterwards, those changes won't be reflected in mixed reality unless the user manually updates the scan. And if someone was standing in the room with you during the scan, for example, their figure will be baked into the scene mesh.
A developer solved one of the biggest problems in mixed reality on Quest 3
The Laser Tag developer implemented continuous scene meshing on Quest 3 and Quest 3S, eliminating the need for the room setup process and avoiding stale scene meshes.

Back in May, I highlighted how Laser Tag developer Julian Triveri used Meta's Depth API to implement continuous scene meshing on Quest 3 and Quest 3S. As mentioned in that article, Triveri has published the source code for his technique on GitHub for other developers to use.
However, the Depth API only works out to about 4 meters, while the live meshing in the Niantic Spatial SDK supports long-range meshing.
The Niantic Spatial SDK on Quest 3
It uses Niantic's proprietary computer vision algorithms on the passthrough camera views to build the mesh.
The Niantic Spatial SDK on Quest 3
Niantic's approach is much better suited to outdoor use, and works well in conjunction with VPS.
Semantic Segmentation and Object Detection
The Niantic Spatial SDK also allows you to identify and label objects and surfaces in real time.
The Niantic Spatial SDK on Quest 3
The object detection looks similar to Meta's passthrough camera access developer samples, but the segmentation appears to be more advanced.
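Niantic hasn't published details of its segmentation model, but to show the kind of output an app consumes, here is a sketch that turns per-pixel class logits (the raw scores a segmentation network emits) into labeled masks. The class names are hypothetical, purely for illustration:

```python
import numpy as np

CLASSES = ["sky", "building", "ground", "foliage"]  # hypothetical label set

def label_frame(logits):
    """logits: (num_classes, H, W) array of per-pixel class scores.
    Returns the argmax label map plus a per-class boolean mask dict,
    which an app can use to, say, anchor content only to 'ground'."""
    label_map = logits.argmax(axis=0)
    masks = {name: label_map == i for i, name in enumerate(CLASSES)}
    return label_map, masks
```

Running this on every passthrough frame is what lets an app treat "the ground" or "a building" as addressable surfaces in real time rather than anonymous geometry.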
The Niantic Spatial SDK on Quest 3
Pricing, And What's Next?
Niantic's VPS feature is priced per API call, while the Niantic Spatial SDK's on-device computer vision features have no usage cap but cost around $0.10 per monthly active user (MAU).
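Putting the two pricing models together (and again assuming a straightforward billing structure, since the exact terms aren't detailed here), a rough monthly bill for an app looks like:

```python
def sdk_monthly_bill(mau, vps_calls=0, mau_price=0.10,
                     free_calls=10_000, call_price=0.01):
    """Rough monthly cost in dollars: ~$0.10 per monthly active user for
    the on-device features, plus ~$0.01 per VPS call past the free 10,000."""
    return mau * mau_price + max(0, vps_calls - free_calls) * call_price
```

For example, an app with 1,000 monthly active users making 12,000 VPS calls would owe about $100 for usage plus $20 for the 2,000 billable calls, around $120 total.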
We expect to see the Niantic Spatial SDK's live meshing adopted by many Quest 3 mixed reality apps, and its VPS feature makes Quest 3 headsets suitable for public outdoor experiences, even though the hardware wasn't designed for that.
Niantic says it is "continuing to introduce new features", including expanded support for additional devices, improved performance, and better occlusion and persistent scene understanding, and notes that developer feedback will shape what's coming next.