What is ARKit 3 and why is it a breakthrough AR technology?
Apple recently revealed a brand-new set of tools at its annual WWDC developer conference. ARKit 3 is the latest version of the company's suite of developer tools for creating AR applications on iOS.
ARKit was first introduced in 2017 as a suite of tools that enable developers to build AR apps easily. Read on to find out what ARKit 3 is about, how it differs from its previous versions, and what value it may bring to businesses.
What’s new in ARKit 3?
Here's how ARKit 3 works. Using computer vision, the tools can understand the position of people in a scene. And by knowing where a person is, the system can accurately composite virtual objects relative to the real people in that scene: objects can be rendered in front of or behind a person, depending on which is closer to the camera.
How does ARKit 3 differ from its preceding versions?
In previous versions of ARKit, virtual objects would be shown 'on top' of the person in the scene, regardless of how close they were to the camera. That could break the illusion of augmented reality by presenting conflicting depth cues.
By knowing where the person is located in the scene and how they are moving, ARKit 3 is able to track a virtual version of the person's body and then use it as input for the AR app. This body tracking capability can also be used to translate the user's movements into avatar animations or to enable interaction with objects in the scene.
ARKit – key advantages
An innovative approach to people occlusion
ARKit 3 offers developers a new way to deliver more realistic experiences to users. AR content can realistically pass behind and in front of people in the real world. As a result, AR experiences are more immersive. On top of that, ARKit allows developers to bring green screen-style effects to almost any environment.
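In code, people occlusion comes down to opting into a frame semantic on the session configuration. Here's a minimal sketch, assuming an existing ARKit session (the function name is illustrative):

```swift
import ARKit

// A minimal sketch of enabling people occlusion in ARKit 3.
// The .personSegmentationWithDepth frame semantic lets virtual content
// pass both in front of and behind people based on estimated depth.
func makePeopleOcclusionConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()

    // People occlusion requires recent hardware, so check support first.
    if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
        configuration.frameSemantics.insert(.personSegmentationWithDepth)
    }
    return configuration
}

// Usage: session.run(makePeopleOcclusionConfiguration())
```

For the green screen-style effect mentioned above, `.personSegmentation` (without depth) can be used instead, which always composites people in front of virtual content.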
New ways of capturing motion
What exactly is motion capture? It gives developers a way to use the user's movement and position in real time as input for AR experiences. Here's how it works: ARKit 3 can capture a person's motion in real time with a single camera. It understands body position and movement as a series of bones and joints, which is why it can use both motion and poses as input to an AR experience. All in all, ARKit 3 places users at the center of AR.
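The bones-and-joints model described above surfaces in ARKit 3 as a body anchor with a skeleton. A minimal sketch of reading joint data from a body tracking session might look like this (the delegate class name is an assumption):

```swift
import ARKit

// A sketch of ARKit 3 motion capture. Running an ARBodyTrackingConfiguration
// makes each tracked person appear as an ARBodyAnchor, whose skeleton exposes
// per-joint transforms that can drive an avatar or other AR content.
final class BodyTrackingDelegate: NSObject, ARSessionDelegate {
    func start(on session: ARSession) {
        guard ARBodyTrackingConfiguration.isSupported else { return }
        session.run(ARBodyTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let bodyAnchor as ARBodyAnchor in anchors {
            // Transform of a named joint, relative to the body anchor.
            if let headTransform = bodyAnchor.skeleton.modelTransform(for: .head) {
                print("Head position:", headTransform.columns.3)
            }
        }
    }
}
```

To animate an avatar, an app would typically map each skeleton joint's transform onto the corresponding bone of a rigged 3D model every frame.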
Simultaneous use of the front and back camera
Developers can now use face and world tracking simultaneously with the front and back cameras. That way, users can interact with AR content in the back camera view using just their face.
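Enabling this is a one-line addition to a world tracking configuration. A hedged sketch, assuming a device with a TrueDepth front camera:

```swift
import ARKit

// A sketch of ARKit 3's simultaneous front/back camera support.
// With userFaceTrackingEnabled, a world tracking session (back camera)
// also delivers ARFaceAnchor updates from the front camera, so facial
// expressions can drive content in the back camera view.
func makeCombinedConfiguration() -> ARWorldTrackingConfiguration? {
    guard ARWorldTrackingConfiguration.supportsUserFaceTracking else {
        return nil // Not every device supports combined tracking.
    }
    let configuration = ARWorldTrackingConfiguration()
    configuration.userFaceTrackingEnabled = true
    return configuration
}
```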
Live collaborative sessions
Live collaborative sessions are another new feature of ARKit 3 that makes it a must-have for developers looking to create unforgettable AR experiences. Collaborative sessions between several people allow building a shared world map, which accelerates the process of developing AR experiences. It also enables users to enjoy shared AR experiences – for example, multiplayer games.
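Under the hood, a collaborative session periodically emits collaboration data that the app relays to peers over its own network layer. A minimal sketch (the `send` step is a hypothetical placeholder for the app's networking code):

```swift
import ARKit

// A sketch of an ARKit 3 collaborative session. With isCollaborationEnabled,
// the session outputs ARSession.CollaborationData that the app must forward
// to other participants; Apple suggests MultipeerConnectivity as a transport,
// but any networking layer works.
final class CollaborationDelegate: NSObject, ARSessionDelegate {
    func startCollaboration(on session: ARSession) {
        let configuration = ARWorldTrackingConfiguration()
        configuration.isCollaborationEnabled = true
        session.run(configuration)
    }

    func session(_ session: ARSession,
                 didOutputCollaborationData data: ARSession.CollaborationData) {
        // Serialize and send to peers; the transport is up to the app.
        let encoded = try? NSKeyedArchiver.archivedData(
            withRootObject: data, requiringSecureCoding: true)
        _ = encoded // send(encoded) — hypothetical networking helper
    }
}
```

On the receiving side, each peer decodes the data and passes it back to its own session via `session.update(with:)`, which is how the shared world map converges across devices.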
Apple showed off this feature in an on-stage demo of Minecraft Earth – be sure to watch it!
Apart from these features, ARKit 3 offers other improvements, for example, detecting up to 100 images at a time, plane detection powered by machine learning, improved 3D object detection, and more.
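Image detection is configured by registering reference images on the session configuration. A sketch, where "AR Resources" is an assumed asset-catalog group name:

```swift
import ARKit

// A sketch of ARKit 3 image detection. detectionImages registers the
// reference images to look for, and maximumNumberOfTrackedImages caps how
// many detected images are tracked simultaneously.
func makeImageDetectionConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if let referenceImages = ARReferenceImage.referenceImages(
        inGroupNamed: "AR Resources", bundle: nil) {
        configuration.detectionImages = referenceImages
        configuration.maximumNumberOfTrackedImages = 4
    }
    return configuration
}
```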
Together with ARKit 3, Apple also announced the release of RealityKit and Reality Composer. These tools were designed to help developers create rich and engaging AR experiences easily.
So while it might seem that real-world AR implementations are few and far between, these new tools are bound to make AR more accessible and widespread soon.