Apple released several new augmented reality tools and technologies for software makers during its annual WWDC conference this week. These technologies could be vital if Apple releases an augmented reality headset or goggles in the next few years.
Apple has never confirmed plans to launch augmented reality hardware, but it could reportedly announce a headset as early as this year. Facebook, Snap and Microsoft are also working on devices that can understand the world around them and display information in front of the user’s eyes.
To succeed with an augmented reality device, Apple will have to give people solid reasons to use it, and that comes down to useful software, just as apps like Maps, Mail, YouTube and the Safari mobile browser helped stimulate adoption of the original iPhone. Getting developers building augmented reality software now increases the chances that one or more “killer apps” will be available at launch.
Apple didn’t spend much time on augmented reality in its WWDC keynote on Monday, but it announced several updates during the more technical parts of the conference that show it remains an important long-term initiative for Apple. CEO Tim Cook has said that AR is “the next big thing.”
“From a high level, this year’s, and perhaps even next year’s, WWDC event will be a lull before a storm of Apple innovation,” Loup Ventures founder and longtime Apple analyst Gene Munster wrote in an email this week. “Apple’s continued intense development related to new product categories around wearables, augmented reality and transportation is out of sight today.”
What Apple announced
During the week-long conference, Apple briefed developers on improved tools that can build 3D models, use a device’s camera to understand hand gestures and body language, and add quick AR experiences to the web, along with a 3D-content standard Apple strongly endorses and an intriguing new sound technology that works like surround sound for music and other audio.
Here are some of the augmented reality announcements Apple made and how they are paving the way for its grand ambitions:
Object Capture. Apple introduced application programming interfaces, or software tools, that will let apps create 3D models. 3D models are essential to AR because they are what the software places in the real world. If an app doesn’t have a precisely detailed 3D file of a shoe, it can’t use Apple’s computer vision software to place that shoe on a table.
Object Capture is not an app. Instead, it is a technology that lets a camera, like the iPhone’s, take multiple photos of an object and then stitch them into a 3D model that can be used inside software in minutes. Previously, detailed object scans required elaborate and expensive camera setups.
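For developers, the process boils down to pointing Apple’s new RealityKit photogrammetry API at a folder of photos and requesting a model file. A minimal sketch, assuming a Mac with a hypothetical folder of photos at the paths shown:

```swift
import RealityKit

// Hypothetical input folder of photos and output path for the 3D model.
let photos = URL(fileURLWithPath: "/tmp/ShoePhotos", isDirectory: true)
let model = URL(fileURLWithPath: "/tmp/shoe.usdz")

// Object Capture runs on macOS via a PhotogrammetrySession.
let session = try PhotogrammetrySession(input: photos)
try session.process(requests: [
    .modelFile(url: model, detail: .reduced)  // lower detail = faster, smaller file
])

// Progress and completion arrive as an async stream of outputs.
Task {
    for try await output in session.outputs {
        if case .processingComplete = output {
            print("Model written to \(model.path)")
        }
    }
}
```

The resulting .usdz file can then be dropped into an AR scene or viewed with AR Quick Look.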
Eventually, third-party developers like Unity, one of the leading makers of AR engines, will include it in their software. For now, it is likely to be most widely used in e-commerce.
RealityKit 2. Object Capture is just one part of a significant update to RealityKit, Apple’s suite of software tools for creating AR experiences. Alongside Object Capture, RealityKit 2 includes many small improvements that make life easier for app creators, including improved rendering options, a way to organize images and other assets, and new tools for building player-controlled characters inside augmented reality scenes.
The new city navigation feature in Apple Maps.
ARKit 5. ARKit is another set of software tools for creating augmented reality experiences, but it’s more focused on figuring out where to put digital objects in the real world. This is the fifth major version of Apple’s software since it first came out in 2017.
This year’s version includes something called “location anchors,” which let software makers place augmented reality experiences tied to map locations in London, New York, Los Angeles, San Francisco and a handful of other U.S. cities. In a developer video session, Apple said it is using the tool to create AR address overlays in Apple Maps, a potentially useful scenario for a head-mounted AR device.
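In ARKit terms, a location anchor is an ARGeoAnchor pinned to a latitude and longitude. A minimal sketch, assuming an existing ARView named `arView` and a hypothetical coordinate, with an availability check since geo tracking only works in supported cities:

```swift
import ARKit

// Hypothetical map coordinate to anchor AR content to.
let coordinate = CLLocationCoordinate2D(latitude: 37.7954, longitude: -122.3937)

// Geo tracking is only available in certain cities, so check first.
ARGeoTrackingConfiguration.checkAvailability { available, error in
    guard available else { return }

    // Run a geo-tracking session, then drop an anchor at the coordinate.
    arView.session.run(ARGeoTrackingConfiguration())
    let anchor = ARGeoAnchor(coordinate: coordinate)
    arView.session.add(anchor: anchor)
}
```

Once the anchor is tracked, any virtual content attached to it appears at that real-world street location for every user.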
AI to understand hands, people and faces. While Apple’s machine learning and artificial intelligence tools are not directly tied to augmented reality, they represent capabilities that will matter for a computer interface that works in 3D space. Apps can call Apple’s Vision framework to detect people, faces and poses through the iPhone’s camera. Apple’s computer vision software can now identify objects in images, including text on signs, and can search for things inside photos, such as a dog or a friend.
Combined with other Apple tools, these AI capabilities can apply effects similar to Snap’s filters. One WWDC session this year even covers how to identify how a hand is posing or moving, laying the groundwork for advanced hand gestures, which are a big part of the interface on current AR headsets like Microsoft’s HoloLens.
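The hand-pose capability comes from the Vision framework’s hand pose request, which returns the positions of individual finger joints. A minimal sketch, assuming a camera frame is already available as a `pixelBuffer` (for example, from ARKit or AVFoundation):

```swift
import Vision

// Ask Vision to find hand poses in the frame.
let request = VNDetectHumanHandPoseRequest()
request.maximumHandCount = 2

let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
try handler.perform([request])

// Each observation exposes recognized joints, like the index fingertip.
if let hand = request.results?.first {
    let indexTip = try hand.recognizedPoint(.indexTip)
    if indexTip.confidence > 0.5 {
        // Location is in normalized image coordinates (0–1).
        print("Index fingertip at \(indexTip.location)")
    }
}
```

Tracking joints like these frame to frame is what makes it possible to recognize pinches, points and other gestures without a controller.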