How will we use Apple’s AR glasses — and with what UI?
When they appear, Apple will want to ensure the experience of using Apple glasses is as natural and inevitable as using any other Apple product. And we’re beginning to reach a position where we can speculate about how these things will work.
What they aren’t
Let’s get something out of the way first. Apple’s AR glasses were never going to be devices you controlled with some fiddly remote, even an iPhone.
Apple will want the user experience to be as friction-free as possible and will iterate and improve that experience over time. Everything about Apple’s history tells us that it will want to build an interface that delivers a natural sense of connection.
It will want to create a brand-new language of human interface design. It’s possible that the plan for Apple Glass is to create the most human interface of all, with you at the center of the system itself.
How Apple thinks
For this opinion, I’m guided by how Apple thinks and by a series of recent rumors. For example, it was recently claimed these glasses will host multiple cameras and be controlled in part by gesture and movement.
How might that work?
To get some sense of how Apple thinks about user interface design, consider three things:
- GUI: Apple made the graphical user interface, controlled by keyboard and mouse, mainstream. It is the human interface that makes the most sense when working on a computer, and virtually every personal computer is now controlled this way.
- Touch: If you were present when Steve Jobs introduced the iPhone in 2007, you will recall his argument that the most logical user interface for the smartphone wasn’t a stylus, but your finger. Every smartphone is now controlled by touch.
- Digital Crown: Apple introduced the Digital Crown with the Apple Watch. It gives users a physical interaction with their device that also echoes classic watch design. That moving part feels natural and inevitable as a result. Not every smartwatch has one — yet — but Apple leads the industry.
Also consider Apple’s extensive catalog of accessibility designs and its equally extensive work using Siri, both of which offer profound improvements to many users.
Making a future that feels human
At their core, each of these user interfaces reflects Apple’s determination to create ways of working with technology that feel completely natural. Former design chief Jony Ive often discussed the company’s quest for such inevitability.
Even when the company doesn’t quite get it right (the Touch Bar, for example), it will iterate and improve its systems until it creates an interface so simple that users just flow with it.
That quest means Apple will try to achieve this with its glasses. It will not want to create a product you need an engineering degree to use; nor will it want to create a gimmick.
It will want the user experience to be smooth, seamless, as if things were always this way.
So, what’s inevitable in eyewear?
I think that kind of profound sense of inexorable purpose means you start with the obvious and build from that core experience. So, when you wear ordinary glasses, what do you do?
I think most of us look.
We use our eyes, move them about, blink, stare, and focus on different things. We look nearby and we look far away. We read. We watch. We pay attention. Sometimes we even like to stare idly into space and listen to another deadline whooshing by.
Those are the things we do. We also use spectacles to improve our vision, something these devices seem likely to offer as well.
So how do these familiar actions translate into a user interface for glasses?
Here are some basic assumptions:
- The glasses will be smart enough to recognize the direction of your gaze.
- They will recognize what item or items you are looking at.
- They will know if you are focusing on a distant object, or on something close.
- They may discern the difference between the pages of a book and a film on TV.
- They aim to enhance the experience of whatever it is you are looking at.
What might they do?
Imagine you are on vacation in a country with a different language. You look at a sign in the distance.
Sensors in your glasses will identify the direction and focus of your gaze, while outward-facing sensors will examine that object and seek to improve your experience of looking at it. In the case of that sign, the glasses may zoom in to make it clearer, and perhaps automatically translate the words it carries. That translation may then be presented as some form of overlay on the lenses, seen only by you.
Unpack what took place during that task and you see it consists of multiple processes:
- Identifying where you are looking.
- Recognizing the object or objects you are looking at.
- Determining distance, focus, and need.
- Augmenting what you see with zoom.
- Augmenting what you see with translation.
All these operations rely on a vast amount of machine learning and vision intelligence running on the device, which means the glasses are certain to carry their own built-in processors.
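To make that pipeline a little more concrete, here is a minimal Swift sketch of the recognition-and-translation step, assuming Apple builds on its existing Vision framework. VNRecognizeTextRequest is a real, shipping API; the gaze region, the translation closure, and the overlay callback are hypothetical stand-ins for parts of the system Apple has not described.

```swift
import CoreGraphics
import CoreVideo
import Vision

// Sketch only: recognize the text the wearer is looking at, translate it,
// and hand the result to whatever draws the lens overlay.
// `gazeRegion`, `translate`, and `overlay` are assumptions, not Apple APIs.
func augmentSign(in frame: CVPixelBuffer,
                 gazeRegion: CGRect,                       // normalized rect from eye tracking
                 translate: @escaping (String) -> String,  // placeholder translation service
                 overlay: @escaping (String) -> Void) {    // draws text on the lens, seen only by you
    let request = VNRecognizeTextRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNRecognizedTextObservation] else { return }
        // Join the best candidate string from each block of text Vision found.
        let recognized = observations
            .compactMap { $0.topCandidates(1).first?.string }
            .joined(separator: " ")
        guard !recognized.isEmpty else { return }
        // Translate the sign and present it to the wearer.
        overlay(translate(recognized))
    }
    request.recognitionLevel = .accurate
    // Only look where the wearer is looking.
    request.regionOfInterest = gazeRegion

    let handler = VNImageRequestHandler(cvPixelBuffer: frame, options: [:])
    try? handler.perform([request])
}
```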
That’s just one example of what may happen.
How would these tasks run? Automatically, or on command?
How do you command glasses?
We don’t really command eyewear. Mostly we just put our glasses or goggles on and take them off again.
Where are the control interfaces in the existing exchange?
While it seems inescapable that some commands will be made using the stems of the glasses (like the stems of AirPods), how many commands can you issue that way? Not many, I think, which suggests the need for certain modifying actions.
The need for modifying actions suggests additional control interfaces, perhaps including eye direction, blinking, gesture, voice, and touch. As each of these interactions may add complexity to what is going on, users will need some way to track the commands they are making. One way in which this could work might be via a discreet control interface, like a virtual Clickwheel, presented on your lens.
In use, you might tap your glasses stem twice to enter control mode, point or gaze at an object to focus on it, and then scroll through available commands using the on-lens Clickwheel via touch, gesture, or eye movement.
That kind of system would support complex commands.
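To show how few moving parts such a scheme actually needs, here is a minimal sketch of the state machine that the double-tap, gaze, and Clickwheel flow implies. Every type, input, and command name in it is hypothetical; Apple has published no API for its glasses.

```swift
// Hypothetical inputs a pair of glasses might report.
enum GlassesInput {
    case stemDoubleTap           // toggles control mode
    case gaze(target: String)    // whatever gaze tracking says the wearer is looking at
    case wheelScroll(steps: Int) // movement on the virtual on-lens Clickwheel
    case confirm                 // e.g. a single stem tap or a deliberate blink
}

struct ControlSession {
    private(set) var isActive = false
    private(set) var target: String?
    private(set) var commandIndex = 0
    let commands = ["Zoom", "Translate", "Share", "Save"]   // illustrative only

    // Feed inputs in; get back a command to run once one is confirmed.
    mutating func handle(_ input: GlassesInput) -> String? {
        switch input {
        case .stemDoubleTap:
            isActive.toggle()                 // enter or leave control mode
            target = nil; commandIndex = 0
        case .gaze(let object) where isActive:
            target = object                   // focus follows the wearer's gaze
        case .wheelScroll(let steps) where isActive:
            commandIndex = ((commandIndex + steps) % commands.count + commands.count) % commands.count
        case .confirm where isActive:
            if let target = target {
                return "\(commands[commandIndex]): \(target)"
            }
        default:
            break
        }
        return nil
    }
}

// Usage, matching the flow described above:
var session = ControlSession()
_ = session.handle(.stemDoubleTap)                    // tap the stem twice to enter control mode
_ = session.handle(.gaze(target: "street sign"))      // look at an object to focus on it
_ = session.handle(.wheelScroll(steps: 1))            // scroll the on-lens wheel to "Translate"
print(session.handle(.confirm) ?? "no command")       // "Translate: street sign"
```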
Another approach might be to use gestures: clenched fist, point, open hand, move hand left, move hand right – all very Minority Report – building on Apple’s existing work and its Vision framework.
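As a taste of how the gesture route could lean on that existing work, here is a rough Swift sketch using Vision’s hand-pose detection. VNDetectHumanHandPoseRequest and its joint names are real APIs that ship on Apple platforms today; the crude open-hand/fist/point classification and its 0.25 threshold are illustrative guesses, not anything Apple has shown for its glasses.

```swift
import CoreGraphics
import CoreVideo
import Vision

// Sketch: classify a single hand in a camera frame as one of a few gestures.
func classifyHandGesture(in frame: CVPixelBuffer) -> String? {
    let request = VNDetectHumanHandPoseRequest()
    request.maximumHandCount = 1

    let handler = VNImageRequestHandler(cvPixelBuffer: frame, options: [:])
    guard (try? handler.perform([request])) != nil,
          let hand = request.results?.first else { return nil }

    // Compare each fingertip to the wrist: extended fingers sit further away
    // in the normalized image space Vision reports.
    guard let wrist = try? hand.recognizedPoint(.wrist),
          let index = try? hand.recognizedPoint(.indexTip),
          let middle = try? hand.recognizedPoint(.middleTip),
          let ring = try? hand.recognizedPoint(.ringTip),
          let little = try? hand.recognizedPoint(.littleTip) else { return nil }

    func distance(_ a: VNRecognizedPoint, _ b: VNRecognizedPoint) -> CGFloat {
        let dx = a.location.x - b.location.x
        let dy = a.location.y - b.location.y
        return (dx * dx + dy * dy).squareRoot()
    }

    let extendedTips = [index, middle, ring, little].filter { distance($0, wrist) > 0.25 }

    switch extendedTips.count {
    case 4: return "open hand"
    case 1 where distance(index, wrist) > 0.25: return "point"
    case 0: return "clenched fist"
    default: return nil
    }
}
```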
All these approaches (alone or combined) would provide the kind of rich UI developers need to build complex applications for these devices.
Apple will, of course, want its glasses to support third-party applications.
That need means it must work towards providing a user interface as capable as a mouse and keyboard or Multitouch. I believe Apple wants to create a platform opportunity with these devices (it was recently claimed they will be independent devices that do not require an iPhone), which means they must host their own advanced set of controls.
The UI must feel so utterly inevitable that once you start wearing the glasses, you soon forget how you ever lived without them.
It will be interesting to see how Apple’s own UI designers have approached these challenges when these new products ship, which is currently expected in 2022.
Please follow me on Twitter, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.
Copyright © 2021 IDG Communications, Inc.