
The end of screen-based interfaces

source link: https://uxdesign.cc/the-end-of-screen-based-interfaces-zero-ui-ux-design-future-change-caf1f3defb75


The zero-UI and why it is crucial to the future of design.

A blurred picture of a girl with the word “Digital” written across the front.

Image by Doni Haris on Pexels.

For better or worse, a large amount of design work these days is still visual. This makes sense since the most essential products we interact with have screens.

The television first introduced us to the world of screens in 1938. Ever since, our world has been flooded with computers, iPods, smartphones, tablets, and ever more types of screen-based devices.

Today, hardly a minute goes by without our interacting with a screen.

The Internet of Things, a term coined by Kevin Ashton in 1999, is surrounding us with smart devices. In 2020, more than 10 billion devices were already connected to the Internet — and by 2025 this number is expected to double to 20 billion.

So, knowing that smart devices can hear our words, anticipate our needs, and sense our gestures, what does that mean for the future of design, especially as those screens go away?

Let’s discover together what this so-called Zero-UI stands for. 👇🏻

What is Zero-UI?

Zero User Interface, or Zero-UI, is an increasingly popular concept first coined by designer Andy Goodman, formerly of Fjord, Accenture Interactive’s design agency.

It isn’t really a new idea. I bet you are already familiar with it.

Have you ever used an Amazon Echo, talked to your iPhone using Siri, or skipped a song by double-tapping your AirPods? Then you’ve already used a device that falls under this so-called Zero-UI concept!

A gif simulating Siri answering the query “Siri, show me something new.”

Gif by Apple.

It is all about getting away from the touchscreen, and interfacing with the devices around us in more natural ways. This includes different fields such as haptics, computer vision, voice control, and even artificial intelligence.

Why do we need this transition?

To understand the need for this transition, let’s take a look at how we currently communicate with technology. Most of us interact with our devices daily through a Graphical User Interface (GUI).

A GUI is a type of interface that allows users to interact with electronic devices through graphical icons and visual indicators — a display screen on computers, or a touchscreen on phones and tablets. Users are thus still required to use a mouse-and-keyboard combination, or to tap and swipe, to transmit information.

“If you look at the history of computing, starting with the Jacquard loom in 1801, humans have always had to interact with machines in a really abstract, complex way.” — Andy Goodman

Gif showing different devices and how we currently interact with them.

Image by almigor on Dribbble.

To be fair, interfaces have come a long way from their humble origins, but they have yet to provide the best experience for their users. We download countless apps and click through too many screens just to perform daily tasks.

Luckily, designers and developers are addressing the issue and bringing forth some interesting changes. Just as computers evolved from being operated through terminal commands to having intuitive graphical interfaces, the next natural step is having no interface at all.

Today, machines still force us to come to them on their terms, speaking their language. The next step for electronic devices is to finally understand us on our own terms, in our own natural words, behaviors, and gestures.

This is exactly where Zero-UI comes in. It aims to allow more natural interactions than screen-based devices can offer. At the helm of this transition are gesture-based and voice-recognition user interfaces.

According to Dharmik, the gaming world was one of the first to adopt gesture controls as a way of providing a more natural user experience. Think of Nintendo’s Wii console — first launched in 2006 and already shipping with gesture-based controllers — or later products such as PlayStation Move and Microsoft Kinect. You can still watch the revolutionary Wii launch advertisement below! :)

The Wii launch advertisement.

Voice recognition is another common Zero-UI feature in our daily lives. During the 2000s, Google launched Google Voice Search, but it was not until Alexa was released in 2014 that voice recognition saw a commercial explosion. Since then, more than 312 million Alexa devices have been sold, and Amazon is expected to surpass that figure by 2025: 320 million more are expected to be sold in the coming three years alone.

The world seems to have fallen for the charm of this so-called Zero-UI and this is not likely to change in the future.

How will Zero-UI change design?

According to Andy Goodman, Zero-UI represents a whole new dimension for designers to wrestle with. He compares the leap from UI to Zero-UI to moving from designing in just two dimensions to having to think about what a user is trying to do in any possible workflow.

Instead of relying on clicking, typing, and tapping, users will input information through voice, gestures, and touch. Interactions will move away from phones and computers and into the physical devices around us.

The most important — and revolutionary — part of this concept is that it can be applied to cities, homes, and even whole ecosystems, along with personal devices.

So this technology has — and will continue to have — a massive effect on society as a whole.

Different Types of Zero-UI

There are several ways to communicate with technology without relying on a visual screen, each of which can help achieve the desired Zero-UI.

I. Voice Recognition and Control

Voice recognition is the process by which software or a device identifies a human voice, understands an instruction, and performs the corresponding action. When a user asks a question or gives a command, the tool recognizes the query and reacts to it.


Image by Brandon Romanchuk on Unsplash.

The best examples of voice recognition and control are Siri and Amazon Echo.
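The recognize-understand-act loop described above can be sketched in a few lines. Real assistants run speech-to-text and natural-language models; in this illustrative stand-in the “recognized” speech arrives as a plain string, and intent matching is a simple keyword lookup (all command names and replies are hypothetical).

```python
# Toy voice-control pipeline: recognized text -> intent -> action.
# Keyword matching stands in for real natural-language understanding.

def parse_intent(utterance: str) -> str:
    """Map a recognized utterance to an intent name."""
    text = utterance.lower()
    intents = {
        "play": "play_music",
        "weather": "get_weather",
        "timer": "set_timer",
    }
    for keyword, intent in intents.items():
        if keyword in text:
            return intent
    return "unknown"

def perform(intent: str) -> str:
    """Perform the action for an intent and return a spoken reply."""
    actions = {
        "play_music": "Playing your playlist.",
        "get_weather": "It is sunny today.",
        "set_timer": "Timer set.",
    }
    return actions.get(intent, "Sorry, I didn't catch that.")

print(perform(parse_intent("Hey, what's the weather like?")))  # -> It is sunny today.
```

The point of the sketch is the shape of the interaction: the user speaks on their own terms, and the mapping from words to action happens entirely behind the scenes, with no screen involved.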

II. Haptic Feedback

Haptic feedback provides users with vibration-based responses. Although we are already used to it on our phones, it is an important part of wearable products such as fitness trackers and smartwatches, where it delivers notifications. It also matters in current game controllers — you can feel that someone is attacking you before you even see it on screen.

A man using a playstation controller.

Image by Karolina Grabowska on Pexels.

In the near future, it will be available in smart clothing as well.

III. Ambient

Ambient devices create a bridge between digital and physical space. They work on the principle of glanceability: there is no need to open notifications or applications. Ambient interactions are connected and offer a browserless experience. The best example is controlling home devices with Alexa or a Google Home.

A Google Home with the lights on.

Image by John Tekeridis on Pexels.

IV. Gesture-Based User Interface

Gesture-based interfaces allow users to make use of motion and physical space rather than just button-based commands, making them one of the most natural forms of interaction. The concept was first adopted by the gaming world; the best examples are Microsoft Kinect, Nintendo’s Wii, and PlayStation Move.

Google’s Project Soli.

Google has also released a gesture-control product, Project Soli: a sensing technology that detects touchless gesture interactions using a miniature radar.
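At their core, gesture interfaces turn a stream of sensed positions into a discrete command. Products like Kinect and Soli do this with machine learning over depth or radar data; the sketch below uses a deliberately simple threshold rule on net displacement, purely as an illustration (the coordinate format and threshold are assumptions, not any product’s API).

```python
# Toy gesture recognizer: classify a swipe from a short trajectory of
# (x, y) hand/pointer positions by its net displacement.

def classify_swipe(points, threshold=50):
    """Label a trajectory as a left/right/up/down swipe, or 'none'."""
    dx = points[-1][0] - points[0][0]  # net horizontal movement
    dy = points[-1][1] - points[0][1]  # net vertical movement
    if abs(dx) < threshold and abs(dy) < threshold:
        return "none"  # too small to count as a deliberate gesture
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

print(classify_swipe([(0, 0), (30, 5), (80, 10)]))  # -> right
```

Even this crude rule shows why gesture design is a design problem: the threshold decides which movements are “intentional,” and different users perform the same gesture very differently.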

V. Context Awareness

Contextually aware apps and devices offer users a simpler physical and digital experience by anticipating their needs, eliminating additional layers of interaction.

AirPods are one of the best examples. By introducing sensors or location data into a device, we can design next-generation contextual experiences that favor implicit interaction over explicit commands.

A person using the AirPods.

Image by Wendy Wei on Pexels.
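The implicit interaction described above can be made concrete with a toy rule: instead of the user issuing a command, the device infers the desired action from sensor context, the way AirPods-style earbuds pause when removed. The sensor signal names here are illustrative, not any real device’s API.

```python
# Toy context-aware rule: choose a playback action implicitly from a
# hypothetical in-ear sensor, instead of waiting for an explicit command.

def decide_playback(in_ear: bool, was_playing: bool) -> str:
    """Infer the desired playback action from sensor context."""
    if in_ear and not was_playing:
        return "resume"   # earbud inserted -> continue the music
    if not in_ear and was_playing:
        return "pause"    # earbud removed -> pause automatically
    return "no_change"    # context unchanged -> do nothing

print(decide_playback(in_ear=False, was_playing=True))  # -> pause
```

The user never asks for anything; the interaction layer has disappeared into the context itself, which is exactly the promise of Zero-UI.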

These are some of the most common — and already working — ways to communicate with technology. The coming years will bring breakthrough devices with these capabilities and more.

Zero-UI will rely on Data and AI

Whereas interface designers right now live in apps like InDesign and Adobe Illustrator, the non-linear design problems of Zero-UI will require vastly different tools and skill sets.

“Designers will have to become experts in science, biology, and psychology to create these devices… stuff we don’t have to think about when our designs are constrained by screens.” — Andy Goodman

One clear example is designing a TV controller. Depending on who is standing in front of the TV, the gestures it needs to understand for something as simple as turning up the volume might be radically different: a 40-year-old might twist an imaginary dial in mid-air, while a millennial might jerk their thumb up.

As we move away from screens, a lot of our interfaces will have to become more automatic, anticipatory, and predictive.

What’s after Zero-UI?

Zero-UI is the ultra-modern face of artificial intelligence. Soon, Google Assistant, Siri, and Alexa may become memories of the tech world’s past. Zero-UI is meant to let users experience more human-like interactions.

“Looking to the future, the next big step will be for the very concept of the ‘device’ to go away.” — Sundar Pichai, Google CEO

Feel free to share your thoughts and experiences in the comments!

