
How Multimodal Design is Evolving

source link: https://uxplanet.org/how-multimodal-design-is-evolving-45d35353568e


Most bots we interact with today, whether chatbots or voice assistants, are scripted: they follow a predefined tree structure, unlike AI-powered assistants.

Context is King

AI assistants are evolving at a rapid pace. In one of my previous articles, which outlines the fundamentals of designing a VUI, I explained how an intent triggers an assistant to start a relevant interaction with the user. However, we should broaden the term “intent”, since no verbal communication is required from the user to set an AI assistant in motion. Modern assistants sense what is happening to the user or their environment and take the appropriate action.

Take a running app, for example. Many of us jog while listening to music combined with a training program. During your run, a voice tells you to run for five minutes, take a one-minute break, and repeat. Today this usually requires tapping through options first so the app understands your health and estimated stamina; ideally it would happen automatically. Simply fire up the app and start running. The app would recognise your physical endurance on the go and start tracking it. Drawing on its user base and training data, it could already advise you when to slow down, for example. Additional input, such as your heartbeat, would further help the assistant support your workout.

This is an example of how a user does not have to press any buttons or talk to the app. The only thing the user has to focus on is the workout.
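To make the idea concrete, here is a minimal sketch of how such an app might turn a heart-rate reading into a spoken coaching cue without any user input. The function name, zone thresholds, and default maximum heart rate are illustrative assumptions, not a real app’s API.

```python
# Hypothetical sketch: a running app adapting its coaching advice to the
# runner's measured heart rate. Thresholds are illustrative assumptions.

def coaching_advice(heart_rate_bpm: int, max_hr_bpm: int = 190) -> str:
    """Map the current heart rate to a spoken coaching cue."""
    effort = heart_rate_bpm / max_hr_bpm  # fraction of estimated max HR
    if effort < 0.60:
        return "Pick up the pace."
    if effort < 0.85:
        return "Good pace, keep it steady."
    return "Slow down and recover."

# The app would sample the heart-rate sensor and speak the cue aloud.
print(coaching_advice(110))  # low effort
print(coaching_advice(175))  # high effort
```

In a real product the thresholds would be learned per user from accumulated workout data rather than hard-coded.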

Your phone or smart device will automatically understand your physical condition, with no input needed from the user. Copyright: StockSnap

Two-Way Communication

Even as AI assistants develop a better sense of what is going on with the user and what is happening around them, the ability to communicate with your assistant is still key. We don’t like losing control, and we don’t want to. Even when systems are automatic, there should always be a way to regain control.

Returning to the running example, the user should still be able to give feedback, for instance.

We’re still stuck on visual control

As much as we can already accomplish through voice user interfaces, speaking to an assistant still feels awkward to users in many cultures, and many doubt that the assistant will be able to fulfil the request. For ordering a customised laptop, voice would definitely not be the best option, even though it could fit into the customer journey.

Small requests still offer big opportunities

For smaller tasks, assistants can be of great benefit and still offer plenty of opportunities for companies: restaurants offering a voice-based menu of simple meals or repeat orders, sending messages, transferring small payments, asking for specific summarised news items, and so on.
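The small requests listed above can be sketched as a tiny intent router. The keyword matching and handler names below are assumptions for illustration; production assistants use trained NLU models rather than string matching.

```python
# Illustrative sketch: routing small voice requests to handlers by keyword.

def route_request(utterance: str) -> str:
    """Return the name of the handler for a simple voice request."""
    text = utterance.lower()
    if "order" in text:
        return "ordering"       # e.g. repeat a simple meal order
    if "send" in text and "message" in text:
        return "messaging"
    if "pay" in text or "transfer" in text:
        return "payments"
    if "news" in text:
        return "news_summary"
    return "fallback"           # ask the user to rephrase

print(route_request("Order my usual pizza"))       # ordering
print(route_request("Transfer ten euros to Sam"))  # payments
```

The point of the sketch is the scope, not the technique: each intent covers one small, low-risk task, which is exactly where voice shines today.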

Let the Assistant do the Work

In many cases, it makes more sense to be alerted or informed about certain information without having to ask for it or check your phone. The assistant could tell you about a traffic jam on your daily commute, or inform you about news or relevant events around you.

Car and home assistants are among the most popular at this point. In the car, that’s because you need to keep your hands at 9 and 3 o’clock on the wheel; home assistants simply make life easier. Imagine walking into the bathroom and being asked whether the shower should be warmed up, the radio turned on, the A/C adjusted (e.g. Ambi Climate), the curtains drawn, … or simply the TV switched on.

Using voice controls on phones, speakers and remotes has become the norm. Copyright: Yucel Moran

These interactions are still very basic and don’t take into account how the user feels; perhaps he or she just came back from the gym, or feels tired or hungry. A smartphone can’t pick up all these signs, but smartwatches, for example, gather a lot of information about the user that could be used in conjunction with your assistants. This could greatly improve the overall experience, and the user’s life, thanks to AI assistance.

Conclusion: Assistance and Alerts

We often think about the convenience of AI assistants. However, an AI assistant can be much more useful to the user, and even life-saving. A good example is users being fined for crossing a street while distracted in Malaysia, or, as is the case in many countries, for using a phone while driving. Even though such measures will help a portion of the public be more careful, many users still won’t pay attention.

Copyright: Saad Sharif

Since smartphones are connected to maps, traffic data and, in some cases, traffic lights, a user who is focused on their phone can be alerted when nearing a road, especially when the light is red, telling them to be careful and stop. In this instance, a voice alert is ideal.
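The decision behind such an alert can be sketched in a few lines. The distance threshold, the light-state feed, and the phone-attention signal are all illustrative assumptions; a real system would draw on map, sensor, and traffic-light data.

```python
# Hedged sketch: when to speak a warning to a phone-focused pedestrian
# approaching a crossing. All thresholds and inputs are assumptions.
from typing import Optional

def crossing_alert(distance_to_road_m: float, light_is_red: bool,
                   user_looking_at_phone: bool) -> Optional[str]:
    """Return a spoken alert, or None when no warning is needed."""
    if not user_looking_at_phone or distance_to_road_m > 10.0:
        return None  # user is attentive, or still far from the road
    if light_is_red:
        return "Careful, the light ahead is red. Please stop."
    return "You are approaching a road. Look up."

print(crossing_alert(5.0, light_is_red=True, user_looking_at_phone=True))
```

Note that the alert fires only when the user is actually distracted; an assistant that warns attentive users too would quickly be muted.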
