5 Tips That Can Help You Master Your Voice User Interface Design


VUI design is in full swing and steadily gaining ground. Digital assistants such as Amazon's Alexa, Apple's Siri, Google Now, and Microsoft's Cortana are advancing to become the most accessible voice products on the market.

Since the launch of the Echo, Amazon's voice assistant device, in December 2014, nearly 8.2 million units have been sold, and voice search keeps growing. MindMeld's 2016 Internet Trends Report says 60% of people started using voice search within the past year, and 41% started within the past six months.

A forecast from BCC Research projects that, at an annual growth rate of 12.1%, the global market for voice recognition technologies will grow from $104.4 billion in 2016 to $184.9 billion in 2021.

This wave is driven by advances in machine learning and deep learning, which let engineers build systems of remarkable accuracy for tasks such as speech and language recognition and image classification.

Microsoft announced in 2016 that its latest speech recognition system had reached parity with human transcribers at recognizing human speech.

As voice technology advances, it changes the way we interact with our devices. All things considered, most of the usual UX design techniques still apply - user research, persona creation, user flows, prototyping, usability testing, and iterative design - but voice user interfaces come with some variations that designers should note.

If you want to start your first voice user interface design project, here are five basic tips that will help you -

Conversation - speaking vs. writing


It is essential to ensure that the voice user interface understands natural speech, so familiarize yourself with a wide range of different inputs.

Writing and speaking the same thing are not the same. When we type, we use short keyword phrases; when we speak, we use complete sentences or questions.

Imagine a Sunday morning when you type "casual breakfast nearby" into your phone. A list of matching points of interest appears on the screen. When speaking to a voice assistant, however, you would phrase it more like "Alexa, what are the best places to have a casual breakfast nearby?"

To be successful, make sure the system is equipped to receive many different phrasings of the same request and respond to them appropriately.
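
As a simple illustration, here is a minimal Python sketch of that idea; the intent name, sample utterances, and the crude word-overlap matcher are all hypothetical stand-ins for the natural language understanding a real voice platform would provide.

from dataclasses import dataclass


@dataclass
class Intent:
    name: str
    sample_utterances: list[str]


# Hypothetical intent with several ways a user might phrase the same request.
FIND_BREAKFAST = Intent(
    name="FindBreakfastPlaces",
    sample_utterances=[
        "what are the best places to have a casual breakfast nearby",
        "where can i get breakfast around here",
        "find a breakfast spot near me",
        "i want breakfast close by",
    ],
)


def matches_intent(utterance: str, intent: Intent) -> bool:
    """Very naive matcher: real systems use trained NLU models instead."""
    words = set(utterance.lower().replace("?", "").split())
    for sample in intent.sample_utterances:
        sample_words = set(sample.split())
        # Treat a large word overlap with any sample phrasing as a match.
        if len(words & sample_words) >= max(1, len(sample_words) // 2):
            return True
    return False


print(matches_intent("Where can I get breakfast around here?", FIND_BREAKFAST))  # True
print(matches_intent("Turn off the kitchen lights", FIND_BREAKFAST))             # False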

Make interactions intuitive


No one jumps at the chance to memorize a hundred commands to perform specific tasks. Be careful not to create an overwhelming experience that is hard to understand and that demands too much effort to feel natural.

The system should be equipped to remember us and become more useful with every use.

For example, you ask your voice assistant something like,

"Alexa, could you give me nicknames at home."

"Sure, where is your house?"

"You know where my house is!"

"I'm sad, you'll have to do it again."

A scene like this makes the interaction frustrating for the user and is neither satisfying nor productive.

However, if the system already holds data about where you live, directions to that address are returned right away. Ideally there is a short spoken response paired with a visual component such as maps and directions. An interaction like this is satisfying and fast. Intuitive design, just as with graphical user interfaces, has to be deliberately built in by the designers.
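
A minimal sketch of this idea follows; the profile dictionary, the stored address, and the response strings are illustrative assumptions, not any particular assistant's API.

# A remembered user profile; in a real assistant this would come from account data.
user_profile = {"home_address": "221B Baker Street"}


def handle_directions_home(profile: dict) -> str:
    """Answer a 'directions home' request using remembered context when possible."""
    home = profile.get("home_address")
    if home is None:
        # Only ask when the information is genuinely missing.
        return "Sure, where is your house?"
    # Otherwise respond right away, ideally paired with a map on screen.
    return f"Starting directions to {home}."


print(handle_directions_home(user_profile))  # uses the stored address
print(handle_directions_home({}))            # falls back to a single clarifying question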

Responsiveness - analyze your users' needs


The user has to understand the device, and the device has to understand the user. Designers should always be aware of possible speech impairments, phonetic differences, and any other factor that could affect the interaction, such as cognitive impairments. Even the language, the accent, or the tone of voice affects how the device parses speech.

As a designer, you need to know where and how to use voice and visuals so that everyone can use the product, no matter how they speak.
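
One common way to cope with this variability is to act only on confident recognition results and fall back gracefully otherwise. The sketch below illustrates the idea with made-up confidence thresholds; the confidence score itself would come from whatever speech recognizer is in use.

def respond(transcript: str, confidence: float) -> str:
    """Avoid guessing: confirm or offer another input path when confidence is low."""
    if confidence >= 0.8:
        return f"Okay: {transcript}"
    if confidence >= 0.5:
        # Confirm instead of acting on a shaky guess.
        return f"Did you mean: {transcript}?"
    # Very low confidence: offer another way in rather than repeating "say that again".
    return "Sorry, I didn't catch that. You can also type your request."


print(respond("find a casual breakfast nearby", confidence=0.92))
print(respond("find a casual breakfast nearby", confidence=0.60))
print(respond("find a casual breakfast nearby", confidence=0.20))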

Think about the user's environment

Trying to talk to your phone on a crowded, noisy train is a good example of why it is important to consider the impact of different environments on the type of interface you build. If the primary use case is driving, voice is an excellent choice - the user's hands and eyes are busy, but their voice and ears are not. If the app is used somewhere noisy, it is smarter to offer a visual interface, because heavy background noise makes speech recognition and listening more difficult.

If your application is used both comfortably at home and on the go, it is necessary to give users the option of switching between audio and visual interfaces.
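
A minimal sketch of such a modality switch is shown below; the noise threshold and mode names are illustrative assumptions rather than values from any real product.

def choose_modality(ambient_noise_db: float, hands_free: bool) -> str:
    """Prefer voice when hands and eyes are busy, visuals when it is loud."""
    if hands_free:
        return "voice"            # e.g. driving: hands and eyes are occupied
    if ambient_noise_db > 70:
        return "visual"           # e.g. a noisy train: speech recognition suffers
    return "voice_and_visual"     # quiet setting: offer both and let the user switch


print(choose_modality(ambient_noise_db=45, hands_free=False))  # voice_and_visual
print(choose_modality(ambient_noise_db=80, hands_free=False))  # visual
print(choose_modality(ambient_noise_db=80, hands_free=True))   # voice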

Feedback - a two-way interaction


In a typical conversation, one person signals to the other that they are following along through nods, smiles, and other cues. The same kind of acknowledgement must be given to users communicating with your device. It is essential to account for this in your design so that users feel in control of, and connected to, the device.

The system must always keep users informed about what is going on. It is equally important that users know the system has heard them, in a non-intrusive way. You can achieve this with a sound effect or a light indicator, for example.
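
A minimal sketch of this kind of acknowledgement is shown below; the chime and light functions are placeholders for whatever audio and LED hooks a real device exposes.

import time


def play_chime() -> None:
    print("* soft chime *")       # stand-in for an actual audio cue


def set_light(on: bool) -> None:
    print("light ring on" if on else "light ring off")  # stand-in for an LED ring


def listen_with_feedback() -> None:
    """Signal the start and end of listening so the user knows what is happening."""
    play_chime()
    set_light(True)               # device is actively listening
    time.sleep(0.1)               # placeholder for real speech capture
    set_light(False)              # listening finished; the user gets closure


listen_with_feedback()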

Voice user interfaces are changing the way we use and interact with technology. This design field is growing quickly, which makes it a great direction for any UI/UX designer to move in.

Do you have any other advice on designing voice user interfaces? Share it with us.
