
What If Our Machines Had Emotions?

Take part in our user survey: how do you feel about Alexa?

Published: 31 January 2019

BBC R&D have launched a user survey to find out how people feel about the concept of machines being able to express emotions. We hope the survey will help us understand how people in the UK are using voice technologies in their daily lives and the kind of emotions that people experience while interacting with current voice assistants (e.g. Alexa, Google Assistant or Siri). We're looking to gather responses from a broad range of voice device users and non-users.

(Update: The survey has now closed and we have removed the links to it from this post.)

Why Now?

In early 2018 it was reported that smart speaker ownership in the UK was growing quickly, with the figure expected to reach 12.6 million by 2019. With devices like Amazon Echo and Google Home featuring high on the gift list for many of us last Christmas, that figure may well be considerably higher. So, many of us are now living with voice-driven devices, both in our homes and built into our mobile phones - but why is this important to the BBC?

A lot of BBC content is well suited for consumption over smart speakers (BBC News, BBC Sport, BBC Weather, BBC Sounds), and we are working to improve the delivery, discovery and navigation of this content for voice-driven devices. As these devices become more widespread, more of us are using them to consume content, and as a public service broadcaster it's our duty to ensure that we're providing great interactive experiences and new forms of content that are optimised for the wide range of devices our audiences use.

Here at BBC R&D, we are always looking to the future, and since 2016 our Talking with Machines project has been exploring new forms of voice-interactive content. This work led to the development of Orator, a set of tools for writing and playing interactive stories on voice devices, now used and extended by the BBC Voice team for products such as the CBeebies Alexa skill. Orator was originally created for R&D's work on The Inspection Chamber - ask Alexa to open The Inspection Chamber! Further work led to our recent release of the interactive drama The Unfortunates, which you can try for yourself on BBC Taster or by asking your smart speaker to open The Unfortunates.

As the Talking with Machines work continues (exciting stuff happening - watch this space!), a few of us started thinking about the sorts of experiences that could be possible if voice devices were able to express their own emotional states. We've seen moves in this direction with Amazon's Speech Synthesis Markup Language (SSML), which gives Alexa's voice some expressive elements such as emphasis and intonation. However, as much as 95% of communication is non-verbal, and it's body language, facial expressions and non-verbal vocalisations (e.g. hesitation, laughter and intakes of breath) that speak volumes in human communication. This leads us to wonder about non-verbal expression for emotional machines.
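
To make the SSML point a little more concrete, here is a minimal sketch in Python - not BBC code, and the helper function, mood names and example phrase are all hypothetical - showing how Alexa's documented SSML tags (emphasis, prosody, breaks and the whispered effect) could be wrapped around a reply to hint at an emotional delivery:

```python
# Illustrative sketch only: wrap a reply in SSML that hints at an emotional
# delivery. The <speak>, <emphasis>, <prosody>, <break> and
# <amazon:effect name="whispered"> tags are part of Amazon's documented SSML
# support for Alexa; this helper and its mood names are hypothetical.

def expressive_ssml(text: str, mood: str = "neutral") -> str:
    """Return an SSML string whose delivery loosely matches the given mood."""
    if mood == "excited":
        body = f'<emphasis level="strong"><prosody rate="fast" pitch="+10%">{text}</prosody></emphasis>'
    elif mood == "tired":
        body = f'<prosody rate="slow" volume="soft">{text}</prosody><break time="400ms"/>'
    elif mood == "embarrassed":
        body = f'<amazon:effect name="whispered">{text}</amazon:effect>'
    else:
        body = text
    return f"<speak>{body}</speak>"

print(expressive_ssml("I'm sorry, I couldn't find that.", mood="embarrassed"))
# <speak><amazon:effect name="whispered">I'm sorry, I couldn't find that.</amazon:effect></speak>
```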

We know what you're thinking - machines don't have emotions, right?

Well, what if they did…

Imagine a smart speaker that gets embarrassed when it can't find things, grows tired after a busy day, is saddened by bad news, gets excited about visitors, or simply feels cold?

Perhaps emotionally expressive devices could provide more useful cues for users - if so, this opens up a wide range of possible applications. It could provide a supportive form of interaction to help with isolation in the elderly and raise awareness around looking after ourselves and living well ("Alexa's getting tired… oh my - look at the time! I best be off to bed"). It could also ease the frustration that we sometimes feel when using technology: if we can see that a machine is working hard to complete a task for us, or that it has failed to understand a command, it might encourage some patience on our part and stop us from wanting to throw our device out of the window!

It could also be a useful indicator that a child has spent too much time with their tech - Apple addressed this with their Screen Time tool, but Janet Read, a professor of child-computer interaction, suggests that if computers were to behave more like humans:

"Maybe the computer could have a hissy fit, or it could slow down, or stop interacting or be naughty. That kind of interaction could be more helpful to a child's development because it reflects our own instincts and behaviours. If the computer decides that 20 minutes is enough, or that we seem too tired to play, it could just shut down - and, in doing so, help us to learn what the right time to switch off feels like."

As voice devices get better at natural language processing and sentiment analysis, we'll see new applications for smart speakers emerge that move beyond simple command and control (e.g. using our voice as a remote control for radio) towards an intelligent system that we'll be able to talk with in a way that is more natural, like human conversation, and that is more responsive and adaptive to user interaction.

A recent study found that . At BBC R&D we believe that, along with a shift to more human-like interactions, voice devices and other systems will get to a point where they can accurately sense mood and emotion and respond accordingly. Imagine a voice assistant that was able to read how you were feeling and change its behaviour, tone of voice or functionality to better suit your mood, in the way humans do - what if Alexa could give fast and to-the-point answers when you're in a rush, interact cheerfully and playfully when you're feeling upbeat, and be soothing, restrained and low-key when you're feeling tired or blue?
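
As a very rough sketch of what that mood-adaptive behaviour could look like - the word lists, thresholds, style names and the sentiment_score stub below are all assumptions for illustration, not a real assistant API or sentiment model - a device might simply map an estimate of the user's mood onto one of a few delivery styles:

```python
# Hypothetical sketch of mood-adaptive behaviour: estimate the user's mood
# from their utterance and pick a delivery style. The word lists, thresholds
# and style names are illustrative stand-ins for a real sentiment model.

NEGATIVE = {"tired", "awful", "sad", "stressed"}
POSITIVE = {"great", "brilliant", "happy", "excited"}

def sentiment_score(utterance: str) -> float:
    """Stub sentiment analysis: return a rough score in [-1, 1]."""
    words = [w.strip(".,!?") for w in utterance.lower().split()]
    return (sum(w in POSITIVE for w in words)
            - sum(w in NEGATIVE for w in words)) / max(len(words), 1)

def response_style(utterance: str, in_a_rush: bool = False) -> str:
    """Map mood cues onto one of a few delivery styles."""
    if in_a_rush:
        return "brief"       # fast, to-the-point answers
    score = sentiment_score(utterance)
    if score > 0.1:
        return "cheerful"    # upbeat, playful delivery
    if score < -0.1:
        return "soothing"    # restrained, low-key delivery
    return "neutral"

print(response_style("I'm so tired, it's been an awful day"))  # soothing
```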

We think this is a really interesting concept and are excited about exploring this area with users. Your responses and insights will help inform and shape our work on Emotional Machines. The survey results will feed into a series of workshops we've got coming up, where we'll be exploring how machines might express different emotions through the modalities of gesture, sound, light and colour.

Following on from the workshops we'll be building some prototypes and taking them out into the wild to run some user tests - so watch this space for updates!

  • Internet Research and Future Services section

    The Internet Research and Future Services section is an interdisciplinary team of researchers, technologists, designers, and data scientists who carry out original research to solve problems for the BBC. Our work focuses on the intersection of audience needs and public service values, with digital media and machine learning. We develop research insights, prototypes and systems using experimental approaches and emerging technologies.
