
Talking with Machines

How do you design an interface for a device with no screen?

Published: 1 January 2016

Exploring the potential of devices with conversational and spoken interfaces.

Project from 2016 - present

What we're doing

Spoken interfaces appear to be emerging as a class of device and service which manufacturers are committed to and which people are actually using. Amazon's Alexa is an obvious example, similar services are gaining traction, and Google has announced its own competitor to Alexa and Siri.

These devices represent an opportunity for a kind of personal, connected radio which the BBC would be well placed to explore. We're already a familiar voice in homes across the UK, which puts us in a unique position to explore the possibilities of engaging listeners in two-way spoken conversation. Audiences gain well-thought-out content on their devices which can inform, educate and entertain, alongside the inevitable slew of commerce-driven applications.

Talking with Machines is a project which will explore building content and services for these devices. We hope to learn enough to support other devices of this type, and to build a platform offering generic support for them.

Alongside this practical work, we'll be experimenting with prototypes and sketches in hardware and software to explore the types of interaction and content forms that these devices allow. There's also the intriguing possibility of developing a prototyping method based on humans roleplaying the part of the device, since interactions with these devices should resemble a natural human conversation. We have already done some work on a similar method for Radiodan, which we may look to build upon.
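The roleplaying method described above is often called "Wizard of Oz" prototyping: a hidden human operator supplies the device's replies while the session is recorded for later analysis. A minimal sketch of a logging harness for such a session might look like the following (all class and method names here are illustrative assumptions, not part of any existing Radiodan tooling):

```python
# Hypothetical "Wizard of Oz" session harness: a human operator plays the
# device, and every exchange is logged so the conversation can be analysed
# afterwards. In a live session the operator's typed replies would be read
# out by a text-to-speech voice to mask the human behind the "device".
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class WizardSession:
    transcript: List[Tuple[str, str]] = field(default_factory=list)

    def user_says(self, utterance: str) -> None:
        # Record what the participant said to the "device".
        self.transcript.append(("user", utterance))

    def device_says(self, utterance: str) -> None:
        # Record the reply the wizard typed on the device's behalf.
        self.transcript.append(("device", utterance))

    def log(self) -> str:
        # Render the whole conversation, one turn per line.
        return "\n".join(f"{who}: {text}" for who, text in self.transcript)


session = WizardSession()
session.user_says("What's on Radio 4 right now?")
session.device_says("The Archers. Would you like me to listen along?")
print(session.log())
```

The value of the method is that the transcript accumulates realistic conversational data before any speech recognition or dialogue software exists.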

Talking with Machines has a few goals:

  • To develop a device-independent platform for supporting spoken interfaces
  • To build knowledge in R&D (and in the wider BBC) around spoken interfaces:
    • conceptual models, how to think about spoken applications
    • software development patterns
    • UX and interaction design patterns for spoken interfaces
    • what kinds of creative content work well for speech-based devices, and ideas around how to structure creative applications for this context
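One software development pattern that recurs in spoken applications is routing a recognised "intent" (plus any extracted slot values) to a handler function that returns the reply to be spoken. A minimal sketch, with purely illustrative intent and function names:

```python
# Minimal intent-dispatch pattern for a spoken application: speech-to-text
# and intent recognition sit upstream of this, text-to-speech downstream.
from typing import Callable, Dict

Handler = Callable[[dict], str]
handlers: Dict[str, Handler] = {}


def intent(name: str):
    """Decorator registering a handler function for a named intent."""
    def register(fn: Handler) -> Handler:
        handlers[name] = fn
        return fn
    return register


@intent("PlayProgramme")
def play_programme(slots: dict) -> str:
    # 'slots' carries values extracted from the utterance, e.g. a title.
    return f"Playing {slots['programme']} now."


@intent("Help")
def help_prompt(slots: dict) -> str:
    return "You can ask me to play a programme."


def handle(name: str, slots: dict) -> str:
    # Fall back to a reprompt when the intent is unrecognised.
    fn = handlers.get(name)
    return fn(slots) if fn else "Sorry, I didn't understand that."


print(handle("PlayProgramme", {"programme": "The News Quiz"}))
```

Keeping handlers as plain functions of slots-to-speech makes the application logic testable without any audio hardware in the loop.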


Why it matters

There are links from this project to a few streams of work happening in R&D. As we start to understand the speech-to-text challenges and move towards building our own engine, there's a lot of opportunity to work with our IRFS section's Data team, who are working on similar projects. In a more general sense, there is related work that we could use and push forward. We currently have a PhD intern who will be working on modelling voices of BBC talent from large amounts of content, which could be interesting to play with in the context of spoken interfaces.

There are also potential overlaps with discovery work around finding BBC media and personalisation, choosing what to watch or listen to, and even structured stories (e.g. interrogating a news story). One of the stranger (but fun-sounding) suggestions we've had is a Socratic dialogue simulator!

The interactive radio aspects of these devices resemble work done in our North Lab on Perceptive Media and Squeezebox, and there's a lot we can learn from that work.

There is a lot of interest in conversational UI and bots across the BBC, but this interest tends towards text-based, messenger-type interfaces. This project focuses on spoken interfaces, while learning from and contributing towards the more general conversational UI work happening in the wider BBC.

The number of devices and platforms in the wild is expected to grow, and it's not hard to imagine a future in which an entirely new voice-driven platform opens up, either on mobile or on specific hardware. And there's potentially a large number of possible users: anyone who has access to a device which allows for a spoken interface and can play audio.

Our goals

The short-term goal is to prototype services we could offer, and we hope this stream of work will drive development of a platform designed to provide support and applications for speech-driven devices in general. Once we've got a good, solid prototype, we would like to develop standalone applications (or add capabilities to a core platform) based on earlier exploratory work, and develop support for other speech-driven devices.
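One way a platform like this can stay device-independent is to isolate each device's request and response formats behind a small adapter interface, so the same application logic serves several speech-driven devices. A sketch under that assumption (the adapter names are hypothetical, and the Echo payload shape shown is only an approximation of Amazon's skill request format):

```python
# Hypothetical adapter layer for a device-independent spoken-interface
# platform: each supported device gets an adapter that translates between
# its own wire format and a common (intent, slots) / spoken-text model.
from abc import ABC, abstractmethod
from typing import Dict, Tuple


class DeviceAdapter(ABC):
    @abstractmethod
    def parse_request(self, payload: dict) -> Tuple[str, Dict[str, str]]:
        """Turn a device-specific request payload into (intent, slots)."""

    @abstractmethod
    def format_reply(self, speech: str) -> dict:
        """Wrap spoken text in the device's expected response format."""


class EchoAdapter(DeviceAdapter):
    """Adapter for an Echo-style device (payload shape is an assumption)."""

    def parse_request(self, payload: dict) -> Tuple[str, Dict[str, str]]:
        req = payload["request"]["intent"]
        slots = {k: v["value"] for k, v in req.get("slots", {}).items()}
        return req["name"], slots

    def format_reply(self, speech: str) -> dict:
        return {
            "response": {
                "outputSpeech": {"type": "PlainText", "text": speech}
            }
        }
```

With this split, supporting a new device means writing one new adapter rather than touching the applications themselves.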

We're also hoping to develop a set of UX tools and techniques to help us think about and design voice UI.

Project Team

  • Henry Cooke, Senior Producer & Creative Technologist
  • Andrew Wood, Designer
  • Tom Howe (MEng), Technologist
  • Anthony Onumonu, Principal Software Engineer
  • Sacha Sedriks, Head of UX, Internet Research & Future Services
  • Joanna Rusznica, UX Designer
  • Internet Research and Future Services section

    The Internet Research and Future Services section is an interdisciplinary team of researchers, technologists, designers, and data scientists who carry out original research to solve problems for the BBC. Our work focuses on the intersection of audience needs and public service values with digital media and machine learning. We develop research insights, prototypes and systems using experimental approaches and emerging technologies.
