Technology + Creativity at the 主播大秀
Technology, innovation, engineering, design, development. The home of the 主播大秀's digital services.
/blogs/internet

Beat the Bot - use your voice to challenge our sport bot
Mon, 05 Jul 2021 11:10:05 +0000
/blogs/internet/entries/e17af97c-925b-4526-ab38-e9d79c9d02c4
Prabhjit Bains

For two years, we've been creating the 主播大秀's first synthetic voice. Computer-generated, it's helping us as a public service broadcaster to dip our toe in this new technological space. The voice is designed to be used across a wide variety of 主播大秀 outlets, reflecting our core editorial and brand values.

Once we made the voice, we began looking for opportunities to test it with our audiences. Not only did we want to showcase our synthetic voice, we also wanted to explore whether we could use it to begin conversations with audiences. How does speech recognition fare with the wide range of accents across the UK? Can we see a future where audiences could have a conversational relationship with the 主播大秀? A quiz was the perfect opportunity to start to test these questions out.

Play Beat the Bot! Name relegated Premier League teams using speech recognition.

The 主播大秀's brilliant line-up of presenters and on-air talent will always be at the heart of our content. But we think there's also a role for a synthetic voice to augment them. A synthetic voice could power interactive quizzes with almost limitless questions and challenges. It could improve the accessibility of existing content. And it could help create new content individually personalised to our users.

James Fletcher, Editorial Lead, Synthetic Media and Conversational AI

Sports Quizzes

Quizzes are participation experiences in their purest form. And we know our 主播大秀 Sport audiences love doing really obscure and competitive quizzes, the harder the better...

The surge in DIY Zoom quizzes during the pandemic may have fizzled out, but a whole host of TV quiz shows with high production values have taken their place. Amidst quiz fever, we started to think about making a quiz that showcased our new synthetic voice and allowed audiences to participate using their own voice too.

Beat The Bot

So we created Beat the Bot, a web-based voice quiz where you have to guess the names of all the Premier League teams that have ever been relegated. You play in turn against the bot, with no room for error. The bot is never wrong.

A screenshot from 主播大秀 Sport's Beat the Bot voice game, available on 主播大秀 Taster. We've pixelated the answers, so no cheating!

Beat the Bot is a testbed for launching the first of many voice-enabled experiences that audiences can engage with through their browser on their desktop or smartphone using their in-built microphones. We also made sure that audience privacy is uncompromised, which is key in creating a safe environment for more experiences like this in the future.

Jamie Chung, Executive Product Manager
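
The post doesn't go into implementation detail, but a browser-based voice experience like this can be built on the Web Speech API's SpeechRecognition interface, which is what lets a page use the device's in-built microphone without sending audio anywhere else. The sketch below is illustrative only: the team names, fallback behaviour and answer checking are assumptions, not the 主播大秀's actual code.

```typescript
// Minimal sketch of capturing a spoken quiz answer in the browser.
// Uses the Web Speech API (SpeechRecognition); it is exposed as
// webkitSpeechRecognition in Chromium-based browsers.

type AnswerHandler = (transcript: string, confidence: number) => void;

function listenForAnswer(onAnswer: AnswerHandler, onSilence: () => void): void {
  const SpeechRecognitionImpl =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  if (!SpeechRecognitionImpl) {
    onSilence(); // no speech recognition support: fall back to typed answers
    return;
  }

  const recognition = new SpeechRecognitionImpl();
  recognition.lang = 'en-GB';          // tuned for UK English
  recognition.interimResults = false;  // only act on final results
  recognition.maxAlternatives = 1;

  recognition.onresult = (event: any) => {
    const result = event.results[0][0];
    onAnswer(result.transcript.trim().toLowerCase(), result.confidence);
  };
  recognition.onerror = () => onSilence();
  recognition.onend = () => { /* microphone closed; UI can show "bot's turn" */ };

  recognition.start();
}

// Example usage: check a spoken answer against the remaining teams (illustrative data).
const remaining = new Set(['leeds united', 'sheffield united', 'west bromwich albion']);
listenForAnswer(
  (transcript) => {
    if (remaining.has(transcript)) remaining.delete(transcript);
  },
  () => console.log('No answer heard - the bot wins this round.'),
);
```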

Does using your voice make a quiz more engaging?

We wanted to test a hunch that using your voice would give the quiz more jeopardy and deepen engagement. When it works well, using your own voice provides a frictionless experience. By reducing the effort needed to type answers and correct spellings, gameplay can happen at a natural pace. However, if the speech recognition doesn't understand your accent even though you are saying the right answer, a voice quiz can be more frustrating than a text-based one. Using your voice to complete a quiz may be a novel experience for many of our users, and given that this is the first voice web quiz for the 主播大秀, we had to overcome some design challenges.

One of the biggest UX challenges when designing Beat The Bot was indicating to the user when it was their turn to speak. If a user isn't sure when the microphone is listening and speaks too soon or too late, it can spoil their chances of winning. So we used visual clues in the interface to alert the user when it's their turn to speak, and a circular countdown timer that slowly ebbs away, telling the user how much time they have until the microphone will close.

Paul Jackson, UX Designer
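
To make the countdown idea concrete, here is a minimal sketch of how a fixed listening window might drive a circular progress ring in the browser. The five-second window, the SVG element id and its radius are assumptions for illustration.

```typescript
// Sketch of the "ebbing away" countdown described above: a fixed listening
// window, with a callback each frame so the UI can shrink a progress ring.

function runListeningWindow(
  durationMs: number,
  onTick: (fractionRemaining: number) => void,
  onTimeout: () => void,
): () => void {
  const start = performance.now();
  let cancelled = false;

  function frame(now: number) {
    if (cancelled) return;
    const remaining = Math.max(0, 1 - (now - start) / durationMs);
    onTick(remaining);
    if (remaining === 0) {
      onTimeout();            // time's up: close the microphone
    } else {
      requestAnimationFrame(frame);
    }
  }
  requestAnimationFrame(frame);

  return () => { cancelled = true; };  // call this if an answer arrives early
}

// Example: drive an SVG circle's stroke so it visibly ebbs away over 5 seconds.
const ring = document.querySelector<SVGCircleElement>('#countdown-ring');
const circumference = 2 * Math.PI * 45; // assumes a ring of radius 45
const cancel = runListeningWindow(
  5000,
  (remaining) => {
    ring?.setAttribute('stroke-dashoffset', String(circumference * (1 - remaining)));
  },
  () => console.log('Microphone closed - over to the bot.'),
);
// cancel() would be called as soon as the player's answer is recognised.
```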

What did people think?

After two weeks of being live on 主播大秀 Taster, we had a completion rate of 67%, and 65% of people played it more than once. The mixed success of speech recognition had an impact on playability for some people; we're aware this is an area that needs improving before voice quizzes can be rolled out on a larger scale. But the number of users retrying the game demonstrates that the format works. Once we can improve the speech recognition for a broader range of British accents, the format could be repeated for a wide range of quizzes across the 主播大秀.

The future of in car listening: opportunities and choices
Wed, 26 Feb 2020 10:32:36 +0000
/blogs/internet/entries/d89a088e-6b12-48e1-ba5b-1cc4cbdd20bd
Eleanor van Heyningen and Asha Knight

Eleanor van Heyningen, Chief of Staff to the Chief Technology and Product Officer, and Asha Knight, Distribution Manager in Digital Partnerships, explain how in-car radio listening is evolving and how the 主播大秀 is approaching it.

Earlier this month at the Digital Radio Summit in Geneva, we spoke about the future of listening in cars. It is one of the hottest topics in the radio world at the moment, so it seems like a good moment to set out how the 主播大秀 is thinking about it. In summary:

  1. Radio listening in-car is really important for the 主播大秀 and its audiences;
  2. We are completely committed to maintaining it for the next generation; and
  3. It’s absolutely critical to work in partnership to achieve this aim.

Radio is at the heart of the 主播大秀’s offer. We have 10 network radio channels, 6 Nations radio channels and 40 local channels, listened to by a total of 33.5m people in the UK every week. In 2018, we launched 主播大秀 Sounds – the digital home for all audio from the 主播大秀. Sounds now has over 3m weekly users, and making sure Sounds is widely available and easily accessible at home, on the go and in cars is a top priority.

A lot to play for...

A large proportion of the time audiences spend with the 主播大秀 is in the car. Roughly a third of all radio listening takes place in the car, which represents around 13% of all time spent with the 主播大秀 by our UK audiences.

Encouragingly, since 2012 there has been a 17% growth in UK in-car radio listening, with other types of audio like streaming music and podcasts also seeing similar growth. For about half the time spent in cars, we’re not listening to anything. Of course some of this will always remain ‘silent’ because of very short journeys or the difficulty of reaching agreement between parents and children about what to listen to! But it shows that this is not a saturated space – there is a lot to play for.

We can’t take these conditions for granted. From the significant gaps between the time younger audiences spend with live radio compared to others, to the connected car and big tech’s role in it – the market is changing. Although the enduring popularity of radio in car gives us reason to believe that there is still a lot we can do to retain and grow our audiences, the changing market means that broadcasters are not the only ones in the game.

Eleanor and Asha presenting at the Digital Radio summit (courtesy EBU)

Strength in cooperation

We’re constantly talking to audiences and learning from other broadcasters and car companies about how in-car listening habits are developing. But, however much we know about audiences, it’s clear that no broadcaster alone can do everything to meet modern audiences’ needs when it comes to in-car listening.

Without some protection, broadcasters will lose the essential benefits that provided the foundation for the pre-digital market: prominence for radio in the infotainment space; editorial control over what and how content is delivered; direct attribution back to the content makers’ brand and – for commercial broadcasters – the ability to receive revenue directly from advertising.

In addition, in the emerging, fractured market broadcasters are at risk of losing probably the most important key to success in the digital world: the ability to gather and use audience data.

The best way to counter these risks is to work together as an industry in delivering high quality hybrid radio experiences into cars, direct to a fully connected dashboard.

Hybrid radio moves seamlessly from broadcast to IP, allowing the listener to enjoy the best available signal quality and stay tuned in whether they are receiving DAB, FM or IP. It has the potential to allow listeners to enjoy on-demand content as easily as live by providing easy links into apps like 主播大秀 Sounds. As we transition gradually to an all-IP world, we need to take audiences with us.
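
As a rough sketch of that bearer-switching idea (not a description of any real in-car radio stack; the thresholds and data shapes here are assumptions), the selection logic might look something like this:

```typescript
// Simplified illustration of hybrid radio: pick the best available bearer
// (DAB, FM or IP) for a station and switch as reception changes, so the
// listener stays tuned in.

type BearerKind = 'dab' | 'fm' | 'ip';

interface Bearer {
  kind: BearerKind;
  available: boolean;
  signalQuality: number; // 0..1 reception quality (treat as link quality for IP)
}

function chooseBearer(bearers: Bearer[]): Bearer | undefined {
  // Ignore bearers that are unavailable or too weak to be listenable.
  const usable = bearers.filter((b) => b.available && b.signalQuality > 0.3);
  // Prefer broadcast while reception is good, otherwise fall back to the
  // IP stream so playback never stops.
  const priority: BearerKind[] = ['dab', 'fm', 'ip'];
  usable.sort(
    (a, b) =>
      priority.indexOf(a.kind) - priority.indexOf(b.kind) ||
      b.signalQuality - a.signalQuality,
  );
  return usable[0];
}

// Example: DAB fading in a tunnel, IP connection still healthy.
const current = chooseBearer([
  { kind: 'dab', available: true, signalQuality: 0.2 },
  { kind: 'fm', available: false, signalQuality: 0 },
  { kind: 'ip', available: true, signalQuality: 0.9 },
]);
console.log(current?.kind); // "ip" - the listener stays tuned in
```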

Radioplayer

The 主播大秀 is a shareholder in a joint venture, Radioplayer, that aims to do just that. Radioplayer has been around since 2010. There are four UK shareholders, including the 主播大秀, and since 2014 it has licensed its technology to consortia of broadcasters in other territories.

Investing in Radioplayer is key to our aim to preserve radio in cars and build on-demand listening in addition to linear, while maintaining a direct relationship with our audience. Every country and every broadcaster is different, but we believe that we are all united by three core needs that Radioplayer meets:

  • Providing a flexible, easy to scale, easy to customise metadata delivery system that has the potential to deliver the best API for radio, making it a unique one-stop-shop for car companies who want to offer their customers a great radio experience
  • Complementing, not competing with broadcaster apps with a simple listening and discovery service that will develop to support our long-term strategies
  • Offering the chance to work together to secure the future of radio in car, in turn bringing a better chance of preserving the essential benefits mentioned above: prominence, attribution, data.

We are actively encouraging our fellow broadcasters to talk to Radioplayer about how they can get involved.

Working together for listeners

We want to cater for people whether they are long-time radio devotees, first-time digitally-native car-owners or even very young passengers in the back seat. Whatever avenues we explore, we want to do it in co-operation with our fellow European broadcasters. We’re confident that linear radio remains a strong force, but we also know that we need great digital products that have amazing content and are intuitive to use.

Voice assistants, for example, have huge potential in the car, and the 主播大秀 recently announced plans to launch a digital voice assistant this year. We’re incredibly excited about the potential of this and other technologies, but we understand that consumers will only get the benefit of them if we work in close partnership with companies all along the supply chain.

We can provide more choice, free from commercial and political influences, in a way that respects listeners’ privacy and protects their data. These are characteristics that we want to preserve in-car, but we’ll struggle to do so if we are blocked from managing our own audience data, prevented from playing back our content within our own products or forced to use only voice assistants that don’t give prominence to our content.

We are encouraged by signs that tech companies are thinking about inter-operability. We also want to open up more conversations with car manufacturers to understand their needs and ensure that, however they develop, accessible radio in a connected dashboard will be central to their offer.

Of course, our resources aren’t limitless – far from it. There are more pressures on the licence fee than ever before and we face tough choices about where we can invest and grow. We intend to work pragmatically with our broadcasting colleagues, car and technology partners, striving for standardisation and seeking a level-playing field for cooperation.

There should be no doubt that ensuring a thriving, innovative future for radio is a high priority for the 主播大秀, and making sure people can listen in cars in both traditional and new ways is a big part of that. The only way we can achieve that long-term success is to work together as a united radio industry, in close cooperation with both tech and car companies, guided – always – by the best interests of audiences.

主播大秀 Voice + AI: An insider's perspective
Thu, 24 Oct 2019 10:42:53 +0000
/blogs/internet/entries/9e4ce480-2eca-4fa5-be25-1af0e12befc6
Tallulah Berry

I'm not the same journalist I was a year ago 

If you had told me then that I would soon find myself doing a job that involved talking to a small cylinder all day, I probably wouldn’t have believed you. But here I am, surrounded by smart speakers. I’m getting on a first name basis with voice assistants like Alexa and Siri, learning all about voice technology and artificial intelligence.

Why? I’ve been working on a project to help better deliver 主播大秀 News via voice assistants. This is part of a wider Voice + AI project led by executive editor Mukul Devichand, to help the 主播大秀 operate in the best possible way as millions of people embrace this technology.

Every day I ask questions like “If our audience could have a conversation with the 主播大秀 about what’s going on in the news, what would that be like?”. It’s a mixture of blue-sky thinking, design sprints, workshops, audience testing, prototyping and swathes of post-it notes … all with a big dollop of fun on top.

I thought I’d share some observations from my journey so far.

Alexa, give me 主播大秀 News

The main project I have been involved in is the launch of a more interactive version of 主播大秀 News.

For now it’s available via Amazon Alexa: to hear it just say “Alexa, give me 主播大秀 News.” The listener can skip stories by saying “next” if they’ve heard enough; they can also go back, pause, or ask for more information.

Fun behind-the-scenes fact for you: this product was originally given the (now legendary) nickname “Skippy” by the team at 主播大秀 News Labs because of the ability to skip through stories. Shout out to their brilliant software engineer, Lei He, who first started piloting Skippy back in 2017 and stayed with us in Voice + AI until recently to see it through.

In terms of structure, the service has two layers: main stories on top, with deeper dives attached. And this is where it gets really interesting: when you ask for more information, you get a richer piece of 主播大秀 content – think expert analysis or an exclusive interview. This is appealing because it would be pretty much impossible to listen to all of the audio the 主播大秀 makes each day, so someone has done the work for you. Our research has shown that people don’t always want to or can’t interact because they’re busy doing something else, so we’ve designed the service to also suit someone who just wants a passive listen. Say nothing and you will just get the main stories.
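
For readers curious how a two-layer, skippable briefing might hang together in code, here is a hypothetical sketch using the Alexa Skills Kit SDK for Node.js (ask-sdk-core). The intent names, story data and session shape are invented for illustration; this is not the 主播大秀’s actual skill code.

```typescript
// Hypothetical two-layer briefing: main stories on top, deeper dives on request.
import * as Alexa from 'ask-sdk-core';

const stories = [
  { headline: 'Top story summary...', detail: 'Expert analysis for story one...' },
  { headline: 'Second story summary...', detail: 'Exclusive interview for story two...' },
];

// "Next" moves to the following main story (the surface layer).
const NextStoryHandler: Alexa.RequestHandler = {
  canHandle: (input) =>
    Alexa.getRequestType(input.requestEnvelope) === 'IntentRequest' &&
    Alexa.getIntentName(input.requestEnvelope) === 'AMAZON.NextIntent',
  handle: (input) => {
    const session = input.attributesManager.getSessionAttributes();
    const index = Math.min((session.index ?? 0) + 1, stories.length - 1);
    input.attributesManager.setSessionAttributes({ ...session, index });
    return input.responseBuilder
      .speak(stories[index].headline)
      .reprompt('Say "more" for the full story, or "next" to move on.')
      .getResponse();
  },
};

// "More" plays the richer piece attached to the current story (the deeper layer).
const MoreDetailHandler: Alexa.RequestHandler = {
  canHandle: (input) =>
    Alexa.getRequestType(input.requestEnvelope) === 'IntentRequest' &&
    Alexa.getIntentName(input.requestEnvelope) === 'MoreDetailIntent', // custom intent, invented here
  handle: (input) => {
    const { index = 0 } = input.attributesManager.getSessionAttributes();
    return input.responseBuilder.speak(stories[index].detail).getResponse();
  },
};

export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(NextStoryHandler, MoreDetailHandler)
  .lambda();
```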

News on a new platform

A team of journalists has been put together from within the main 主播大秀 newsroom to make the linear and interactive 主播大秀 News briefings. We know that people are using their smart speakers in their homes and so we have tried to adapt our tone and style to match this personal setting. We made a conscious decision to move away from broadcasting at people and try to use everyday natural language.

A recent workshop exploring what conversations between the audience and 主播大秀 News might sound like

During an early round of user testing one person said that being able to control the experience made it feel like a personalised news bulletin with minimal effort. We were very happy with that, but we also know that this voice space is still in its infancy and we’ll continue to listen to audience feedback and innovate as we go.

Some key learnings so far:

  • The new tone has fans. One person said that it is “more in line with the casual relationship you might have with a voice assistant at home”. Thumbs up. Of course, there’s a fine line between friendly and over-familiar.
  • We need to pay attention to how we write for an interactive audio service. We had some users who would see the headlines right at the start as a menu and would then ask to jump to a story. After trying out quite a few things, in the end we settled on three brief headlines, just a line or two each. We now use what I call the ‘X, Y, Z format’ - e.g. “Today we’ve got X and Y, but first Z.”
  • Audiences change the way they interact with the service at different points in the day. In the morning they want a shorter, snappier experience, whereas in the evening they are likely to have more time to dig deeper into stories.

Settling In 

I’ve really enjoyed the tales from our journalists about settling in to the new medium. In order not to be bad new neighbours, they’ve been coming up with stealthy ways of listening back to their Voice content quietly. There’s the romantic technique, which involves hunching over the speaker murmuring sweet nothings to it, and what I like to call the casino roller, where you hold the device up to your face and whisper at it as if blowing on dice for luck. They’ve also been dubbed by some in the Newsroom as “the team with the voice machine”.

Discussing how to write for Voice platforms

Learning the Lingo

As a reporter I covered a lot of technology stories, but I wasn’t prepared for all the acronyms and unfamiliar terms our Design + Engineering teams had up their sleeves. It was like arriving on a new planet and not speaking the language. I still get thrown sometimes. The other day I got an email invite to a spike workshop and accepted without being entirely sure what I was agreeing to. Nothing sinister, as it turns out: a spike is a time-limited investigation into whether some software would work for us. And after being mocked by our Voice journalists the other day for using the word ‘deck’ instead of ‘presentation’, I reckon I’m now almost fluent.

Testing with students at University College London

Perhaps the most cutting edge sign at the 主播大秀? Made with a DIY sticker

You're a journalist, right? What do you do?

A developer asked me this a few months ago about my role within 主播大秀 Voice + AI, and it stayed with me because it’s actually something I’ve been asking myself in a broader sense: what does it mean to be a journalist in a world where AI assistants are a point of access for our work? Are we all going to need to learn how to code in order to stay in work? Considering how both the internet and social media have rocked the way we communicate and consume media, it’s mind-boggling to consider the impact something like conversational AI might have on our lives in 10, 20 or even 50 years’ time.

I don’t have the answer to the question above. No one does. But I’ve decided not to worry about it. And in my humble opinion, neither should any journalists out there.

Being resilient is in our DNA. Whatever the tools or the platform - notebook, microphone, thumbs for breaking news on social media - we are all storytellers.

 

This post originally appeared on the 主播大秀 News Labs website, where you can find out more about what 主播大秀 News Labs does.

主播大秀 Sounds on Alexa: new ways to find and navigate content
Thu, 25 Jul 2019 09:43:44 +0000
/blogs/internet/entries/d58dcb23-5298-4907-a914-d84c981ed789
Kate Goddard

主播大秀 Voice + AI is a relatively new team whose job is to ensure that audiences can access 主播大秀 content and services using devices like smart speakers and phones that have a voice assistant on them. We launched the 主播大秀 Alexa skill back in December 2017, when it first became clear that audiences wanted to be able to access 主播大秀 audio content on their smart speakers. Since then, adoption of smart speaker technology has ramped up - 24% of audiences have told us through a recent online survey that they have access to a smart speaker at home. And we’ve seen a great appetite for voice content – we have now delivered in excess of 265 million live and on-demand streams to our audiences.

Initially, the 主播大秀 skill offered live radio streams and podcasts to listeners, but we soon added the full range of on-demand radio programmes that are available on 主播大秀 Sounds. We are continually inspired by feedback from audience members, and have been busy improving features and functionality on the skill since launch.

Continue listening across Alexa and 主播大秀 Sounds app and website

Today we are adding a major new feature to the skill. Listeners will be able to pause and resume podcasts and on-demand programmes seamlessly between the 主播大秀 skill and the 主播大秀 Sounds app and website. If you get halfway through listening to a podcast on your phone using the 主播大秀 Sounds app during your commute home, you can resume it on Alexa once you get back into your house. All you need to do is ask for the podcast or on-demand programme you were listening to. For example, say “Alexa, ask the 主播大秀 for The Infinite Monkey Cage”. If you’ve been listening on Alexa, and want to pick up on your phone, all you need to do is scroll to the “Continue Listening” section of the 主播大秀 Sounds app or website, and click on the programme title.
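
Behind the scenes, resuming comes down to knowing how far through an episode you got on another device and starting playback from that offset. The sketch below is a hypothetical illustration using the Alexa Skills Kit SDK for Node.js; fetchResumePoint, the intent name and the episode details are placeholders, as the real 主播大秀 Sounds services and identifiers are not public.

```typescript
// Hypothetical cross-device resume: look up the linked listener's last
// position, then start Alexa playback from that offset.
import * as Alexa from 'ask-sdk-core';

interface ResumePoint {
  episodeId: string;
  streamUrl: string;
  offsetMs: number; // how far through the listener got in the Sounds app
}

// Placeholder for a call to a (hypothetical) listening-history service,
// keyed by the account linked to this Alexa user.
async function fetchResumePoint(linkedAccountToken: string): Promise<ResumePoint | null> {
  return {
    episodeId: 'infinite-monkey-cage-latest',
    streamUrl: 'https://example.org/audio/infinite-monkey-cage-latest.mp3',
    offsetMs: 14 * 60 * 1000, // 14 minutes in
  };
}

const ResumeEpisodeHandler: Alexa.RequestHandler = {
  canHandle: (input) =>
    Alexa.getRequestType(input.requestEnvelope) === 'IntentRequest' &&
    Alexa.getIntentName(input.requestEnvelope) === 'PlayProgrammeIntent', // custom intent, invented here
  async handle(input) {
    const accessToken = input.requestEnvelope.context.System.user.accessToken;
    const resume = accessToken ? await fetchResumePoint(accessToken) : null;
    if (!resume) {
      return input.responseBuilder
        .speak('Please link your account in the Alexa app to pick up where you left off.')
        .getResponse();
    }
    // Start playback from the saved offset rather than the beginning.
    return input.responseBuilder
      .addAudioPlayerPlayDirective('REPLACE_ALL', resume.streamUrl, resume.episodeId, resume.offsetMs)
      .getResponse();
  },
};
```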

This feature will be available to anyone who has linked their 主播大秀 account to their Alexa account - you can do this by clicking on the Settings button for the 主播大秀 skill in the Alexa app.

Continue Listening is just the latest in a number of changes we’ve made to the skill since it first launched.

Accessing all your favourite content

You can easily find 主播大秀 Sounds radio stations, podcasts and on-demand programmes simply by saying the name of the station or show. For example, say “Alexa, ask the 主播大秀 to play six music” or “Alexa, ask the 主播大秀 for Woman’s Hour”. You can also ask for any of 主播大秀 Sounds Music Mixes like “Handpicked by 6 Music” or “The Takeover Mix”.

Finding out track and artist information

It is now possible to ask Alexa for information about the track that’s currently playing on our live radio stations, just by saying “Alexa, ask the 主播大秀 what’s playing?”. 主播大秀 radio is a popular way for people to discover new music and this will make it much quicker and easier to find out what it is you are listening to.

Using our audio player controls

In response to many audience requests, we have worked hard to allow easy navigation between and within podcast and on-demand programmes. To navigate between episodes in a series, just say “Alexa, next” or “Alexa, previous”. If you miss something when listening to an on-demand episode and you want to go back, just say “Alexa, ask the 主播大秀 to rewind 30 seconds”. Or if you want to skip forward to find the section you are interested in, just say “Alexa, ask the 主播大秀 to fast forward 5 minutes”.
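
Those seek commands boil down to simple offset arithmetic against the current playback position, clamped to the start and end of the episode. A small illustrative sketch (the durations and jump sizes are made up):

```typescript
// Sketch of the arithmetic behind "rewind 30 seconds" or "fast forward 5 minutes".

function computeSeekOffset(
  currentOffsetMs: number,
  durationMs: number,
  direction: 'rewind' | 'fastForward',
  amountMs: number,
): number {
  const delta = direction === 'rewind' ? -amountMs : amountMs;
  // Never seek before the start or past the end of the episode.
  return Math.min(Math.max(currentOffsetMs + delta, 0), durationMs);
}

// "Rewind 30 seconds" when only 20 seconds into a 45-minute programme:
console.log(computeSeekOffset(20_000, 45 * 60_000, 'rewind', 30_000));                // 0
// "Fast forward 5 minutes" from 10 minutes in:
console.log(computeSeekOffset(10 * 60_000, 45 * 60_000, 'fastForward', 5 * 60_000)); // 900000
```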

We hope that these new features will make it easier for you to quickly and easily play and control your favourite 主播大秀 Sounds content using your smart speaker. We’ll continue to listen to your feedback - it’s vital for helping us prioritise new features to build and release.

Voice + AI: View from the Netherlands
Fri, 14 Jun 2019 11:18:01 +0000
/blogs/internet/entries/ad28bab6-3a86-4f7b-bdb0-02ae3d96f299
Dan Whaley

Last week, a few of us from the 主播大秀 Voice + AI team joined up with voice innovation teams from across Holland’s media organisations, public and private.

Organised and hosted by our Dutch counterparts, the day started with presentations from four broadcasters, each sharing their journey so far with voice.

In the afternoon we ran a creative session looking into the future of voice interaction beyond the current generation of smart speakers.
It was great to share experiences and identify shared challenges.

Here are 10 things we learned:

  1. has been experimenting with voice news despite not having a history of radio or audio production. They’ve had great success with bringing known news anchors on to flash briefings on Alexa and Google Assistant.
  2. are planning a new play-together game for smart speakers based around a detective theme. Not giving any more away, but we’re super excited to check in and see how this performs.
  3. is apparently as popular as Amazon’s Alexa-enabled devices in the Netherlands; but Google is still king. That’s almost the opposite of the UK market.
  4. did some early experiments with WhatsApp as a channel for delivering audio news. It caught on; users value it as a service and it’s a great channel for immediate feedback.
  5. , the radio and podcast streaming service from Talpa, has a ; they’ve got some cool easter eggs like “annoy the neighbours”, which tunes you into a hardstyle station!
  6. The Dutch drink a milk drink known as at lunch times; it’s surprisingly good mixed with orange juice.
  7. There was a significant shared interest in trying to use voice to tackle and across our group; two independent teams came up with ideas to address this in our future focused creative sprint.
  8. Lots of thought around voice interactions for mobile, especially in car. Spotify’s has triggered much discussion.
  9. Commercialisation remains pretty elusive in voice; thoughts centre around playing ads within audio streams but it doesn’t feel innovative!
  10. Discovery and engagement are also a major challenge; we think social chat integrated with task-based dialogues and expressive may be key to changing that.
The spoken web has huge potential in India. But can we get this next chapter of the internet right?
Mon, 20 May 2019 13:39:19 +0000
/blogs/internet/entries/1351f952-2d83-4631-87a6-2664ec60045d
Mukul Devichand

Mukul Devichand, editor of the 主播大秀's voice services, explains how this new medium could have a large role to play in the 2019 Indian election. This piece was originally published on the Indian news website Scroll.in

Here is a romantic idea: imagine an internet where there is no text, no pictures, nothing to click on, only sound. Where the barriers of language disappear and ordinary people – including those without English literacy, speaking Hindi or any Indian dialect into their mobile phones – can simply use their voices to unlock trustworthy information. This is the idea of the “spoken web,” and it has long had a particular appeal in India.

The term “spoken web” seems to have been coined at the Massachusetts Institute of Technology in the early 1990s, but it was here in India that it found the most admirers. Ten years ago, it was India-based researchers at IBM who first developed a working concept of the spoken web.

“People will talk to the web and the web will respond,” imagined Dr Manish Gupta of IBM in India, in 2009. His team created HSTP or Hyperspeech Transfer Protocol, similar to the HTTP of web pages but with the idealistic aim that ordinary Indians could use simple voice commands like “next” and “back” on their mobile phones to access trustworthy information and services.

Despite a noisy scene of satellite TV news, audio as a platform for information has a mixed history in India, with limits on radio news.

Potential of the spoken web

Back in 2009, I was a 主播大秀 radio journalist posted to India from the UK, based in the same bureau where my colleagues were then, as now, broadcasting audio to rural audiences, popular over shortwave.

Even back then, I remember seeing immense potential for audio journalism in the idea of a “spoken web”. In a country where the “next 500 million” internet users are speakers of vernacular languages, a spoken internet would have enormous power for the multilingual masses. Back then, I imagined the adoption of mobile phones, across India, would immediately bring with it a spoken internet in many tongues.

 

But it did not really happen.

Instead, mobile internet connectivity brought with it a range of other developments. The 2014 Lok Sabha elections were seen as the first “social media” poll. I covered that aspect too, looking at the positive aspects but also some of the darker edges, the trolling and cyber-bullying and manipulated hashtags that have become a feature of political life across the spectrum.

In the years since, messaging platforms like WhatsApp have taken off. Five years later, the Election Commission of India is so worried about inaccurate information spreading online that it is working directly with social networks; and sites such as Facebook are removing political news for fear that their algorithms are being manipulated.

But in the cacophony of the 2019 Lok Sabha poll all around us, the “spoken web” may yet have its moment.

The key will be to make sure it develops with values we can all stand by.

Now’s the time

Why now? The change, still embryonic but potentially a huge disruption to the way the internet works, has been the emergence of voice assistants, such as Siri, Alexa and Google Assistant, to access internet services. They are run by big tech firms and built into phones and smart speakers, with one in five UK homes, for example, already owning smart speakers.

Assistants are built on a unique technology known as Natural Language Processing, a capability based on machine learning (and therefore referred to by many as a form of Artificial Intelligence), which allows computers to understand not only what people say – in potentially any language – but also what they mean.

A new medium has been created for two-way audio. And now, just like the early days of social media, online video, or many other changes, content makers have a new way to reach the public. For this Lok Sabha election, you’ll hear briefings from various media companies being supplied to smart assistants.

I’d be remiss if I didn’t mention that we at the 主播大秀 have one too. I’m proud to say the first interactive audio briefing we have done anywhere in the world is in Hindi and for this poll, available to users of the Google Assistant who simply say “Talk to 主播大秀 elections”.

For us, this is just an early trial of what we think a “spoken web” service might look and feel like; the technology is still new and we hope to learn along with the audience.

But like all other parts of the internet, even at this early stage, there is much at stake. The original dreamers behind a “spoken web” saw it as a valued space which ordinary people could access and which improved their lives. Issues like trust, knowing who you as a user are talking to and the provenance of information, will be key. Given the transformative impact of the old text-based internet and of the social web on politics and society, we need to get this next chapter right.

The 主播大秀's first year in voice
Wed, 30 Jan 2019 11:56:07 +0000
/blogs/internet/entries/46df9fa0-d9f3-4c71-9672-d672de247190
Mukul Devichand

“You will be watching the birth of a new art.” With these words, the 主播大秀’s Deputy Director General explained to the public that it was entering a nascent, uncertain, even frightening technological frontier.

The corporation had decided to launch services for an exciting new platform that threatened to disrupt the medium of radio forever. They’d done this, despite the fact that only 20,000 households had the relevant devices to access these services.

The year was 1936 and the 主播大秀 entered this new platform – known as television – far ahead of the global curve, transmitting mainly variety shows (imagine a sort of vintage One Show) twice a day from a London hilltop.

“We do not pretend to have passed the experimental stage,” Vice-Admiral Sir Charles Carpendale explained to the British public in the Radio Times. “Our engineers are still learning, and so are the men and women responsible for the creative work of planning and performing programmes.”

Eighty-two years on, and we’re learning again. Over the past year, I’ve been the 主播大秀 editor leading a similar “new art” alongside my master engineer colleague Andy Webb.

We jointly lead new teams that are tackling the 主播大秀’s latest technological frontier: Voice assistants, and the related so-called Artificial Intelligence programming that powers them.

Simply put, “主播大秀 Voice + AI” refers to our offers for smart speakers and in future other voice-controlled media devices such as TVs, phones, in-car assistants and even microwaves. These devices are finding their way into a growing number of homes in the UK and worldwide.

Our focus is on what happens when people talk to devices using ordinary human language, instead of using buttons or dials, to get the 主播大秀 services they want.

Just like the emergence of television, this is an uncertain new technology that might well signal a major shift in the media landscape. And, if the era of social media, fake news and filter bubbles has taught us anything, it’s that it’s important to master new technologies so that the principles of public service broadcasting endure.

We’ve been at it a year, more or less - we started getting teams together in late 2017. Although I should say we built on great work by 主播大秀 teams such as News Labs, and on the work 主播大秀 R&D did on the interactive sci-fi drama, The Inspection Chamber.

It’s important to say that, just like our 1936 forebears, it’s early days and we don’t pretend to have passed the experimental stage.

But a lot has happened in a year.

18 million news summaries and 265 million streams

A lot of the past twelve months has been spent growing what must surely be the most exciting team in the new medium – because we get to combine voice technology with the 主播大秀’s creative mission and public values.

We’ve set up centres in London, Manchester and Glasgow, where software developers work alongside creative producers and designers. If the work sounds interesting to you, watch this space for future hires.

We’re just getting going – but the good news is that we’ve already managed to provide value for many more homes than the 20,000 reached by the theatrical turns and variety acts of early 主播大秀 television.

At the core of the Voice + AI offer we have created so far is the idea that the 主播大秀’s existing content should be accessible to those licence fee payers who choose to use their voice to get it. Just say our name and we are there.

The key to this was launching a series of 主播大秀 apps for smart speaker platforms. Our offer is currently most developed on Amazon Alexa, with plans to extend our services on other platforms such as Google Assistant, and far beyond.

We first launched the 主播大秀 “skill” (as Amazon calls them) in December 2017, focussing on live radio and podcasts. We’ve been improving the experience all year – integrating some of the great new 主播大秀 Sounds audio content, for example, and (as of a month ago) adding in the vast library of on-demand 主播大秀 radio programmes.

Audiences started strong and kept growing. We have served 265 million audio streams on Alexa-enabled devices over the past 12 months.

Our early focus has also been on one of the 主播大秀’s core missions: impartial, accurate news.

主播大秀 News bulletins are now available on all the major voice assistants - Amazon Alexa, Google Assistant and Apple Siri – in audio and video briefings. Just ask them for 主播大秀 News. Over the year, we estimate having served close to 20 million briefings.

But there’s potential to do much more, and we’re looking at how both news content and technology can be more native to Voice + AI. Watch out in 2019 for upgrades to the 主播大秀 News experience on voice platforms.

A new storytelling medium

The 主播大秀’s core mission has been the same for nearly 100 years, but the exciting thing about working here is always finding new and creative ways to deliver it.

We’ve tried to take on the philosophy of people like Leslie Mitchell – who even back in 1936 realised that television would require a new way of doing things from radio. As presenter of the 主播大秀’s first TV show, for example, he realised it would be unrealistic to remain anonymous as radio presenters then did.

In place of interviews with musical interludes, we in Voice instead used 2018 to create chances for children to talk to Duggee, the Go Jetters and Waffle The Wonder Dog. These and other beloved Cbeebies characters are part of the 主播大秀 Kids offer on smart speakers – just ask for 主播大秀 Kids.

Starting with the youngest audiences, we’re testing out different patterns of voice-activated play – like quizzes, games or musical experiences. We want to see what kinds of conversations people actually want to have with the 主播大秀.

But we’re also giving kids – and parents – the chance to hear a Bedtime Story. The 主播大秀 Kids skill has wonderful stories from the Cbeebies show, read by talents such as Tom Hardy and Dolly Parton.

Our kids offer is still very experimental but growing fast. Other things we’ve tried out this year involved the summer of sport.

People could say “主播大秀, take me to the world cup” – or “the tennis” – or “the tour” – and be transported to regular updates from Russia, Wimbledon’s Centre Court or the Tour de France.

During the Edinburgh Festival, we offered people the chance to ask for a “Late Night Laugh” and to enter the land of sleep with up-and-coming comic talent.

These relatively small experiments will inform further learning exercises over the coming year, as we slowly begin to understand the platform better.

A technological frontier

Behind the experiments, deeper questions remain, which we’ll strive to tackle in 2019.

As the UK’s – and world’s – largest public service broadcaster, what does it mean to have a conversation with the 主播大秀?

How can we be sure to harness Voice + AI technology to provide the cohesive, impartial and editorially rich services that people will rightly expect from us?

How can we address public concerns over privacy and personal choice in these new environments?

I hope 2019 will see us growing and deepening our relationship with audiences using Voice and AI platforms. As Carpendale wrote in 1936: “We in the 主播大秀 are keen to push forward as soon as is practicable, and in so doing justify the confidence placed in us.”

Introducing the 主播大秀's first voice experiences for children
Mon, 03 Sep 2018 09:57:59 +0000
/blogs/internet/entries/0ad470e5-a626-4a54-8bc3-fe45a4d1b3d9
James Purnell

Earlier this year, we wrote about how we’ve been exploring new areas for interactive voice content, involving some of our most familiar brands and characters for children. It’s part of our plans to invest in content that informs, educates and entertains children in response to the changing ways they consume media. Recent research confirmed that smart speakers are fast becoming part of this mix, with 11% of UK households owning at least one device and almost a third of those owning at least two devices.

Today we’re launching the new 主播大秀 Kids Skill for Alexa devices, which features our first set of children’s experiences for smart speakers. The Skill gives our youngest audiences new and exciting ways of interacting with some of our best-loved characters like Justin Fletcher, Andy and the Go Jetters.

The challenge has been to re-imagine these very visual properties for voice. But equally, because they have been so well conceived and the audio world of these properties is instantly recognisable, they lend themselves well to the medium, and we’re really pleased with the results.

The three games all offer a range of ways to play along that are unique to voice platforms. They involve elements of play that we know children love, like dancing, music and quizzes, but also help children to listen and identify sounds, to pay close attention, learn facts… and of course, have some fun.

For example, the Go Jetters game lets children train at the Go Jet Academy by taking Ubercorn’s Funky Facts Quiz. This teaches children something new, helps them focus and gives them a chance to play for the ultimate prize – the chance to become an honorary Go Jetter. We’re massive Go Jetters fans in my household, so the challenge is going to be making sure the adults let the children have a go…

The 主播大秀 Kids Skill for Alexa devices is the latest in our experiments with Voice formats, led by Executive Editor Mukul Devichand and Head of Products Andrew Webb. The team they’re assembling works across disciplines, bringing producers, designers and software engineers together.

If you’re interested in other examples of our work you can read about the approach we’re taking and the areas we’re exploring, and the launch of our first 主播大秀 voice service here.

The 主播大秀's new voice service explained
Wed, 14 Mar 2018 15:39:50 +0000
/blogs/internet/entries/e402b324-e0d2-48c9-9b36-d944d0fa2a39
Jonathan Murphy

They're one of the new must-have digital products - smart speakers, voice technology or intelligent personal assistants - whatever you want to call them, the likes of Alexa and Siri are here to stay. The 主播大秀 is embracing them, and I caught up with the man in charge, Mukul Devichand, to find out what it's all about.

Mukul Devichand, Executive Editor, Voice

So what about the future - what do we have to look forward to in the coming months?

Mukul Devichand, Executive Editor for the 主播大秀's voice services.

An experiment with voiceprints
Wed, 02 Aug 2017 06:30:00 +0000
/blogs/internet/entries/ea9e1c3b-d588-4ff8-bfd0-3685bdcba456
Cyrus Saihan

The 主播大秀 has worked with Microsoft to build an experimental version of 主播大秀 iPlayer that uses artificial intelligence to allow individuals to sign in to 主播大秀 services using their unique voiceprint and to talk to their TV to select what they want to watch.

Whether it is David Hasselhoff talking to his car in Knight Rider or Iron Man talking to his virtual assistant JARVIS, the idea of being able to talk to computers and for them to be able to understand who we are as individuals has been a science fiction fascination for decades.

We may not be at JARVIS levels of artificial intelligence just yet, but artificial intelligence and voice interaction are fast-developing technologies that are already available to consumers. These technologies could also have interesting use-cases in TV, so for our experiment we wanted to explore how your TV – just by hearing the unique sound of your voice – could give people a more intuitive and more personal service in the future.

Our voiceprint experiment

The ability of humans to communicate with each other by talking is one of our species’ most unique traits. As the technology around us continues to evolve, it is interesting to consider how we might soon be talking naturally with the range of digital devices that have become such an important part of everyday life for many.

With voice controlled interfaces such as Amazon’s Alexa, Apple’s Siri, Google’s Assistant and Microsoft’s Cortana starting to gain popularity, there is a good chance that in some situations, speaking to a computer will be the main way that we interact with many of our digital devices. 

Talking to your TV

Just like the fingerprints of your hand, you have a voice that is totally unique to you. In our experiment, by recognising the individual characteristics of your voice (tone, modulation, pitch etc), processing that information and then matching it to a sample of your voice stored in the cloud, artificial intelligence software checks that you are who you say you are and then signs you in, without you having to type anything.

Once the computer has been trained, the next time that you want to sign in, instead of having to type in your user name and password, you just have to say your name and a phrase.
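
To illustrate the enrol-then-verify flow described above, here is a hypothetical sketch. The VoiceprintService interface and the confidence threshold are invented for the example; they do not describe the Microsoft service the prototype actually used.

```typescript
// Hypothetical voiceprint sign-in: compare a fresh voice sample against the
// reference sample stored in the cloud, and only sign the user in if the
// match is confident enough.

interface VoiceprintService {
  enrol(userId: string, sample: ArrayBuffer): Promise<void>;    // store a reference voiceprint
  verify(userId: string, sample: ArrayBuffer): Promise<number>; // similarity score, 0..1
}

async function signInWithVoice(
  service: VoiceprintService,
  userId: string,
  spokenPhrase: ArrayBuffer, // audio of the user saying their name and phrase
  threshold = 0.85,          // assumed confidence threshold, not a real figure
): Promise<boolean> {
  const score = await service.verify(userId, spokenPhrase);
  if (score >= threshold) {
    // Voice matched the stored sample: sign the user in without a password.
    return true;
  }
  // Otherwise fall back to the normal username-and-password sign-in.
  return false;
}
```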

What we have built here is only a proof of concept and we are still at the very early stages for voice interfaces. Our experiment focussed on getting the basics right – creating a working internal prototype that allows you to sign in using your voiceprint. Once signed in, you can see all of the editorially curated programmes and personalised recommendations that you normally would.

As well as letting a user sign in to 主播大秀 services using their unique voice instead of a password, our internal prototype also gives a user the option to select what they want to watch by talking to their device. For example, saying “主播大秀…show me something funny” brings up a selection of comedy programmes. If you say “主播大秀…what’s going on in the world?” the 主播大秀 News channel turns on and starts playing. Saying “主播大秀… put Eastenders on for me” starts playing the latest episode.
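
In its simplest form, mapping those example phrases to actions could look like the sketch below. Real voice interfaces use natural language understanding rather than keyword matching, so this is purely illustrative.

```typescript
// Sketch of turning the example utterances above into iPlayer actions.

type IPlayerAction =
  | { kind: 'browse'; genre: string }
  | { kind: 'playChannel'; channel: string }
  | { kind: 'playLatestEpisode'; programme: string }
  | { kind: 'unknown' };

function interpretCommand(utterance: string): IPlayerAction {
  const text = utterance.toLowerCase();
  if (text.includes('something funny')) return { kind: 'browse', genre: 'comedy' };
  if (text.includes("what's going on in the world")) return { kind: 'playChannel', channel: '主播大秀 News' };
  if (text.includes('eastenders')) return { kind: 'playLatestEpisode', programme: 'EastEnders' };
  return { kind: 'unknown' };
}

console.log(interpretCommand('主播大秀… show me something funny')); // { kind: 'browse', genre: 'comedy' }
```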

Our experiment presents users with a selection of comedy programmes when they ask the 主播大秀 to “show me something funny”

What could the future hold?

Whether watching a football match or a quiz show, most of us have at some point shouted at our TV, perhaps half expecting it to hear us, know who we are and respond to us – in the future, we might find that it does!

As the technology advances, voiceprints and artificial intelligence could enable even greater levels of personalisation. For example, if you’re watching a programme on your tablet on your way back from work then, later on, when you’re settling down on the sofa, your TV could ask you if you wanted to carry on from where you left off. You might respond “No thanks, is there anything new I might like?” and be offered some suggestions.

If we look further into the future, when artificial intelligence and machine learning have advanced sufficiently, you could end up in a conversation with your TV about what’s available to watch now, whether you like the sound of it or not, whether there’s something coming up that you’re interested in, and what you like to watch when you’re in a certain mood. All the time, your TV service would be learning about your preferences and getting smarter about what to suggest and when.

There could be interesting scenarios in a typical family setting too. Just by listening to the voices in the room, your TV could automatically detect when there are multiple people in the living room, and serve up a selection of content relevant to all of you in the room. When your children leave the room to go to bed, 主播大秀 iPlayer might hear that the children are no longer there and then suggest a different selection of content for you and your partner. All of this personalisation could happen without anyone having to press a button, sign in and out or change user profiles.

This was an internal experiment, designed to help us better understand how emerging technologies could impact the media industry and provide us with an opportunity to improve the experience for our audiences in the future. It’s an area that we are keeping a close eye on and adds to some other internal projects that we are working on.

We are always looking out for ways that we can work with the market to deliver new types of content to our audiences in new ways – if you have any innovation ideas that you think our audiences could potentially benefit from, do get in touch in the comments section below.
