Intelligent Video Production Tools

Using computer vision and machine learning to unlock creative potential.

Published: 1 January 2017

We investigate how artificial intelligence techniques for video processing, including machine learning and computer vision, can be applied to create new and innovative production tools.

Project from 2017 to present


What we are doing

We investigate and develop tools to process, analyse and understand video – normally in real time. We aim to take the latest academic research and industrial techniques and translate them across to solve problems in the world of broadcasting.

In the past, we’ve used various computer vision techniques to investigate camera and object tracking, scene geometry and image analysis. These tools were used in Piero, our sports graphics system, which won a Queen’s Award for Enterprise. They have also featured in our Biomechanics project and other sports analysis tools we’ve developed. You’ll see some of these tools at work whenever you watch the analysis on Match of the Day, and we continue to improve and support them with additional features and developments.
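
To give a flavour of the classical computer-vision end of this work, here is a minimal sketch of moving-object detection on a fixed-camera sports feed using OpenCV background subtraction. It is an illustration of the general technique, not the Piero pipeline, and "match.mp4" is a hypothetical input file.

    import cv2

    # Minimal moving-object detection on a fixed-camera feed.
    # Illustrative sketch only -- not the Piero system.
    # "match.mp4" is a placeholder filename.
    cap = cv2.VideoCapture("match.mp4")
    subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)  # foreground mask for moving pixels
        # Morphological opening removes speckle noise from the mask.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) > 200:  # ignore tiny blobs
                x, y, w, h = cv2.boundingRect(c)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imshow("tracking sketch", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()

A real sports-analysis system would add a tracking stage to associate detections across frames and camera calibration to map image positions onto the pitch; this sketch shows only the detection step.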

Over recent years, rapid improvements in the field of artificial intelligence, and machine learning in particular, have revolutionised computer vision, and much of our current work takes advantage of these developments. More recently, we have been experimenting with ML-based techniques, developing tools to recognise animals in images and to classify the type of activity taking place in a video. We have also been collaborating with our CloudFit Production team to see whether our tools can process and analyse the media they record and manage.
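
To give a concrete feel for this kind of tool, the sketch below classifies an image with an off-the-shelf pretrained network. It assumes PyTorch and torchvision (0.13 or later) are available; "badger.jpg" is a hypothetical filename, and a production tool would use a model trained for its specific task rather than generic ImageNet labels.

    import torch
    from torchvision import models
    from torchvision.io import read_image

    # Classify an image with a pretrained ImageNet model.
    # A sketch only: our actual tools are task-specific, and
    # "badger.jpg" is a hypothetical input file.
    weights = models.ResNet50_Weights.DEFAULT
    model = models.resnet50(weights=weights).eval()
    preprocess = weights.transforms()  # the preset resize/crop/normalise

    image = read_image("badger.jpg")       # uint8 tensor, C x H x W
    batch = preprocess(image).unsqueeze(0)

    with torch.no_grad():
        probs = model(batch).softmax(dim=1)

    top = probs.argmax(dim=1).item()
    print(weights.meta["categories"][top], float(probs[0, top]))

Activity classification in video follows the same pattern but feeds a clip of frames to a video model rather than a single image to an image model.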

We work closely with partners both inside and outside the BBC to develop our tools. These broadcasters and production teams help us to crystallise technical innovations into practical tools that will be genuinely useful for them.


Why it matters

Many production teams are increasingly stretched by time and budget constraints. There is pressure to produce programmes that offer broadcasters value for money while also meeting demand for ever more content for new digital platforms and more innovative content for audiences. Yet much of a production team’s effort can be spent on relatively low-level, time-consuming tasks such as logging rushes and transcribing interviews, rather than on the more creative work needed to tell great stories.

Over the last few years, developments in computer vision, and now machine learning, have made it much quicker and easier to apply these techniques to media. Our work investigates how we might take advantage of this to aid the production process. We look to help with current production processes, developing tools to speed them up and free staff for more high-level work, but we also seek to enhance existing workflows with tools that open up new creative options.
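
As one example of automating a mundane task, the sketch below transcribes an interview recording with the open-source Whisper speech-recognition model. This illustrates the general approach rather than any tool we ship; it assumes the openai-whisper package is installed, and "interview.wav" is a placeholder filename.

    import whisper

    # Transcribe an interview recording with per-segment timestamps.
    # Illustrative sketch: assumes the openai-whisper package;
    # "interview.wav" is a hypothetical input file.
    model = whisper.load_model("base")
    result = model.transcribe("interview.wav")

    for segment in result["segments"]:
        start, end = segment["start"], segment["end"]
        print(f"[{start:7.2f} - {end:7.2f}] {segment['text'].strip()}")

Even a rough automatic transcript like this can save a production team hours, since correcting a draft is far quicker than transcribing from scratch.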

There are opportunities to help production teams work faster and better, and to offer the audience more without requiring extra effort.

Project Team

  • Robert Dawes (MEng) – Senior Research Engineer
  • Hannah Birch (CEng MIET) – Research Technologist
  • James Withers (MEng) – Graduate R&D Engineer

Project updates

  • Immersive and Interactive Content section

    The Immersive and Interactive Content (IIC) section is a group of around 25 researchers investigating ways of capturing and creating new kinds of audio-visual content, with a particular focus on immersion and interactivity.
