Technology + Creativity at the BBC Feed Technology, innovation, engineering, design, development. The home of the BBC's digital services. 2021-10-04T15:51:46+00:00 Zend_Feed_Writer /blogs/internet <![CDATA[Fighting misinformation: An embedded media provenance specification]]> 2021-10-04T15:51:46+00:00 2021-10-04T15:51:46+00:00 /blogs/internet/entries/5d371e1b-54be-491f-b8ee-9e354bafb168 Charlie Halford <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p09xqbl3.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p09xqbl3.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p09xqbl3.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p09xqbl3.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p09xqbl3.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p09xqbl3.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p09xqbl3.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p09xqbl3.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p09xqbl3.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""></div> <div class="component prose"> <p>For the last few years, the BBC has had a project running in its technology division looking at technology solutions to various problems in the domain of news disinformation. Part of that effort, called <a href="https://www.originproject.info/">Project Origin</a>, is working to make it easier to understand where the news you consume online really comes from, so that you can decide how credible it is. You can find some history on this in <a href="/mediacentre/articles/2021/project-origin-one-year-on">Laura Ellis' excellent "Project Origin: one year on" blog</a>.</p> <p>Part of Project Origin has been working in collaboration with major media and tech companies, most recently with the <a href="https://c2pa.org">Coalition for Content Provenance and Authenticity (C2PA)</a>, which it helped form.
This group recently released <a href="https://c2pa.org/public-draft/">a draft version of an embedded media provenance specification</a>. This spec tackles the problem of missing, trusted provenance information in images, video and audio consumed on the internet - for example, where a video of elections in one country from 10 years ago is presented as video of recent elections in another. This is an overview of how that specification is intended to work.</p> <h2 data-usually-unique-id="942583191341384741354902">Embedding</h2> <p>The C2PA specification works primarily by defining mechanisms for embedding additional data into media assets to indicate their authentic origin. An essential aspect of this data is "assertions" - statements about when and where the media was produced. The embedded information is then digitally signed so that a consumer knows who is making the statements.</p> <p>While the C2PA specification also includes mechanisms for locating this provenance data remotely (e.g. hosted somewhere on the internet), I'll focus on the use case where all data is embedded directly in the asset itself.</p> <h2 data-usually-unique-id="555473361838427318559989">Data model</h2> <p>The C2PA specification uses a few different mechanisms for embedding and storing data.
Embedding is done with <a href="https://www.iso.org/standard/73604.html">JUMBF</a>, a container format, and structured data storage is done with a combination of <a href="https://www.w3.org/TR/json-ld11/">JSON-LD</a> and <a href="https://www.rfc-editor.org/rfc/rfc8949">CBOR</a> (a binary format whose data model is based on JSON's).</p> <p><strong>Container - the "Manifest Store"</strong></p> <p>Similar to <a href="https://wwwimages2.adobe.com/content/dam/acom/en/devnet/xmp/pdfs/XMP%20SDK%20Release%20cc-2016-08/XMPSpecificationPart3.pdf">XMP</a>, the C2PA specification defines several embedding points in a selection of media formats at which to place a "Manifest Store" in JUMBF format; this is the container for the various pieces of provenance data. Once you've identified where and how a manifest store is embedded in your favourite media format, most of the specification is format-agnostic.</p> <blockquote> <p><strong>What is JUMBF?</strong></p> <p>JUMBF (JPEG universal metadata box format) is a binary container format initially designed for adding metadata to JPEG files, and it's now used in other file formats too. It is structurally similar to the <a href="https://en.wikipedia.org/wiki/ISO/IEC_base_media_file_format">ISO Base Media File Format</a>, an extensible container format that is used for many different types of media files. JUMBF "superboxes" are boxes that contain only other boxes. JUMBF "content type" boxes contain the actual payload data, the serialisation of which should match the advertised content type of the box. All boxes have labels, which allow boxes to be addressed and understood when parsing.
C2PA uses JUMBF in all the media formats it supports as the container format for the Manifests, Claims, Assertions, Verifiable Credentials and Signatures.</p> </blockquote> <p>Each piece of embedded provenance data is called a "Manifest". A manifest contains a part of the provenance data about the current asset, or about the assets it was made from. Because an asset might have been created from multiple original sources, or have been processed multiple times, we will often need to store several manifests to understand the complete history of the current asset.</p> <p>Manifests are located in the "Manifest Store", which is a JUMBF superbox. The last manifest in the store is the "Active Manifest", which holds the provenance data about the current asset and is the logical place for validation to start. The other manifests are the data for the "ingredients" of the active manifest - i.e. the assets that were a part of the creation of the active manifest. This is one of the key features of C2PA: each asset provides a graph of the history of editing and composition actions that went into the active asset, exposing as little or as much as the asset publisher wants.</p> <p>Each manifest within the store is again its own JUMBF superbox.
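</p> <p>To make the box format concrete, here is a minimal Python sketch of building and walking length-prefixed boxes in the ISO BMFF style that JUMBF shares. It is illustrative only: the leaf box type codes ("c2cl", "c2cs") are invented for this example, and real JUMBF boxes additionally carry description boxes with UUIDs and labels.</p>

```python
import struct

# Each box is a 4-byte big-endian length (covering the whole box),
# a 4-byte type code, then the payload. Superboxes nest more boxes.

def make_box(box_type, payload):
    return struct.pack(">I4s", 8 + len(payload), box_type) + payload

def walk_boxes(data, superbox_types=(b"jumb",)):
    """Yield (type, payload) pairs, recursing into known superboxes."""
    offset = 0
    while len(data) - offset >= 8:
        size, box_type = struct.unpack_from(">I4s", data, offset)
        payload = data[offset + 8 : offset + size]
        if box_type in superbox_types:
            # A superbox contains only other boxes, so recurse into it.
            yield from walk_boxes(payload, superbox_types)
        else:
            yield box_type, payload
        offset += size

# A toy "manifest": a superbox holding a claim box and a signature box.
manifest = make_box(b"jumb", make_box(b"c2cl", b"claim-cbor") + make_box(b"c2cs", b"cose-sig"))
print(list(walk_boxes(manifest)))
```

<p>Walking the nested boxes like this to locate the claim and its signature is, in spirit, the first step a validator performs once it has found the manifest store's embedding point in a media file.</p> <p>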
A manifest then consists of: a "Claim", an "Assertion Store", optionally some "<a href="https://www.w3.org/TR/vc-data-model/">W3C Verifiable Credentials</a>" and a "Signature". Manifests are signed by an actor (the "Signer") whose credential identifies them to the user validating or consuming them.</p> </div> <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p09x3btj.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p09x3btj.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p09x3btj.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p09x3btj.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p09x3btj.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p09x3btj.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p09x3btj.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p09x3btj.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p09x3btj.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>Diagram of a Manifest box, without any VCs</em></p></div> <div class="component prose"> <p><strong>Assertions</strong></p> <p>Assertions are the statements being made by the signer of a manifest. They are the bits of provenance data that consumers of that data are being asked to trust: for example, the date of image capture, the geographical location, or the publisher of a video.<br />In the spec, each assertion has its own data model.
Some are published as "Standard Assertions" in the spec, some are adoptions of existing metadata specifications such as <a href="https://www.cipa.jp/std/documents/e/DC-008-2012_E.pdf">EXIF</a>, <a href="https://iptc.org/standards/photo-metadata/">IPTC</a> and <a href="https://schema.org/">schema.org</a>, and it is expected that implementers will extend the spec by defining their own as well.</p> <blockquote> <p><strong>Media metadata isn't new</strong></p> <p>For example, the EXIF standard is nearly universal in digital photographs, used to record location and camera settings. The fundamentally new thing that C2PA does is allow you to cryptographically bind that metadata (with hashes) to a particular media asset and then sign it with the identity credential of the origin of that data, making the result tamper-evident and provable.</p> </blockquote> <p>Assertions are contained in their own JUMBF content type boxes in the assertion store superbox, and are serialised in the format defined in the spec for that assertion.
The C2PA-defined assertions are stored as CBOR, while most assertions adopted from other standards are JSON-LD.</p> <p>Here's an example of an "Action" assertion (in <a href="https://www.rfc-editor.org/rfc/rfc8949#name-diagnostic-notation">CBOR Diag</a>) which tells you what the signer thinks was done in creating the active asset:</p> </div> <div class="component code"> <pre class="code__pre br-box-subtle"><code class="code__code">{
  "actions": [
    {
      "action": "c2pa.filtered",
      "when": 0("2020-02-11T09:00:00Z"),
      "softwareAgent": "Joe's Photo Editor",
      "changed": "change1,change2",
      "instanceID": 37(h'ed610ae51f604002be3dbf0c589a2f1f')
    }
  ]
}</code></pre> </div> <div class="component prose"> <p>And here's an EXIF one (in JSON-LD) that contains location data:</p> </div> <div class="component code"> <pre class="code__pre br-box-subtle"><code class="code__code">{
  "@context": {
    "exif": "http://ns.adobe.com/exif/1.0/"
  },
  "exif:GPSLatitude": "39,21.102N",
  "exif:GPSLongitude": "74,26.5737W",
  ...
}</code></pre> </div> <div class="component prose"> <p>The one critical assertion is the binding: something that binds the claim to a particular asset. The spec requires one, ensuring that claims are not applied to any asset other than the one they were signed against. This is important in helping to ensure that the consumer can trust that the C2PA data wasn't tampered with between the publisher and the consumer. There are currently two types of "hard binding" available: a simple hash binding over a range of bytes in a file, or a more complex one intended for ISO BMFF-based assets, which can use their box format to reference the specific boxes that should be hashed.</p> <p><strong>Claim</strong></p> <p>The claim in a manifest exists to pull together the assertions being made, any "redactions" (removals of previous provenance data for privacy reasons), and some extra metadata about the asset, the software that created the claim, and the hashing algorithm used.
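</p> <p>The simple hash binding described above can be sketched in a few lines of Python. Note this is not the C2PA wire format: the exclusion is expressed here as a bare start/length pair invented for illustration, whereas the spec defines a structured assertion for it.</p>

```python
import hashlib

# Minimal sketch of a hard binding: hash the asset bytes while skipping
# the byte range that holds the manifest store itself, because the
# finished manifest (including its signature) cannot hash itself.

def hash_with_exclusion(asset, start, length):
    """SHA-256 over the asset bytes, skipping asset[start:start+length]."""
    h = hashlib.sha256()
    h.update(asset[:start])
    h.update(asset[start + length:])
    return h.hexdigest()

asset = b"JPEG-BYTES" + b"[manifest store here]" + b"MORE-JPEG-BYTES"
digest = hash_with_exclusion(asset, 10, 21)

# Tampering with bytes outside the exclusion changes the digest...
tampered = b"FAKE-BYTES" + b"[manifest store here]" + b"MORE-JPEG-BYTES"
assert hash_with_exclusion(tampered, 10, 21) != digest

# ...while rewriting bytes inside the exclusion (e.g. embedding the
# manifest itself) leaves it unchanged.
embedded = b"JPEG-BYTES" + b"[XXXXXXXXXXXXXXXXXXX]" + b"MORE-JPEG-BYTES"
assert hash_with_exclusion(embedded, 10, 21) == digest
```

<p>Any change to the asset bytes outside the excluded manifest region changes the digest, which is exactly the tamper-evidence the binding is there to provide.</p> <p>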
Assertions are linked to by their reference in the assertion store, together with a hash of their content. The claim itself is another JUMBF box, serialised as a CBOR structure. This is the thing that is signed, and it provides a location at which to find the signature itself.</p> <p><strong>Signature</strong></p> <p>The signature in a manifest is a <a href="https://datatracker.ietf.org/doc/html/rfc8152">COSE</a> CBOR structure that signs the contents of the claim box. COSE is the CBOR version of the JOSE framework of specs, which includes JWT/JWS. The signature is produced using the credentials of the signer. The signer is the primary point of trust in the C2PA trust model, and consumers are expected to use the signer's identity to help them make a trust decision about the claim's assertions.</p> <p>The only credentials currently supported for producing the signature are X.509 certificates. The specification provides a profile that certificates are expected to adhere to (including key usages such as "id-kp-emailProtection", which is a placeholder). The specification does not include any requirements on how validators and consumers assemble lists of trusted issuers, as it is expected that an ecosystem of issuers will develop around this specification. Instead, it simply requires that validators maintain or reference such a list of trust anchors. Alternatively, they can put together a trusted list of individual entity certificates, provided out-of-band of the trust anchor list.</p> <h2>What now?</h2> <p>This is an overview, and it omits both the detail required to produce C2PA manifests and the breadth of some of the other components of the specification (e.g. ingredients, the use of Verifiable Credentials, the concept of assertion metadata, timestamping, etc.). I'd love to produce a worked example of how to extract and validate a C2PA manifest from an asset; watch out for that in the future.
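</p> <p>In the meantime, the overall sign-and-verify flow can be sketched. The snippet below mimics the shape of a COSE_Sign1 signature over a claim, with two loudly flagged substitutions so that it runs on the Python standard library alone: JSON stands in for CBOR serialisation, and an HMAC stands in for the X.509-backed signature a real C2PA signer would produce. Treat it as an illustration of the structure, not spec-conformant code.</p>

```python
import hashlib
import hmac
import json

# Shape of a COSE_Sign1-style sign/verify flow (see RFC 8152). JSON
# replaces CBOR and an HMAC replaces a real X.509-backed signature so
# the sketch stays dependency-free.

def sign_claim(claim, key):
    protected = json.dumps({"alg": "ES256"}, sort_keys=True).encode()
    payload = json.dumps(claim, sort_keys=True).encode()
    # COSE signs a deterministic structure assembled from the protected
    # headers and the payload, not the payload bytes alone.
    to_be_signed = b"Signature1" + protected + payload
    signature = hmac.new(key, to_be_signed, hashlib.sha256).digest()
    return {"protected": protected, "payload": payload, "signature": signature}

def verify_claim(envelope, key):
    to_be_signed = b"Signature1" + envelope["protected"] + envelope["payload"]
    expected = hmac.new(key, to_be_signed, hashlib.sha256).digest()
    return hmac.compare_digest(expected, envelope["signature"])

claim = {"assertions": ["c2pa.actions", "stds.exif"], "alg": "sha256"}
envelope = sign_claim(claim, b"demo-key")
assert verify_claim(envelope, b"demo-key")

# Any change to the signed claim invalidates the signature.
envelope["payload"] = envelope["payload"].replace(b"sha256", b"sha512")
assert not verify_claim(envelope, b"demo-key")
```

<p>The structural point to take away is that the signature covers a deterministic serialisation of the protected headers and the claim, so any change to either is detectable by a validator.</p> <p>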
I will highlight an <a href="https://github.com/numbersprotocol/pyc2pa">open-source implementation of C2PA available in Python</a>, and I know of other implementations in the works, too.</p> <p>At the BBC, we can't wait for this specification to develop and gain adoption. We'd love to see it supported in production and distribution tools, web browsers, and on social media and messaging platforms. We really think it can make a difference to some of the <a href="/news/world-asia-india-47878178">harms done by mis- and disinformation</a>.</p> </div> <![CDATA[Technology weapons in the disinformation war]]> 2020-11-17T09:46:16+00:00 2020-11-17T09:46:16+00:00 /blogs/internet/entries/b46596c7-2d4a-47c9-81ff-414fa52cc947 Laura Ellis <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p08yq6m8.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p08yq6m8.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p08yq6m8.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p08yq6m8.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p08yq6m8.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p08yq6m8.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p08yq6m8.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p08yq6m8.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p08yq6m8.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>Photo by Camilo Jimenez on Unsplash</em></p></div> <div class="component prose"> <p>You're browsing social media and you see something that doesn't look right. What do you do? You can search for information about the source or the material itself, but you may not find the answer.
What if there were a technology-based solution that could help - some kind of signal to reassure you that what you're seeing hasn't been tampered with or misdirected?</p> <p>This question has been posed by many organisations and individuals in the last couple of years as the scourge of disinformation has grown. Now <a href="https://www.originproject.info">Project Origin</a>, a collaboration involving the BBC, CBC/Radio-Canada, Microsoft and The New York Times, is working on a solution.</p> <p>Essentially, we are seeking to repair the link in news provenance that has been broken by large-scale third-party content hosting. What do we mean by broken provenance? Most large social media platforms have features such as verified pages or accounts, but outside of these there are countless re-posts of content that was originally published by another person or organisation. In some cases, this content is simply re-uploaded and shared. In others, a re-upload is accompanied by some new context.
Users also modify the content - for humour, for brevity, and in some cases with malicious intent.</p> </div> <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p08yq6pp.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p08yq6pp.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p08yq6pp.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p08yq6pp.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p08yq6pp.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p08yq6pp.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p08yq6pp.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p08yq6pp.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p08yq6pp.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>Photo by Marvin Meyer on Unsplash</em></p></div> <div class="component prose"> <p>Our objective is to derive signals about content coming from publishers or originators, so that consumers can be reassured about its source and the fact that it has not been manipulated. It's a huge task, and we're very much aware that others are doing excellent work in this space, as well as in the wider disinformation sphere. The Content Authenticity Initiative, for example, has carried out some excellent work, focusing in the first instance on securing the provenance of images from the point of capture.</p> <p>We've divided the problem into three main areas: giving the content item an identifier, finding a way to allow it to take that identifier with it on its journey, and safely storing the information that will allow it to be checked.</p> <p>Firstly, each digital image, video or audio file is represented by a very specific sequence of bits - so specific that we can safely identify even the smallest differences from the content that was originally produced.
These sequences of bits are, understandably, enormous, but thankfully we can lean on a concept called cryptographic hashing, which lets us represent them as a short string through secure hash algorithms. We can be confident that there is effectively zero probability that two pieces of content share the same hash.</p> <p>To know who generated the content hash, we need another tool - a key. Public-private asymmetric keys are in common use on today's internet, helping us carry out e-commerce amongst other things. They allow a publisher to digitally sign a document which is linked to a piece of content - containing, for example, data about the content and the hashes that represent it - by creating something we call a manifest. Again, maths is our hero here, with some complex cryptography ensuring that only the person with the private key could have signed the manifest, and this can be verified using the corresponding public key.</p> <p>The way a browser on a PC knows that this signature is bona fide is via a piece of standard internet functionality provided by a Certificate Authority - a trusted third party that checks that the public key it's being offered belongs to the right party.</p> <p>Finally, at the heart of a provenance system we need a way of maintaining a reliable and consistent database of manifests. For Origin we plan to use the Microsoft Confidential Consortium Framework (CCF) as the heart of the manifest and receipt storage. The provenance system built around this to deal with the various media registrations and queries will be based on Microsoft's AMP (Aether Media Provenance).</p> <p>Unlike the permissionless blockchain solutions made famous by cryptocurrency, CCF is a 'permissioned' system.
These are sometimes called 'green blockchains', since they do not need to consume large amounts of energy to determine consensus - there is enough trust between the parties controlling the system to allow the nodes to act on a much simpler basis.</p> <p>To sum up, we are developing a machine-readable way of representing data about a content item that allows a publisher to tie, or 'bind', the specific content item to the data and have it stored safely for future retrieval by a user.</p> <p>So what's next? On the technology front, we're determining how to ensure that the content, its manifest and the cryptographic binding - the signed hashes and certificates that link the content you have to the details - are all conveyed together. We're also working on what to do when data is not present or has been altered. What happens, for example, if content has been clipped or transcoded in a useful and legitimate way?</p> <p>We're also keen to determine how this kind of technology can help in a wider media and technology community where there are many tools operated by a range of different organisations. An important element of our work has been trying to understand the APIs or common interfaces that might be standardised so that a single device can discover and query different systems - including those used for content creation. And we're launching a formal standards effort to define APIs and systems specifications for media provenance across the whole media ecosystem.</p> </div> <![CDATA[Can synthetic media drive new content experiences?]]> 2020-01-29T10:02:23+00:00 2020-01-29T10:02:23+00:00 /blogs/internet/entries/b81f12d4-39b7-4624-86ab-01647d2800ec Ahmed Razek <div class="component prose"> <p>'Deepfakes' have rightfully grabbed negative media attention, but is there a creative and editorially solid opportunity to exploit the underlying technology?
BBC Blue Room's Ahmed Razek has been experimenting with this controversial technology.</p> <p>Deepfakes - videos manipulated with malicious intent - continue to be a <a href="/news/business-51204954">technologically troubling force</a>. A <a href="/news/technology-49961089">recent report</a> by cyber-security company Deeptrace highlighted that, of 14,698 deepfakes found online, 96% were sexual in nature, with women overwhelmingly the victims. In the battle against online disinformation, deepfakes are thankfully still a sideline issue, though there are troubling signs ahead. Last year, <a href="/news/technology-48405661">doctored footage</a> of Nancy Pelosi, Speaker of the House of Representatives, sounding drunk spread virally across social media, causing significant reputational damage. Despite the many articles rebutting the content, the damage was done - a lie can travel halfway around the world before the truth can get its boots on. Strictly speaking, the fake Pelosi video isn't an example of a deepfake; it's more like a "shallow fake" - a new term in the misinformation lexicon that describes doctored video produced with basic technology. Because shallow fakes are so simple to create, some researchers argue that their spread poses a higher risk to the world of online disinformation than deepfakes.</p> <p>With any application of technology, it is all about the intent. I've been exploring whether the same audio-visual synthetic technology used to create deepfakes can be harnessed to deliver content in new, innovative ways.
This experiment built on our learning from a synthetic media demo of BBC presenter Matthew Amroliwala reading a news item in several different languages - you can see the results <a href="/blogs/aboutthebbc/entries/bee04c43-d896-4e36-8f02-244cb0db1c08">here</a>.</p> <p>In preparation for the BBC's 2019 annual <a href="/blogs/internet/entries/e351b992-24c5-46b9-93d0-adc8f7363951">Media, Tech & Society conference</a>, the BBC Blue Room (the BBC's internal consumer technology lab) was challenged to build a prototype that both highlighted the advances of synthetic media and demonstrated a scalable audience proposition.</p> <p>Currently, one of the more popular user interactions on voice-enabled devices like the Amazon Alexa is asking about the local weather. Understanding this, we asked ourselves: what could a synthetic video response to a weather query from a celebrity personality look like? And what editorial issues would be raised?</p> <p>Weather is a useful area to prototype in, as the content is factual and generally not contentious.
Considering that voice-enabled screens like the Amazon Echo Show or Facebook Portal are increasingly making their way into people's homes, it won't be too long before we are met with a digital avatar responding to a query.</p> <p>To create this experiment, we partnered with colleagues from the BBC World Service, who provided the editorial treatment for the piece, and the AI video synthesis company Synthesia, who provided the technical AI expertise.</p> </div> <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p081ny9v.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p081ny9v.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p081ny9v.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p081ny9v.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p081ny9v.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p081ny9v.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p081ny9v.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p081ny9v.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p081ny9v.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""></div> <div class="component prose"> <p>We asked presenter Radzi Chinyanganya to read to the camera the names of 12 cities, the numbers from -30 to 30, and several pithy phrases to explain the temperature. The finished script sounded like this:</p> <p>"Welcome to your daily weather update, let's take a look at what's been happening. In "x", residents are expecting "x", temperatures are expected to be, on average, "x", so if you're heading out, remember to "x"."</p> <p>We used the BBC's weather API to fill in the 'x' variables with accurate, up-to-date weather data from the twelve cities. You may ask at this point: why just twelve cities? To scale a demo such that a presenter can deliver a personalised weather report for any city, town or street in the world would need advances in synthetic audio technology.
When you listen to your sat nav giving you directions, or get a response to a query from a smart speaker, you hear synthetic speech. Despite the explosion of investment and research into using neural networks to simulate human voices, it is still challenging to replicate voices convincingly. That said, soon you won't be able to tell whether the voice of your favourite celebrity is synthetic or authentic. For our experiment, we decided to use Radzi's real voice instead of a sub-optimal digital version that would have broken the illusion of the experience.</p> <p>Take a look at <a href="https://blrm.io/synthetic-weather-demo-blog">the demo</a> and see the results for yourself. Select your favourite city and get a personalised synthetic video report based on real-time weather data. Please note this demo only works in Google Chrome and other Chromium-based browsers such as Brave, Opera and the new Microsoft Edge.</p> </div> <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p081nydr.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p081nydr.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p081nydr.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p081nydr.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p081nydr.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p081nydr.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p081nydr.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p081nydr.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p081nydr.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""></div> <div class="component prose"> <h4>Safeguarding Trust</h4> <p>Conducting experiments with such contentious technology is tricky for a responsible public service broadcaster. Thorny issues of trust and editorial norms quickly come to the surface.</p> <p>Trust with audiences is foundational to the BBC.
It is clear that showing viewers fake content that, on the surface, appears authentic risks reputational damage. However, that's not to say there are no circumstances where the use of synthetic media could improve the audience offer without sacrificing trust. A lot depends on us being honest and clear with the audience about what they are getting - an editorial principle that the BBC is used to applying in all sorts of contexts. The use of synthetic media in a news context has, as outlined above, the potential to be destabilising, especially in an era of 'fake news'. However, in a different context, like our weather report demo, it is unclear that audiences would be troubled if digital avatars were delivering a weather report. Given the growth of digital assistants and the industry drive for greater personalisation, perhaps there will be an expectation that a video response to a query will be digitally generated.</p> <p>Another factor that may help with trust is audience markers. Just as many online chatbots use robot emojis to convey to the audience that they are speaking to a machine, not a human, it is entirely possible to use visual markers to communicate to viewers that a piece of content is computer-generated. In this context, with these added safeguards in place, the growth of synthetic visual media seems plausible even for a responsible public service broadcaster.</p> <p>The second, and perhaps more intriguing, issue that arises when thinking about synthetically generated media is editorial. Take the weather demo: even the most generous critic would concede that it's a bland weather report. The storytelling flair and creativity presenters bring to enrich a piece of content is completely lost in this dispassionate demo. One of the significant challenges in a world of computer-generated media will be working out how to create dynamic, imaginative content in a personalised way.
Or perhaps the answer is to use the technology to deliver the parts of the presentation that are bland but labour-intensive, and thereby give our talented storytellers more time and space to create valued content in tandem. That's not to say that bland content is an inevitability - the emerging field of <a href="/programmes/m0009b0q">AI personality design</a> could perhaps lead to hugely creative synthetic experiences.</p> <p>So, back to our original question: can synthetic media drive new content experiences? Yes - I believe it can. Currently, the costs of delivering high-grade synthetic video are prohibitively high for ordinary consumers. As the tools become increasingly commoditised, consumers creating quality synthetic experiences at a low price could conceivably unleash a new model of storytelling. You can soon imagine a future where a photorealistic, human-like digital character can be made to do anything from reading out the football results to delivering a physics lesson.</p> <p>At a time when the world is increasingly troubled by convincing false content, the challenge will be, in short order, to work out how to prepare for this storytelling paradigm shift.</p> </div> <![CDATA[The battle against disinformation]]> 2019-07-17T13:08:12+00:00 2019-07-17T13:08:12+00:00 /blogs/internet/entries/52eab88f-5888-4c58-a22f-f290b40d2616 Sinead O'Brien <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p07h392p.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p07h392p.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p07h392p.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p07h392p.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p07h392p.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p07h392p.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p07h392p.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p07h392p.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p07h392p.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""></div> <div class="component prose"> <p><em>"All around the world, fake news is now the poison in the bloodstream of our societies - destabilising democracy and undermining trust in institutions and the rule of law"</em> - Speech by Tony Hall, Director-General of the BBC - Lord Speaker Lecture - Wednesday 20th March 2019.</p> <p><em>Propaganda, deception and the suppression of free speech have all been enduring issues for every society, but in recent years terms like 'fake news' and disinformation have been heard in public discourse with alarming regularity. So, what is happening to make this a live issue for news organisations? Can anything be done to push back against the wave of disinformation? What types of intervention are needed? Can ML help tackle disinformation?</em></p> <p>The latest fireside chat was hosted by BBC Technologist Ahmed Razek. The panel line-up for the evening featured Sam Jeffers (Who Targets Me?), Dr. David Corney (Full Fact), Magda Piatkowska (Head of Data Solutions, BBC News), and Jon Lloyd (Mozilla Foundation).</p> <p>Ahmed Razek kicked off by setting out First Draft News's <a href="https://firstdraftnews.org/fake-news-complicated/">seven categories of misinformation</a>.</p> <p><strong>The world of misinformation is complicated. Do people actually care about having real news that challenges them?</strong></p> <p>Sam Jeffers feared that we only think about disinformation a bit, not enough. Who Targets Me is trying to normalise people's understanding. We see strange things from time to time that deserve explanation. There is a growing community of people being confronted with misinformation. There is a need to help people find trust signals to help them differentiate between trustworthy and untrustworthy content. If we can be more transparent, we can make more of the trustworthy content more trusted. Magda Piatkowska stressed the need for developing data solutions without hurting people.
The intent behind publication and content is an important aspect. Satire is not true and "facts" are not always facts - not everything is intended to misinform.</p> <p>Jon Lloyd, referring to his advocacy work at Mozilla, thought it is all too easy to fall into the trap of talking about fake news. Disinformation is affecting every aspect of our daily lives now. This is a sociological problem, spanning human rights, political and health arenas, and so on. Companies behind tech need to be looked at closely. The public is coming along with Mozilla on disinformation as a term. In the US, a recent survey showed that people are more concerned about disinformation than terrorism.</p> </div> <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p07h3984.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p07h3984.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p07h3984.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p07h3984.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p07h3984.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p07h3984.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p07h3984.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p07h3984.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p07h3984.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""></div> <div class="component prose"> <p><strong>We are discussing how ML can tackle disinformation. Jon has advocated for one simple tech change - The Guardian's data labelling feature.</strong></p> <p>Jon shared his relevant experience of proactive media action in the face of disinformation. The Guardian noticed a lot of traffic on a 2013 article quite suddenly (it was an old article). Traffic was coming from a Facebook group which was posting a lot of Islamophobic content. 
The Guardian knew that people were not paying attention to the date of the article, so they tweaked the metadata to make the date immediately noticeable to the reader. They took a human-centric approach, doing what was in their power to change what was happening. Lots of blame is directed at the media for not doing enough. There are more sophisticated threats now, more authentic accounts spreading misinformation. We need more transparency on organic content (user-generated content). It is necessary to work with researchers to set a baseline of what excellent looks like and to assess against that baseline. Jon encouraged technologists to support transparency efforts to get to excellent.</p> <p><strong>The nature of elections is changing. What do technologists and journalists need to prepare for going forward?</strong></p> <p>Sam thinks that we regulate tightly in the UK. Who Targets Me is interested in people being able to prove who they are, particularly if they are running large amounts of political advertising. Some special cases deserve anonymity, but an individual, group or organisation should generally be able to stand behind what they put out. Do people really understand why they see a particular message or content, based on the data collected on them? Democracy is about debate and collective decision - we need to explain modern campaigning approaches and raise faith in how elections are run. Facebook doesn't expose information about targeting - what data is used to reach particular people. Social media tools allow for the circumvention of conventional electoral practice.</p> <p><strong>Can the panel share some insight into the fact-checking process?</strong></p> <p>Magda shared observations of 主播大秀 News' work with Reality Check journalists. 主播大秀 News has a role in transparency, in explaining to the audience what happens. Most people don't understand what targeting actually is. It is very important that we do explain. 
Sam maintains that Facebook is an interesting dilemma, as they have done more than other platforms but take many times more money for this type of advertising. Google and YouTube transparency tools are polluted; they are not clear on how often they are updated and they are messy.</p> <p>David Corney shared useful insights into Full Fact's fact-checking carried out by journalists - checking claims by influential people that may be misleading or easily misinterpreted by the audience. The fact-checking journalists publish a fact check: a piece summarising the full story after doing the research that the audience does not have time to do. A smaller communications team checks when these claims are being re-shared.</p> <p>Newspapers are asked to publish corrections but regularly decline the invitation. Full Fact's automated fact-checking team is a team of technologists working to support the fact-checkers and the organisation's communications team, using ML. Prototype software to do fact checks of full sentences is being developed and refined. For more straightforward claims, algorithms will find relevant data and check whether a claim is true or false. Full Fact recently received Google funding to build a better claim detection system. Concrete claims will be stored, labelled and tagged. This will allow a wider range of media and free up fact-checkers. The potential dangers of disinformation are making the 主播大秀 risk-averse, and in journalism this is a problem, as the speed of publication is important. Increasingly, it is not that we have a problem with the process, but that we have a problem with the competition.</p> <p><strong>Recent 主播大秀 News research suggested that, "In India, people are reluctant to share messages which they think might incite violence, but feel duty-bound to share nationalistic messages". What does the global view of the impact of disinformation look like? 
What is the non-western perspective?</strong></p> <p>Magda said that different patterns and preferences are witnessed across the globe. A long journalistic tradition is not the case everywhere. Literacy challenges, accessibility of online content, and the ability to scan and consume it are prevalent concerns in certain regions. We must also consider the impact of government and propaganda in certain areas. Jon added that we could end up creating policies that are difficult to enforce on a global scale. Magda thought that we needed to pay attention to where the tech is going to grow, e.g. China, as data will impact the way in which disinformation will spread in those regions. Are scoring systems a viable tool to rate content? David felt that 20-30 years ago, when content was primarily garnered from newspapers and TV news, editorial teams acted as gatekeepers. That role has been somewhat demolished by social media and citizen journalists who spread their stories. We need something to point us to the stories that are worth paying attention to. If the algorithm gets it wrong, automation will be damaging.</p> <p><strong>What role do algorithms have to play?</strong></p> <p>Ahmed moved the conversation along to the subject of recommendation algorithms. Sam pondered, "When it strikes you that you see a Facebook ad and you click through and then you are recommended other pages, how quickly can that send you in more radical directions than you were expecting - to some strong content?". Regarding recommendation engines built a while ago, we don't really know where the accountability lies. Do we understand people's information diets? People are consuming lots of stuff from a particular perspective and wonder how they got there. Magda argued that if you really rely on ML you have to take into account that your algorithm learns from people's behaviour. That behaviour is not always good for them; they sometimes have poor information diets. 
This is when we analyse what it means to be informed by editorial and policy strategy as well as tech. Start simple, so recommenders are not too complicated and we can assess if we are hurting the audience. If you put more interesting content in front of people, they do engage. Take the audience and journalists on a journey.</p> <p>David thought that algorithms have a tendency to go towards the most extreme content. Algorithms do give us relevant recommendations but sometimes get it wrong. Recommendation systems can look to the authority of sources rather than recency. Jon reflected that ultimately we need transparency. Platforms say they are making tweaks and fixes that cannot be proven. We are supposed to take at face value these companies who have profit and expansion at their core. Sam agreed - if a business model is totally dependent on the algorithm, and platforms are optimising for engagement and the scale is huge, switching them off is a massive decision. Magda reminded us that responsibility should also sit with the supply side of the content.</p> <p><strong>In the UK, a recent report on misinformation by the Commons Select Committee suggested that a new category of tech company should be formulated, "which tightens tech companies' liabilities, and which is not necessarily either a 'platform' or a 'publisher'".</strong></p> <p>There has to be a system, according to Magda. There is no one object of regulation - not platforms, or media, or government. It is the responsibility of the system. All parties have their part to play. Media has a role to educate. Sam restated Who Targets Me's interest in radical transparency around political advertising. There may need to be a product suite for solving all the different problems. 
Different tools are required to do different jobs, and a market created in tools to help you understand them.</p> <p><strong>A team of researchers at the Allen Institute of AI recently developed Grover, a neural network capable of generating fake news articles in the style of human journalists. They argue they are fighting fire with fire because the better Grover gets at generating fakes, the better it will get at detecting them.</strong></p> <p>The ability to generate text did not worry David; the problem is getting the content into a platform where people start believing it. The story is not a problem in itself. Magda argued that it depends on intent: who is behind it, and is it being used for good or for bad?</p> <p><strong>There has been some hype around both deep fakes and shallow fakes. A recent example of the shallow fake was the slowed-down video of Nancy Pelosi which made her appear to be disoriented. This video was subsequently retweeted by the President of the United States. There was no ML required here; this was basic video manipulation that has a profound effect.</strong></p> <p>Jon believed that a picture is worth a thousand words. Video even more so. Preparedness is better than panic. We should be more concerned about recommendation algorithms, methods of verification and systems to flag false content. Unfollowing a YouTube video is a long process. Changing policies is one thing. Responsible behaviour on the part of companies is not a zero-sum game. Sam thought that political video is shallow-fakey anyway. It's telling a story via the use of selective information. David advocated that it is worth considering radical options like massive regulation. Magda thought trust will become a big thing: the brand association with factual content. She foresaw a decline of the not-so-trusted brands. Jon reflected upon transparency transformation in the food and fashion industries but also recognised that there is no silver bullet. 
It will require a coordinated effort offline as well as online, and not just in tech. While a financial incentive remains for companies, it won't happen on its own. Sam added that we can also use this tech to make good democratic strides forward.</p> <p><em>Huge thanks to Ahmed Razek and the panel for delivering another engaging fireside chat on a very hot topic. The conversation around fake news, misinformation and disinformation is multi-faceted. As the 主播大秀, we need to keep reminding ourselves and others that the problem is not just about journalism. The impact of misinformation reaches far and wide and needs to be considered from societal, policy, tech, humanitarian and public trust perspectives. And so we, along with other organisations, are taking a deeper look at what is happening in these areas. There is lots of great ongoing work in 主播大秀 R&D, 主播大秀 News, and elsewhere in the organisation. The 主播大秀 provided feedback into the Disinformation and 'fake news': Final Report (February 2019). Director of the 主播大秀 World Service Group, Jamie Angus, subsequently confirmed that the World Service would take the lead in addressing the 'fake news' threat, making use of its 42 language services, knowledge on the ground and 主播大秀 Monitoring to spot harmful examples and expose emerging patterns. To echo Magda, we must progress in a way that is not harmful to our audience.</em></p> </div> <![CDATA[Tackling misinformation]]> 2019-06-03T13:02:28+00:00 2019-06-03T13:02:28+00:00 /blogs/internet/entries/0c83aee1-fd7b-423c-8383-ca45150b3473 Ahmed Razek, Sinead O'Brien <div class="component prose"> <p>Propaganda, deception, suppression of free speech, have all been enduring issues for every society, but in recent years terms like 'fake news' and misinformation have been heard in public discourse with alarming regularity. 
So, what is happening to make it a live issue for a news organisation like the 主播大秀?</p> <p>One significant factor is that a whole range of technologies categorised as Artificial Intelligence and Machine Learning have unleashed a potent range of disruptive capabilities on a previously unimaginable scale, making it possible to create profoundly misleading content including fake audio and video. At the same time, the growth of social media means it is now easy to distribute deceptive content to a worldwide audience.</p> </div> <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p07c399l.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p07c399l.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p07c399l.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p07c399l.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p07c399l.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p07c399l.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p07c399l.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p07c399l.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p07c399l.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>Credit: Getty Images</em></p></div> <div class="component prose"> <p>The problems created by online misinformation are not trivial, and the threats to society are genuine. Take the recent emergence of the <a href="/news/health-47417966">anti-vaxxers movement</a>, where false information about the dangers of life-saving vaccines targeted at a newly receptive and sizeable audience across social network platforms led some parents to put their children at medical risk. Though the dissemination of this material is not illegal, it is undoubtedly harmful.</p> <h4>Big Business</h4> <p>Influencing or subverting democratic norms isn't just about being able to manipulate people; it's also <a href="/news/av/business-38919403/how-do-fake-news-sites-make-money">big business</a>. 
There is a lot of profit to be made by telling people what to think, and social media has become the cheapest way to accomplish this.</p> <p>Social media and video hosting services are playing a significant role in circulating misinformation, both on public channels like Twitter and over encrypted messaging services like WhatsApp. There have been worldwide calls for media and technology companies to take more responsibility for content hosted on their platforms.</p> <p>In the UK, a recent report on misinformation by the Commons Select Committee suggested that a new category of tech company be formulated, "which tightens tech companies' liabilities, and which is not necessarily either a 'platform' or a 'publisher'". At the same time, the UK Government plans to consult on its 'Online Harms' White Paper, a joint proposal from the DCMS and the Home Office. A public consultation on the plans is currently running for 12 weeks until July 1st 2019.</p> <h4>Regulation</h4> <p>Germany recently implemented the Network Enforcement Act, which forces technology companies to remove hate speech, fake news and illegal material or risk a heavy fine. Notwithstanding freedom of speech concerns, it is not clear that the law is working as intended, despite placing a heavy burden on the platforms.</p> <p>Lawmakers are clearly dissatisfied with the status quo, but it remains unclear what new types of responsibility will be placed on online services as a result. Conjuring up workable law to control what appears online is hard, and any regulation is unlikely to be universally acceptable.</p> <p>Outside of regulation, there is growing consensus around the need for greater media literacy campaigns. It is vital that we teach people of all ages to be critical consumers and sharers of information, especially in the online world. 
However, it is unclear when wider society will reap the benefits of such a media literacy programme, and the health of democracy cannot wait for a younger, more media-aware generation to grow to maturity.</p> <p>Problems arising from the spread of misinformation are not confined to these online spaces. Last year, <a href="/news/world-asia-india-44856910">mob lynchings</a> across India were fuelled by disinformation spreading across encrypted messaging apps. The tension between privacy and data security means that harmful content can spread like wildfire without anyone being held accountable. Since then, WhatsApp has restricted forwarding messages to <a href="/news/technology-46945642">a maximum of five people</a>.</p> <p>Removing or reducing the impact of content that contains verifiably false assertions is difficult but tractable. Traditionally, the role of debunking deceptive claims has fallen to competent journalists. Given the mammoth scale of the problem, algorithmic interventions are needed. However, outsourcing the 'half-truth problem' solely to algorithms has thus far proven ineffective and exceptionally difficult to handle, in part because cases of misinformation are often not clear cut and rely on careful interpretation.</p> <p>Given these difficulties, the case for public service organisations like the 主播大秀 to take a leading role in the fight against online misinformation is a strong one. Widespread online misinformation strikes at the heart of our public purpose to provide accurate and impartial news. However, the size of the challenge is unprecedented. 
Our online information ecosystem is polluted.</p> </div> <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p07c39f1.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p07c39f1.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p07c39f1.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p07c39f1.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p07c39f1.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p07c39f1.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p07c39f1.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p07c39f1.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p07c39f1.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""></div> <div class="component prose"> <p>For its part, the 主播大秀 is committed to being part of the push back against the wave of misinformation, distraction and deceit that characterises parts of the online world. Over the coming months, the 主播大秀, alongside other organisations, will be looking at a whole raft of practical actions that might be taken to address misinformation across the media landscape. These interventions will sit alongside our continuing editorial coverage and initiatives like the '<a href="/news/topics/cjxv13v27dyt/fake-news">Beyond Fake News</a>' project.</p> <p>Our approach will be cross-disciplinary; connecting researchers, designers, academics, policy makers and technologists with journalists. The impact of misinformation reaches far and wide. 
This conversation is not just about journalism; it's about preserving the information that underpins society, it's about policy, technology, humanitarian organisations and public trust.</p> <p>Neither the 主播大秀 nor its partners will entirely solve the problem of misinformation, online or offline, but we are doing our part to ensure that trustworthy information derived from competent, honest and reliable sources continues to flow freely across society, giving audiences around the world a space where they can find news reports they can rely on.</p> </div> <![CDATA[Seeing isn't always believing]]> 2018-11-15T10:15:00+00:00 2018-11-15T10:15:00+00:00 /blogs/internet/entries/814eee5b-a731-45f9-9dd1-9e7b56fca04f Ahmed Razek <div class="component prose"> <p>Much has been written about the <a href="http://www.bbc.co.uk/blogs/internet/entries/7e49f841-85af-4455-a8b0-c16e6279176c">societal impact of AI</a> but there has been far less penned about its creative potential.</p> <p>This blog post will focus on an AI experiment conducted in support of the <a href="/news/topics/cjxv13v27dyt/fake-news">主播大秀's 'Beyond Fake News' season</a>.</p> <p>Our experiment took inspiration from <a href="https://www.youtube.com/watch?v=AmUC4m6w1wo&t=13s">this viral 'Fake Obama' clip</a> produced at the University of Washington. Researchers used AI to precisely model how President Obama moves his mouth when he speaks.</p> </div> <div class="component"> <div class="third-party" id="third-party-0"> This external content is available at its source: <a href="https://www.youtube.com/watch?v=AmUC4m6w1wo&t=13s">https://www.youtube.com/watch?v=AmUC4m6w1wo&t=13s</a> </div> </div> <div class="component prose"> <p>This image synthesis technique is more popularly known as 'Deepfake'. The term 'Deepfake' (a portmanteau of deep learning and fake) can be unhelpful and confusing, as the underlying technology has potential for both creative and nefarious use. 
It is the malicious use of the technology that grabs our attention; often-cited examples have ranged from <a href="/news/technology-44397484">fake news</a> to <a href="/news/technology-42912529">porn</a>.</p> <p>So why is this problem important for the 主播大秀? Video reanimation can confuse (and impress) audiences, challenge our notion of truth and has the potential to sow widespread civil discord. It's crucial for organisations like the 主播大秀 to get under the skin of the technology by understanding what it takes to create a compelling video reanimation and researching what can be done to detect manipulated media.</p> <p>For our experiment, we wanted to push the technological creative boundaries by exploring whether a presenter could appear to be seamlessly speaking several languages. To make this happen, we asked 主播大秀 World News presenter Matthew Amroliwala to record a short 20-second script. We then asked three different presenters from the 主播大秀 World Service Hindi, Mandarin and Spanish services to record the same script, but in their native languages. We deliberately picked diverse languages in order to test how effective the technology is.</p> </div> <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p06rm176.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p06rm176.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p06rm176.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p06rm176.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p06rm176.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p06rm176.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p06rm176.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p06rm176.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p06rm176.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""></div> <div class="component prose"> <p>For the modelling and synthesis work we partnered with London AI startup Synthesia. 
Before recording his 20-second piece, we asked Matthew to read a prepared script which would tease out all of his facial movements. This was used as training data for the deep learning and computer vision algorithms. A generative network (a network used to generate new images of a person) was then trained to produce photorealistic images of Matthew's face, which would form the basis of his new digital face.</p> <p>Finally, to bring the digital face to life, the facial expressions and audio track from our World Service colleagues are transferred onto the new digital face - a process called digital puppeteering.</p> </div> <div class="component prose"> <p>And that's it. Take a look at the video below and see how convincing our reanimated video is.</p> </div> <div class="component"> <div id="smp-0" class="smp"> <div class="smp__overlay"> <div class="smp__message js-loading-message delta"> <noscript>You must enable javascript to play content</noscript> </div> </div> </div></div><div class="component prose"> <p>So, what did I conclude about our experiment? Spanish Matthew looks convincing to me. However, is there a feeling that something is not quite right when viewing the Hindi and Mandarin Matthew? Is the reanimation not quite as finessed, or is my brain so unused to seeing him speak Mandarin that the suspension of disbelief is broken? Or is transferring non-European languages trickier technically?</p> <p>But consider this: we now have a flexible digital copy of Matthew's face. It would be possible for him to record a new video (perhaps in his kitchen) and for us to reanimate those words onto any other recording of Matthew - in the studio or reporting on location. 
The implications for a trusted broadcaster like the 主播大秀 are serious.</p> </div> <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p06rmw01.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p06rmw01.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p06rmw01.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p06rmw01.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p06rmw01.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p06rmw01.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p06rmw01.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p06rmw01.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p06rmw01.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""></div> <div class="component prose"> <p>Technology is at a point where it's possible to cheaply and quickly manipulate video and make it difficult to tell the difference from an original. We will need tools that can verify the authenticity of a video and be able to prove this to the audience.</p> <p>But what mechanism would instil confidence in our audiences? We are seeing academia and technology companies working on the problem of authenticity, but there is some way to go. For now, for the audience, there needs to be a heightened awareness of this technology's capability. 
Seeing isn't always believing.</p> <p><em>You can see Matthew Amroliwala's reaction to the technology on the <a href="/programmes/p06r8g4l">主播大秀 News Click programme</a>.</em></p> </div> <![CDATA[Digital news trends for 2018]]> 2018-06-21T10:24:06+00:00 2018-06-21T10:24:06+00:00 /blogs/internet/entries/28a9de20-8228-4b91-b74e-c2795aba8806 Jonathan Murphy <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p06bq18q.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p06bq18q.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p06bq18q.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p06bq18q.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p06bq18q.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p06bq18q.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p06bq18q.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p06bq18q.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p06bq18q.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>Report author Nic Newman reveals this year's news trends</em></p></div> <div class="component prose"> <p>Social media as a source for news is in decline for the first time, according to an international poll which was revealed to the 主播大秀 this week. 
Most of the drop was due to a growing distrust in Facebook as a news platform. Meanwhile, people's trust in the news in general has stayed stable, with just over half of people saying they trust the news they use themselves.</p> <p>The <a href="https://reutersinstitute.politics.ox.ac.uk/risj-review/trust-misinformation-and-declining-use-social-media-news-digital-news-report-2018">Reuters Institute Digital News Report</a> is an annual survey of digital news usage across the world, and this year it polled 74,000 people in 37 countries.</p> <p>It found that usage of Facebook as a source of news had dropped for the first time, most noticeably in the US (down 9%) but also in most other countries, including the UK (down 2% to 27%).</p> <p>Other trends that emerged were:</p> <ul> <li>58% of those polled in the UK were concerned about fake news. The percentage is higher in countries with more polarised opinion, such as Brazil (85%), which has upcoming elections, and Spain (69%) after the Catalan independence vote</li> <li>There's a higher proportion of people wanting government intervention to stop fake news in Europe (60%) than in the US (41%)</li> <li>Social platforms are least trusted in the UK of all countries surveyed (12%), with higher trust for "mainstream" media such as broadcast and quality newspapers</li> <li>More people are using messaging apps such as WhatsApp to share news, particularly in Malaysia (54%) and Brazil (48%); however, the take-up in the UK is still relatively small (5%)</li> <li>There's a gradual increase in people paying for online news subscriptions - 16% in the US, while for the UK it's 7%</li> <li>Podcasts are becoming more popular (18%), particularly among younger people</li> <li>The same is true for voice-activated speakers. 
Usage has doubled in the UK, with just under half using them to access news.</li> </ul> <p>You can find more details <a href="http://www.digitalnewsreport.org/survey/2018/overview-key-findings-2018/">here</a>.</p> </div> <![CDATA[Social media - a question of trust]]> 2018-03-23T15:18:48+00:00 2018-03-23T15:18:48+00:00 /blogs/internet/entries/11d08b11-8b70-48ab-b619-b60e594fedfb Jonathan Murphy <div class="component"> <div id="smp-1" class="smp"> <div class="smp__overlay"> <div class="smp__message js-loading-message delta"> <noscript>You must enable javascript to play content</noscript> </div> </div> </div></div><div class="component prose"> <p><em>Disruption and Deception - What Next for News?</em> was the topic for a lively panel discussion at the Social Media and Broadcasting Conference hosted by the 主播大秀 Academy. Chaired by Nic Newman from the Reuters Institute for the Study of Journalism, with a panel of industry leads, the debate covered areas such as fake news, clickbait, responsible product development, data control and personalisation.</p> <p>But of course much of the chat was about the <a href="http://www.bbc.co.uk/news/uk-43474760">Facebook news story</a>. Mark Little, CEO & co-founder of NevaLabs, sees this as a watershed moment. "Democracy is being weaponised. There's now an arms race and the forces of darkness are weaponising these social media tools. The good news is that now we're discussing these problems and working on solutions." 
</p> </div> <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p0623ktp.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p0623ktp.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p0623ktp.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p0623ktp.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p0623ktp.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p0623ktp.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p0623ktp.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p0623ktp.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p0623ktp.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>The social media panel line-up</em></p></div> <div class="component prose"> <p>There was much debate over the nature and impact of fake news. Orit Kopel, co-founder of Wikitribune, said it was important to distinguish between deliberate falsehoods and bad journalism. "Misinformation is very rare," she said, borne out by a show of hands in which few people in the audience said they had recently seen fake news in their social media feeds. "The main problem is bad journalism. It's more profitable to have clickbait headlines linking through to bad content."</p> <p>There was also agreement that, overall, social media can be a force for good. 主播大秀 News social media editor Mark Frankel remains optimistic despite recent headlines: "It's not in our interest to lose Facebook. Their recent algorithmic news feed changes could be a big opportunity. If it's about raising the bar, if it's about trust and respect, that can only be a good story for us."</p> <p>Mark Little felt that while reforms and controls were needed, the benefits of social media were still enormous: "Let's not lose the democratic potential of having media that's not owned by gatekeepers, but is instead in our own hands".</p> <p>And what might be changing over the next couple of years? 
Similar problems, just with different platforms, one panellist suggested. For Orit Kopel, the biggest changes would be around user control: "We'll see people reclaiming their privacy and taking more control of their online lives".</p> <p><em>You can see highlights of the Social Media and Broadcasting Conference on the <a href="http://www.bbc.co.uk/academy">BBC Academy website</a>.</em></p> </div> <![CDATA[Infocalypse Now]]> 2018-03-23T08:00:00+00:00 2018-03-23T08:00:00+00:00 /blogs/internet/entries/3d9cc867-2383-4f68-a8cc-0af3a03c8313 Jonathan Murphy <div class="component prose"> <p>This week's headlines about Facebook have further raised questions about the trustworthiness of social media. According to one industry commentator, we're heading towards an "Infocalypse".</p> <p>Charlie Warzel, senior technology writer for BuzzFeed, will warn at a social media conference today hosted by the BBC Academy that advances in algorithms, combined with political manipulation, are creating a toxic cocktail where it will become increasingly difficult to distinguish between reality and fake news.</p> <p>"We have an online ecosystem of data that not only do many people not understand but even the companies in charge can't control."</p> <p>"The problem is that there are these platforms that incentivise engagement over everything else. So if it draws your eyeballs, if it's scandalous, those platforms reward that. On the other side is literacy too.
If you know that video can be manipulated in a certain way, you can look at things with a more sceptical eye."</p> </div> <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p061zn6v.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p061zn6v.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p061zn6v.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p061zn6v.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p061zn6v.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p061zn6v.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p061zn6v.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p061zn6v.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p061zn6v.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""><p><em>Charlie Warzel, senior technology writer, BuzzFeed</em></p></div> <div class="component prose"> <p>And commenting on the Facebook/Cambridge Analytica scandal, he pointed to the speed of growth of the platform as part of the problem.</p> <p>"What we're looking at right now with Facebook is an enormous platform with billions of users. The issue is that it's grown so fast. These companies are essentially working on the fly and they're moving fast. Facebook's motto used to be 'move fast and break things' and it's very clear that they've broken a lot of big things. Of course you could never anticipate that you would build a tool from your dorm room and it would influence geopolitics."</p> <p>He sees this as a turning point in the public's safety awareness:</p> <p>"One of the biggest things that can be done is on the user side. Obviously there are tons of protections that need to come around governance and moderation. But on the user end we can use this as a moment to say, what am I doing online? What am I giving up? We do this in the real world all the time. We're constantly assessing our safety and our security and we have to do this online too.
I don't think that the battle for truth and reality is lost by any means but this vigilance is key."</p> <p>And it could signal a culture shift for technologists:</p> <p>"Let's learn from our past mistakes so that in creating the next big platforms, we don't have to go through these growing pains of being cavalier with data. It's incumbent upon you if you're building this technology to be able to explain what you plan to do with it and how you intend to safeguard it, because the only thing we truly know about innovation and technology is that a bunch of people are going to use it in unexpected ways."</p> <p><em>Trust in Social Media was discussed at a panel event at the BBC Academy social media conference, which you can <a href="http://www.bbc.co.uk/blogs/internet/entries/11d08b11-8b70-48ab-b619-b60e594fedfb">read about here</a>.</em></p> </div> <![CDATA[Digital Creativity team trains photojournalists for School Report]]> 2018-03-20T12:57:48+00:00 2018-03-20T12:57:48+00:00 /blogs/internet/entries/b391979d-52a4-4669-bf1d-115bcf8ab9f1 Martin Wilson <div class="component prose"> <p>BBC D+E's Digital Creativity team helped transform youngsters from four schools into photojournalists for the day last week.</p> <p>It was all part of the BBC's School Report News Day, which involves 30,000 school children and 900 schools around the country.
Around 130 students and 30 teaching staff from schools across the north came into Media City for workshops to enhance their digital and journalistic skills.</p> <p>The D+E team collaborated with the BBC Academy to run a series of workshops that ended with the youngsters taking their own photographs around the Salford base and then publishing them on the BBC's creativity platform Mixital.</p> </div> <div class="component"> <img class="image" src="https://ichef.bbci.co.uk/images/ic/320xn/p061qkqy.jpg" srcset="https://ichef.bbci.co.uk/images/ic/80xn/p061qkqy.jpg 80w, https://ichef.bbci.co.uk/images/ic/160xn/p061qkqy.jpg 160w, https://ichef.bbci.co.uk/images/ic/320xn/p061qkqy.jpg 320w, https://ichef.bbci.co.uk/images/ic/480xn/p061qkqy.jpg 480w, https://ichef.bbci.co.uk/images/ic/640xn/p061qkqy.jpg 640w, https://ichef.bbci.co.uk/images/ic/768xn/p061qkqy.jpg 768w, https://ichef.bbci.co.uk/images/ic/896xn/p061qkqy.jpg 896w, https://ichef.bbci.co.uk/images/ic/1008xn/p061qkqy.jpg 1008w" sizes="(min-width: 63em) 613px, (min-width: 48.125em) 66.666666666667vw, 100vw" alt=""></div> <div class="component prose"> <p>The theme of the day was accuracy and authenticity in news reporting - two values that are crucial to the BBC. The workshops showed how to apply those same values to telling stories accurately and fairly with photos.</p> <p>Academy photographer Danielle Baguley explained some of the principles of photography by taking the groups through her portfolio, which included photos of everything from sporting events to will.i.am. The youngsters were introduced to the principles of photojournalism and how to tell a story with pictures.</p> <p>At the end of the workshops the youngsters were set the challenge of gathering their own photos around Media City to tell a story of urban regeneration. Back in the newsroom, the youngsters edited their photos, selected the best and wrote captions.
They then published them on Mixital and <a href="https://www.mixital.co.uk/channel/school-report-images">you can see them here</a>.</p> </div>