
Archives for January 2009

How we make websites


Michael Smethurst | 14:03 UK time, Thursday, 29 January 2009

Designing and building data driven dynamic web applications the one web, domain driven, RESTful, open, linked data way

For the past few months I've been touting a presentation around the BBC entitled 'How we make websites'. It's a compendium of everything our team has learned from long years developing /programmes, the recent work on /music and the currently in development /events.

As a warning: there's very little original thinking in here. For those already familiar with domain driven design, the importance of URIs, REST and linked data it'll probably be old news. Possibly it's interesting to see all these threads tied up in one place. Maybe it's interesting to see them all from a user experience point of view. Anyway, as ever, it's built on years of other people's work. Although obviously I'll make an exception for Paul Clifford. :)

The presentation is here and the (slightly) expanded text is below for the sake of accessibility and Google.


Explore the domain

Thumbnail image for Domain Driven Design book

This should be clear from the business requirements - it might be food or music or gardening or...

Employ a domain expert. Get them to sketch their world and sketch back at them. Concentrate on modelling real (physical and metaphysical) things not web pages - try to blank from your mind all thoughts of the resulting web site. This work should never stop - you need to do this through the lifetime of the project as you refine your understanding.

Identify your domain objects and the relationships between them

Programmes domain model

As you chat and sketch with your domain expert you should build up a picture of the types of things they're concerned with. Make a list of these objects.

As your knowledge of the domain increases you'll build up a picture of how your objects interlink. You can sketch basic entity relationship diagrams with your domain expert and keep sketching until the picture clears. Bear in mind you're trying to capture the domain ontology - this isn't about sketching database schemas. The resulting domain model will inform the rest of your project and should be one of the few artifacts your project ever creates.

Check your domain model with users

Sketch with users

Run focus groups and speak to users. Get them to sketch their understanding of the domain and again sketch back at them. After several round trips you should be able to synthesise the expert model and the user model. User-centric design starts here - if you choose to model things and relationships between those things that users can't easily comprehend, no amount of wireframes or personas or storyboards will help you out.

Check to see if your website already deals with some of your domain objects

Build horizontally, not vertically

If it does then reuse this functionality by linking to these pages - you don't want to mint new URIs for existing objects. Having more than one page per thing confuses users and confuses Google. Try to think of your website as a coherent whole; not as a collection of individual products. And as ever, don't expose your internal organisational structures through your website. Users don't care about departments or reporting lines.

The glory will always come from building skyscrapers - the real challenge lies in decent town planning. It's more difficult to build new services that stitch into your site and stitch into the web than build shiny, shrink wrapped, self contained products.

Design your database

Programmes database schema

Translate your domain model into a physical database schema.
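If you're working in something like Rails, that translation can be sketched as a migration. This is a hedged illustration - the table and column names are assumptions, not the real /programmes schema:

```ruby
# A sketch of a domain model made physical: episodes are the cultural
# artifact, versions the specific instantiations of them.
class CreateEpisodesAndVersions < ActiveRecord::Migration
  def self.up
    create_table :episodes do |t|
      t.string :pid,   :null => false   # persistent identifier, used in URIs
      t.string :title, :null => false
    end
    add_index :episodes, :pid, :unique => true

    create_table :versions do |t|
      t.integer :episode_id, :null => false
      t.string  :kind                   # e.g. 'signed', 'subtitled'
    end
  end

  def self.down
    drop_table :versions
    drop_table :episodes
  end
end
```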

Source your data

Creative Commons logo

Check if there are business systems in your organisation able to populate your schema. Check if there are existing websites outside your organisation you can use to populate your schema. Give preferential treatment to any websites that offer their data under a liberal licensing agreement - you can buy in data to help you slice and dice your own data, but if you do this you might not be able to provide an open data API without giving away the third party's business model. If your organisation AND an open data website can provide the data, consider the danger in minting new identifiers for your own data - can you easily link out / can you easily get links in?

Data licensing is one of those areas that often gets ignored in project planning. If you fail to consider it or get it wrong it can severely curtail your plans further down the line.

Pipe in your data

Programmes data flow

Whether you choose to use your business data or buy data or use open data you'll need a way of piping it into your database schema. You'll probably have to reshape it to make it suitable for publishing.

Make your models

Models

In an MVC framework your models should contain all your business logic. This means they should capture all the constraints of your database schema plus all the extra constraints implied by your domain model.
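As a rough sketch of what that looks like (model and column names are assumed, continuing the illustrative schema above):

```ruby
class Episode < ActiveRecord::Base
  has_many :versions

  # constraints from the database schema, restated as validations
  validates_presence_of :pid, :title
  validates_uniqueness_of :pid

  # a constraint implied by the domain model rather than the schema:
  # an episode can't be scheduled until it has at least one version
  def schedulable?
    !versions.empty?
  end
end
```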

Design your URI schema

Using post-its to design URIs

Your URI schema should follow naturally from your domain model. As an example if you're dealing with books and a book can have many authors then ../:book/authors should list all the authors of that book. At Audio and Music we tend to use large walls and lots of post-its to design our URIs. Add some string to show links and journeys and there's no need to ever draw another site map.
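To make the book example concrete, here's a sketch of those URIs as Rails-era routes - controllers and actions are illustrative assumptions:

```ruby
# URIs fall out of the domain model, not the site map
ActionController::Routing::Routes.draw do |map|
  map.connect 'books/:book_id',         :controller => 'books',   :action => 'show'
  map.connect 'books/:book_id/authors', :controller => 'authors', :action => 'index'
  map.connect 'authors/:author_id',     :controller => 'authors', :action => 'show'
end
```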

This isn't just about designing URIs for resources you link to - sometimes your pages will be made up of other resources - all of these subsidiary resources should be addressable too. It means you can easily change your user experience layer by taking out transcluded resources and linking to them instead or removing links and transcluding.

By making every nugget of content addressable you allow other sites to link to it, improve your bookmarkability and increase your SEO - cf. an individual 'tweet'. Bear in mind that some representations (specifically mobile) will need smaller, more fragmented representations with lower page weight - designing your subsidiary resources to be addressable allows you to easily deal with this requirement - transclude the content on a desktop machine, link to it on a mobile.

This is where we begin to talk about one web and REST. Each thing should be one resource with one URI - the representation you get back (whether desktop HTML or mobile XHTML MP or RDF or YAML or JSON) should depend on what your user agent asks for via content negotiation. It means I can send a link to a friend from a desktop machine, they can click on that link from a mobile and they'll get back a representation appropriate to their device. Or vice versa. One web with no mobile ghetto.
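In Rails, for instance, content negotiation can be as simple as this sketch (the Episode model is the same illustrative assumption as above):

```ruby
class EpisodesController < ApplicationController
  def show
    @episode = Episode.find_by_pid(params[:pid])

    # one URI; the representation depends on what the user agent asks for
    respond_to do |format|
      format.html                             # desktop markup
      format.xml  { render :xml  => @episode }
      format.json { render :json => @episode }
    end
  end
end
```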

It's important not to confuse URI design with site structure and user journeys. If you're used to starting from a site map, that thinking doesn't apply here. Think of the individual resources as tent poles - the user journeys are the canvas that gets draped over later.

It's nice if URIs are human readable. It's also nice if they're hackable. But both are bonuses, not requirements.

Don't sacrifice persistence for the sake of prettiness or misguided SEO. URIs are your promise to the web and your users - if you change them or change their meaning you break that promise - links break, bookmarks break, citations break and your search engine juice is lost.

Remember: cool URIs don't change.

Make hello world pages for your primary domain objects

h1 for In Our Time

For now all they need is an h1 with the title of the object.

Make hello world pages for your primary aggregations

h1 for aggregation page

For now all they need is an h1 with the title of the aggregation and a linked list of things aggregated.
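As a sketch, the two kinds of hello world page really are this small (titles and URLs are illustrative):

```html
<!-- hello world for a primary domain object -->
<h1>In Our Time</h1>

<!-- hello world for an aggregation: a title plus a linked list -->
<h1>Radio 4 programmes</h1>
<ul>
  <li><a href="/programmes/in-our-time">In Our Time</a></li>
  <li><a href="/programmes/today">Today</a></li>
</ul>
```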

Define the data you need to build each of your pages

Some pseudo SQL

Traditional wireframes lump together data requirements (via annotations), page layout and (by implication) document structure. It's best to split these out into 3 distinct tasks. The first task is to define the data requirements.

For each URI define the data needed to build all representations of the thing. Just because the HTML representation doesn't need to show the updated date doesn't mean the RSS or Atom or RDF representations don't need it.

Some resources will transclude others. There's no need to define the data required for these - just reference the transcluded resource.
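In the spirit of the slide's pseudo SQL, a sketch of the data requirements for one resource - table and column names are invented for illustration:

```sql
-- everything every representation of the episode page needs,
-- not just what the HTML shows
SELECT episodes.pid, episodes.title, episodes.synopsis,
       episodes.updated_at   -- RSS/Atom need this even if the HTML doesn't
FROM   episodes
WHERE  episodes.pid = :pid;

-- the tracklist is a transcluded resource with its own URI,
-- so its data requirements are defined there, not here
```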

Build up your HTML pages and other representations

More HTML

Now you know what data you need you can begin to surface this in your representations.

If you're working in HTML make sure you design your document to be semantically correct and accessible. Try not to think about page layout - that's the job of CSS not markup. Document design should be independent of page layout. In general your page should be structured into title, content, navigation - screen readers don't want to fight through calendar tables etc to get to the content.
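A sketch of that document order - content before navigation, with no layout assumptions baked into the markup:

```html
<h1>In Our Time</h1>

<div id="content">
  <p>The programme's synopsis and other content goes here...</p>
</div>

<!-- navigation comes last so screen readers hit content first;
     CSS will reposition it visually -->
<ul id="navigation">
  <li><a href="/programmes">Programmes</a></li>
  <li><a href="/music">Music</a></li>
</ul>
```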

Add caching and search sitemaps

Eggtimer

Knowing what can be cached and for how long is a vital part of designing your user experience. Cache for too long and pages go stale. Don't cache for long enough and you send unnecessary traffic across the wires and place extra strain on your application.

Cached pages will also be faster and smoother to render in a browser. And if your users are paying for data on a mobile every extra connection means bigger bills, which is definitely a user experience issue.

An example: if you're creating a schedule page for today's TV you want to cache for performance reasons but you don't want to cache it for too long since schedules are subject to change. But you can cache yesterday's schedule more aggressively and last week's schedule more aggressively still.
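In Rails terms that might look like the following sketch - expires_in sets the Cache-Control header, and the lifetimes here are illustrative guesses rather than real numbers:

```ruby
class SchedulesController < ApplicationController
  def show
    @date = Date.parse(params[:date])

    if @date >= Date.today
      expires_in 5.minutes, :public => true   # today: subject to change
    elsif @date >= Date.today - 7
      expires_in 12.hours, :public => true    # the last week
    else
      expires_in 1.week, :public => true      # ancient history
    end
  end
end
```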

Creating XML sitemaps helps search engines know which bits of your site have been updated. Which helps them to know which bits to re-index. Which helps to make your content more findable.
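A minimal sitemap entry, per the sitemaps.org protocol (the URL is illustrative):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.bbc.co.uk/programmes/in-our-time</loc>
    <lastmod>2009-01-29</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>
```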

Apply layout CSS

Wireframe

Add layout CSS to your HTML pages. Experiment with different layouts for your markup by moving elements around the page. You're wireframing!
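A sketch of wireframing in CSS, assuming the skeleton document above - swap the floats and you've tried a new layout without touching the markup:

```css
#content    { float: left;  width: 70%; }
#navigation { float: right; width: 25%; list-style: none; }
```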

Test and iterate

Repeat

You should be testing with real users at every stage of development but it's particularly important to conduct usability AND accessibility tests now. It's like testing traditional wireframes except you're testing the real application with real application behaviours and real data (no lorem ipsum nonsense).

Sometimes the results of your testing will require changes to layout CSS, sometimes to markup, sometimes to the data you need to surface and sometimes to the underlying domain / data model. Bear in mind if you're using data from existing business systems there may need to be heavy investment to make changes to that data model and employ the staff to admin those changes. Occasionally it might even mean renegotiating contracts with outside data providers. All design and usability issues are fixable - some just need more lawyers than others : )

Apply decor CSS

Decor CSS

Over the top of your wireframe application you can now start to add visual design and branding. This is exactly the same process as taking a paper wireframe and applying design treatments over the top except you're mainly working in CSS.

Experiment with different treatments - see how far you can stretch the design with the markup given. Sometimes you'll need to add additional markup to hook your CSS off.

Now's the time to add background imagery for headers, dividers, buttons, list items etc, so open Photoshop / Illustrator and make your design assets.

And test and iterate

Repeat

Never stop testing.


Ideally you should be able to adjust your code / markup / CSS to respond to user requests. If you can afford the recruitment / developer time there's no better way to test than with a user sitting alongside a developer - the developer can react to user requests, tweak the application and gain instant feedback without the ambiguity that sometimes comes from test reports.

Again you should accessibility test - some of the design / decor changes may affect font sizes etc - make sure your users can still read the page.

Add any JavaScript / AJAX

Ajax

By designing your browsable site first and adding JavaScript / AJAX over the top you stand a better chance of making an accessible web site - one that degrades gracefully.

As ever Google et al are your least able users - search bots don't like forms or JavaScript - sites that degrade well for accessibility also degrade well for search engines.

Making every subsidiary resource addressable and providing these resources serialised as XML or JSON makes adding AJAX relatively trivial.
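For example, a sketch of unobtrusive transclusion - the link works without JavaScript; with it, the addressable fragment is fetched and injected in place (ids and URL are illustrative):

```javascript
var link = document.getElementById('tracklist-link');
link.onclick = function () {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', link.href, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      document.getElementById('tracklist').innerHTML = xhr.responseText;
    }
  };
  xhr.send(null);
  return false; // cancel normal navigation only when scripting works
};
```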

You'll probably need to tweak your CSS to adjust to life with JavaScript / AJAX.

And test and iterate

Repeat

Again test your site for accessibility and usability with JavaScript turned on and off.

Continue

Desk with sketches

Follow the same steps for each development cycle. Some development cycles will just be about surfacing new views of the existing domain model; some will require expanding your domain model.

Now that you know your domain model and have made each domain object addressable, layering over new views and more subtle user journeys should be trivial.

And keep testing!

Visual Radio - Phase 1 Final Thoughts


Tristan Ferne | 11:51 UK time, Thursday, 29 January 2009

From Tom Spalding, Senior Designer on Visual Radio...

I realise that this subject has been blogged to death here, but it seemed a good idea to wrap up phase 1 of the trial with a bit of a user experience dissection: what went well, what went not so well, and what can be learnt for the future. Earlier posts have covered the technical aspects of the trial; here we will focus on the user experience delivery.

First things first: I felt the trial was a success. At its very best it illustrated exactly what we were looking to achieve; at its worst it served to teach us where we can improve in the future.

The times when it really came together were when the production teams in the studio were really engaging with the technology we had given them, using the console to supplement their broadcast but not to the detriment of normal radio listeners.

Visual radio screengrab

The Switch show on Sunday 18th was the best example of this, with creative uses of graphs and text messages, with very strong links to live studio events, and innovative use of the video feed.

Sometimes I felt the trial lacked a sense of pace - something conveying to a user that things were happening, a general sense of liveness. For great chunks of time nothing apart from the video was active in the console; this was certainly not our intention for the user.

We never wanted to show the video in total isolation; it was always meant to be supported by other visual data feeds, such as graphs, text messages and the studio blog. When we did show the text message feed we often displayed it with no visible updates for long stretches, which meant the messages themselves lost their sense of relevance. The longer these feeds remained static and stale, the more danger there was of users simply ignoring them - and this has definitely emerged from our subsequent focus groups.

Arguably the text messages didn't get through the system quickly enough, and the studio blog was not updated as regularly as we would have hoped. We can learn from this: for example, we could look to have more intelligent default states, so the console never stands still and always shows something without needing any manual studio intervention.

From a UX point of view we made a lot of educated guesses about how exactly the studio teams would use the console, in many cases there was too much of an overhead involved to use it fully. The managing of this overhead certainly got better as the week went on, as the Moyles team learnt to better engage with the product.

We didn't want to radically change the way a programme is produced, or how the output is perceived by the listeners. What we did want to do is add value for a new group of listeners, and I feel at points during this trial we achieved this.

Roll on phase 2...

Visualising Radio - delivering video and audio


Alan Ogilvie | 16:02 UK time, Thursday, 22 January 2009


So, I just wanted to answer some of the queries we've been having about the audio and video that was used for the Visualising Radio trial.

My team - Terry O'Leary and Toby Bradley - organised the streaming elements, as we do for many of the music events we cover on a regular basis, as well as things like 'ScottCam', which you will remember from last year.

The first thing is a discussion about the audio, and a quick explanation (I've tried to write this in simpler terms, so audiophiles please suppress your urge to correct my terminology).

Heavily compressed audio tends to have a very 'flat' or 'upfront' dynamic range - which means that it doesn't 'sit' with a video track. There is no perspective with the audio. Visualising Radio was about taking the Radio 1 output and putting a video track together with an audio track - this proved an interesting challenge, one that isn't initially obvious.

The Radio 1 broadcast to FM, DAB, Freeview and so on has significant audio compression applied at the studio - this is the 'sound' of Radio 1. If you were to use this against a piece of video there would be a feeling of 'dislocation' between the audio you hear and the video you see.

So, as part of the trial, we decided to use a different audio feed from the studio - one with some audio compression, but not nearly as much as the traditional broadcast feed. This would allow us to see how it 'felt'. Though if vis-radio is meant to be 'glanceable' then perhaps we need to review this - we got plenty of feedback from running the test.

In terms of the video stream, we used Flash Media Encoder running on a Windows XP laptop connected to the BBC's Live Flash delivery network via a standard SDSL line (Tristan mentioned this previously). The video is encoded with the On2 VP6 codec and the audio input (above) is encoded to MP3 - all wrapped neatly and delivered via RTMP streaming from Flash Media Server.

Terry was our streaming engineer for the week, sitting with the vision mixer on site (Will Kinder), in with Moyles in the mornings and Switch on Sunday evening.

Here is a clip of the Visualising Radio console in action, this is from archive so the quality is a bit poor. (Thanks to Toby for editing this)



How visual radio works


Tristan Ferne | 15:34 UK time, Friday, 16 January 2009

Yasser introduced visual radio on Monday and the trial has been going on this week. There is one last chance to catch it - on Annie and Nick this Sunday night (7pm - 10pm). The technical team behind it have written this post to give some idea of how the technology driving this system works. So over to the tech team: Conor Curran, Sean O'Halpin, Ant Smith, Terry O'Leary, Will Kinder and others.

One of the key elements of this project was to provide the ability for the editorial team to control the user client in realtime from the studio.

We considered HTTP polling but in the end found that the only way we could achieve the low latency and scale we required was to push messages to the client. We tried and rejected XMPP, mainly due to its verbosity and there being no decent support for pub/sub in any Ruby XMPP library.

The first solution that showed promise was Juggernaut. This works by embedding a small Flash client in the HTML page for the sole purpose of providing an XML socket connection back to the server, which it bridges to JavaScript. The server side is written in Ruby and integrates very well with Rails. Unfortunately, our tests showed that Juggernaut cannot yet scale to the levels we required. However, it provided the inspiration for our eventual solution.

As it turns out, we were already using a very similar solution to display LiveText on our network home pages via a third-party realtime messaging service provided by Monterosa called En Masse. Putting together what we'd learned from Juggernaut with the proven scalability of En Masse, we were able to piggyback our protocol over the existing messaging channel.

When a user loads the Visual Radio console, the En Masse server opens an XMLSocket connection to the connecting client and throughout the lifetime of the connection it will push XML messages to the Flash client.

Messages are fed to the En Masse server from our back end systems. All the back end processes are written in Ruby using our own messaging framework.

To control all this and provide the studio with the realtime control they needed, we built a Ruby on Rails web application which sends messages to the client via the messaging infrastructure. If the Radio team want to push content to all the people connected via the Visual Radio console, they activate the specific module they want to show within the Rails admin application and a chain of events occurs. At a high level, what happens is (there's a sketch of the middle steps after the list):

  • A message containing a URL to a resource is put on a queue by the admin application
  • A process watching this queue gets the message and parses it
  • A request is made back to the resource URL above, which returns an XML packet
  • This XML is then posted to a server
  • This server then messages all clients connected via their XMLSocket connection
  • The client parses the XML and displays the information to the user
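In Ruby, the middle of that chain might look like this sketch - queue and push_server stand in for the real messaging infrastructure and are assumptions, not our actual code:

```ruby
require 'net/http'
require 'uri'

loop do
  message = queue.pop                          # blocking read from the queue
  url     = URI.parse(message['resource_url'])

  xml = Net::HTTP.get_response(url).body       # fetch the resource's XML packet

  push_server.broadcast(xml)                   # fan out to all connected consoles
end
```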

Now Playing Information

Whenever a track gets played on Radio 1 we receive a message from the playout system via an HTTP POST request, containing the details of what was played. This message is then put on a message queue so that it can be archived and sent to other systems.

The track data is sent to us in a proprietary text format, created by the playout system vendor. So the first stage is to parse it into a data structure that is easier to process. This is then put back on to another message queue, again so that other systems can make use of it.

The next message queue processor looks up the artist information in MusicBrainz, using the artist name and the track title for disambiguation. If it unambiguously finds a matching artist in MusicBrainz then we add this information to the message, which can then be used to fetch an image and biographical information from /music/artists/:artistid
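A sketch of that disambiguation step, with a hypothetical MusicBrainz client - ambiguous names are left unresolved rather than guessed at:

```ruby
def resolve_artist(artist_name, track_title)
  candidates = musicbrainz.search_artists(artist_name)
  return candidates.first if candidates.size == 1

  # several artists share the name: keep only those with a matching track
  matches = candidates.select do |artist|
    musicbrainz.tracks_for(artist).any? { |t| t.title == track_title }
  end
  matches.size == 1 ? matches.first : nil
end
```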

Architecture

This may be just a trial but because it's being broadcast live we need it to be reasonably fault-tolerant. To that end we're using a high-availability Apache-based web tier which proxies requests back to multiple application servers. Each application server connects to a high-availability MySQL-based database tier. In the event of one of our servers failing, another will automatically take over with minimal disruption of service.
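For a flavour of the web tier, here's a hedged sketch of an Apache mod_proxy_balancer config - the hostnames are illustrative, not our real topology:

```apache
<Proxy balancer://rails>
  BalancerMember http://app1.internal:8000
  BalancerMember http://app2.internal:8000
</Proxy>
ProxyPass / balancer://rails/
```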

To manage the high-availability web and database tiers we're using open-source clustering software. Our application servers are all virtual machines running Rails. Underpinning everything we use Linux running under Xen, which provides efficient virtualisation of our physical hardware.

Streaming Audio and Video

The vision mix output, along with the audio feed from the studio, was encoded using a Flash media encoder. The On2 VP6 codec was used for the video and MP3 for the audio. This feed was streamed over an SDSL connection to the BBC's Content Delivery Network. The Flash client, once launched, attempts to connect to this stream hosted by the third party.

You can read more over at some of the developers' blogs.

In Search of Cultural Identifiers


Michael Smethurst | 10:49 UK time, Wednesday, 14 January 2009

Post updated following comments: thanks everyone

For Books....

A Rainbow of Books by Dawn Endico. Some rights reserved.

Late last year we got quite excited about Open Library. Anything using the word 'open' always seems to tick our boxes. We chatted about the prospect of a comprehensive, coherent BBC books site heavily interlinked with BBC programmes: every dramatisation of a novel, every poetry reading, every author interview and profile, every play linked to / from programmes. The prospect of new user journeys from programme episode to book to author to poem and back to episode still seems enticing. We started to wonder if we could use Open Library as the backbone of this new service in the same way we use open data as the backbone of /music.

Unfortunately, when we looked more closely an obvious problem came to light. We'd assumed Open Library was based on Amazon's book data - and Amazon is based on products. Correction: Open Library is NOT based on Amazon data (see Tim's comment). For now, though, it models books in a similar fashion to Amazon (as publications/products, not cultural artifacts). They're looking to change this, which is fantastic news - if you can help them, I'd encourage you to do so. The thing is, the BBC isn't all that interested in products. Neither are users.

If I tell someone that I'm reading Crash they generally don't care which particular edition or printing I'm reading. What's interesting isn't the product but the cultural artifact. It's the same story with programmes. Radio 7's David Copperfield isn't a dramatisation of this paperback or that hardback; it's a dramatisation of David Copperfield the work - the abstract cultural artifact.

The problem is probably so obvious it hardly warrants a blog post, but now I've started... Lots of websites exist to shift products. So when they're created the developers model products, not looser cultural artifacts. And because the cultural artifact isn't modelled it doesn't have a URL, isn't aggregatable and can't be pointed at. People use links to explain, disambiguate and clarify meaning. If something isn't given a URL it doesn't exist in the vocabulary of the web.

The problem is compounded by Amazon encouraging users to annotate its products with comments, tags and ratings. Why is one edition of a book rated 5 stars whilst another is rated 3 stars? They're essentially the same thing, just differently packaged. Are users really judging the books by their covers? Anyway, it all leads to conversations which should be about cultural artifacts fragmenting into conversations about products. It also leads to a dilution of ratings and attention as user activity gets split across these products.

I'm no library science expert but speaking to more library minded friends and colleagues it seems they use 3 levels of identification:

  • A general classification system (Dewey, for example) is used for categorisation and classification.
  • The ISBN is used to identify a specific publication.
  • The bar code they scan when you take out a book is used to identify the individual physical item.

So there's something missing between the general classification schemes and the individual publication. Like Amazon, libraries have no means of identifying the abstract cultural artifact or work - only instantiations of that work in the form of publications. These publications map almost exactly to Amazon products - it's why we see many pages for what is really the same book in Open Library.

So whilst Open Library's strapline is 'a page per book' (which feels strangely familiar), in reality it's a page per publication / product. It would be interesting to know if Open Library have any plans to allow users to group these publications into cultural artifacts. If they do then we'd really end up with one page per book and one canonical URL to identify it. Update: it turns out they do - which is fantastic news. This combination of open data and a model of interesting things is fantastic. At which point the prospect of links to and from BBC programmes (and Wikipedia) gets really interesting.

...and Music

So: we've written in the past about our use of MusicBrainz. MusicBrainz models 3 main things (artists, releases and tracks) and provides web-scale identifiers for each. So why have we chosen to only expose artist pages? Why not a page per release or a page per track?

The problem is the same one as Amazon / Open Library. In the case of releases, MusicBrainz models individual publications of a release. So instead of being able to identify and point to a single Rubber Soul you can only point to this pressing or that remaster. And in the case of tracks MusicBrainz really models audio signals. So one recording of a song is different from another recording is different from a third, with no means of identifying them as the same song - for all we know they might have as little in common as two entirely unrelated tracks. Which isn't a problem except that we want to say this programme played this song by this performer - which particular performance / mix is much less interesting. Same with reviews - most of the time we're not reviewing a publication but a cultural artifact.

So how do we get round this? We're currently working with MusicBrainz to implement the first part of its proposed schema changes. This will allow users to group individual release publications into what we're calling cultural releases. So we'll have one Rubber Soul to point at. After that it's on to works and parts and acts and scenes and acts of composition etc, with a single page and a single URL for each.

...and Programmes

Again the problem resurfaces in the world of programmes. Most of our internal production systems deal with media assets, and these assets aren't always grouped into cultural artifacts. But people outside the BBC aren't really interested in assets. If your friend recommends an episode of Horizon you're unlikely to care if they mean the slightly edited version, the version with sign language or the version with subtitles. Most of the time people talk about the abstract, platonic ideal of the programme.

Way back in time when /programmes was still known as PIPs a design decision was made to model both cultural artifacts and instantiations. If you look at the /programmes schema you'll see both programmes (brands, series and episodes - the cultural artifact) and versions (the specific instantiation). When we talk about one page per programme what we're really talking about is one page per episode. What we're definitely not talking about is one page per version or one page per broadcast.

Getting this stuff right is really the first job in any web project. Identify the objects you want to talk about and model the relations between those objects. The key point is to ensure the things you model map to users' mental models of the world. User-centric design starts here - if you choose to model and expose things that users can't easily comprehend, no amount of product requirements or wireframes or storyboards will help you out.

For want of a better label we A&M types often refer to this work as 'cultural identifiers'. One identifier, one URL, one page per cultural artifact, all interlinked. It's something that Wikipedia does better than anyone: one page per concept, one concept per page. bbc.co.uk could be a much more pleasant place to be if we can build something similar for the BBC.

Visual radio launches!


Yasser Rashid | 10:10 UK time, Monday, 12 January 2009

Today we have launched a new visual radio player!

visual_player.jpg

It's available for the Chris Moyles show on weekday mornings (6.30am - 10am) and Annie and Nick on Sunday night (7pm - 10pm). Go to the Chris Moyles or Annie and Nick pages during those times to check it out. Outside of these times you won't be able to see it - but that's why I've written this post. It is also only available for one week as we are trialling the service and hope to get as much feedback as possible to see what audiences think of the concept.

Visual radio is an important aspect of the way radio is evolving on different platforms, so to quickly recap on what I've written about before: the work we have been doing focuses on complementing radio programmes with additional information such as the network branding, the track that's currently playing and contact details such as SMS numbers, phone numbers and email addresses. To use some in-house terminology, we are calling this glanceable information - snippets of information suited to radio content, that you wouldn't want to stare at while you're listening to a show but will be useful if you hear something great and want to know what it is.

The other aspect of visual radio is enhancement. Each show has a different tone and reaches out to different audiences. Network graphics and presenter images help to reinforce the show's brand and convey its own distinctive identity to its audience. Enhancement also means taking advantage of the capabilities of a device and providing new ways to interact with the programme itself. This could be anything that uses the interface of the device or platform to access menus, to participate in games or view on-demand content.

The player we have launched today incorporates lots of new features that you don't get using the standard BBC radio player. While you are listening to the show it updates with live information and graphics. For example, if a track starts playing you see an image of the artist, and then the player may update to show you text messages that people are sending in, or video of what is happening live from the studio.

The following screenshots illustrate the different features of the player:

Now playing and studio messages:

image_text_messages.jpg

Note that this module is generated automatically: we take the data generated when a track is played through our internal playout system and use it to pull in the artist image and Wikipedia info from /music. That's a nice bit of integration work by the tech team (more from them on the technical implementation of this project in a follow-up post soon)!

A live video feed from the studio and text messages sent in from the audience:

video_your_messages1.jpg

Images that relate to subjects being discussed on air:

photo_your_messages1.jpg


We have tried to do something new with the SMS that we receive. Bar charts and swingometers can illustrate polls and the audience opinion.


Bar chart example:
bar_chart.jpg

Swingometer example:

swing_meter.jpg


The ability to display SMS and represent it in a variety of forms is an important development for how we deal with the thousands of text messages that radio programmes receive. Obviously it's impossible to respond to each one, so rather than your text feeling like it has fallen into a void we now (albeit temporarily) have a way to feed back to our audience.

Each of these elements within the player has been designed to be flexible and modular so that it can be turned on and off at any moment. It was important for the design team to come up with something that would still look impressive and enable a producer to display meaningful information relating to the broadcast at any time. To enable this flexibility an admin interface provides the functionality to configure the content area of the player, so that the video, images, text messages etc can be switched on and off when appropriate. The bar charts and swingometers can also be configured on the fly, which means there can be multiple instances of them or they can be updated during the show. All of this is absolutely necessary because anything can happen on a live radio programme! The following image is an example of the interface: you can see that it provides an overview of each module available. One click and the module goes live on the player.

admin.jpg


Creatively, visual radio opens up lots of new opportunities, though some radio programmes lend themselves better than others. 5 live, for example, involves lots of audience interaction and breaking news stories, and Mark Kermode's film reviews are already streamed live on the website. There are huge challenges when you attempt to visualise programmes that are often prepared on the fly, which is why it's so difficult to come up with a one-size-fits-all solution. Theoretically there may be elements that are common to all programmes, but it's still important to design features that reflect the identity of the programme and its content. Coming up with an appropriate design solution also means taking into account complex technical considerations, such as identifying the variety of data sources available and how we can use that data while still ensuring it reflects what is being broadcast on air.

In terms of interactivity there is very little in this version of the player. This was a conscious decision as we wanted to maintain the philosophy of creating something that was glanceable and complementary to the broadcast. However, in future versions we may explore how we can incorporate interactive features - things like Radio Pop functionality or the ability to contact the show from within the player.

But this is just the first step and we are hoping we can learn as much as we can from this trial and use the feedback to further develop our concepts.

