Live Blogging the Budget

This week’s budget gave a good opportunity to see how different news organisations handle live reporting on their websites, so I did a quick scan through a few TV and newspaper sites and screen-grabbed what I could see.

The reason I’m interested is that the 125th anniversary has provided an opportunity for a large number of events on campus, and for some of these, like the Manifesto for Change event, to have a remote audience engaging online via streaming video and live chat.

Read more after the break!

Continue reading “Live Blogging the Budget”

2010: The Year of Open Data?

I don’t like to predict the future – usually because I’m wrong – but I’m going to stick my neck out on one point for the coming year: 2010 will be the year that data becomes important.

I’ve long been a believer in opening up sources of data.  As far as possible, we try to practise what we preach by supplying feeds of courses, news stories, events and so on.  We also make extensive use of our own data feeds, so I’m always interested to see what other people are doing.  Over the last year there has been growing support for opening up data to see what can be done with it, and there’s potentially more exciting stuff to come.
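
Consuming one of these feeds takes only a few lines. Here’s a minimal sketch using just the Python standard library; the feed URL is hypothetical and it assumes a plain RSS 2.0 document:

```python
# Minimal sketch of consuming a hypothetical RSS 2.0 news feed.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://www.example.ac.uk/news/feed.rss"  # hypothetical URL

def latest_items(url, limit=5):
    """Fetch an RSS 2.0 feed and yield (title, link) for the newest items."""
    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()
    # RSS 2.0 places each story in <rss><channel><item>.
    for item in root.findall("./channel/item")[:limit]:
        yield item.findtext("title"), item.findtext("link")

if __name__ == "__main__":
    for title, link in latest_items(FEED_URL):
        print(title)
        print("  " + link)
```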

A big part of what many consider to be “Web 2.0” is open APIs that allow connections to be made between services, and they have undoubtedly led to the success of services like Twitter.

Following in their footsteps are journalists, both professional and amateur, who are making increasing use of data sources and in many cases republishing them.  The MPs’ expenses scandal showed an interesting contrast in approaches.  While the Daily Telegraph broke the story and relied on internal manpower to trawl through the receipts for juicy information, the Guardian took a different route: as soon as the redacted details were published, it launched a website allowing the public to help sort through the documents and flag pages of interest.  Both the Guardian and the Times now have active data teams releasing much of their source data for the public to mash up.

The non-commercial sector has produced arguably more useful sources of data.  mySociety runs a set of sites which do some really cool things to help the public better engage with their community and government.

In the next few months there looks set to be even more activity.  The government has asked Tim Berners-Lee to advise on ways to make itself more open and, whether due to his influence or other factors, there are changes on the horizon.

But it’s the election, which must be held by early June, that could do the most.  Data-based projects look set to pop up everywhere.  One project – The Straight Choice – will collect flyers and leaflets distributed by candidates in order to track their promises during and after the election.  Tweetminster follows the Twitter accounts of MPs and PPCs and has some nice tools to visualise and engage with them.

I believe there will be increasing calls for Higher Education to open up its data.  Whether that’s information about courses using the XCRI format, or getting information out of the institutional VLE in a format that suits the user rather than the developer, there is lots that can be done.  I’m not pretending this is an easy task, but if it can be done then surely it should be, because it’s the right thing to do.
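
To give a flavour of the course data side, here’s a hedged sketch of pulling course titles out of an XCRI-CAP feed. The URL is hypothetical, and the namespace URIs and element layout are my assumptions based on the XCRI-CAP 1.1 profile, so check them against the spec and the feed you’re actually consuming:

```python
# Hedged sketch: extracting course titles from a hypothetical XCRI-CAP feed.
import urllib.request
import xml.etree.ElementTree as ET

XCRI_URL = "https://www.example.ac.uk/courses/xcri.xml"  # hypothetical URL

# Assumed namespaces, based on the XCRI-CAP 1.1 profile; verify before use.
NS = {
    "xcri": "http://xcri.org/profiles/catalog",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def course_titles(url):
    """Yield the Dublin Core title of each course in the catalogue."""
    with urllib.request.urlopen(url) as response:
        root = ET.parse(response).getroot()
    # Assumption: courses sit under provider elements in the catalog.
    for course in root.findall(".//xcri:provider/xcri:course", NS):
        yield course.findtext("dc:title", default="(untitled)", namespaces=NS)

if __name__ == "__main__":
    for title in course_titles(XCRI_URL):
        print(title)
```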

Since I started writing this entry a few days ago, the Google Blog has published a post on The Meaning of Open. Of course they say things much better than I could, so I’ll leave you with one final quote:

Open will win. It will win on the Internet and will then cascade across many walks of life: The future of government is transparency. The future of commerce is information symmetry. The future of culture is freedom. The future of science and medicine is collaboration. The future of entertainment is participation. Each of these futures depends on an open Internet.

Let’s do our bit to contribute to that future.

Argleton goes national!


It seems Argleton just won’t die! Late to the game behind the Ormskirk Advertiser, Mister Roy’s visit and my post about the village some 13 months ago, the Daily Telegraph yesterday revealed the mystery of Argleton, the ‘Google’ town that only exists online.

It’s a nice article with exclusive interviews with Joe Moran of LJMU and, of course, Roy Bayfield. They’ve also managed to get answers from Google and their data provider Tele Atlas.  Google’s spokesman said:

“While the vast majority of this information is correct there are occasional errors. We’re constantly working to improve the quality and accuracy of the information available in Google Maps and appreciate our users’ feedback in helping us do so. People can report an issue to the data provider directly and this will be updated at a later date.”

Ah yes, report the fault… that’s what we’ve done on several occasions without success, and it may be why Google have decided to take corrections into their own – or more accurately their users’ own – hands.  It seems that drawing the attention of a national newspaper has caused Tele Atlas to pull their finger out:

“Mistakes like this are not common, and I really can’t explain why these anomalies get into our database.”

Let’s try a bit harder, shall we… is it because there’s no process for checking data before it’s added?  Is it because you’ve chosen not to buy additional sources of data to verify against?  Is it because your error-reporting procedure is so poor that the mistake is still in the database 13 months later?  No?

For Google, errors like these are annoying.  They recently announced Google Maps Navigation for Android 2.0, offering turn-by-turn directions similar to TomTom and other dedicated devices, but for free.  The accuracy of its maps, and the ability to keep them up to date, will be one of the big selling points.

But time may be nearly up for Argleton: “A spokesman [for Tele Atlas] said it would now wipe the non-existent town from the map.”

Update: Mister Roy appeared on Radio 5 Live’s The Weekend News (starts at 25 minutes).
