Tag Archives: maps

Mapping the campus

We’re currently looking at a project involving maps of the Ormskirk campus which – if you read my 125 by 125 blog – you’ll know I find quite exciting. Maps are important for lots of things the University does, yet we’ve never had very good ones. We have access to lots of maps, but nothing that’s quite suitable.

For example, our own campus map has all the buildings labelled and is pretty up to date but it’s not to scale or plotted against a real grid system.

We also have a 3D drawing of campus which is used in the prospectus and online in the interactive campus map. It looks nice but again it’s not accurate enough for plotting real positions and it’s a pain to keep up to date.

Google Maps is to scale but missing lots of buildings, and some websites are moving away from it to other services because Google has started charging heavy users – not something I think they’d do to us, but they could add advertising:



Microsoft’s Bing is even worse:

The Ordnance Survey have released some of their data, and we also have access to it through Digimap, but questions remain over licensing and the frequency of updates.

We can do better than this by building on what has already been done by OpenStreetMap.

I’ve mentioned OpenStreetMap several times before going back three years:

[Mapumental’s] base mapping layer is from OpenStreetMap – a project to create a free (as in beer and speech) map similar to the ones available from Google Maps, or even from the OS. It’s created by volunteers who go out with GPS and plot the routes online. Almost all major roads are on there already and certain areas have excellent quality coverage – take a look at South Liverpool for an example of how good it can get.

The quality of mapping on OSM for Edge Hill hasn’t been great – in the past I’ve added the odd GPS track of paths around campus, but generally it’s been pretty poor, as you can see from this recent capture:

Edge Hill Before (CloudMade)

Recently though, the quality of aerial imagery available in the OSM editor has vastly improved making tracing over the campus a viable option where it hadn’t been before (buildings going back as far as the Faculty of Health were missing from Yahoo!’s images). A few hours work has resulted in a much more complete map of campus:



As I write it’s still not perfect but all the buildings are plotted along with roads, footpaths and many of the facilities we have on campus like cafes and shops. OpenStreetMap allows anyone to make corrections and add missing features which will help keep it up to date.
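One nice consequence of the base map being open data is that anything we’ve traced can be pulled back out programmatically. As a rough sketch, here’s how you might build an Overpass API query for the buildings on campus in Python (the Overpass service and the bounding box coordinates are illustrative assumptions, not exact campus bounds):

```python
def overpass_building_query(south, west, north, east):
    """Build an Overpass QL query string for all buildings in a bounding box.

    The query asks for JSON output and the geometry of each way tagged
    as a building.
    """
    bbox = f"{south},{west},{north},{east}"
    return (
        "[out:json];"
        f'way["building"]({bbox});'
        "out geom;"
    )

# Rough bounding box around the Ormskirk campus (illustrative values only)
query = overpass_building_query(53.557, -2.873, 53.565, -2.865)
print(query)
```

The resulting string could then be POSTed to a public Overpass endpoint to get back every building polygon anyone has traced, which keeps nicely with the “anyone can correct it” point above.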

Now that we have an up to date base map we’ll be looking at ways we can make use of it in some exciting forthcoming projects which I hope we’ll be blogging about soon!

Was 2010 the year of Open Data?

sometimes you throw a six

In a little-read post published last Christmas Eve as part of our previous 25 days project I suggested 2010 might be the year open data became important:

I don’t like to predict the future – usually because I’m wrong – but I’m going to put my neck out on one point for the coming year. 2010 will be the year that data becomes important.

So let’s look at what’s happened over the last year.

  • Ordnance Survey Code-Point® Open data containing the location of every postcode in the country. With this, people have been able to build some cool services like a wrapper API giving you XML/CSV/JSON/RDF as well as a hackable URL: http://www.uk-postcodes.com/postcode/L394QP (that’s Edge Hill, by the way)
  • The OS also released a bunch of other data, from road atlases in raster format through to vector contour data. Of particular interest is OS VectorMap in vector and raster format – that’s the same scale as their paper Landranger maps, and while it doesn’t have quite as much data, it’s beautifully rendered and suitable for many uses, but sadly not for walking.

OS VectorMap of Ormskirk. Crown copyright and database rights 2010 Ordnance Survey.

  • Manchester has taken a very positive step in releasing transport data (their site is down as I type) – is it too much to hope that Merseytravel will follow suit?
  • London has gone one step further with the London Datastore.
  • data.gov.uk now has over 4600 datasets.  Some of them are probably useful.
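To give a flavour of how hackable that postcode URL scheme is, here’s a minimal Python sketch that builds uk-postcodes.com style URLs (the format suffixes are an assumption based on the XML/CSV/JSON/RDF formats mentioned in the first bullet):

```python
def postcode_url(postcode, fmt=None):
    """Build a uk-postcodes.com style 'hackable' URL from a UK postcode.

    Spaces are stripped and the postcode upper-cased, matching the
    L394QP example above; fmt is an assumed optional format suffix.
    """
    slug = postcode.replace(" ", "").upper()
    url = f"http://www.uk-postcodes.com/postcode/{slug}"
    if fmt:
        url += f".{fmt}"
    return url

print(postcode_url("L39 4QP"))          # http://www.uk-postcodes.com/postcode/L394QP
print(postcode_url("L39 4QP", "json"))  # http://www.uk-postcodes.com/postcode/L394QP.json
```

The point of a URL scheme like this is that you can construct it from data you already hold – no API keys, no lookup step – which is exactly what makes open data services so easy to mash up.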

In May I gave a talk at Liver and Mash expanding on some ideas about data.ac.uk. Since then lots of other people have been discussing it in far more detail than I have, including the prolific Tony Hirst from the Open University, which has become (I believe) the first data.foo.ac.uk with the release of data.open.ac.uk.

So things are starting to move in the Higher Education open data world. I think things will move further with the HEFCE consultation on providing information to prospective students and maybe XCRI’s time has come!

Maybe 2011 will be the year people start to do data without even thinking about it?

Street View trike on campus

Google Street View trike on campus

Back in March, Google added Ormskirk – and most of the rest of the country – to Google Street View. Today we were visited by the Google Street View trike to take imagery of the Ormskirk campus. Unlike roads, which are photographed using a car, private property like university campuses and Disneyland Paris is photographed using a trike, which allows them to get along footpaths.

Most of the campus was covered today in just an hour or so but it’ll be somewhere between a couple of weeks and six months until it’s live on the website. Until then you can check out a (very) short video of the trike on campus:

Can you tell me how to find Topshop?

A couple of weeks ago my Twitter Search alerts for “Ormskirk” picked up the following:

Katie on Topshop

I was intrigued so searched Google Maps for “topshop near ormskirk” and sure enough, not one but two mystery Topshops were marked on the map.

Topshop near Ormskirk

Obviously this isn’t the case, so why are Google showing them on the map? The addresses of the shops match Dorothy Perkins and Burton – both other brands in Arcadia Group, owners of Topshop – but that doesn’t explain why they’re there. As with Argleton, it may well be another case of Google mining data from whatever sources they can get their hands on without checking its accuracy. I’ve reported the problem to Google; let’s see if they fix it.

Liver and Mash

I’ve already blogged about my own Mashed Library Liverpool talk but I promised to say something about the rest of the event, so here goes!

Mandy Phillips and Owen Stephens

Mandy Phillips kicks off Liver and Mash

The day kicked off with welcome and introductions from Mandy and Owen. I’d heard bits about Mashed Library events before and I know the basics of mashups, but I didn’t really know who would be there or what to expect. There was a good mix of attendees and speakers presenting “lightning talks”, “Pecha Kucha 20:20” talks and workshops. The thing that persuaded me to agree to speak and convinced me that it wouldn’t just be a bunch of librarians (!!) was the scattering of local speakers…

Alison Gow

Alison Gow

Alison is Executive Editor (Digital) for Trinity Mirror Merseyside, publishers of the Liverpool Daily Post and Echo. Despite “knowing” her through the Twitter, Friday’s Mashed Libraries event was the first time I’d met her IRL! The slides of her talk “Open Curation of Data” are online, covering some of the things journalists and the newspaper industry have had to deal with since the superinterweb came along.

Aidan McGuire and Julian Todd

Julian Todd and Aidan McGuire on ScraperWiki

Aidan and Julian demonstrated ScraperWiki, a project supported by 4iP that aims to free data from inaccessible sources and make it available to those who wish to use it in new and innovative ways, for example mashups. “Screen scraping” isn’t a new idea, but typically it’s done by individuals and embedded into their own systems. If the scraped website changes then the feed breaks, and there’s no way for others to build on the work done.

ScraperWiki aims to change that by providing a community driven source for storing scrapers. It’s like Wikipedia for code allowing you to take and modify a scraper I’ve written for your own purposes.

There are already dozens of scraped data sources and more are being added every day. It currently supports Python but my language of choice – PHP – will be added soon so I’ll be giving it a go then.
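To show what a screen scraper actually does, here’s a minimal, self-contained sketch in Python using only the standard library (the HTML snippet is made up for illustration, and ScraperWiki’s own scrapers also persist results into a shared datastore, which this sketch omits):

```python
from html.parser import HTMLParser

class LinkScraper(HTMLParser):
    """Collect (href, text) pairs from anchor tags - the core of most scrapers."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None   # href of the anchor currently being parsed
        self._text = []     # text fragments seen inside that anchor

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# An invented page listing some datasets
page = ('<ul><li><a href="/data/1">First dataset</a></li>'
        '<li><a href="/data/2">Second dataset</a></li></ul>')
scraper = LinkScraper()
scraper.feed(page)
print(scraper.links)  # [('/data/1', 'First dataset'), ('/data/2', 'Second dataset')]
```

The fragility mentioned above is easy to see here: if the site swapped its list markup for a table, the parser would still work, but anything keyed off the page structure would need updating – which is exactly why sharing scrapers in one place helps.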

John McKerrell

John McKerrell on Mapping

John’s talk about mapping had the most interest so he presented it to all attendees briefly covering mapping APIs, OpenStreetMap and tracking your location with mapme.at.

Phil Bradley

The first Pecha Kucha 20:20 talk was about social media search tools. I wasn’t writing down the links, so check Phil’s Slideshare page for the presentation when it goes up. I will say that Google’s support for Twitter is now much better than he seemed to suggest – for example, allowing you to drill into tweets for a particular time. It can also be more reliable than search.twitter.com when using shared IP addresses at a conference.

Gary Green

Gary Green 20/20 talk

Gary mentioned that this was his first presentation so I’m not sure a 20:20 talk was the best idea but he handled it pretty well!

Tony Hirst

Tony Hirst talking about Yahoo! Pipes

The afternoon was dedicated to one of three workshops – Arduino with Adrian McEwen, Mapping with John McKerrell or Mashups with Tony Hirst. I’ve done a bit of each before, so I sat at the back of Tony’s talk to try to soak up some new tips.

After a final cake break there was the prize giving for the mashup suggestions competition.

@briankelly, @m8nd1 and @ostephens presenting prizes

So all in all a really interesting day! Congratulations to Mandy Phillips and all the organising team for an excellent event.

2010: The Year of Open Data?

I don’t like to predict the future – usually because I’m wrong – but I’m going to put my neck out on one point for the coming year.  2010 will be the year that data becomes important.

I’ve long been a believer in opening up sources of data.  As far as possible, we try to practice what we preach by supplying feeds of courses, news stories, events and so on.  We also make extensive use of our own data feeds so I’m always interested to see what other people are doing.  Over the last year there has been growing support for opening up data to see what can be done with it and there’s potentially more exciting stuff to come.

A big part of what many consider to be “Web 2.0” is open APIs allowing connections to be made, and they have undoubtedly led to the success of services like Twitter.

Following in their footsteps have been journalists, both professional and amateur, who are making increasing use of data sources and in many cases republishing them. The MPs’ expenses issue showed an interesting contrast in approaches. While the Daily Telegraph broke the story and relied on internal manpower to trawl through the receipts for juicy information, the Guardian took a different route: as soon as the redacted details were published, it launched a website allowing the public to help sort through the documents and flag pages of interest. Both the Guardian and the Times have active data teams releasing many of their sources for the public to mash up.

The non-commercial sector has produced arguably more useful sources of data. MySociety has a set of sites which do some really cool things to help the public better engage with their community and government.

In the next few months there looks set to be even more activity. The government asked Tim B-L to advise on ways to make government more open and, whether due to his influence or other factors, there are changes on the horizon.

But it’s the election – which must be held before [June] – that could do the most. Data-based projects look set to pop up everywhere. One project, The Straight Choice, will collect flyers and leaflets distributed by candidates in order to track promises during and after the election. Tweetminster tracks Twitter accounts belonging to MPs and PPCs and has some nice tools to visualise and engage with them.

I believe there will be an increasing call for Higher Education to open up its data.  Whether that’s information about courses using the XCRI format, or getting information out of the institutional VLE in a format that suits the user not the developer, there is lots that can be done.  I’m not pretending this is an easy task but surely if it can be done it should because it’s the right thing to do.

Since I started writing this entry a few days ago, Google has published a blog post on The Meaning of Open. Of course they say things much better than I could, so I’ll leave you with one final quote:

Open will win. It will win on the Internet and will then cascade across many walks of life: The future of government is transparency. The future of commerce is information symmetry. The future of culture is freedom. The future of science and medicine is collaboration. The future of entertainment is participation. Each of these futures depends on an open Internet.

Let’s do our bit to contribute to that future.

Mapping Wikipedia

Google Maps Wikipedia
Google have started adding photos and Wikipedia entries to their maps. Currently only activated on maps.google.com, you can click the “More” button to show icons marking points of interest. I was amazed by how many things are tagged, but slightly disappointed that Edge Hill isn’t on there yet.

It will be interesting to see if this is a one off, or with any luck shows that Google are going to be making more use of microformats and tagged pages in search results.

Where Am I?

I’ve always had a bit of an interest in maps – I have a bookshelf full of walking maps – so any time mapping and the web come together I find it pretty cool. Take Geograph for example – a project aiming to collect photographs of every square kilometre of the British Isles – it’s interesting to look around places you’ve been and see what other people think is important enough to take photos of!

Back to what I was going to blog about! Yesterday – and I nearly missed it – Google announced that their Geocoder will support the UK. There’s been a bit of a hack using another Google service for a little while, but this time it uses the correct APIs. This is hugely significant – previously the Royal Mail, who basically own the only source of this data, have charged a lot of money for it. There are companies who make this data available more easily on a pay-as-you-go basis, but it was still a significant amount if you had no income from users or advertising.

Now Google will do it for free, on the client machine using Ajax calls to their servers meaning you can pass the postcode (or placename if you don’t have much detail) and it will pass back the long/lat coordinates and a bunch of other information about the address. So well done, Google, for negotiating an affordable contract with the Royal Mail, or maybe you’re just absorbing the cost in return for locking people into your system!
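As a sketch of the kind of client code this enables, here’s a small Python example that pulls coordinates out of a geocoder-style JSON reply (the response shape below is a simplified assumption for illustration, not Google’s exact schema):

```python
import json

# A simplified, assumed geocoder response - real replies are more deeply
# nested; this is for illustration only.
response_text = json.dumps({
    "status": "OK",
    "results": [{
        "address": "St Helens Road, Ormskirk, L39 4QP, UK",
        "lat": 53.559,
        "lng": -2.873,
    }],
})

def extract_coords(text):
    """Pull the first (lat, lng) pair out of a geocoder-style JSON reply,
    or return None if the lookup found nothing."""
    data = json.loads(text)
    if data.get("status") != "OK" or not data.get("results"):
        return None
    first = data["results"][0]
    return first["lat"], first["lng"]

print(extract_coords(response_text))  # (53.559, -2.873)
```

In the browser the same parsing would happen in JavaScript on the Ajax response, but the principle is identical: postcode in, long/lat and address details out.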

Via nearby.org.uk.