Monthly Archives: May 2010

Eduserv Symposium 2010: The Mobile University

Last Thursday was Eduserv’s annual symposium. The theme this year was “The Mobile University”, a topic we’ve been talking about for years as the next big thing but which has finally crept up the list of priorities and is now something we’re starting to act on. So I was really looking forward to spending the day at the Royal College of Physicians learning what others in the sector – and, equally importantly, the commercial sector – are doing for mobile devices.

The slides and videos are available online so I’ll just cover some of my thoughts on each and leave you to see the speakers for yourself.

Opening keynote: “Mobile, Mobile, Mobile!”

Paul Golding, CEO and Lead Innovation Architect, Wireless Wanders

Paul’s talk set the scene for the rest of the day by looking at the state of the mobile marketplace, for example the trend of increasing smartphone market share. Established uses of mobile phones are clearly still king – 78% of people used SMS in Q3 2009 – but over a fifth are now using the superinterweb on their phones, and even early-adopter technologies like Location Based Services are being used by significant numbers.

He made the point that the introduction of Web 2.0 to mobile devices has opened up the platform to developers.  They are no longer controlled by network operators but have much greater access to users, devices and services.

The role of a University Computing Service in an increasingly mobile world. Or: “We don’t support that…”

Christine Sexton, Director of Corporate Information and Computing Services, University of Sheffield

I’ve been following Chris’ blog and Twitter feed for a couple of years and it’s really interesting to see the thinking behind decisions made in another “IT Services” department.  In her talk she challenged what is often perceived to be our approach to services – the response “we don’t support that”.  This is a bad reputation to have and one which I don’t think is always warranted – often it’s a case of us not communicating well.  My own view is that people often don’t mind being told “no” as long as it’s explained why, ideally with some alternative options presented.

Chris related this back to new services – while in the past we may have been able to “get away with” offering a single enforced desktop with no choice over browser, we must now be able to cope with a range of demands.  For some that will still mean being told what’s best for them with a package of systems that work together (and, more importantly, are supported).  Others want – even demand – the ability to use their own devices, software and services, accepting that the level of support may be lower.

Students’ approach to IT provision by universities has changed.  In the past they would queue up to get a network account, email address and access to university systems; now new students just ask “where do I get the internet?”  In their annual new student survey, Sheffield found 30% of students have a “smartphone” and an astonishing 95% own laptops.  So how far are the 255 IP addresses your access point can allocate going to stretch?

Chris then went on to show some screenshots of Sheffield’s (relatively) new iPhone app.  CampusM is a product by oMbiel and I’m not going to go into it too much here.  Suffice to say that while I understand the benefits Sheffield have got from their app, I don’t necessarily agree with the approach it takes to providing services to mobile devices.

To what extent will learning and teaching change in a mobile university? Thoughts from the University of Bath

Andy Ramsden, Head of e-Learning, University of Bath

Mr QR-Code himself, although that wasn’t the point of the talk.  His talk used the context of what the landscape might look like in 2015 to examine what changes are happening.  There were a few examples from the University of Bath using clickers and Twitter.  Andy made an interesting point about Moodle’s community approach to developing mobile solutions compared to vendor or third-party solutions for student record systems.

There were yet more statistics in the slides reinforcing findings from elsewhere that a large proportion (38%) of students have a data package included with their mobile tariff.

Real life experiences launching mobile apps

Tom Hume, Managing Director, Future Platforms

At the time of writing Tom’s slides aren’t yet online so this is from my rather fragmented memory!  It started off quite depressingly, highlighting just how many different devices, browsers and configurations there are in the mobile phone market.  Depressing too when he “exposed” manufacturers’ tactic of taking a silver/black phone, turning it pink and selling it to the female market!

While I believe the raw numbers he offered – things like the market share of all iPhones combined being less than 4% – I think it’s possible to read too much into them and be scared off doing anything for fear of getting it wrong.  The cynic in me might say that this could encourage some to go to vendors like Future Platforms to do the work for them!

Future Platforms do appear to have done very well at taming the early mobile internet beast, creating Java-based games such as crosswords for Puzzler, but towards the end of his talk Tom seemed to get a bit more upbeat about the future.

When developing for mobile devices there are two approaches: pick one or more platforms and write specific applications for each, or develop for the mobile web.  There are advantages to each, but with the latter you can go some way towards developing services available to a large proportion of likely users.  That might not be a particularly high proportion of the total number of devices, but it may mean being available to nearly all users with a data package and the propensity to use it.

Closing keynote: Mobile and connected – the challenges and implications

John Traxler, Professor of Mobile Learning and Director of Learning Lab, University of Wolverhampton

For me the most important thing John’s closing keynote showed was that this whole mobile thing isn’t new.  He mentioned several examples over the years where hardware had been bought leading to the observation:

We run the risk of proving that spending money on education improves education.

So the challenge now is to prove that we can run projects that make use of existing hardware owned by the user.

Lightning talks

Nick Skelton, University of Bristol

Nick introduced the Institute of Learning and Research Technology’s Mobile Campus Assistant project.  I’ve seen this project before and it looks like a very interesting approach to delivering internally focused services.

Wayne Barry, Canterbury Christ Church University

Edge Hill is up against Christ Church’s iBorrow project for the Times Higher ICT Initiative of the Year so I should be saying it’s rubbish and our project was much better… but I can’t because it’s actually quite a neat little idea:

The iBorrow project makes grabbing a computer as easy as borrowing a book.

The service offers 200 thin-client netbook computers with features like location and usage tracking. The presentation showed how availability of the computers boosted use of the wireless network significantly. Probably the most interesting finding was that they were used for social networking just 14% of the time – far less than for academic uses.

Simon Marsden, University of Edinburgh

Another – more thorough – review of statistics gathered about mobile usage by a survey.  Just shy of 2,000 responses from across the university.  Some key points:

  • 49.2% “smart” handsets
  • 60% had “sufficient” or unlimited mobile internet access
  • ability to view course information was requested most followed by timetables and PC availability

It’s worth a look through the slides to pick out extra details.

Tim Fernando, University of Oxford

The final lightning talk was about Oxford’s mobile project, now named Molly.  Again, I’ve tried to keep track of developments over the last few months and it’s interesting to see them actively make the case for releasing the source code and the benefits it can bring both to the project as a whole and to individual institutions adopting the platform.  Molly has some nice touches like using OpenStreetMap Point of Interest information, as well as more mundane ones like supporting Z39.50 to integrate with library catalogues.


This post is in real danger of breaching my 48 hour rule and never getting out of draft so I’m not going to say much more for now about the conference.  It was a really useful day to see what other people are doing and to confirm in my mind some of the approaches we’ll be taking to mobile web developments over the coming months.

A few other links about #esym10:

Liver and Mash

I’ve already blogged about my own Mashed Library Liverpool talk but I promised to say something about the rest of the event, so here goes!

Mandy Phillips and Owen Stephens

Mandy Phillips kicks off Liver and Mash

The day kicked off with a welcome and introductions from Mandy and Owen. I’d heard bits about Mashed Library events before and I knew the basics of mashups, but I didn’t really know who would be there and what to expect. There was a good mix of attendees and speakers presenting “lightning talks”, “Pecha Kucha 20:20” talks and workshops. The thing that persuaded me to agree to speak and convinced me that it wouldn’t just be a bunch of librarians (!!) was the scattering of local speakers…

Alison Gow

Alison Gow

Alison is Executive Editor (Digital) for Trinity Mirror Merseyside, publishers of the Liverpool Daily Post and Echo. Despite “knowing” her through the Twitter, Friday’s Mashed Libraries event was the first time I’d met her IRL! The slides of her talk “Open Curation of Data” are online covering some of the things journalists and the newspaper industry have had to deal with since the superinterweb came along.

Aidan McGuire and Julian Todd

Julian Todd and Aidan McGuire on ScraperWiki

Aidan and Julian demonstrated ScraperWiki, a project supported by 4iP which aims to free data from inaccessible sources and make it available to those who wish to use it in new and innovative ways, for example mashups. “Screen scraping” isn’t a new idea, but typically it’s done by individuals, embedded into their own systems. If the scraped website changes then the feed breaks and there’s no way for others to build on the work done.
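To make that concrete, here’s a minimal sketch of the kind of scraper people write for themselves – note it uses only the Python standard library rather than ScraperWiki’s own tooling, and the page fragment and opening-hours data are entirely made up for illustration:

```python
from html.parser import HTMLParser

# A made-up page fragment of the kind a scraper might fetch
# (in practice you'd download it with urllib.request.urlopen).
PAGE = """
<table id="opening-hours">
  <tr><td>Monday</td><td>09:00-17:00</td></tr>
  <tr><td>Tuesday</td><td>09:00-20:00</td></tr>
</table>
"""

class HoursScraper(HTMLParser):
    """Collects the text of each <td> cell, row by row."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._in_td = [], [], False

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self._in_td = True

    def handle_endtag(self, tag):
        if tag == "td":
            self._in_td = False
        elif tag == "tr":
            self.rows.append(tuple(self._row))
            self._row = []

    def handle_data(self, data):
        if self._in_td and data.strip():
            self._row.append(data.strip())

scraper = HoursScraper()
scraper.feed(PAGE)
print(dict(scraper.rows))  # {'Monday': '09:00-17:00', 'Tuesday': '09:00-20:00'}
```

The fragility is obvious: rename the table or add a column and this silently breaks, which is exactly the problem ScraperWiki is trying to address by putting scrapers somewhere shared.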

ScraperWiki aims to change that by providing a community-driven home for scrapers. It’s like Wikipedia for code, allowing you to take and modify a scraper I’ve written for your own purposes.

There are already dozens of scraped data sources and more are being added every day. It currently supports Python but my language of choice – PHP – will be added soon so I’ll be giving it a go then.

John McKerrell

John McKerrell on Mapping

John’s talk about mapping had the most interest so he presented it to all attendees, briefly covering mapping APIs, OpenStreetMap and tracking your location.

Phil Bradley

The first Pecha Kucha 20:20 talk was about social media search tools. I wasn’t writing down the links so check Phil’s Slideshare page for the presentation when it appears. I will say that Google’s support for Twitter is now much better than he seemed to suggest – for example allowing you to drill into tweets for a particular time. It can also be more reliable than Twitter search when you’re on a shared IP address at a conference.

Gary Green

Gary Green 20/20 talk

Gary mentioned that this was his first presentation so I’m not sure a 20:20 talk was the best idea but he handled it pretty well!

Tony Hirst

Tony Hirst talking about Yahoo! Pipes

The afternoon was dedicated to one of three workshops – Arduino with Adrian Mcewen, Mapping with John McKerrell or Mashups with Tony Hirst. I’ve done a bit of each before so I sat at the back of Tony’s talk to try to soak up some new tips.

After a final cake break there was the prize giving for the mashup suggestions competition.

@briankelly, @m8nd1 and @ostephens presenting prizes

So all in all a really interesting day! Congratulations to Mandy Phillips and all the organising team for an excellent event.

Last Friday I went to Liver and Mash, the Liverpool Mashed Libraries event held at Parr Street Studio 2. I’d been asked to speak by Mandy Phillips (formerly of this Parish) about – erm – something!

I’ll cover the rest of the event in another post but here I’d like to write about the topic of my presentation. You can see the slides below or watch on the Slideshare website to see them alongside my notes:

It took me a while to think of a subject to talk about but eventually I started considering the role higher education institutions play in mashups and in particular what we can bring to the party.

This actually builds on some ideas I’ve been thinking about for a while. On Christmas Eve last year I posted as part of our 25 days series an entry about 2010 being the year of open data. Edge Hill was closed by that point so I’m not surprised few people read it but I said:

I believe there will be an increasing call for Higher Education to open up its data. Whether that’s information about courses using the XCRI format, or getting information out of the institutional VLE in a format that suits the user not the developer, there is lots that can be done. I’m not pretending this is an easy task but surely if it can be done it should because it’s the right thing to do.

So my presentation expanded on some of these ideas. Firstly we need to accept that what we do online isn’t going to suit everyone. HEI websites are huge unwieldy beasts. Doing a Google search for produces over 8,000 results; has 236,000 pages! Combine that with Sturgeon’s Law and we’re in trouble:

Sturgeon’s Law: 90% of everything is crud.

[Before anyone says it… yes that means 90% of what I say is crap!]

If we accept that our websites aren’t going to deliver everything to everyone we have two options: firstly we could throw resources at the problem to add more and more content, but we know from experience how that ends up:

Alternatively we can strip down to our core audience and find other ways to satisfy the so-called “long tail”. To me that means providing data in an open, accessible form that users can take and use in ways that suit them. Let’s do that.

At IWMW 2008, Tony Hirst submitted an innovation competition entry showing which autodiscoverable feeds HEIs feature on their homepages. It seems in the two years since Aberdeen the number of sites with discoverable feeds has crept up but is still less than half.

Typically these feeds contain easily available information. News stories are recycled press releases. Often forthcoming events are available as an RSS or Atom feed or even as an iCal feed that can be subscribed to in Google or Outlook Calendar.
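For anyone unfamiliar with autodiscovery: it just means advertising your feeds with <link rel="alternate"> elements in the page head, so browsers and crawlers can find them without guessing URLs. A quick sketch of how a checker like Tony’s might spot them – the homepage fragment and URLs here are invented for the example:

```python
from html.parser import HTMLParser

# An invented university homepage fragment with two advertised feeds.
HOMEPAGE = """
<head>
  <title>Example University</title>
  <link rel="alternate" type="application/rss+xml"
        title="News" href="/news/feed.rss">
  <link rel="alternate" type="application/atom+xml"
        title="Events" href="/events/feed.atom">
  <link rel="stylesheet" href="/style.css">
</head>
"""

class FeedFinder(HTMLParser):
    """Collects (type, href) pairs for autodiscoverable feed links."""
    FEED_TYPES = {"application/rss+xml", "application/atom+xml"}

    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == "link" and a.get("rel") == "alternate"
                and a.get("type") in self.FEED_TYPES):
            self.feeds.append((a["type"], a["href"]))

finder = FeedFinder()
finder.feed(HOMEPAGE)
print(finder.feeds)
```

Run this over a list of HEI homepages and you have a crude version of the survey – the stylesheet link is correctly ignored because only the two feed MIME types count.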

Universities also run courses and there’s a standard format for publishing them – XCRI-CAP.

So far, so general, but what other information are people looking for? Freedom of Information legislation came into force on 1st January 2005 applying to all public bodies including Universities. WhatDoTheyKnow from MySociety allows anyone to submit and track FOI requests – simultaneously the most awesome and scary thing for anyone working in a public sector organisation!

Most HEI websites also contain FOI pages or a publication scheme, but often the information is locked up in difficult-to-access documents – PDFs and unstructured webpages are typically the formats of choice. We can make this information more open by publishing it in more accessible formats. Maybe uploading to Google Docs (which allows export as CSV or through an API) would be an easy first step.
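The step from “locked in a PDF” to “machine-readable” really can be this small – a sketch of republishing a table as CSV and reading it back, where the expenditure figures are made up purely for illustration:

```python
import csv
import io

# Hypothetical figures of the sort that might live in a
# publication-scheme PDF today.
spending = [
    ("2008/09", "Estates", 1200000),
    ("2008/09", "IT Services", 850000),
    ("2009/10", "IT Services", 910000),
]

# Write the table out as CSV with a header row.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["year", "department", "spend_gbp"])
writer.writerows(spending)
print(buf.getvalue())

# Anyone can then consume it programmatically.
rows = list(csv.DictReader(io.StringIO(buf.getvalue())))
print(rows[0])
```

That round trip – publish once, let others parse it – is the whole point; a spreadsheet in Google Docs gives you the same thing with an export URL instead of a file.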

We also have systems containing interesting data – HR, student record system, VLE, library catalogue – but getting information out can be difficult. When procuring new systems we need to be asking the right questions about vendors’ approaches to open data and APIs, and building these into the requirements specification.

So here’s my challenge. It would be great to do something sector-wide along the lines of (but, y’know, better!), but an easier model to get started with is something like the Guardian’s Data Store. Let’s start in the areas we have control over:


Let’s create a webpage and publish some links to existing data. If we have spreadsheets, upload them to Google Docs and post the link. If we have systems with a rubbish API, let’s knock up a wrapper layer to expose something more useful. It isn’t going to happen overnight but each of us can do our bit to build a more open sector.
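The “wrapper layer” can be as thin as a function that translates whatever the vendor system spits out into something sane before serving it as JSON. A sketch, with an entirely made-up pipe-delimited record format standing in for the legacy system:

```python
import json

def wrap_legacy_record(raw):
    """Translate a hypothetical pipe-delimited vendor record
    (code|title|credits|level) into a JSON-friendly dict."""
    code, title, credits, level = raw.split("|")
    return {
        "code": code,
        "title": title.title(),   # SHOUTING CAPS -> Title Case
        "credits": int(credits),  # numbers as numbers, not strings
        "level": int(level),
    }

record = wrap_legacy_record("CIS1001|WEB DEVELOPMENT|20|4")
print(json.dumps(record))
```

Put a dozen of these behind a simple web endpoint and you’ve exposed an open, documented interface without touching the underlying system at all.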

In the pub following the event the discussion continued with Brian Kelly from UKOLN making an interesting point:

Interesting thought from @briankelly post #mashliv: expect the telegraph/daily mail to hit on public sector/HEIs about transparency/opendata

If that’s not a good enough reason to take open data seriously, I don’t know what is.

One final point about the presentation. I noticed a tweet from Alison Gow of the Liverpool Daily Post and Echo:

Not sure about bravery – I didn’t actually recognise Julian until after the presentation was over which is probably a very good thing!