I’ve harped on about web font limitations before, so I’ll just say that the fonts we could use were tragic and boring. I’ve also mentioned that we used sIFR as a fix for about six months, which was too quirky and jerky.
Hypothetically we could host fonts on our own server and reference them using @font-face. That way all the fonts used in the printed prospectus could be used on the website; but there’s the complex issue of copyright to contend with.
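For anyone unfamiliar with it, self-hosting is just a stylesheet rule pointing at a font file on your own server. A minimal sketch (the font name and file paths here are placeholders, not our real setup):

```css
/* Hypothetical self-hosted font: the family name and URLs are made up. */
@font-face {
  font-family: "ProspectusSerif";
  src: url("/fonts/prospectus-serif.woff") format("woff"),
       url("/fonts/prospectus-serif.ttf") format("truetype");
  font-weight: normal;
  font-style: normal;
}

/* Headings then use it, with sensible fallbacks. */
h1, h2 {
  font-family: "ProspectusSerif", Georgia, serif;
}
```

Technically trivial, which is exactly why the sticking point is licensing rather than implementation.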
Typekit offers a library of fonts: you pay a subscription; you choose which fonts suit you, and they deal with the legal stuff. I was impressed by the clarity of the sign-up process, and how easy it was to set up. The font libraries are impressive; all the big type foundries are represented.
The only problem I have is the way different browsers render fonts: some render crisply in one browser but appear pixelated or weakly defined in another. They also lose clarity at smaller sizes. Unfortunately these problems are with the fonts themselves, so even if we hosted them we’d still have the same problem.
Consequently, if we get a style guide that says Garamond should be used for headings, and I refer to Typekit, check the browser samples and find that it is unreadable in Firefox, we simply can’t use it. Straight away we would be looking at alternative fonts, when we didn’t want to fudge or compromise.
Weighing things up, I would use Typekit because it is easy to use and well designed, and any worries about copyright could be put aside.
It’s Christmas Eve. Are you excited? In less than 24 hours Santa will have been, scoffed his mince pie and glass of port 😉 and gone on his merry way.
So just how long do we have to wait for Christmas?
That’s for tomorrow. If you’re a believer (and you’ll only get a stocking full of sprouts if you don’t believe) or you have kids, you may also be interested in tracking Santa’s progress. You’ll also find games for the kids to play.
Happy Christmas from Web Services, Edge Hill University. Now where did I put those humbugs?
In an earlier blog post we talked about big changes to the staff intranet. We intended to move all the content from the staff intranet into a wiki. Each department would have a space created and they could manage their own content. Most departments have now got a wiki space and are actively using them.
The Wiki is a web-based tool that you can use to add and edit content, attach documents in most formats, add images, and even add and display video. It was set up to take over from the intranet.
The intranet is difficult to maintain: Web Services does not have the resources to read every page on the intranet, nor do we have the knowledge to know what content should be there. We have to rely on departments informing us of content changes. Some of the content has not been revised for some time, so we cannot be sure of its timeliness or validity. In fact we could be sure that some of the content was out of date. Hopefully the Wiki would help address this.
We wanted to supply a system where departments could control and take ownership of the content that they use. Staff could keep content up to date, and everyone in a department would be able to contribute. Documentation would be better organised and there would be one true source for the information they control. Email size would be reduced, as staff would just send links to documents instead of sending long emails, or emails with documents attached, to everyone.
Most of Web Services had set up or installed Wikis but we did not actively use them prior to the install of Confluence and the creation of the webservices space. During our initial test we found:
• The wiki was easy to use
• It was easy to add content, attachments and other media
• It was easy to keep track of changes
• Search was very good
• Finding pages or content was quick
• The structure was easy to control
This left us confident that the Wiki would be easy to implement and roll out across the University.
Spaces were set up for all departments that had an intranet site. We set up some basic training based on our test. The departments were informed about the existence of the Wiki spaces and that training was available. Access was given to the spaces and training was delivered.
We spoke about the Wiki: what it is and what to use it for. The training was an overview of all the basics in the Wiki:
• Creating pages and links
• Editing pages
• Adding attachments
• Adding images
• Browsing pages
Wiki in Practice
There was an underestimation of the amount of work involved in getting the content imported into the Wiki. Departments had to read over all the content they had on the intranet, copy relevant content, delete duplicated and old content, edit out-of-date content, create new content and organise it into the correct areas.
The support mostly focused on visual design; we did not anticipate the amount of work involved in the design of the pages. We:
• Created the homepages for the departments
• Created the section homepages or team home pages
• Sourced images for the homepages from Creative Commons
• Helped with suggestions of what should be in the Wiki
• Gave one-to-one support where necessary
There is no ‘right’ way to use a Wiki. A Wiki is not built to a fixed design but evolves as content is added by the people who use it. The structure of a Wiki, and how it is used, reflects how the people using it want it to work.
The next step is finding departments that are interested in using a Wiki, and individuals willing to get the project started, then identifying a good project to implement: a staff handbook, manuals and guides, or an FAQ. Perhaps a Wiki for induction, where new starters could go in and leave messages about themselves or about the University.
When using a Wiki we need to check that everyone understands what it is and that they are open to using it. Getting acquainted with a Wiki can be difficult.
We should change Wiki training so that we still give an overview of what the Wiki is, but the group can then discuss which content is to be added and together add that content to the Wiki. People come along expecting to learn how to use the Wiki; this way they can interact with other users as they work, gaining knowledge of how the Wiki works while adding content.
Give everyone in the department edit rights (remember there is a full history of edits in the Wiki and we can always restore a previous version if mistakes are made). There is even an advantage to mistakes being made, as this will encourage staff to use the Wiki in order to fix the erorr. (See, you would fix that if you could.)
There are different ways users will get involved with their Wiki:
• Consumer: Read the content
• Editor: Make edits as they see them
• Contributor: Add new content
• Maintainer: Organise the content, look for and add new features
A Wiki works best when it is being used, so it’s important for the homepage to look fresh. We could have a panel on the homepage that shows a list of pages that have recently been added or edited, or a list of pages which have a particular label.
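In Confluence this kind of panel can be built with the bundled wiki-markup macros; a minimal sketch (the parameter values below are assumptions for illustration, not our live configuration):

```
{recently-updated:max=10}

{contentbylabel:labels=news|max=5}
```

The first macro lists the most recently added or edited pages; the second lists pages carrying a particular label, which is how a “featured content” panel could be driven.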
We need to get people visiting the Wiki on a regular basis. This can be done by creating content that people like to read, want to read or need to read. At Edge Hill the weekly updates that are currently sent out by email could be added to the Wiki; this would help to get people using it.
We need to develop good templates that are easy to use. Pages that have a nice format will encourage people to enter content, knowing it will look good. This would mean getting the templates ready so we were confident each one would work as intended. A couple of tweaks to the CSS could also format the text better.
A Wiki user group would also be good as this would help with passing on information about how the Wiki spaces can be used. The group could also inform us of any issues they have with the Wiki.
Sites like Facebook, Twitter, MySpace, LinkedIn, Orkut, Zorpia, Flickr, TwitPic, yfrog and YouTube are social media sites designed to share information such as who and where you are and what you are doing, along with photos and video. This is a comical example of how information is shared using social media sites to tell the Digital Story of the Nativity.
Social media sites are a great way to connect to close friends and family, or even re-connect with old classmates and old co-workers. Also it can be a great way to find and connect to new groups with interests common to your own.
With just a few clicks people can access messages and know where their contacts are and what they are doing; all of this makes for an entertaining network experience. The speed and immediacy that characterise these sites are useful for informing and sharing content, true, but they can also jeopardise your job, or worse, compromise your privacy and security. Content posted on a site also stays on the server even after you disable your account, and it is searchable.
The trouble starts when it becomes an addiction, when people spend too much time discussing everything that happens in their lives without taking the necessary steps to protect what is shared. It is possible to enjoy social media sites safely: they give you the ability to control how visible your information and pictures are, both on the site itself and to any search engine that parses that data.
Below is a series of recommendations for protecting your identity on digital networks. It is up to you to decide how visible your contact and profile information, videos, photos, and other posts need to be, and to take the time to set the appropriate controls within the site in question. It is important for each of us to keep a good view of what is published and to learn to manage and protect our identity.
Although this presentation is from last year, some of the information is still relevant.
Search is an important aspect of social networks. It is up to users whether they want to be seen by all members of a network or only by their contacts.
Make a conscious choice and set up your profile rather than leaving it at the default settings, which usually allow some profile information, such as your name and main photo, to be found on the page and through Internet search engines.
Be cautious about posting and sharing personal information. Do not reveal passwords, keys, date of birth, home address, phone number or place of birth. Do not put your full CV online; if you must, remove it when you find a job. Protect the answers to the secret questions that social networks ask. The combination of that publicly available information and a public post about hanging out with friends on Saturday night across town could be enough for someone to take advantage of the situation and break into your house.
Note that when you join a network, you may be granting it a licence to use any information you put on the site.
All sites have a link to a “Privacy” page that explains how your information is used and provides tips on staying safe. Determine how visible the information is that you post on social networks and search engines (profile, photos, videos and posts.) Each page allows you to select privacy settings, read them and adjust by trial the level of control and privacy you want.
Twitter has an option to make your messages, pictures and videos visible only to your followers; to use it you must change the default settings.
Think twice before posting or even clicking on a post. Consider what could happen if a post becomes widely known and how that may reflect on you (as the poster) or on your family, friends or workplace, e.g. “Jason Manford quits The One Show over sex messages”.
With a little effort and some common sense you can enjoy and safely participate in social media sites. Ultimately, though, it’s up to you to manage your digital identity. You must use good judgement about what you post and learn how to protect your personal data and reputation on digital networks.
‘Mister’ Roy Bayfield reflects on empowerment of users
Part of the next stage of evolution of the Edge Hill site will be greater distribution of direct content creation across academic departments. This won’t be some clunky CMS that costs a fortune, takes ages to implement, adds layers of semi-automated bureaucracy and then doesn’t work anyway. Instead, we intend to give selected people (whoever their department decides to nominate) access to tools that are as simple as blogging, ie they will be able to write, embed images and video and click publish – job done.
Isn’t this all a bit scary? Will dozens or hundreds of staff suddenly be bestowed with the combined powers of King Midas, Dr Frankenstein and the Sorcerer’s Apprentice? Well…maybe. But the alternative – failing to do justice to the rich diversity of research, scholarship, student work and experience across the University – is even scarier. It has become easier to publish to the web (on, say, Facebook) than it is to put a PowerPoint presentation together, so why would people want to have to send stuff to other people to place online?
Well, one reason is that an organisation such as ours has to manage its reputation carefully. Another consideration is the quasi-contractual status of published information, particularly relating to courses. An incorrect or outdated claim about, say, professional accreditation could get us sued. All very well for the Cluetrain Manifesto to quote Herman Melville saying “Let us speak, though we show all our faults and weaknesses — for it is a sign of strength to be weak, to know it, and out with it…” – how many QAA audits or Ofsted inspections did he have to go through?
There are some serious issues to consider. But for me these aren’t reasons not to enable a broader group to publish directly online – just reminders that we have to do it properly.
There are a number of things we have to get right. Off the top of my head, these include:
– The people-management within departments
– Tone of voice – when to be informal and when to be corporate
– Training and standards so that we don’t trip ourselves up with, say, copyright violations
– Links between centrally-produced, corporate content (such as quality-assured prospectus entries, PR features) and locally-produced material
– Developing the right system that is easy to use
– Navigation so that routes through the site, for various key user groups, actually lead to the cool new content that will be popping up all over the site.
I wonder how many of you this year have visited a bookmakers and put a bet on a ‘White Christmas’. Not many, I would imagine, or maybe I’m completely wrong. One thing I do know, however, is that as it’s more of a sure thing this year, I can’t imagine any of them paying out. Well, imagine my surprise when I visited a few betting sites just to see if they were taking bets on a ‘White Christmas’: do you know what, they are, but at much lower odds! Pretty much an inevitable conclusion to expect at least one flake, I would say, at least for this year.
Practically anywhere in the UK has roughly a 90% chance of seeing snow in the winter, but it very rarely falls at Christmas (it generally comes in January and February). Apparently, though, a white Christmas does occur approximately every six years.
I’m sure we all remember last Christmas, at the end of 2009 and the start of 2010. The gritters had been out too early after the councils had not anticipated a big freeze, certainly nothing like we had seen since 1986. (I was 12 years old then and remember fondly the skiddy patches that would last for weeks.) Britain was covered with thick lying snow which easterly winds had brought over the previous week. Travel over much of Britain was badly affected by ice and snow on the roads, made more slippery by partial daytime thaw followed by overnight refreezing. It was the first white Christmas anywhere in the United Kingdom since 2004.
The second big freeze of this winter, due to start this week, is likely to last for as long as a month, putting the country on course for a winter which could be even colder than the notoriously treacherous 1962–63. This year has already been the coldest start to a winter for 100 years; bitterly cold winds from the Arctic will without doubt bring a blanket of snow to Scotland, the North, London and the South-east on December 25th, while Wales, the South-west and central England will probably see a wintry mix of sleet, with snow on higher ground. So there you have it: almost proof that this year will be a really cold, white Christmas. So wrap up, stay safe and warm, and have a Merry Christmas.
Still no official announcement from Yahoo, but any regular delicious user is already looking around for alternatives. I remember a couple of years ago that I was enjoying synchronising my delicious bookmarks with Ma.gnolia, but that project appears dead in the water. Maybe this would be a good time to resurrect it.
Mike has already given Pinboard a try. Maurice is giving Google Bookmarks a go. The beauty of delicious was that as a team if we all used the same product, we could use feeds to tie everything in together and share what we found interesting (if we felt like it).
It would be a sad day if delicious went to the wall. So in a journey of sentimentality, I’m going to review my first 10 bookmarks on delicious:
Codestore: possibly the best site for Lotus Domino developers. These days it is very focused on Flex development, but it is still an excellent resource for all web development techniques.
Simplebits: Dan Cederholm, a frontend developer and author of one of the best HTML books ever, Web Standards Solutions. The site has also had a very nice makeover recently.
Zedzdead: It’s mine!…and posts are a little thin on the ground these days.
Tuesday saw the first meeting of OMAC – the Online Marketing and Communications project that will, over the coming months, address a wide variety of issues with our main external website. As Roy will explain in a post on Monday, part of this involves opening up the website to more contributors which means a whole world of terminology for users to learn.
A few of these came up in the first meeting so let’s go over them!
These are keywords attached to bits of information to aid navigation, categorisation and searching across a set of documents. We’ve been using tags on our website for several years and they’re pretty embedded into the site: from videos and courses to news and events, everything is tagged. We also use tags to link parts of the site together. For example, a department will list all news stories with a particular tag.
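The idea is simple enough to sketch in a few lines; the items and tag names below are made up for illustration, not taken from our site:

```python
# Hypothetical tagged content, as on the website: everything carries tags.
items = [
    {"title": "Open Day announced", "type": "news", "tags": ["education", "events"]},
    {"title": "BA (Hons) Education", "type": "course", "tags": ["education"]},
    {"title": "Graduation video", "type": "video", "tags": ["media"]},
]

def with_tag(items, tag):
    """Return every item carrying the given tag, whatever its content type."""
    return [item for item in items if tag in item["tags"]]

# A department page can then list all news stories with its tag:
education_news = [i for i in with_tag(items, "education") if i["type"] == "news"]
print([i["title"] for i in education_news])  # → ['Open Day announced']
```

Because the tag cuts across content types, the same mechanism drives course listings, news panels and video galleries from one shared vocabulary.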
I don’t think this term came up in this week’s OMAC meeting but it’s one of my biggest regrets from my time working at Edge Hill. Wikipedia defines Vanity URLs like this:
A vanity URL is a URL or domain name, created to point to something to which it is related and indicated in the name of the URL. In many cases this is done by a company to point to a specific product or advertising campaign microsite. In theory, vanity URLs are creatively linked to something making them easier to remember than a more random link.
In our case, the term more typically refers to a short web address. For department sites, it’s the address their site is accessible at, for example www.edgehill.ac.uk/education – while others may be created for a specific event or course and redirect deep into the site.
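Under Apache (which serves our site) a vanity URL is typically just a redirect rule; a minimal sketch, with hypothetical paths rather than our real configuration:

```apache
# Hypothetical vanity URLs: a short address redirecting deep into the site.
Redirect 301 /openday /news/events/open-day
RedirectMatch 301 ^/education$ /faculties/education/
```

The short address is what goes on posters and in print; the long canonical URL can change underneath it as long as the redirect is kept up to date.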
Another term I introduced to Edge Hill and cringe whenever I hear it, creeping personalisation refers to the practice of building up a profile of a user in a piecemeal way. It may be that to register users only need enter their name and an email address and as they start to use the site extra information is collected.
At the other end of the page is a trend towards having larger footers able to provide more structure to links within them, perhaps with more overtly useful information than the types of link currently present.
The web is full of buzz words and odd terminology but we’re always here to guide users around what they need to know.
I’ll admit it, I like to dream too… and though I’m sure that’s not much of a secret, the content of my dreams will surprise some!
I dream about ideal web serving infrastructures.
There, I said it, and this is my dream:
What I’m hoping to get set up in the new year is all sorts of cool stuff that I’ve been trying to find excuses to implement!
I’d first like Internet users to come in and hit our main Zeus load balancer; this gives us the flexibility of being able to route web requests anywhere we like, instead of moving IPs and changing DNS.
Requests could then hit Varnish Cache, with all the static bits and pieces cached in there, which should take load off poor old Apache (maybe we’ll try out nginx…).
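The Varnish side might look something like this; a sketch only, with a placeholder backend and a file-extension list that would need tuning for our site:

```vcl
# Hypothetical backend: Apache listening behind Varnish.
backend default {
  .host = "127.0.0.1";
  .port = "8080";
}

sub vcl_recv {
  # Static assets: strip cookies so Varnish can cache them
  # instead of passing every request through to Apache.
  if (req.url ~ "\.(css|js|png|jpg|gif)$") {
    unset req.http.Cookie;
  }
}
```

Stripping cookies on static assets matters because Varnish treats requests with cookies as uncacheable by default.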
From there I’ll be caching frequently built HTML fragments and database calls in memcached.
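The memcached pattern here is cache-aside: check the cache first, and only rebuild the fragment (or hit the database) on a miss. A minimal sketch, with a plain dict standing in for a real memcached client so it runs anywhere; the key names are made up:

```python
import time

# Stand-in for a memcached client: a real one exposes the same get/set shape.
cache = {}

def get_fragment(key, build, ttl=300):
    """Cache-aside: return a cached HTML fragment, rebuilding it on a miss."""
    entry = cache.get(key)
    if entry is not None and entry[1] > time.time():
        return entry[0]                      # cache hit: skip the expensive work
    value = build()                          # miss: rebuild fragment / query DB
    cache[key] = (value, time.time() + ttl)  # store with an expiry time
    return value

# Hypothetical usage: the builder only runs when the fragment isn't cached.
html = get_fragment("homepage:news", lambda: "<ul><li>News…</li></ul>")
```

With memcached itself the expiry is handled server-side, so the wrapper shrinks to a get, a conditional build, and a set.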
I’d like to try and factor in MySQL Proxy at some point to take the load off our MySQL server and spread it out to some slaves, which should make backing up much quicker too.
What’s your infrastructure like?
Can people see any problems and suggest alternatives?
P.S. I totally published this on the 16th at 8am and not on the 17th at 4:30pm! (Maybe the snow delayed the delivery?)