“In a world of dreams, we are but Gods limited by imagination”.
As technology evolves, concepts change and users adapt. We find ourselves challenging our systems to do more and more with less and less. In an effort to minimise overheads and maximise efficiency, we are presented with complex problems and demands to ensure availability, improve quality and reduce our carbon footprint. In parallel with all these demands, costs, implementation time frames and learning curves are expected to be minimal.
To set up a conventional system there are many costs involved, including (but not limited to): research, strategic planning, environmental preparation, physical setup requirements, hardware purchase and setup, software installation, staffing, time, maintenance and, of course, replacement costs.
This cycle of expenditure is conducted every time a new server is purchased and partially replicated for systems which are reused.
Obviously, as companies grow, the same old approach of purchasing ever more numerous, bigger, faster and more powerful servers will snowball and eventually reach an unmanageable, unsustainable limit of growth.
It is at this point we must realise that it is not the services that should dictate the demand for resources, but rather that we are obliged to consider how the services themselves are governed. In short, focus on how and why we run our services.
From this seemingly simple concept, it is all too easy to get lost in the inexorable complexity of modern networks and common practices. The essence of the requirement is obvious: we need to consolidate and share resources, while allowing for the inescapable fact that no computer is perfect and systems will fail.
When considering the solution, one may be reminded of the renowned William Gibson and his literature, but that future fiction is here and now.
Thus we arrive at the proposition of a virtual environment, mimicking all the behavioural characteristics of a standard network, but existing in software, in a world constrained by the same rules as our physical networking reality.
After all, we must remember that computers are fundamentally just a network of circuits and components themselves, so why can’t we have a host which imitates a larger network internally?
VMware offers us exactly that. VMware, Inc. is a publicly listed company, founded in California in 1998, with revenue of US$1.33 billion and more than 5,000 employees in 2007.
VMware virtualisation is rapidly gaining acceptance, and it does not take long to understand why. In a world driven by hundreds of factors, requirements and demands, everything ultimately comes down to efficiency and cost. As everyone knows, to make money a service must be provided and an investment must be made. Thus there is a drive to solve the problem with the most cost-effective method available.
This technology has grown from its typical use in development environments, where it allowed testing without persistent changes. The modern virtual infrastructure approach allows us to deploy dozens of virtual machines on a single host, consolidating servers which are only active during certain periods of the day and systems which do not require the full power of a dedicated server.
The infrastructure is deployed across several high-performance machines which are clustered together as a coordinated group. Within this cluster they pool and share the available resources, which enables maximum efficiency of system CPU and memory utilisation. This clustering technique also means that we can assure high availability of these services; imagine a process is running on a particular box and this host has a power failure or hardware fault. Each virtual machine previously running on that host can be restarted immediately (even automatically) on a different host within the cluster, resuming service with minimal disruption.
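The failover behaviour described above can be sketched conceptually. The sketch below is a simplified model under assumed names (`Host`, `VM`, `fail_over` are illustrative, not VMware's actual API): when a host fails, each of its virtual machines is restarted on the surviving host with the most free capacity.

```python
# Conceptual sketch of high-availability failover in a virtualisation
# cluster. Class and function names are illustrative assumptions only.

class VM:
    def __init__(self, name, memory):
        self.name = name
        self.memory = memory  # memory footprint in GB


class Host:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity  # total memory in GB
        self.vms = []

    def free_capacity(self):
        return self.capacity - sum(vm.memory for vm in self.vms)


def fail_over(failed_host, cluster):
    """Restart each VM from the failed host on the survivor with most headroom."""
    survivors = [h for h in cluster if h is not failed_host]
    for vm in list(failed_host.vms):
        target = max(survivors, key=lambda h: h.free_capacity())
        if target.free_capacity() >= vm.memory:
            failed_host.vms.remove(vm)
            target.vms.append(vm)


cluster = [Host("esx1", 32), Host("esx2", 32), Host("esx3", 32)]
cluster[0].vms = [VM("web", 8), VM("db", 16)]

fail_over(cluster[0], cluster)  # esx1 loses power; its VMs restart elsewhere
```

In this toy model the web server lands on one survivor and the database on the other, simply because the scheduler always prefers the host with the most headroom; real placement logic weighs many more factors.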
Even more excitingly, if one of the hosts in the cluster requires some downtime, we are able to migrate the virtual machines between hosts (VMotion) without any interruption to service. This maintenance mode allows for minimal downtime and continued service.
Having all these services running centrally also provides fantastic management and control opportunities: remote control across the whole system, automated events, dynamic resource scheduling and the ability to create a virtual machine (a server) from a template in a matter of minutes.
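The template idea mentioned above can be illustrated with a minimal sketch: a template is a preconfigured machine definition that is cloned and then given instance-specific settings. The structure and names here (`TEMPLATE`, `deploy_from_template`) are assumptions for illustration, not VMware's provisioning interface.

```python
import copy

# Conceptual sketch of provisioning a virtual machine from a template.
# The template dictionary and helper function are illustrative only.

TEMPLATE = {
    "os": "Linux",
    "cpus": 2,
    "memory_gb": 4,
    "disks": ["system.vmdk"],
}


def deploy_from_template(name, template=TEMPLATE, **overrides):
    """Clone the template, name the instance and apply per-instance settings."""
    vm = copy.deepcopy(template)  # deep copy so the template is never mutated
    vm["name"] = name
    vm.update(overrides)
    return vm


# A new server in "minutes": clone the template and bump its memory.
web01 = deploy_from_template("web01", memory_gb=8)
```

The deep copy is the important design choice: every deployment starts from an identical, untouched baseline, which is precisely what makes template-based provisioning so much faster than building each server by hand.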
This might all sound fantastic in theory, but you may be wondering whether, and how, it is possible to convert your existing system into this world of fantasy. Most standard operating systems are supported in this virtual environment, including Microsoft Windows, Linux and Solaris, to name a few. VMware Converter allows a physical machine to be copied (while running) into the virtual environment, giving us an identical copy of the physical machine; with a few minor setting changes, we can begin running the services almost immediately.
The process of moving physical servers into a virtual environment is known as ‘virtualisation’ (or, more specifically, physical-to-virtual migration).
With all of these benefits, it is not difficult to see where the future of hosted services is leading. Virtualisation provides manageable, reliable, efficient and effective systems which are cheaper to run in both the short and long term. With fewer physical servers we need less energy to power them, and with a reduction in heat generation, these systems really are good for our planet!
“The future is already here – it is just unevenly distributed.” – William Gibson