Why the box? Rethinking the architecture of the PC

Robin Tapp explains why the “PC” needs a revolution, not an evolution, in everyday terms even his Mum could understand.

Desktop virtualisation is complex to implement effectively and can seem opaque at first. However, the concepts are simple and, if we cut through the acronyms and the hype, we find some beautiful architectural principles that are so obviously “right” we might wonder why corporates are still buying so many PCs.

My Mum is not a car mechanic by any stretch of the imagination, but she understands that her car has an engine to make it go, a fuel tank to store the “go gas” in, wheels that go round, brakes, steering, a clutch, etc.  The basic architecture of the car has been the same for so long that it has seeped into our consciousness, almost without us realising it.  Technology has advanced tremendously over the car’s lifetime, but astonishingly its architecture has remained basically the same.  Components have developed, become more efficient and more powerful but there has been nothing so disruptive as to change that basic architecture.  Even electric cars have motors in place of engines and batteries in place of fuel tanks, building on a very similar architecture.

The PC hasn’t been around as long as the car, and yet it has been around long enough for most of us to understand its basic architecture at least at the level of my Mum’s understanding of her car. There is a processor, some memory, a file-store, an operating system, connectivity, applications and an ability to present to a screen. It was designed around relatively poor connectivity to provide everything you need to work inside the box, a genius simplification of what was previously a large, complex and expensive set of choices. It created an architecture that supported commoditisation on a truly magnificent scale; certainly massively greater than could have been envisaged at its inception. That basic architecture has remained unchanged for decades, even if it is now clothed, emperor-like, in a sleek tablet exterior.

We take it for granted that our next car and our next PC will work in the same way. The reasons for the car’s architecture remain unchanged, which may provide some insight into its stability, but some of the drivers for the PC’s design have shifted considerably. Connectivity is now so widespread and fast that, in many cases, it no longer makes sense to put all of the components in the box in front of you. Virtualisation provides the ability to abstract those basic building blocks from each other and deliver them elsewhere, potentially at scale. Imagine if you could take the engines out of a thousand cars, build one huge one in a power station somewhere, and still deliver the power to each driver as they need it. This approach might increase acceleration by powers of ten, as power across a number of vehicles is logically pooled, and provide similar-scale improvements in economy, because the engine would provide power on demand rather than running all the time. It might also let us remove the dangerous and heavy fuel tank, reducing the risk of fires and removing the need to stop for fuel. That is where we are getting to with desktop virtualisation, and it is having similarly fundamental implications.

We can take almost any of the major components out of the box and build them where and at the scale that makes most sense from a performance, price, security and maintenance point of view.

Coming back to your basic PC design, we can start thinking through each component, asking “in the box or in a data centre?” and “built once for each user, or shared by many?”.
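If it helps to make the exercise concrete, here is a tiny Python sketch of the worksheet you might fill in for each user group. Every component name and answer in it is a hypothetical illustration, not a recommendation.

# A minimal sketch of the thought experiment: for each building block,
# record where it should live and whether it is shared.

COMPONENTS = ["applications", "settings", "file-store", "compute", "screen"]

def plan(answers):
    """answers maps component -> (location, sharing)."""
    for component in COMPONENTS:
        location, sharing = answers.get(component, ("in the box", "per user"))
        print(f"{component:12} -> {location}, {sharing}")

# Example: one possible answer-set for a hypothetical office-worker group.
plan({
    "applications": ("data centre", "shared"),
    "settings":     ("data centre", "per user"),
    "file-store":   ("data centre", "deduplicated"),
    "compute":      ("data centre", "pooled"),
    "screen":       ("in the box", "per user"),  # the pixels stay local
})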

Arguably, the first things out of the box are the applications. Having entire applications installed on each PC is a pain to manage and, depending on what the applications do and where they keep their data, can make maintaining data integrity a nightmare and leave you with a security disaster waiting to happen. It can be made to work for office productivity tools, because they are inherently individual in nature, but as soon as your data is being updated by many users at once (as it will be in most enterprise applications) it starts to make more sense to centralise parts of the application. Good old-fashioned “client-server” architectures started this migration out of the PC, and various forms of “application virtualisation” have now taken it to its logical next step. In application virtualisation, we simply run the application entirely elsewhere and make it accessible from the device in front of the user, whether that is a PC or a thin-client device. Many of us experience a form of application virtualisation when we log in to do our banking on-line. We don’t download some complex banking application onto our laptop, nor do we download lots of data (thank goodness, from a security point of view), yet we are able to pay bills and transfer money. There is a range of technologies for accessing applications deployed outside the box, but they share that basic principle: little or no code or data on the “end-point device” (we start needing new language here, because “PC” refers to a specific architectural design that is now being challenged).
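As a toy illustration of that principle (not any particular product’s mechanism, and with a made-up URL standing in for the remote application), a thin client boils down to something like this:

# The "application" runs on a server; the end-point device only sends
# requests and displays responses, holding no code or data of its own.
from urllib import parse, request

def thin_client(action, params):
    """Send the user's action to the remote application and show the result."""
    url = "https://app.example.com/" + action  # hypothetical remote application
    data = parse.urlencode(params).encode()
    with request.urlopen(url, data=data) as response:
        print(response.read().decode())  # render only; nothing is stored locally

# e.g. paying a bill through the remote application, banking-style:
# thin_client("pay", {"payee": "electricity", "amount": "42.00"})

Lose the device and you lose nothing; the code and the data were never on it.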

There are, at a very high level, three types of information stored on the hard drive of the typical PC: software, settings and file-store. We have already covered taking the applications out of the box, so I am going to skip past software. Settings are much of what makes your desktop recognisably yours, like your background, the arrangement of your icons and your language preferences (unless you are a non-US English speaker using Microsoft products, in which case Bill’s little joke is that no matter how many times you reset your language, it will always revert to US English while you aren’t looking). Taking these settings out of the box is technically challenging because they are spread across multiple applications and folders in a rather random fashion; however, there are technologies available which seek them out and can apply them to any similar environment that you log in to. This is often called “User Virtualisation”, in one of those beautiful pieces of overstatement that aggrandises a relatively modest subject, like the “World Series” (in the UK we call this “Americans playing rounders with themselves”). Of course, the end user isn’t really being virtualised, but their settings are. This is potentially powerful, as it allows an end user to log in to any similar machine and find it looking like their own, which can be used to support hot-desking, working from home or migration between machines.
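As a simplified sketch of the idea, assuming made-up settings paths and a plain file standing in for a real product’s central profile store, the capture-and-replay cycle looks something like this:

# Harvest a user's scattered settings into one central profile at log-off,
# then replay them onto whichever machine the user logs in to next.
# The paths and the profile store are hypothetical stand-ins.
import json, pathlib

SETTING_FILES = [  # settings live in assorted places, per application
    "~/.config/desktop/background.json",
    "~/.config/editor/preferences.json",
]

def capture(profile_path):
    """Sweep the known locations into one central profile."""
    profile = {}
    for name in SETTING_FILES:
        path = pathlib.Path(name).expanduser()
        if path.exists():
            profile[name] = path.read_text()
    pathlib.Path(profile_path).write_text(json.dumps(profile))

def apply(profile_path):
    """Replay the profile onto the current machine."""
    profile = json.loads(pathlib.Path(profile_path).read_text())
    for name, contents in profile.items():
        path = pathlib.Path(name).expanduser()
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(contents)

# capture("/profiles/robin.json") at log-off; apply("/profiles/robin.json")
# at log-in, on any similar machine.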

By file-store I mean our documents, databases, pictures etc. They present an interesting conundrum.  Ideally, we want them stored as securely as possible and in an environment where they will persist regardless of individual component failure.  That sounds much more like a high-availability data-centre architecture than a laptop hard-drive.  Ever accidentally poured coffee on your laptop, dropped it or (and this is the real killer for big business and government) left it on the bus or in the pub?  The last place you want sensitive data is on a laptop hard-drive.  However, unless you are virtualising the applications or the whole desktop, taking the data out of the box can severely compromise performance (and that is before you even start thinking about working on a plane).

Before we leave data storage, I’ll just mention the green angle: if I have an organisation with 5,000 users, that’s 5,000 spinning disks (at least) eating power and heating the atmosphere. By bringing them together, I can deduplicate the data and use a lot less disk in total, meaning a lot less power in total.
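For a feel of how deduplication works, here is a back-of-envelope Python sketch of block-level deduplication; the block size and the in-memory store are simplifications for illustration, not how any particular product does it:

# Identical blocks are stored once, keyed by their hash, and shared
# across every user in the pooled store.
import hashlib

BLOCK_SIZE = 4096
store = {}  # hash -> block, shared across all users

def write(data):
    """Store data as blocks; duplicate blocks cost no extra disk."""
    keys = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        key = hashlib.sha256(block).hexdigest()
        store.setdefault(key, block)  # only previously unseen blocks are kept
        keys.append(key)
    return keys

# 5,000 users saving the same corporate template occupy the space of one copy.
template = b"Quarterly board pack boilerplate..." * 200
for _ in range(5000):
    write(template)
print(f"unique blocks stored: {len(store)}")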

Next under my spotlight is “compute power”, under which I will lump processing and memory for convenience. For most office or home workers it makes lots of sense to share compute power, because it is cheaper and allows natural pooling. Most of us demand heavy processing comparatively rarely, in between typing, talking, sitting in meetings and nipping out for a grande low-fat macchiato with sprinkles, hold the whip. However, when we do demand a burst of processing, we are hugely impatient if it takes more than a few seconds. How wasteful to build all that power so it is there for the few seconds that we need it, and then have it sit idle while we think about how many major bullet points we should have on our next presentation slide. Much better to share it, so that when I don’t need the power it is available to the rest of the office, where someone is bound to be drumming their fingers waiting for PowerPoint to open. Taking the processor and memory out of the box and providing compute power from a data centre is typically called “Desktop Virtualisation”.
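A rough simulation shows why the pooling pays; the 5% “busy” figure below is an assumption for illustration, not a measurement:

# Each user only demands the processor a small fraction of the time,
# so a shared pool can be far smaller than one processor per user.
import random

USERS = 1000
BUSY_FRACTION = 0.05  # assume each user demands compute 5% of the time
SAMPLES = 1000

peak = 0
for _ in range(SAMPLES):
    concurrent = sum(1 for _ in range(USERS) if random.random() < BUSY_FRACTION)
    peak = max(peak, concurrent)

print(f"dedicated: {USERS} processors; pooled peak observed: {peak}")
# With these assumptions the pooled peak typically comes out well under
# a tenth of the dedicated count.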

For some very mobile workers it may still make sense to provide local processing, so that they can work off-line. The concept of “off-line” is a fast-dissolving one, with widespread Wi-Fi connectivity in public places, broadband at home and 3G about to start giving way to 4G in the mobile market (I even received a boastful email from a connected colleague on a plane recently). However, “off-line” isn’t quite dead yet, so we still need to think about it, as anyone travelling on the London Underground will tell you.

You will probably find that your answers vary by user group, function or area of the business. Good. That means you are already challenging what the right answer is. You have started to redesign your end-user computing architecture, breaking free from the mental laziness that produces the answer “PC” before the question has even been asked. In my experience of working with corporate business in this space, the right answer today is rarely “PC”. The right architectural principle is: consider each building block in the traditional PC, and decide where and how best to provide that function for each end-user group, before diving into any one technology, and especially before assuming that the right answer remains “in the box”.
