August 15, 2003

Technophobia

When my father started using computers at his office, which must have been way back in the late 1980s, the machine ran an operating system that resided on a floppy disk, with no intuitive Graphical User Interface (GUI) or any of the bells and whistles that we take for granted in a computing environment these days. So one would assume that as things progressed towards the new millennium, and the average computer user's experience became more synonymous with the eye candy of Windows XP than with the monochrome command line, things would have become a lot easier. Rather interestingly, the answer is a definite no.

To be fair, the story is that of a double-edged sword. To drive down the entry-level cost of the computing experience, adoption had to be driven up on a massive scale. This was not quite possible with the rather non-existent ease-of-use features of the command line. It is quite pointless to go into the story of who gets the credit for the GUI, or any technology or brands related to specific companies; the angle of interest here lies elsewhere. Coming back to the "for dummies" approach: it did end up meeting the adoption and price targets, maybe it even overshot them. But my father just cannot make any sense of it.

It is rather baffling for me to understand why a person who could once comfortably trot off 50-odd commands and the majority of the flags associated with them cannot have a much better experience with the "point and click" paradigm. Before we head down this road any further, I would like to furnish some disclaimers. What is going to follow is thinking that is mostly speculative and opinionated in nature. I would not, in most cases, have specific instances or studies to back up my assertions, nor am I pretending to be an authority on what is being written about. If you have a problem with that, this is where the exit button can be pressed.

As far as I can understand it, a lot of the lousy user experience can be blamed on, yes, lousy interface design. If you put three people together and ask them to achieve the same objectives in the same computing environment, the odds are quite high that they will have three different approaches to doing the same thing. I have observed this a lot at work, where some people even go to the extent of downloading and reinstalling a piece of software just because the usual shortcut that starts the program has been changed or is absent. Even within broad categories, be it power users or novices, each user's approach differs.

Ideally, we should not be discussing the shortcomings of UI design and the end-user experience in this age, especially when the GUI is credited with winning over so many users to the stupid white box that takes away so much of our time. Yet, for some strange reason, the "cancel" button on the latest Red Hat Linux distribution sits on the left side, before the "yes" or "save" button, causing me to impulsively cancel things instead of saving them. For a software philosophy that is so strongly grounded in "freedom", most of the GUI add-ons on Linux end up being a disgrace, trying to build a better Microsoft wheel rather than charting a new and better course.

But then, can Microsoft be left far behind when it comes to messing things up? There used to be this nifty yet hidden configuration utility in Windows 98 called msconfig.exe. Mind you, it was something that could still have given the average user quite a scare, if he decided to use it. Msconfig's main use was to show the settings and, more importantly, most of the programs that would be launched at startup. And when they launched the Home edition of their latest OS, Windows XP, instead of improving on the utility, they kept it hidden, and the only visible way to check the services that run at startup is to fiddle around with the "Service Control Manager". Any XP users here know what "Provides the endpoint mapper and other miscellaneous RPC services" means?

The malaise, though, is not totally the fault of a few misplaced buttons or icons. It is symptomatic of a much larger issue: the over-dependence on the API paradigm to approach interface design problems. An API, in very simple words, is a set of terms of reference provided by a program that will generate a specified set of responses, without necessitating an in-depth understanding of what the program is or what it actually does. To further simplify things, it would be like having a few common hand gestures for communication all over the world, enabling anyone to survive anywhere.
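
To put that in slightly more concrete terms, here is a minimal sketch in Python (the function and its names are entirely made up for illustration): whoever calls send_greeting() gets a predictable response without ever having to know how the greeting is put together.

    # A tiny, hypothetical "API": the function below is the terms of reference.
    # Callers only know its name, its inputs and the kind of response it returns.
    def send_greeting(name, language="en"):
        """Return a greeting string; how it gets built is hidden from the caller."""
        greetings = {"en": "Hello", "fr": "Bonjour", "hi": "Namaste"}
        word = greetings.get(language, "Hello")  # internal detail the caller never sees
        return "%s, %s!" % (word, name)

    # The caller needs no understanding of dictionaries or string formatting;
    # a known call produces a specified response.
    print(send_greeting("Appa", language="hi"))  # prints "Namaste, Appa!"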

I am not saying for even a single moment that it does not have its uses. A lot of what we take for granted on the Internet, like web-based mail, would not exist if it were not for this approach. If it were not for this layer, it would have been impossible for the programmer and the interface designer to provide context to what is often just meaningless integer or character data sitting in a relational database. Yet another very important benefit is the consistency of interfaces, along with rapid application development and deployment.
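
Here is a rough sketch of that translation, with invented column names and status codes: the layer in between is what turns a bare row of numbers into something a person can actually read.

    # A hypothetical raw row pulled from a relational database: just numbers and text.
    row = (1042, 3, "2003-08-15")

    # The layer in between supplies the context that the raw data lacks.
    STATUS_LABELS = {1: "Order received", 2: "Payment cleared", 3: "Shipped"}

    def describe_order(order_id, status_code, order_date):
        """Translate a meaningless tuple into a line an end user can make sense of."""
        return "Order #%d (%s): %s" % (order_id, order_date, STATUS_LABELS[status_code])

    print(describe_order(*row))  # prints "Order #1042 (2003-08-15): Shipped"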

Time to bring my father back in. The problem now, for him, is that there are too many layers, or just too many complications in a single layer. So many that even most of the programmers who write these programs are getting distanced further from the core technology and the end user (the API decides what the interface looks like) at the same time.

In another 10 or 20 years, we will have a generation of coders who have no idea about the most basic of things, like the transport or the protocol layer, because there will be some one-line procedure that does it for them. You do not believe that? Take a look at the number of new projects being released on Gotdotnet. This is the next stepping stone from the experience I had at my first job, where I had excellent Visual Basic programmers working with me who did not know what a system DSN meant.
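
To make the one-line procedure point concrete, here is a rough Python sketch (the calls are only illustrative): the first line fetches a web page without the coder ever meeting a socket, while the function below it spells out the transport and protocol chores that the one-liner quietly takes care of.

    import socket
    from urllib.request import urlopen

    # The one-line version: transport and protocol are somebody else's problem.
    page = urlopen("http://example.com/").read()

    # Roughly what that single line has to do underneath: open a TCP connection,
    # speak just enough HTTP to ask for the page, and read back the raw bytes.
    def fetch_by_hand(host, path="/"):
        sock = socket.create_connection((host, 80))          # transport layer
        request = "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)
        sock.sendall(request.encode("ascii"))                # protocol layer
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
        sock.close()
        return b"".join(chunks)

    raw_response = fetch_by_hand("example.com")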

The issue gets even more complicated when you take the case of the average user. Take even a single aspect out of their normal user experience and they are stuck, even when there are a dozen workarounds for it. There is literally no "thinking out of the box". They are not encouraged to do it, and they are not used to it. The layers are scaring people off technology, so much so that as long as it works the way they are used to, they do not care what other things it does at the same time or what it is capable of. Case in point: the large number of machines on broadband connections being used to launch distributed denial-of-service attacks.

The apt word is intimidation. And that is what the API approach does to most users: it scares them off using and exploring things further, because it looks complicated when it is not. The fundamental framework of computing even now is the basic 0 and 1, and everything must translate down to that at some point or the other. And adding to the whole mess are the geeks at the core of the development process.

The CEO of a respected and suitably large newspaper's online edition, faced with the question of implementing syndication via RSS feeds, shot the question off to his tech head: "What the hell is an RSS feed?" Even the other chap had no clue. And development is already underway for the next syndication specification. It does not matter, as long as at least the developers understand it. End users and the rest of the world can be damned. We can always write a new specification.

A result of this disconnect is the phobia for technology among people who do not understand it. The most adverse impact this has is on privacy issues. A 10-minute packet capture on any corporate network can make for a very interesting study of insecure communications, and the port/worm scans that I get on the dial-up at home are even scarier. I know at least 20 people with computers on broadband lines who would not have heard of the latest RPC worm, and some of them are system admins of large networks that span the country. And here I am, worrying about my father!