Fifteen years ago, in 1996, while I was still a student at Carnegie Mellon University, I wrote an article (blog post in today’s parlance) about the future of computing. The article was really a response to a concept that Larry Ellison from Oracle and Scott McNealy from Sun were pushing at the time: that of the diskless “Network Computer”. At the time I didn’t have the guts to call them out by name, but I did feel strongly about how the technical direction for the NC was wrong.
I mentioned this article in a conversation I had with the co-founder of a new startup. He tried to look for it and couldn’t find it, and so emailed me to see if I still had a copy. I tried to find it on the Web and I couldn’t either. After all, this was Before Google (B.G.) and also before the Internet Archive and the Wayback Machine (which is a lot of fun when you’re feeling nostalgic or just want to laugh at what was cool back in the day!).
After a little bit of digging through old backups, I found a folder with drafts of a couple of my old articles/papers from 1996. At the coaxing of some folks on Twitter, I’m reproducing the article in its entirety here. Just as it was, with no edits. Before you read this, please be aware that this was written in 1996 – that is 15 years ago. I was still young, and had a full head of hair (really, I did). The Pentium / Pentium Pro was the state of the art processor. There was no WiFi, and CRTs still dominated the display market. Laptops were just beginning to appear. Floppy disks still existed. CD-ROMs were new and cool (no DVDs!).
Hope folks enjoy reading this piece of history (at least my history!) and get a few chuckles out of it…
Today’s computer a couple of years down the road….
A vision of what the next big thing in the computer industry might be.
Manu Kumar
(sneaker@cs.cmu.edu)
School of Computer Science
Carnegie Mellon University
Sunday, May 05, 1996
Introduction
I like most of my papers to be colloquial. It allows me to explain what I want to say rather than get lost in formality. So this paper is going to follow the same colloquial style.
Everybody and their dog is attempting to predict the future directions of the computer industry. Its extremely fast pace is leaving everyone guessing about what’s going to happen next. The person who puts his/her wager on the right choice is going to end up with some big bucks, and others may fall from their high horses and lose their millions. It’s a gamble… a gamble with high stakes, high risks and, most importantly, one driven by a highly scientific and technological basis.
In my opinion, the fact that the industry is based on technology gives it a certain degree of predictability; and in this paper I choose to use what information I’ve gathered regarding the industry in order to predict the direction of the machines of tomorrow. What you have to keep in mind is that I’m talking about tomorrow, i.e. the near future, although I may wander off a bit and become a bit more futuristic.
One caveat regarding this paper: it assumes that you are somewhat familiar with the current industry news and technological developments.
Today
The Web is taking (has taken?) over. It has had exponential growth. In fact, it’s growing so fast that the word exponential may very soon be an inadequate description. And now, all the big companies (Oracle, Sun, Microsoft etc.) are talking about integrating the desktop with the Internet. “Internet Appliance” and “Network Computer” are the hot buzzwords.
And I agree with them all. They are right. The Network Computer is the way to go. But, I don’t agree with everything that they propose. In the next few sections I discuss what I feel the Network Computer or Internet Appliance should be, and what I think would be some of the essential characteristics of its design.
Moving Down the Road
So now that I’ve laid the foundation…
The Network Computer
The network computer is analogous to the dumb terminal of yesteryear. Except that there’s one big difference. Somewhere along the way it got smart. In fact, it got intelligent enough so that you no longer need to be a whiz to use it. It just works. And how does it work? Well, that’s what’s coming up next….
Lawrence Ellison’s description of the Network Computer (NC) with just two wires is exactly right. What makes it simple today to buy a television from a store, bring it home and use it the same evening (i.e. you don’t need an expert to set it up for you! …well, okay, at least most people don’t!)? The television needs only two wires. You know exactly what they’re for. It needs juice (electricity) and it needs the antenna or cable plugged into it in order to receive the television signals. Similarly, the Network Computer must have only two wires, the juice-wire and the net-wire.
The biggest problem with making computers as widespread as televisions are today is the end user. The end user may not be competent enough to figure out the intricate details of plugging in the right wires, installing the operating system (well, most machines come with them installed now), installing the software they need… and most of all getting it all right. Or even if the user does have the necessary level of expertise, he/she may lack the time or the inclination. The bottom line is that the end user does not want to deal with all the complications of setting up a computer just right. (Actually, I’d be willing to argue that right now cheap and good computer support has a great market potential if it’s done right; but that’s a whole other story.) But is the user really the problem? Or is it the equipment? Why does a computer need to be more complex than a television?
The Network Computer resolves all these problems. You bring it home. Plug it in to the electric socket and plug it in to the network socket. Okay, so that’s being a little futuristic… but think about it: how skeptical were people when Thomas Edison first invented the light bulb? I bet they said then that it’s impossible to get electricity everywhere. Or when Alexander Graham Bell talked on the telephone for the first time, they must have said that it’s impossible to have a telephone in every home! But today, you can plug in 30,000 feet above the ground and you can call home from virtually anywhere. Okay, so maybe that’s an exaggeration, you can only plug in 30,000 feet above the ground if you fly first class on some airlines. But the point is that electricity, and even phones for that matter, at one point in time faced the same skepticism that the net faces today. It will happen… you will have a network socket… but for now it may just be your phone line, ISDN line or eventually your Cable TV line.
So where were we? We brought the NC home, and plugged it in. There is no on-off switch… you don’t need one. It is always supposed to be on (or maybe asleep… but never off!). So you plug it in and it comes on. Now what? Before I describe that, let’s take a little techie diversion. I want to first address the technical issues as they will help in understanding what happens next.
Technical Details
So far the only thing I’ve mentioned about the NC is that it has two wires. Now depending on how futuristic you want to get, I foresee several different approaches to the NC. I classify them into Maybe Tomorrow, Maybe Next Week, Maybe Next Year and Real-Soon-Now (of course these are just to give you a relative time frame, so don’t take it literally).
Maybe Tomorrow: Tomorrow’s NC may look a lot like the desktops and portables of today. It’ll still have a keyboard, a mouse and all the regular parts of a conventional computer. It’ll have its own monitor, sound etc. The biggest difference in this Network Computer will be the Operating System and how it handles file storage (a difference common to all of these).
Maybe Next Week: Drop the keyboard. Improve the pointing device. Add in good speech recognition. Make it even more powerful, bigger in storage, but smaller in size… small enough to move around easily (notebook size?).
Maybe Next Year: Make it even smaller. Put in some flexible high resolution display. Make it even lighter. Possibly cut one of its wires off completely and reduce the need for the other, i.e. the network is wireless, and the power consumption is low enough to give it a usable battery life!
RSN: Drop the monitor or whatever is being used for a display. Add in some holographic video. Drop the mouse. Add in some Intelligent Agents. …basically you could let your imagination run wild with this one. But let’s leave that for the time being and come back to realistic things.
Several of the technological advances necessary for achieving the different levels of functionality described above are already being tested in research environments. However, it will still take a little time for them to be made usable by the masses and, more importantly, commercially viable.
But the really big difference in the Network Computer, I believe, is in the storage model.
The Storage Model
The entire storage model of the Network Computer can be summarized in one word: cache! All the storage on your local NC is a cache. The entire local hard disk is nothing but a huge cache.
When you power on the Network Computer for the first time, its Flash-ROM-based Operating System boots up and immediately contacts its vendor(s) over the net connection. It then asks the user what his application for the machine is and then proceeds to “cache” the appropriate software… over the network. The user need not have any idea about how to install software, how much disk space it needs, where he must install it, etc. All he says is: I want to use this machine to write papers, or surf the net, or play games, or do development… or any combination of the above. The machine is self-aware. It knows how much space it has, what the requirements for installation are, where it should install, etc. Of course, not all of this information is within the machine. The information is distributed… on the network.
So take for example a word-processor. I tell my NC that I want to write a paper. The NC checks its local cache to see if it has the word-processor software cached in it. If it does, it verifies that its copy in the cache is up to date. If not, the NC will automatically cache a fresh copy or apply a patch (differential updating) to update its own copy to the latest version. Then I can proceed to word-process to my heart’s content.
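Just to make that flow concrete, here is a rough sketch (in Java) of what the NC might do when I ask it for the word processor. Every name in it (SoftwareCache, Vendor, the version numbers) is made up purely for illustration; this is a sketch of the idea, not any real system.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: the NC treats its local disk as a software cache.
    public class SoftwareCache {
        private final Map<String, Integer> cachedVersions = new HashMap<>(); // app -> cached version
        private final Vendor vendor = new Vendor();

        // Called when the user asks to run an application, e.g. "word-processor".
        void launch(String app) {
            int latest = vendor.latestVersion(app);   // check over the net-wire
            Integer cached = cachedVersions.get(app);
            if (cached == null) {
                vendor.download(app, latest);         // nothing cached: fetch a fresh copy
            } else if (cached < latest) {
                vendor.patch(app, cached, latest);    // stale: apply a differential patch
            }                                         // otherwise the cached copy is already current
            cachedVersions.put(app, latest);
            System.out.println("running " + app + " v" + latest + " from the local cache");
        }

        public static void main(String[] args) {
            SoftwareCache nc = new SoftwareCache();
            nc.launch("word-processor");   // first use: cached over the network
            nc.launch("word-processor");   // second use: served straight from the local cache
        }
    }

    // Stand-in for the vendor's side of the net connection (entirely hypothetical).
    class Vendor {
        int latestVersion(String app)            { return 3; }
        void download(String app, int v)         { System.out.println("caching " + app + " v" + v); }
        void patch(String app, int from, int to) { System.out.println("patching " + app + " from v" + from + " to v" + to); }
    }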
The key idea is that the user has no floppies to deal with and need not know anything about the machine or about managing the software. The machine automatically updates itself and maintains itself.
The document is always saved… by continual checkpointing. And it is not only saved in the local cache, but uploaded to the secure centralized storage provided by the Network provider. Yes, your main storage is on a network file system, somewhere in cyberspace. So you can be anywhere in the world; as long as you connect to the net, you will always be able to get at any of your files.
The obvious questions arise… what if you are not connected to the network? What if you want local storage so that you are not wasting time and bandwidth getting everything from the network all the time? How big should the cache size be? And so on. Before I address these questions, I would like to explain the rationale behind this idea.
Rationale / Source of the Storage Model
The model given above is derived from the use of AFS (Andrew File System) developed at Carnegie Mellon University (principal architect: Mahadev Satyanarayanan). In AFS all files are stored on distributed file-servers. AFS clients operate by using a cache. Whenever the user needs a file, it is read into the cache. And used from the cache. If the file on the server changes, the cached copy is now invalid (depending on the version of AFS this is handled differently).
AFS relies on a connected network. If the network dies, AFS cannot function correctly. Coda, the next-generation successor to AFS, is smarter. It allows for disconnected operation by letting the disconnected user continue using the cached copies.
AFS and Coda are one piece of the background I needed to introduce before explaining the rationale for the model. The other piece is a bit more far-fetched. It may be hard for some to swallow at this time, but this is where I foresee the industry heading. Eventually, you will not be buying software the way you do today… in stores. Software will be sold over the network. And most probably, there will no longer be a single one-time fee that you pay for a particular version of the software. There will be software subscriptions, just like magazines (in fact we can already see some of these in offers like the Visual C++ Subscription from Microsoft). The charge for software will probably be on a per-use basis. So you are no longer paying for a particular version of a particular piece of software. You are paying for using the software each time you use it.
Let me illustrate with an example. I am writing this paper in a word processor. I use a word processor very often to write papers. However, I use a spreadsheet only once in a blue moon. Now, does that mean I should have to buy the entire spreadsheet package, even though I use it only a few times? In the pay-per-use software business model, at least in my opinion, both the consumer and the developer benefit. The price the consumer pays is proportional to the benefit he/she receives from the software. And the developer receives a proportionate payment from each user.
I spent a long time trying to come up with an appropriate analogy for this scenario. But “software” is so unique that it was hard to come up with a single example that would illustrate the point. One example is of two people who have different appetites. The person who eats less pays less for his food. The person who eats more… pays appropriately. Another analogy is that of renting a car and paying by the mile. However, the main objection to these analogies is that software, unlike food and cars, is non-tangible. It has nearly zero replication cost, especially if everyone is downloading it from the net; it does not get consumed or decrease in value with use. So then, why should users always keep paying the developers and software companies on a per-use basis?
It’s a valid question. One I’ve been pondering over myself. Maybe the per-use price can be made so low that users don’t mind, and the software companies can still be profitable. Or maybe there can be a price ceiling imposed on the maximum payment on a per-use basis. Say I use my word processor so often that if I use it every day for a whole year, I would be paying the software company about two times as much as I would if I bought the package in a store today. In that case, the maximum payment ceiling might say that once you have used the software n times or for n useful hours (the number of times a word processor has been used would be an inaccurate representation of the amount of benefit obtained by using it… the number of hours seems more suitable) and have hence paid n times x units of currency to the developer/software company, thereafter all subsequent uses will be free.
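To put some (completely made-up) numbers on the ceiling idea, here is a little Java sketch: 50 cents per useful hour, capped at twice a hypothetical $200 store price, after which every further use is free. The rate, the price and the class name are all invented just to show the arithmetic.

    // Hypothetical pay-per-use billing with a ceiling (all numbers invented).
    public class UsageBilling {
        static final double RATE_PER_HOUR = 0.50;    // charge per useful hour
        static final double CEILING = 2 * 200.00;    // twice the imaginary store price

        private double paidSoFar = 0.0;

        // Charge for one session's useful hours, never exceeding the ceiling overall.
        double charge(double usefulHours) {
            double amount = Math.min(usefulHours * RATE_PER_HOUR, CEILING - paidSoFar);
            amount = Math.max(amount, 0.0);
            paidSoFar += amount;
            return amount;
        }

        public static void main(String[] args) {
            UsageBilling wordProcessor = new UsageBilling();
            System.out.println(wordProcessor.charge(3));    // 1.5   -- a light session
            System.out.println(wordProcessor.charge(1000)); // 398.5 -- this session hits the ceiling
            System.out.println(wordProcessor.charge(10));   // 0.0   -- all subsequent use is free
        }
    }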
These of course are just some of the options. As an analogy, think of the different telephone/long distance plans. Each one has a different pricing structure, catered to the usage of the customer (usually it’s catered to extract more money from the customer, but in most cases I think both the customer and the company benefit).
Software is never “complete”. I would be willing to argue that it is impossible for a piece of software (or anything for that matter!) to be finished, completed, perfect. There is always room for improvement (it’s the largest room in the house!). A developer can always think of some bug which is still in the code. There is always some feature that can be improved. So in the pay-per-use network software distribution model, the user can always be using the most up-to-date version of the software. (Of course, at times older versions are better… in which case the user may actually tell his machine not to automatically update the software without checking first.)
So with this little background given, let me tie it all in to the storage model given above. The NC’s hard disk functions as the cache in an AFS client. It caches all the software that a user needs to use. Whenever the user uses a particular piece of software, the use is logged, either locally or remotely. (Privacy issues are bound to come up here. Which is where local (well, not really local… it would still be stored on the network storage, but in the user’s secure personal account) logging is better. Only the number of use-units is reported to the software vendor, for charging purposes.)
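Here is a tiny sketch of what I mean by that kind of logging (again, the names are invented): the detailed entries stay in the user’s own store, and only the total count of use-units ever goes back to the vendor.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical usage log: the detail stays private, only an aggregate is reported.
    public class UsageLog {
        private final List<String> privateEntries = new ArrayList<>(); // kept in the user's own storage
        private int useUnits = 0;

        void record(String app, int usefulMinutes) {
            privateEntries.add(app + " used for " + usefulMinutes + " minutes");
            useUnits += usefulMinutes;
        }

        // The only thing the software vendor ever sees, for charging purposes.
        int reportToVendor() {
            return useUnits;
        }

        public static void main(String[] args) {
            UsageLog log = new UsageLog();
            log.record("word-processor", 90);
            log.record("spreadsheet", 15);
            System.out.println("reported use-units: " + log.reportToVendor()); // 105
        }
    }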
Whenever the user begins to use any particular software, the version of the software is verified over the network. If the vendor has released a new version since the last time the software was used, the patch will automatically be applied, unless of course you, the “advanced” user, have told the machine not to apply patches automatically without checking with you first.
Now the cache storage model is a little different from the regular caching models. It is an intelligent cache. The user can tell the cache: I use this file very often, so I don’t want to get it over the network each time… make sure that the most up-to-date copy is always in the cache (Cache Hoarding). Though initially this decision may be made by the user, eventually it can be made by a software agent which monitors the way the user uses the machine, and tunes the cache accordingly.
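A rough sketch of the hoarding idea follows: the user (or, later, a monitoring agent) marks certain files to always be kept fresh in the cache. The class names and the threshold are arbitrary, just to illustrate.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical "intelligent cache" with hoarding: hoarded files are always kept
    // up to date locally and are never evicted to make room for other files.
    public class HoardingCache {
        private final Map<String, Boolean> hoarded = new HashMap<>();

        // The user says: "I use this very often, keep the latest copy in the cache."
        void hoard(String file) {
            hoarded.put(file, true);
        }

        // Later, a software agent watching usage patterns could make the same call.
        void tuneFromUsage(String file, int usesThisWeek) {
            hoarded.put(file, usesThisWeek > 10); // arbitrary threshold, for illustration
        }

        boolean shouldKeepFresh(String file) {
            return hoarded.getOrDefault(file, false);
        }

        public static void main(String[] args) {
            HoardingCache cache = new HoardingCache();
            cache.hoard("thesis.doc");
            cache.tuneFromUsage("budget.xls", 2);
            System.out.println(cache.shouldKeepFresh("thesis.doc"));  // true
            System.out.println(cache.shouldKeepFresh("budget.xls"));  // false
        }
    }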
When the user is not connected to the network, the optimistic caching principle of Coda comes into play. The user can still work on whatever he has in the cache. The network copy will be updated the next time the user connects.
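And here is a sketch of the disconnected case, loosely in the spirit of Coda’s optimistic caching (the class and method names are mine, not Coda’s): while offline, saves go only to the local cache and are queued; when the net-wire comes back, the queued changes are pushed to the network copy.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical optimistic cache: keep working from the local cache while
    // disconnected, and reintegrate the changes when the connection returns.
    public class OptimisticCache {
        private boolean connected = false;
        private final List<String> pendingFiles = new ArrayList<>(); // files changed while offline

        void save(String file, String contents) {
            writeToLocalCache(file, contents);            // always safe locally
            if (connected) {
                writeToNetworkStore(file, contents);      // mirror to the network copy right away
            } else {
                pendingFiles.add(file);                   // remember to reintegrate later
            }
        }

        // Called when the net-wire comes back.
        void reconnect() {
            connected = true;
            for (String file : pendingFiles) {
                writeToNetworkStore(file, readFromLocalCache(file));
            }
            pendingFiles.clear();
        }

        private void writeToLocalCache(String file, String contents) { /* write to the local disk cache */ }
        private void writeToNetworkStore(String file, String contents) {
            System.out.println("uploading " + file + " to the network store");
        }
        private String readFromLocalCache(String file) { return "cached contents of " + file; }

        public static void main(String[] args) {
            OptimisticCache cache = new OptimisticCache();
            cache.save("paper.doc", "draft 1");   // offline: change is queued locally
            cache.reconnect();                    // online again: queued change is uploaded
        }
    }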
So this is what I consider the biggest difference, treating the entire hard disk as a cache. Now, let me discuss what I think can make this model work and make it happen really soon.
Java OS
Before I even begin this section, let me admit that I am biased. I am completely sold on Java.
In my opinion, the JavaOS that Sun has been touting for a few months now can be made extremely useful. Think about it. All it is, is a simple OS. One which does cache handling and gets all its intelligence from the information on the network. It can be extremely lightweight (especially once the Java microprocessors that Sun is planning come out) and yet have decent functionality.
The NC then just reduces to nothing else but a Java run-time environment. Which can be manufactured very easily and extremely cheaply. (Costs are a big factor in technology!) The only “software” on the NC then is the ability for it to go on the net and find its vendor. Once the vendor has been located, the NC knows how to upgrade its OS. So it goes out and gets the latest copy of the Operating System. From there on all the operations are handled by the OS. The advantage of the OS being done in Java is that it’s completely replaceable. The software that I’ve been talking about above is nothing else but Java applications which are cached on the local hard disk.
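As a rough sketch of how minimal that built-in piece could be (everything here, from the vendor address to the class names, is invented): the NC ships with just enough code to find its vendor and pull down the latest OS, and the downloaded OS does everything else.

    // Hypothetical NC bootstrap: the only built-in "software" finds the vendor,
    // fetches the latest Operating System, and hands over control to it.
    public class NetworkComputer {
        public static void main(String[] args) {
            NcVendor vendor = findVendor();                       // the one thing burned into the machine
            OperatingSystem os = vendor.latestOperatingSystem();  // always the newest OS image
            os.boot();                                            // from here on, the downloaded OS runs the show
        }

        private static NcVendor findVendor() {
            // the address would be set at the factory; this one is made up
            return new NcVendor("vendor.example.net");
        }
    }

    class NcVendor {
        private final String address;
        NcVendor(String address) { this.address = address; }

        OperatingSystem latestOperatingSystem() {
            System.out.println("fetching the latest OS image from " + address);
            return new OperatingSystem();
        }
    }

    class OperatingSystem {
        void boot() {
            System.out.println("booted; applications are just Java programs cached on the local disk");
        }
    }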
Of course the same thing is possible with Windows as well, or with any existing commercial OS. But the idea is to have a very lightweight operating system. A modular OS. A new OS, which is designed for such use.
Currently, performance issues seem to be the biggest hurdle for Java. But Sun Microsystems is developing Java microprocessors. My hunch is that the Java microprocessor or any other lightweight yet extremely powerful processor (StrongARM etc.) may provide the necessary base for developing such a JavaOS.
Intelligent Agents
Let me add a small blurb on intelligent agents. Intelligent agents will become an integral part of the NC. Like the caching agent I described above which monitored usage and came up with a heuristic for the best caching policy. Or we can have browsing agents and filtering agents which shield our puny little minds from the barrage of information coming at us. Enough has been said about Intelligent Agents by several people whom I consider to be a lot smarter than I. Let’s leave it to the best, and simply acknowledge that this NC will open new arenas (commercial arenas?) for Intelligent Agent use.
Conclusion
This paper is a real mess. It definitely needs some more reorganization. But it contains some of the points I wanted to make in some way, shape or form. The objective of the paper was to speculate on what’s happening next and possibly see if others agree with this speculation or not. I’d be happy to entertain your comments or suggestions or even discuss this with you in more detail.
You can follow me on Twitter at @ManuKumar or @K9Ventures for just the K9 Ventures related tweets.