The future of software.
One of the best things about my job is the opportunity it gives me to talk to interesting people building surprising applications. I have been lucky enough to do a lot of that just lately. As a result, I have become convinced that the way we build, deploy and manage applications today is wrong, and that future systems are going to look and work very differently.
I wrote about this recently in my post on mashups, but I didn't think the thing all the way down to the ground there. This post is the first of two in which I'll develop the idea more completely.
The network is the computer
John Gage at Sun Microsystems dreamt up the catchphrase "The network is the computer" long before it was true. The vision then was that the world would be covered with computing power tied together with ubiquitous communications. The internet from space [1] still shows big dark holes, but it's clear that it's only a matter of time, now. If you're reading these words, then in most of the places you go, you have easy access to broadband and cycles.
Obviously, if the network is the computer, then the software you use is going to run on the network, and not necessarily on the collection of wires and chips underneath your desk. Gage was looking a long way out, but he saw the future clearly.
The timeshare generation
It's true today that applications run on the network, and not on your personal computer. Every time you fire up your Web browser or email client, you're running a distributed application. The client software on your local machine talks to server software running remotely so that you can read the news, shop for good deals on travel and keep in touch with your family.
Important business applications are moving in this direction as well. Before we sold Sleepycat to Oracle, we used Trinet as our outsourced HR and payroll provider, and Upshot (since purchased, serially, by Siebel and Oracle) for sales force automation. These hosted apps allowed us to work from anywhere in the world, to cooperate with one another and to rely on a central service to manage day-to-day operations of information technology that would have been a lot of trouble to run ourselves.
These services, and others like them, are useful and valuable, and I am glad that they were available to us. They are not, however, very interesting. They do not make good use of the network as the computer.
Essentially, these services are exactly like timesharing systems in the 1960s and 1970s. Instead of buying and running a large and expensive computer system for yourself, you contract with a specialist who builds and operates that system for you. You have the illusion that you are the only user of the system, but in order to realize economies of scale, the specialist provider is really sharing the same computers and software with lots of other people.
Hosted apps are the same monolithic standalone software packages that we used to have to manage on our own. We get better reliability and lower cost by centralizing them and spreading the maintenance cost across many users. Fundamentally, though, we are doing the same old thing on the brave new platform.
The IC revolution
A close historical analogue to this situation is the invention and adoption of the transistor in the 1950s and 1960s. When it was first invented, the transistor was widely viewed as an excellent substitute for the vacuum tube in electronics -- it was smaller, much more reliable and vastly cheaper. Vacuum tube systems were rapidly replaced by transistor systems, and radios could suddenly fit in your shirt pocket.
The real power of transistors wasn't unlocked until the advent of digital systems [2], and especially the invention of integrated circuits (ICs) by Bob Noyce and others. ICs are not transistors doing the work of vacuum tubes better -- they are transistors doing something that vacuum tubes never could [3].
Today's hosted applications are nothing more than better vacuum tubes. They are an old idea -- timeshare computing -- copied to a new medium -- ubiquitous networked processor cycles. Hosted apps, like portable radios, are merely better. They are not different.
What will change
The next ten years in technology will see more and faster processing and networking. The change in quantity will drive qualitative change. We will begin to build applications that are different in kind from the ones we use today.
Applications of the future will not be monolithic systems centralized to simplify their management. Instead, they will be composed of small cooperating components, each specialized for a single task, tied together on demand to do a larger job. Pieces of the application will run in different administrative domains: IBM may get some data analysis from Microsoft in order to tune its Yahoo! ad keyword selection based on the clickstream it observes among shoppers on Dell's e-commerce site.
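To make the composition style concrete, here is a minimal sketch of that clickstream scenario. The service names and data are invented for illustration, and local functions stand in for remote components that would really run in different administrative domains:

```python
# A toy "mashup": small specialized components, each doing one job,
# composed on demand. Local functions stand in for remote services.

def clickstream_service():
    """Stand-in for an e-commerce site exporting shopper clicks."""
    return [
        {"shopper": "a", "clicked": "laptop"},
        {"shopper": "b", "clicked": "laptop"},
        {"shopper": "c", "clicked": "camera"},
    ]

def analysis_service(clicks):
    """Stand-in for a third party that counts clicks per product."""
    counts = {}
    for event in clicks:
        product = event["clicked"]
        counts[product] = counts.get(product, 0) + 1
    return counts

def keyword_service(counts):
    """Stand-in for an ad platform ranking keywords to bid on."""
    return sorted(counts, key=counts.get, reverse=True)

# Tie the components together on demand.
keywords = keyword_service(analysis_service(clickstream_service()))
print(keywords)  # most-clicked products first
```

The point is not the trivial logic but the shape: no component knows or cares where the others run, and the application exists only in the way they are wired together.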
You can already see examples of systems like these. Mashups are a halting first step. Sun offers compute cycles for hire. Amazon is selling cheap online storage via S3. Internally, Amazon is building its core technology platform in exactly this way. Hard-core technology companies like Sun and Amazon are several standard deviations out on the high end of the curve, but over time this architecture will become commonplace. One day, ordinary non-technical consumers will not only use network computing apps like this. They will be able to program them themselves, easily tying information and analysis together to answer questions. They will not concern themselves with what work is done where.
The hard part
Software engineers have long ridden on the backs of hardware engineers. Computer programs are fast and sophisticated today mostly because the people with the soldering irons have made chips so fast and memories so big that we can be profligate when we program them. To some extent, we can follow the same strategy here. The technical trend toward ubiquitous computing is almost irresistible.
There are, however, critical problems we have to solve to make this new kind of application work.
When we reach across the boundaries of organizations effortlessly, and stitch together applications from all over the place, how can we trust the answers we get? How can IBM be certain that Microsoft got the right answers when it analyzed the Dell clickstream? Was that clickstream correct?
Just as importantly, how can we be certain these applications will run at all? Systems made of many small pieces have many places to fail. Any single component failure, or the failure of any connection among the components, can freeze the application as a whole. When we build distributed systems, even out of simple and reliable pieces, we introduce complexity. Complexity is a crushing weight that eventually guarantees failure. How can we manage that risk?
Those problems are hard ones -- too hard to explore here. I'll write more about them later.
[1] Eick's is one of several very cool maps digested by CNET. See in particular the colorizations by Bill Cheswick. Cheswick runs Lumeta, which specializes in building and rendering these maps. They don't show geography -- they show a deeper truth.
[2] Digital systems do not actually exist -- transistors are really just analog devices with very steep transfer curves. I have not mentioned it to anyone, though, because I do not want to undermine the global market for digital technology.
[3] I am not ignoring early work on tube-based computers. ICs are devices that could never have been built on vacuum tube technology.