The Big Switch by Nicholas Carr
Nicholas Carr, famous for being among the first to publicly point out, in "IT Doesn't Matter," that investment in information technology had gone from being a differentiator to a cost of doing business, is back in the limelight with an ambitious new book, The Big Switch. It starts out with a fairly focused intent -- to understand the potential shift to a service-oriented, utility-based model of computing. It accomplishes that intent rather hurriedly, but reasonably well, and then marches on to bigger things, with mixed results.
The overall recommendation: well worth a read, so long as you stay aware of a couple of critical blind spots in the book's take.

The Highlights

Anchoring the book is a fairly detailed analogy between utility-based computing and the shift, a century ago, from captive industrial power and candles to electricity grids. The analogy is quite detailed, down to a comparison between Samuel Insull, Edison's one-time financial consigliere and later a pioneer in creating the modern electric-power industry, and Marc Benioff, founder of Salesforce.com and one-time right-hand man of Larry Ellison of Oracle. The title and setup evoke metaphoric visions of a giant, worldwide God computer (there is a chapter titled iGod), tightly integrated with humans a la The Matrix, and working through a vast infrastructure of both centralized and peer-to-peer computational intelligence. Many (myself included) believe in some version of this vision, and are placing bets accordingly. The ingredients Carr chooses to pick out, to weave into his synthesis, are the usual suspects; if you aren't familiar with the raw material, here is the short list:
- Software as a Service
- Utility-based computing, grid computing and cloud computing
- Techno-ecosystems such as Amazon's EC2
- Closer to individual users, Webtops such as those provided by Google Apps
- Virtualization and the resultant loss of identity of what it means to be a computer
- Not a Done Deal: Carr seems to think utility/service-based computing models are a done deal, and that only the engineering detail and economic logic need to be worked out; that we soon won't need anything more than browsers on our PCs. Far from true (though I wish it were). The fundamental science is far from mature, and several pieces of the puzzle are very dubious indeed. Further breakthroughs are clearly necessary. But that's a more geeky discussion that I can take offline with those of you who are interested.
- Centralized vs. P2P: Carr also lightly glosses over the distinction between centralized and peer-to-peer aspects of the vision (huge, earthquake-resistant compute clouds on the one hand, and Napsterish P2P ideas on the other). He dismisses the distinction as irrelevant, saying that it is the 'centralized coordination' that is the key feature, whatever the physical morphology. Again, not true. The distinction might well end up being critical and substantively change the story that evolves.
- It ain't Electricity: Finally, the electricity analogy. Carr smartly covers himself by noting that the electricity analogy is limited. Yet he doesn't spend much time exploring the ways in which it is limited. Information is a fundamentally different beast from energy, and the fact that bits can be delivered remotely as easily as watts is not sufficient to anchor an argument for utility-based computing. Portability is not the only (or even most salient) feature of bits.
3 Comments
This is a response to your latest "Slate" clip, but I thought I'd do it here.
Desktop apps and web apps are not as disjoint as the Slate writer imagines; they form a continuous spectrum.
On the one hand you have your traditional desktop apps, like Word and Photoshop, which reside and operate exclusively on the desktop.
You might think that Firefox is a desktop app. But it checks for updates periodically, and auto-downloads and updates itself. It is also capable of syncing and retrieving persistent state, like bookmarks and options, with a central server.
So, is Firefox really a "web app", heavily cached and executed locally? More and more traditional desktop apps are sporting features like "Live" or "Online", to enable sharing and other net-centric behaviour.
At the other end of the spectrum you have the pure client-server web app, serving HTML to the client and keeping all the processing and persistent state on the server side.
This runs into serious speed-of-light limitations for many cases, which is why the next generation of web apps, like Google Docs, pass some amount of code to the client to execute locally, while still keeping all the state and the bulk of the processing at the server. This improves responsiveness quite a bit, though we still have the issue of network outages.
Finally we come to the Google Docs + Gears model, which lets your web app store persistent application state locally, so that it can continue to work even if the machine is offline. From this to caching code is but a step, and then where are we? Picking our way through a continuum, where it would be difficult to point out where one stops and the other begins.
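To make the end of that continuum concrete, here is a minimal sketch of the local-persistence-plus-sync pattern (TypeScript, using standard browser APIs rather than Gears itself; the /api/state endpoint is hypothetical):

```typescript
// Minimal offline-first state persistence sketch.
// The browser APIs (localStorage, navigator.onLine, fetch) are standard;
// the `/api/state` endpoint is a made-up placeholder.

interface AppState {
  doc: string;
  updatedAt: number;
}

const KEY = "app-state";

// Always write locally first, so the app keeps working offline.
function saveState(state: AppState): void {
  localStorage.setItem(KEY, JSON.stringify(state));
  if (navigator.onLine) {
    void syncToServer(state);
  }
}

function loadState(): AppState | null {
  const raw = localStorage.getItem(KEY);
  return raw ? (JSON.parse(raw) as AppState) : null;
}

async function syncToServer(state: AppState): Promise<void> {
  try {
    await fetch("/api/state", {
      method: "PUT",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(state),
    });
  } catch {
    // Network failure: state is already safe in localStorage;
    // the 'online' handler below will retry.
  }
}

// When connectivity returns, push whatever we saved while offline.
window.addEventListener("online", () => {
  const pending = loadState();
  if (pending) void syncToServer(pending);
});
```

From here, caching the application code itself is the one remaining step, and the "desktop vs. web app" label becomes a matter of degree.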
Hmm... I see your point about boundary blurring, but I think there could be a way to frame this issue in more fundamental ways, maybe flops/location as a measure of decentralization of computation or something. There's got to be a deeper way to analyze this stuff than in terms of specific example architectures/design patterns like auto-updates, Gears... there's an info-theory model somewhere here.
I personally do think localized computation has a role to play once you factor in hard-real-time control (which is tough to do over TCP/IP, but is apparently easier over raw UDP, I am told). This will become increasingly important as we go from computers to mobile devices to robotic devices that sense/act. The heterogeneity and material interactivity of the hardware should drive some interesting dynamics.
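For illustration, here is a bare-bones sketch of what "raw UDP" control traffic might look like (TypeScript on Node.js; the port, address, and packet format are made-up placeholders):

```typescript
// Fire-and-forget control packet over UDP (Node.js dgram module).
// No handshakes, retransmissions, or congestion control -- which is
// exactly why UDP is attractive when latency matters more than
// guaranteed delivery.
import * as dgram from "dgram";

const socket = dgram.createSocket("udp4");

// Hypothetical actuator command: a timestamped setpoint.
const packet = Buffer.from(JSON.stringify({ t: Date.now(), setpoint: 0.42 }));

socket.send(packet, 9999, "192.168.1.50", (err) => {
  if (err) console.error("send failed:", err);
  socket.close();
});
```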
Well, usually the way it happens is, someone solves a real life problem, then a grad student comes along to express it in terms of Greek letters and curlicues :)
But yes, there is an interesting theory buried somewhere.
One aspect of this theory is: trust boundaries.
You can encrypt data before sending it off to be stored in Amazon S3 or some other remote store. You don't need to trust the provider. But when you start doing any kind of non-opaque processing on the data offshore (like with Google mail or docs or spreadsheets or salesforce.com), you are trusting the provider not to misuse the data. A "Don't be Evil" mission isn't good enough.
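For the opaque-storage side of that boundary, a minimal sketch (TypeScript, using Node.js's built-in crypto module; the data and key handling are illustrative only):

```typescript
// Client-side encryption before handing data to a remote store.
// The provider only ever sees ciphertext, so it needs no trust for
// pure storage -- the trust problem begins when it must *process* data.
import * as crypto from "crypto";

function encryptForRemoteStore(plaintext: Buffer, key: Buffer) {
  const iv = crypto.randomBytes(12); // AES-GCM nonce
  const cipher = crypto.createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
  return { iv, ciphertext, tag: cipher.getAuthTag() };
}

function decryptFromRemoteStore(
  blob: { iv: Buffer; ciphertext: Buffer; tag: Buffer },
  key: Buffer
): Buffer {
  const decipher = crypto.createDecipheriv("aes-256-gcm", key, blob.iv);
  decipher.setAuthTag(blob.tag);
  return Buffer.concat([decipher.update(blob.ciphertext), decipher.final()]);
}

// The key never leaves the client; only `blob` is uploaded (e.g. to S3).
const key = crypto.randomBytes(32);
const blob = encryptForRemoteStore(Buffer.from("my private data"), key);
```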
Does this impose a fundamental limitation on utility computing? Perhaps not. My favourite theory is that Trusted Computing will be turned on its head.
Trusted computing refers to technology that can be employed to impose the will of the content owner (the music/movie industries) on machines owned by consumers. Only signed software can run on these machines, and only signed software can process protected (DRM) content. A good example of a current implementation is the Xbox.
Utility computing providers will deploy TC-enabled computers. The "trusted" software stack which will process end-user data needs to be subject to third-party audit and signature (so I guess it will be open-source). Your local machine will verify that the remote machine is running a trusted stack (much in the way we identify websites with certificates signed by Verisign or another CA) before handing it a key to decrypt the offshore data and process it in some meaningful way.
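Sketched in code, the flow might look something like this (TypeScript; every name here is hypothetical, and real attestation machinery, e.g. TPM quotes, is far more involved):

```typescript
// Hypothetical verify-then-release-key flow for "inverted" trusted
// computing: the *client* attests the *provider's* stack before
// handing over the data-decryption key. All names are illustrative.

interface AttestationQuote {
  stackHash: string;   // measurement of the provider's software stack
  hwSignature: string; // signature from the provider's TC hardware
}

// Hashes of software stacks a third-party auditor has signed off on.
const auditedStacks = new Set<string>(["sha256-of-audited-open-stack"]);

// Stub: a real implementation would verify a hardware signature chain,
// much as browsers verify CA-signed certificates.
function hardwareSignatureValid(quote: AttestationQuote): boolean {
  return quote.hwSignature.length > 0; // placeholder check only
}

function shouldReleaseKey(quote: AttestationQuote): boolean {
  return hardwareSignatureValid(quote) && auditedStacks.has(quote.stackHash);
}

// Only if the remote stack attests successfully do we hand it the key
// that unlocks our offshore data for meaningful processing.
const quote: AttestationQuote = {
  stackHash: "sha256-of-audited-open-stack",
  hwSignature: "...",
};
console.log(shouldReleaseKey(quote) ? "release key" : "withhold key");
```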
I think there's a nice niche out there waiting for a first mover :)