Tangled in the Threads
Jon Udell, September 5, 2001

Start Making Sense
Leading indicators of a return to rationality
In the dot-com bubble we pretended that the rules for cyberspace were different from the rules of real space. They're not. Fortunately, we're starting to come to our senses.

Last week, an acquaintance called to tell me about a new product from a company he's joined, FineGround Networks. The product, called the FineGround Condenser, uses both client-side and server-side techniques to speed up content delivery and cut down on bandwidth consumption. It does these things with a mix of strategies:
Centralized cache management. A proxy server, located between the content delivery server and the web, subsumes the many cache queries (e.g. If-Modified-Since request headers) coming from browsers into a single conversation with the delivery server. According to FineGround, this cuts down on a surprising amount of HTTP chatter.
Page rewriting. The proxy server rewrites outgoing pages to refer to its own cache of objects.
gzip compression. The proxy uses gzip compression when browsers indicate they will accept it.
Delta page delivery. Here's the big win. In a sequence of dynamic web pages, the text will often vary relatively little from one page to the next. So the proxy establishes a "base page" and then delivers deltas to the client -- that is, just the changes. How do these change snapshots get converted back into complete pages? That's where the client gets into the act: JavaScript, running on the client, puts things back together. (A sketch of the idea follows this list.)
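FineGround's wire format is proprietary, and in the actual product the reassembly is done by JavaScript in the browser, so take the following only as a minimal sketch of the general idea, in Python. The server computes copy/insert instructions against a base page both sides already hold, ships only the instructions, and the client replays them. Everything here -- function names, the sample pages -- is hypothetical:

    import difflib

    def make_delta(base_page: str, new_page: str) -> list:
        """Server side: compute a compact delta -- copy ranges from the
        base page, plus only the text that is actually new."""
        matcher = difflib.SequenceMatcher(a=base_page, b=new_page)
        delta = []
        for op, i1, i2, j1, j2 in matcher.get_opcodes():
            if op == "equal":
                delta.append(("copy", i1, i2))             # reuse cached base text
            else:
                delta.append(("insert", new_page[j1:j2]))  # ship only the changes
        return delta

    def apply_delta(base_page: str, delta: list) -> str:
        """Client side: rebuild the full page from the cached base plus the delta."""
        parts = []
        for item in delta:
            if item[0] == "copy":
                _, i1, i2 = item
                parts.append(base_page[i1:i2])
            else:
                parts.append(item[1])
        return "".join(parts)

    base = "<html><body><h1>Quotes</h1><p>IBM: 99.10</p></body></html>"
    new = "<html><body><h1>Quotes</h1><p>IBM: 101.25</p></body></html>"
    delta = make_delta(base, new)
    assert apply_delta(base, delta) == new  # only "101.25" travels as new text

The economics follow directly: for pages that are mostly boilerplate, the bytes transferred shrink to roughly the size of the changes, and gzip (the third strategy above) compresses what's left.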
This story is interesting on several levels. Technically, it's one more demonstration of the seemingly endless uses of HTTP proxying and pipelining. I'm sure FineGround's strategy raises a few compatibility issues (the company admits as much), but I'm also quite willing to believe that this can be made to work rather well. Pipes and filters are pretty straightforward, really, and to them we owe much of the web's power and flexibility.
But there was another aspect to this pitch that I found even more compelling. It wasn't a story about grand empires, or new paradigms, or if-we-build-it-they-will-come fields of dreams, or market-share land grabs. It just made sense. Your customers wait x seconds for pages, says FineGround, we'll get that down to x minus something. You spend y on bandwidth, we'll get that down to y minus something. By time z, your investment in this product will have paid for itself, and after that, it's gravy.
I found this incredibly refreshing. I'm no economist, and watch with amazement as experts in that field debate whether or not we are in a recession, and whether or not we have bottomed out and turned the corner. It all seems wildly abstract. But for me, the FineGround story was a leading indicator. It suggests that the "irrational exuberance" that inflated the economic bubble is now giving way to good old-fashioned business plans that deliver predictable returns on investment.
Common sense and cookies
This week, the New York Times has been running a three-part series on privacy online. When I saw that the first installment focused on the role of cookies, I braced myself for yet another story about Big Brother, surveillance, and predatory corporate behavior.
Happily, the article was much more balanced than that. Cookies exist, after all, because we want them. The statelessness of HTTP is fine for some (actually, many) things, but statelessness is not a normal condition of human life. We live within historical contexts, and would find it intensely painful to be cut off from them, as Alzheimer's sufferers are. The same is true of the web, the Times article admits:
Cookies are not going away, said Koen Holtman, a Dutch computer scientist and privacy advocate who has fought to limit the expanding abilities of cookies. Web users "can't really live with cookies because of user-tracking issues," he said, "but also can't live without them because that would lose them some important functionality or reliability."
Here's the article's conclusion:
Over time, the views on cookies from privacy advocates have evolved. Richard M. Smith, the chief technology officer for the Privacy Foundation, a think tank in Denver, said that he now believed that most cookies were benign.
"My first reaction was, `Oh they're terrible!' Over the last year and a half as I've looked at the Internet and how it works, it would be very difficult to have the Internet without them."
I'm not interested in defending cookies per se. The technology is certainly a double-edged sword, raising valid concerns that nobody has yet adequately addressed. Rather, I want to explore the assumptions underlying the cookie-driven privacy debate. It seems to me that there has been a presumption that cyberspace was "supposed" to be an anonymous realm, and that cookies deprived us of the "right" to such anonymity. From the Times story again:
It was a turning point in the history of computing: at a stroke, cookies changed the Web from a place of discontinuous visits into a rich environment in which to shop, to play -- even, for some people, to live. Cookies fundamentally altered the nature of surfing the Web from being a relatively anonymous activity, like wandering the streets of a large city, to the kind of environment where records of one's transactions, movements and even desires could be stored, sorted, mined and sold.
The carnival atmosphere of the early web was, I think, another kind of irrational exuberance. Don't get me wrong, I love a good carnival. It's fun to wander in relative anonymity, once in a while, free of accountability for one's actions. But freedom from accountability is not a recipe for a healthy culture or society. Neither should it be a dominant principle of Internet culture. The golden era of anonymity whose passing is now so often mourned was, I think, better left behind. In meatspace as well as in cyberspace, people know an awful lot about our transactions, movements, and desires. That's just life. It's sometimes exhilarating to put on a mask, wander anonymously, and take a vacation from the burden of identity. But more often, we want to engage our authentic selves with other people and with the world at large.
There's no doubt in my mind that we will move towards what Lawrence Lessig calls a "certificate-rich" Internet architecture in which our web identities bind more tightly to our real identities. At issue, says Lessig, are the choices we make about how that version of the Internet will work. We have not been well served, to date, by naive thinking that equates privacy with anonymity. I have no "right" to be anonymous to my doctor, my accountant, or my grocer. Indeed, the notion that I could be anonymous in these situations is incoherent. I do, however, have the right to demand that these people respect my privacy -- that they ensure, for example, that our dealings are kept confidential, or are conveyed to others only with my authorization. Achieving that desired outcome is a hard problem, but asserting a "right to anonymity" is not going to help us solve it. So it's refreshing to see that the public conversation about cookies and privacy is beginning to modulate. We'll need to be a lot more rational about this issue if we're to effectively navigate the coming era of HailStorm, Passport, and web services.
Dealing with Internet vandals
Those of us who've been attacked by Internet vandals may take particular delight in the new movie Jay and Silent Bob Strike Back, in which the two heroes hunt down and beat up the 12-year-old kids who have been slandering them on Internet bulletin boards. But revenge fantasy is not, of course, going to solve anything. We've got to face up to the fact that the Internet isn't a small town anymore, and its social and technical fabric is now in a scarily fragile state.
A while back, Randy Switt began a newsgroup thread by calling attention to Steve Gibson's response to a denial of service attack:
Randy Switt:
Steve Gibson's grc.com site was DDoS attacked a month or so ago. In response (in typical Gibson fashion), Steve was more curious than upset, and decided to delve into the nuts and bolts of the script kiddies -- and their mentors, the 1% of them that can actually program. Unsurprisingly, he was able to snoop in fairly easily, and find out the exact methods being employed to attack vulnerable machines, and he used the opportunity to publicize and warn his readership and anyone else who would listen. The full description of his findings can be seen here:
Anyway, while I agree with most of Steve's assertions and findings, I find one *strong* comment of his to be hard to support. He blasts Microsoft for implementing the full Berkeley Raw Sockets version of the TCP/IP stack in WinXP. Without full Raw Sockets support, Windows cannot be made to spoof the source IP of a compromised machine used in a DoS or DDoS attack, and such a machine is therefore traceable. With full Raw Sockets support, source address spoofing becomes possible, and makes compromised machines much more difficult to trace.
While all of this is true, I find it hard to fault Microsoft for implementing a fully standards-based TCP/IP stack simply because its O/S is the most popular one out there. The vulnerability to untraceable DoS attacks stems *not* from bad implementations by Microsoft, but from the fully-standardized design of TCP/IP itself. Nearly every version of *nix, including Linux and Mac OS X, includes full raw socket support; why must MS be any different? The responsibility, in my view, falls to the ISPs to police their own users, which, in general, they have refused to do.
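For readers who haven't met raw sockets, here is why they matter for traceability. With an ordinary socket the operating system writes the IP header, so the source address is always the sending machine's real address; with a raw socket, a (root-privileged) caller builds the header by hand, source field included. A minimal Python sketch of the mechanism, using reserved documentation addresses rather than anything real:

    import socket
    import struct

    # SOCK_RAW + IPPROTO_RAW (root required): the caller supplies the
    # entire IP header, including an arbitrary source address.
    s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)

    forged_src = "192.0.2.1"     # an address this machine does not own
    dst = "198.51.100.7"         # illustrative destination

    # Minimal 20-byte IPv4 header (no options, no payload), per RFC 791.
    version_ihl = (4 << 4) | 5   # IPv4, header length 5 x 32-bit words
    header = struct.pack(
        "!BBHHHBBH4s4s",
        version_ihl, 0,          # version/IHL, type of service
        20, 0,                   # total length, identification
        0,                       # flags / fragment offset
        64, socket.IPPROTO_RAW,  # TTL, protocol
        0,                       # checksum (left 0; the kernel typically fills it in)
        socket.inet_aton(forged_src),
        socket.inet_aton(dst),
    )
    s.sendto(header, (dst, 0))   # any reply goes to 192.0.2.1, not to us

A victim tracing this packet sees only the forged address, which is exactly Gibson's complaint. Note, though, that a network doing egress filtering (RFC 2827) would drop the packet at its own border -- a point we'll come back to below.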
It's ironic, of course, that Microsoft, often accused of dishonoring Internet standards, should come under attack for fully implementing this one. But Randy's posting provoked a strong and eloquent response from Bjorn Borud, who agreed that in this case full standards compliance was questionable.
Bjorn Borud:
Back in the "good old days" the problem of malicious users was a lot smaller, since most people did not have direct IP connectivity from a computer over which they had total control. Most of the time you had the OS between you and the IP layer, and in order to run programs that could generate arbitrary IP packets you needed root privileges.
Back then, this was not a problem since most people who were in a position to actually access raw sockets were system administrators and they would get themselves into trouble if they did something anti-social. You just didn't do it. You tried to be a good neighbor and keep your back yard clean and your peers happy by dealing directly with whatever abuse there was. Back then you *could* get rid of troublemakers.
Then came dialup access with direct IP connectivity over SLIP and later PPP. Suddenly you had all these users with direct IP connectivity, who had no sense of responsibility towards the Internet community and who simply thought it was their right to do whatever they pleased. I've been running an IRC server for about 8 years, so to me that development was very real and very tangible.
Sure, the full TCP/IP stack has raw sockets, but does that mean it is a good idea to implement them?
I don't know.
If the level of nasty attacks increases sharply I will have to close down for good because I can't possibly justify the amount of work, downtime and lost bandwidth hundreds of thousands of users will experience if one of my free services is attacked on a larger scale.
Are raw sockets in Windows really all that important? I'd like to hear a valid argument *for* it since nobody has yet clued me in on this.
The increase in this type of vandalism on the Internet worries me. There is usually no strong motive, no purpose for it. It's like gathering a bunch of guys to go clubbing in order to gang up on some stranger and then beat him to within an inch of his life. Just for fun. There are people who do that. Who don't understand that it is an anti-social act. Who do not understand that what is all fun and games for them is deadly serious to someone else. People are trying to make a living on the net. They try to run businesses. They try to create, to provide people with jobs and services. Most people who try to make a living on the net are not multi-billion-dollar corporations. They are ordinary people who struggle to make ends meet.
Bjorn went on to say that while ISPs are theoretically responsible for policing their borders, they are in practice often unwilling or unable to do so effectively. He suggested that router vendors could -- and perhaps should be required to -- ship their products with more restrictive default configurations.
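One restriction that bears directly on the spoofing problem already has a name: ingress/egress filtering, described in RFC 2827. The rule is simply that a border router should refuse to forward a packet whose claimed source address could not legitimately have originated inside the network it came from. Vendor configuration syntaxes vary, so here is the rule as a Python sketch instead; the customer prefix is hypothetical:

    import ipaddress

    # Hypothetical customer network sitting behind a border router.
    CUSTOMER_PREFIX = ipaddress.ip_network("203.0.113.0/24")

    def should_forward(source_address: str) -> bool:
        """Egress-filter rule: forward outbound traffic only if its
        claimed source address belongs to our own network."""
        return ipaddress.ip_address(source_address) in CUSTOMER_PREFIX

    assert should_forward("203.0.113.42")    # genuine customer address
    assert not should_forward("192.0.2.1")   # forged source: drop it

Had such defaults been universal, the raw-sockets question would matter much less: a spoofed packet would die at the first border router it crossed.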
That's one kind of accountability that could help a lot. But what about holding the vandals accountable too? Just as anonymity is not a normal feature of real life, neither is the action-at-a-distance capability that the Internet confers on vandals, empowering them to drop bombs from 10,000 feet without ever seeing their victims. This has always brought out the worst in human nature, and always will.
Does this mean, as Randy asked, that we might need to change how TCP/IP works, "perhaps requiring *every* packet to have a verifiable and traceable digital ID. Jon Udell would like that, though there are *major* privacy concerns." I don't know. Certainly it's true that every car on the road has a license plate. We shouldn't automatically assume that packets on the information superhighway should be licensed the same way that cars on the real highway are. But neither, I think, should we assume that the rules for cyberspace are necessarily different just because it is cyberspace. That irrational assumption has led to a lot of grief. Now, in the cold glare of the post-bubble era, a more rational sensibility may be emerging. I take that as a sign of better times to come.
Jon Udell (http://udell.roninhouse.com/) was BYTE Magazine's executive editor for new media, the architect of the original www.byte.com, and author of BYTE's Web Project column. He is the author of Practical Internet Groupware, from O'Reilly and Associates. Jon now works as an independent Web/Internet consultant. His recent BYTE.com columns are archived at http://www.byte.com/tangled/
This work is licensed under a Creative Commons License.