One learned critic referred to my last set of notes as a "philippic" ("a discourse or declamation full of bitter condemnation", originally referring to Demosthenes' tirades against Philip II of Macedon). I don't know if this was a rebuke or just a joke on my name. Whatever the case, I've resolved to tone down the ranting, which has started wearing thin. Some notes, then, mainly calm ones, albeit mostly about patterns of thought that have been bothering me, plus a bunch of book recommendations and some URL's.

Some technical advice, please? I want to buy a new laptop so I can write books on islands and in hotel rooms, but above all I want to run Emacs. What are my options? I had a Macintosh port of Emacs; it made me happy, but it broke in OS 8. Linux is the emotional choice, but I'm afraid of the administration overhead. Who makes it simple? I'll gather the useful answers and make them available to the list.

Conservatives often argue that communism, especially the Stalinist kind, was the logical consequence of Man trying to play God. Who originated that argument? Do you have a citation?

Here is my new business plan. The company is called numerology.com. You go to the web site and you give it your credit card number. It charges $19.95 to your card, and then it goes off into the databases of the world, extracting all of the information about you that it can find. It feeds that information into our brand new 128-processor Linux supercomputer, and two minutes later you get a new Web page that presents a large number of creepy numerological facts about your life. Perhaps the letters in the name of your brother-in-law, when combined according to a numerical encoding, add up to 666. Perhaps the number of checks that you have ever bounced is the same as the number of courses you ever flunked in school. Perhaps the number of calories in the food you bought in your supermarket last month is exactly halfway between the number of miles that you drove on your old Pontiac and the zip code of your birthplace. For an extra $10 you get Numerology.com Gold Edition, which includes a nice set of suggested lottery numbers, each guaranteed to have the kind of deep personal significance that spells luck.

Recent discussions have brought home to me the intellectual harm that can be wrought by simplistic dichotomies. I'm not talking about ordinary conceptual distinctions. I'm talking about the unreflective acceptance of huge vague partitions that falsely sort everything into category A and category B. The problem is usually not that anybody is consciously obfuscating, but rather that they have been socialized into a certain way of talking and thinking but have not acquired the critical tools to recognize the patterns. All of the ways of talking and thinking that they've learned will make local sense, and their conclusions will seem iron-clad precisely because their premises are hidden away in subtle abuses of language. At least that is the sympathetic interpretation. And much sympathy is needed, given the often extreme emotions and actions that these dichotomies can lead to in practice. I'll mention four of these patterns that have really gotten in the way lately.

The first is the dichotomy between supporters and opponents of something called "technology". You may recall that I circulated a summary of an article by Hara and Kling describing some of the problems that they found in a field study of an online distance education course.
They argued based on a survey of the literature that problems in the use of distance education technology are a "taboo" in the field. Well, their thesis was certainly supported by the response that their paper summary provoked. One guy called me a "saboteur" just for passing the summary to my list. (This particular guy eventually apologized once I explained things to him very slowly.) Another guy, a senior professor of education, publicly issued a series of false accusations against the authors, and then started insulting me when I explained his mistakes. (This guy sort of apologized too. So there's hope.)

What is going on here? I certainly don't believe that the majority of people involved in educational technology participate in this sort of true-believerdom. I have no idea of the proportions. But I do know that a substantial subculture does think this way.

I spent some time trying to decode the underlying structure of the messages. The major part of it, I think, is precisely the broad, vague dichotomy between supporters and opponents of technology. These people -- and again I'm just talking about a certain subculture -- have an Enemy, namely the anti-technology forces who selfishly want to protect their own perks while preventing children from getting a proper technology-enhanced education. Faced with anything, they think: "Is this thing pro- or anti-technology? If it's not pro- then it must be anti-. And if it's anti- then it's wacko Luddism that is totally beyond the Pale. QED." The logic continues along the same track as the conversation proceeds. If you tell them, "No, it's not anti-technology", then they just get confused and say, "So what's the point?" -- they can only imagine two points, pro- and anti-. If you tell them, "Surely it's good to know some of the things that can go wrong", then they just get confused and say, "Well, that's just human error, not anything that's inherent in the technology" -- again as if the only possible issue is whether the technology is good or bad. Or they look at you funny and respond, "Well okay, just fix that" -- as if any issue beyond the technology is necessarily trivial. Or they say, "That's just because those people haven't learned how to use the technology yet", or else, "That's just that one technology, not technology in general". In each case, their "listening", as Werner Erhard would say, is "Is this pro- or anti-?". No other question can get on the agenda until that one is decided.

It sort of sounds like a syllogism, this parsing of all things into pro- or anti-, so what's wrong with it? Well, first, nobody has ever encountered Technology in general. One only encounters particular technologies. If schools are told that they must buy Technology and lots of it, the money will almost certainly be wasted. It matters which technology one chooses. You would think it an obvious point, but I have often been unable to get it across.

Second, the very word "technology" can mean a lot of different things. If you are David Noble, who really is an anti-technology Luddite, then technology means a certain package of machines, social relationships, industrial practices, political economy, and so on. Many such packages are indeed bad and deserve to be outvoted, if not necessarily smashed. To the pro-technology subculture, by contrast, technology usually refers to machines in a narrow sense.
This is dangerous because -- and this is the third problem -- particular kinds of machines can be either useful or useless depending on how they are used in practice. Economists observe that computers require a lot of complementary resources such as skills, spare parts, and maintenance. What is more, computers usually do not improve productivity unless organizations and their work practices are redesigned, and redesigning organizations and work practices is a whole lot harder than installing computers. Kling and Hara's paper, which grows out of a longstanding research tradition, points out a few of the complementarities between computers and their contexts, and it suggests something of the difficulty that would be involved in changing a whole institutional system over to computer-mediated distance education, not just a set of computers or a single classroom. Anybody who reflexively regards this sort of thing as Luddism is setting up their movement for a serious fall down the road.

The story had a happy ending, though: the New York Times published a brief article summarizing Hara and Kling's findings and noting the controversy they provoked online. Notch one more taboo.

The second dichotomy is between centralization and decentralization. For the past ten years, a whole elaborate libertarian discourse has been organized around such a dichotomy, with the Internet and the market on the decentralized side of the angels and with the telephone company and the government on the centralized side of the devils. Part of the problem in evaluating the arguments of this movement is that they often use words in fancy ways that render their claims tautological, for example using the word "monopoly" to refer only to government-created monopolies, or using the word "hierarchy" to refer only to the government. Thus the Internet can be held to smash all monopolies and hierarchies even in the midst of a vast episode of industrial consolidation. This sleight-of-hand covers up some uncomfortable truths, and I have tried in my own small way to uncover these truths in a couple of my recent essays, including "Designing the new information services" and "The self-limiting Internet".

The argument, briefly, is that decentralized coordination requires a framework of institutions and standards that, at least in many cases (not all, but many), can only be created through some kind of centralized coordination. This coordination can be provided by government, or by the sponsor of a new technology, or by a monopoly, or through negotiation in a standards body, or by the interaction of these. I warn against the dangers of getting "stuck", whether in a centralized mode that arises to coordinate an institutional or standards transition, or in a decentralized mode that cannot move forward because its institutional or standards framework cannot be upgraded.

Now, you would think that such an argument would be welcomed by the fans of a decentralized society, inasmuch as it begins to chart the rocky waters that their social project will be required to navigate. But no, in fact these essays have met quite a bit of incomprehension and hostility from the libertarian world. I am accused, for example, of advocating centralization. Some of them even jump vociferously to the conclusion that I want "top-down leadership", "central authority", "legislating", "dictating", and so on.
That, after all, is how the arguments will parse to someone for whom the major conceptual scheme is centralization equals government equals bad versus decentralization equals market equals good. None of those equations is necessarily either right or wrong. It all depends. It would be a shame if this allergy to anything that even rhymes with a center derails needed efforts at coordination, whether public or private, and brings about the very scenario that I described in the "self-limiting Internet" paper: the market working in its normal way, through property rights as an incentive to investment, to produce a proprietary outcome.

In any case, it should be said that one of my skeptical correspondents did describe a plausible mechanism for a transition to IPv6, which is the version of the Internet Protocol that can support a 128-bit address space of hosts. On his scenario, IPv6 is pushed by the self-interest of the manufacturers of embedded Internet devices, which many people expect to be vastly more numerous than personal computers and mainframes. Once a sufficiently massive network of IPv6-enabled equipment gets established in this fashion, everyone else will be pulled along. This trick probably doesn't work much more than once, so I hope we get all the upgrades we need at that time.

If that particular dichotomy tends to be found mostly among those on the libertarian right, the third dichotomy that I have encountered lately is found largely on the left. This is the dichotomy between economics and power. Someone from Germany was most upset, though much more politely than the others I've mentioned, that my "self-limiting Internet" essay talked in economic language about the relationship between technical change and institutional change. Surely, he says, institutional change is primarily about power, not about economics, as if the two could be separated. I am afraid that this fellow is not an aberration among my academic friends on the left.

The left has changed. Marx, despite his many failings, understood that power is bound up with economics. He saw history unfolding through a dialectical clash between the forces of production and the relations of production, whose articulation in any given historical period can be described most fundamentally by the institutionalized fashion in which surplus value is extracted from the core productive activities of societies and diverted in the interests of capital. Marx felt that society is founded in its means of making a living. This is way too simple, but the point here is that, according to Marx, if you know how people get fed, clothed, and housed, then you know an awful lot. This view is no longer widespread.

Part of the problem pertains to academic disciplines. The economists and the sociologists fight with one another, each insisting that it can absorb the other's territory. In particular, economics has been largely taken over by a technical style of reasoning whose imperialism most sociologists dislike. Thus the appeal of discursive theories of power such as that of Foucault, which are valuable when they are appropriated with a dose of scholarly context and common sense, and sometimes disastrous when they are not. Learning the economics requires a little math, it makes your head hurt, and it tries to brainwash you with its deceptively all-embracing view of the world. But that should not be a problem to someone who reads seriously.
Fortunately, a movement has arisen of economic sociologists who have engaged in a dialogue with the economists who tread on their turf, and I think that the results are among the most interesting of scholarly movements right now. (See Richard Swedberg, Economics and Sociology: Redefining Their Boundaries, Princeton University Press, 1990.) My own essay on Internet standards problems in their institutional context was intended as a small example of how to analyze technology by crossing the borders between these two seemingly very different worlds, which I argue are closely connected underneath. I don't pretend to be anything but an amateur, but it seems to me that someone has to try.

The fourth and final dichotomy that's been bothering me is between "real" and "virtual". For example, people often ask, "will virtual businesses cause the real businesses to shut down?". The point is not that "virtual" equals "fake", but rather that the world is sorted into two bins, the entirely online and the entirely offline. For example, one speaks of "virtual community", presupposing in a tacit way that the members of such a community do not have any interactions in other media, much less face-to-face. This is unfortunate. It is much more useful to imagine a continuum, with "entirely virtual" at one end and "entirely real" at the other end. Much better, imagine a whole vast multi-dimensional design space, with "entirely virtual" in one small corner and "entirely real" in another. "Real" businesses can use "virtual" media in hundreds of targeted ways, and vice versa. Better yet, make a clear analytical distinction between "communities" and the technologies that those communities happen to use. I probably don't have to spell out the point any further.

In sum, I want us to get beyond simplistic dichotomies: technology pro versus con, centralized versus decentralized, economics versus power, and real versus virtual. I don't think that any of these dichotomies is useful at all. Critically examined, they dry up and blow away, whereupon they can be replaced by concepts that are more contextual, more integrative, and more dynamic. When you break open a simplistic dichotomy, you usually find that you need numerous brand new concepts to describe the whole massive space in-between. So it's natural that people will stick to the slogans. They're easy to say. But they're also mistaken, and they lead to unpleasant interactions and misguided prescriptions.

The general point is that we need a critical technical practice: our talking and doing about technology needs to be informed by ideas about ideas, and by critical thinking. We need to examine the history of our concepts, the metaphors we use to express them, the assumptions they leave unarticulated, the distinctions they conflate, the contextual variation they hide, and the alternatives they foreclose. We need to do this now, while things are in flux. If we permit our world to be remade by people who are in the grip of simplistic dichotomies, whether innocently or not, then someday soon we are going to be sorry. I have a scenario: Amazon.com drives all of the world's independent bookstores out of business, and then it goes bankrupt.

What a privilege it is to live in these times of rapid improvements in technology! The technology we take for granted today is incomparably better than the stuff that seemed so wildly impressive even two years ago. I refer, of course, to cheap pens.
The two main lines of evolution in cheap pen technology are, as you know, liquid-ink and gel pens, and lately I have come across especially evolved examples of each. I bought a Zebra Zeb-Roller DX7 in Ireland -- actually I bought the only two I could find and then lost one. It's a liquid ink pen with a cylindrical tip that is similar in philosophy to the Pilot Precise V7 but works a lot better. It is downright therapeutic to write with, totally unscratchy, like gliding on a lubricated surface, without the uneven lines that the V7 often produced. Because its 0.7mm tip produces so much ink, it can bleed when writing on absorbent surfaces. It's not especially precise. Its cheesy brown plastic is not fashionable, though it does have some cool translucent blue near the tip. And like all such pens it runs out of ink fairly quickly. But despite these liabilities, when judged strictly by writeability it's clearly one of the best ever. If I manage to get it into a write-off with one of my Reynolds Ink Balls (which I deliberately left home because they tend to puke on the road) then I'll let you know.

My other discovery is the Paper-Mate Gel-Writer. I don't think of Paper-Mate as a maker of quality cheap pens, and I certainly don't recommend any of the other Paper-Mate pens I've tried. But this one is different. It is a gel pen, somehow like writing with liquidized plastic more than any feeling one would associate with ink. So you have to like that feeling in order to appreciate it. That said, though, the Gel-Writer is the most effortless (or, as we would have said at MIT, least effortful) gel pen I've tried. Its faux-marble barrel is not as cool as it would have been last year, but it's okay. What's really special is what happens as the pen starts running out of ink. I don't understand this technology and wish I did. As best I can understand, the ink is chased down toward the tip by a wad of clear plasticky stuff, which may or may not have compressed air behind it. I wanted to saw the pen open to find out, but I'm traveling and it's not normal behavior to ask the hotel manager if you can borrow a saw. So that particular experiment will have to wait until I get home. Maybe you can try it yourself. Remember your safety goggles.

I've tried several other cheap pens lately that are worth writing about, but they will have to wait for another time.

I have been keeping a list of foundations that I wish somebody would start. Here are a few of them:

* Signbusters. Groups of traveling retirees who take a training course and then file reports on the signs wherever they go on their travels. If they get tripped up by a misleading sign, they'll go back and examine it according to their training, and if it turns out to fail one or more of the criteria of good signage then they'll write it up. The home office will then research the design company that produced the signs, and the winners/losers will be laughed at.

* Sandblasting International. The world is full of excellent cities, Budapest for example, that would be even more excellent if someone spent a moderate amount of money to sandblast a couple hundred of the most interesting old buildings. Surely there is a cosmopolitan plutocrat out there who thinks this would be a good idea.

* Stats Watch. Public discourse is full of terrible abuses of statistics, especially statistical correlations that are interpreted as proving otherwise implausible causalities.
Stats Watch would be a volunteer association of statisticians who notice these things, file reports, and ridicule the worst offenders. In this case, however, the "honors" should not be announced at the end of the year, but right away, preferably within the same news cycle in which the bogus statistic is used. Otherwise few people will ever make the connection, and the stats-abusers will get the rhetorical effect they want. My informal sense is that these worst offenders are political think tanks that are paid to come up with arguments for preconceived positions whether the arguments really make sense or not.

Some libertarians were upset, as you might imagine, at my essay on the ideas of John Commons. They were upset partly because I disagree with them, but partly because I was unclear on one point. That point concerns a guy named Friedrich Hayek (often called von Hayek). If you don't know who Hayek is, Hayek is to capitalism what Marx was to communism: both its most important theorist and its most important political activist. Maybe "most important" is not the right phrase, but with each author, Marx and Hayek, a political-economic ideology reached its mature form and laid the intellectual basis for itself as an activist movement. Marx you know about. When I mentioned Hayek's "Individualism and Economic Order" a while back, one scholarly RRE reader said "[t]his is Hayek-the-social-scientist-of-genius, with only occasional appearances by his evil twin, Hayek-the-right-wing-ideologue".

Hayek grew up in Vienna and moved to Britain and then the United States (I don't recall exactly when) to escape the ominous political climate of the 1930s. He argues for a particular sort of laissez-faire: minimal government regulation of the economy, but a strong legal system that applies laws with rigorous disregard for the identities and attributes of the people they are applied to. This is the "rule of law".

The Hayek whom my correspondent called the social scientist of genius had very interesting things to say about the role of information in the economy. Whereas the currently dominant neoclassical tradition of economics tends to ignore problems of information, or only to deal with them as incremental modifications of an idealized world in which information is infinite, perfect, and free, Hayek regards the production of information as the central problem of the economy. He believes that individuals accumulate a great deal of information about the circumstances that they know best -- this is called "local knowledge", although the locality need not be narrowly defined in geographic space. He also believes that it is impossible in principle for anybody, including a government, to gather enough of this vastness of information into one place to direct the economy in a centralized way. This is what I had in mind a few months ago when I jokingly mentioned the cheap Linux supercomputers that might have saved the Soviet Union. That wouldn't be a joke to Hayek, who would not have been impressed by Linux supercomputers one bit.

The Hayek whom my correspondent called the evil right wing ideologue had very interesting things to say about political institutions. He believed that the "spontaneous order" of the nongovernmental part of society is infinitely rich and unpredictable in its ability to solve the problems of people's lives, and he was deeply distrustful of political institutions that attempted to substitute themselves for the emergent and self-organizing wisdom of the spontaneous order.
He was so distrustful of such things that much of his mature writing was centrally concerned to place conceptual and institutional limits on the actions that the state could take. Hayek thought of himself as a supporter of democracy, but that is only true in the narrowest conceivable sense, given that his main purpose in life was preventing democracies from doing things that he disapproved of. His famous tract, "The Road to Serfdom", was written in the context of 1940s British politics, but it is still worth reading for its highly cultivated sense that any initiative to regulate the spontaneous order of society beyond the extremely restrictive neutrality of the rule of law is necessarily a step onto a slippery slope that leads to communism. That is what I meant when I said that, for Hayek, any amount of democracy necessarily leads to harder stuff. Despite the importuning of my correspondents, I still have no problem describing Hayek as an opponent of democracy. He grew up in an antidemocratic climate, and even if he broke with the authoritarian conclusions that most others in that climate drew, the antidemocratic aspect of this political orientation did not fundamentally change.

Where I was unclear is in connection with Hayek's relationship to anarchism. Hayek was not an anarchist because anarchists do not believe in a legal system. My purpose in my first few paragraphs was to set up an analogy, with Hayek being analogous to Luther and modern-day anarchists being analogous to the militant Protestants who smashed icons and overturned authority. Both of them opposed the Church hierarchy, but only the militants were opposed to all spiritual and temporal authority. Luther wanted to remove some of the mediations between the worshipper and God, but the militants wanted to remove them all. I wanted to describe the many people who indiscriminately denounce "government", democratic or not, as analogues of those militants: they took a certain tendency from the likes of Hayek and simplistically went nuts with it. This is what I meant by these people being Hayek's "legacy", not that I thought that Hayek would be one of them. I did make this clear later when I wrote of people who seem to speak Hayek's language but really don't.

The point is this. A lot of the people these days who denounce government and plot to undermine intermediaries in every sphere (the two often go together even though they are not logically connected, and this is what provoked the analogy to the icon-smashers) are not especially philosophical, and many of them just look confused if anyone draws out for them the logic of what they are saying. They may not think of themselves as anarchists. And just for that reason, their indiscriminate attack on "government" is dishonest, because they reserve to themselves the right to denounce those manifestations of government that they dislike, and to do so with the full fury of a simple set of slogans, and then to embrace other aspects of government that they happen to like, such as stringent divorce laws or a large military, without feeling any sense of contradiction. (They also tend to be indiscriminate enemies of "lawsuits", even though conservatives such as Hayek have commonly thought of the common law as the best possible institution of government.) This is not reasonable or fair, and I think that one should be able to take exception to it without being labeled a communist.

As regular readers of this list will be aware, I do not believe that information technology generally or the Internet in particular create much that is new in the world. Instead they provide ways to amplify social forces that already exist. The intuition is that people take hold of any technology within their framework of life: their existing system of concepts, motives, habits, and relationships. Of course, once they do take hold of the technology in those ways, the equilibria of forces that had formerly shaped the existing institutions might break down, eventually causing the institutions to seek a new form. But it's not the technology that did that; it's a particular kind of interaction between the technology and the workings of the society.

Well, in response to those ideas recently, an Eastern European guy who is a prominent figure in the democratization of his country expressed disagreement. Looking a little bewildered, he said that the Internet had created freedom of information, and that freedom of information destroys authoritarian governments. I responded no, that Eastern Europeans had created freedom of information, that the Internet, among other technologies, was a tool for this purpose, and that no freedom of information would have been created if Eastern Europeans had not already had the concept and the cultural forms of free information. For the contrasting case, look at Belarus, where only the most minuscule stratum of intellectuals seems to care about freedom of information.

The commonplace idea that the Internet causes things, it seems to me, is disempowering. It encourages a cargo cult, waiting around for the Internet to bring you good things. It also dulls analysis, flattens out distinctions between cases, and directs attention away from most of the factors that interact to create either good or bad outcomes from the use of the technology.

Part of the problem is that the Eastern European guy heard me to be saying that the Internet leaves everything the way it is, amplifying everything but leaving all the relationships the same. But again we have to analyze (what social scientists call) the mediations that shape how the Internet is used. For example, few communists are enthusiastic about the Internet. Perhaps it is like vampires and garlic, the liberatory potentials of the technology causing an allergic response in authoritarian hearts. (Note that I say potentials, not implications.) Communists in the West often regard the Internet as simply one more capitalist technology, a means of production like any other. This is not such an inaccurate view of the first forty years of computing, but it is wildly inaccurate as a view of the Internet. To be sure, the Internet is a great big means of production, and its interaction with the rest of the political economy of globalization is well worth studying regardless of one's politics. But the Internet is a lot more complicated than that, assuming again that the forces of good have some clue about what to do with it.

Once we open up -- that is, make intellectually visible -- the mediations that shape how the Internet is used, we can start to notice some interesting things. We might ask, for example, what assumptions different societies hold about the processes of collective cognition through which shared ideas arise. Some societies assume that ideas are the realm of the intelligentsia. I don't know if anyone has ever written a good comparative study of this concept, but my sense is that it varies from country to country.
It sometimes seems to refer to intellectuals -- people who make their living doing intellectual work, usually including scientists. It can also refer to all educated people, the idea being that educated people are a definite social layer. Neither of these is at all related to the idea, widespread in the United States, that intellectuals are ipso facto a bunch of traitors. Think for example of George W. Bush, who dislikes people who did better than him in school. I have even encountered the idea, most recently in a message from a guy in Russia, that shared human values derive from the discussions of "prominent human beings". This Russian guy figured that the Internet would help the world by enabling these prominent human beings to converse. I doubt this, given that prominent human beings are already served by foundations that fly them en masse to catered resorts in places like Aspen and Davos.

All of which tends to point up the cultural specificity of the theory of collective cognition that I ascribed to John Commons the other day. The flip side of American anti-intellectualism is the Jacksonian populism that makes Americans figure that they are just as good as any intellectual. This tension between professionalized and popular forms of knowledge is basically a good thing. And even though I would be the last person to impose the fine detail of the United States' often bizarre political culture on other countries, I do think something is wrong when ordinary people assume that coming up with ideas is someone else's job. Societies work right when most everyone has acquired the habit of reaching out laterally to people like themselves, forming associations and companies and committees and parties and clubs and mailing lists and so on. That kind of lateral reaching-out can't do everything, and (contrary to the think tank fashions of the moment) beyond a certain point the proper goal of such organizing is to get good laws passed. Nonetheless, the habits of association are a necessary condition of a healthy society, acting together begins with thinking together, and thinking together begins with the idea that one's own ideas might be worthwhile.

In France once I encountered the idea that building a professional network around shared ideas and values constituted the sin of pride, inasmuch as it presupposed that one's own ideas had any value. It took me a week to figure out what was wrong with that, and by then the moment had gone.

You will recall from last time that one rhetorical strategy of public relations is associationism: coming up with factoids and arguments that tend in a vague way to build convenient mental associations between concepts and break inconvenient ones. I promised that once you understand this you'll see it everywhere. Well, here is an example. It is a quote that the Irish Times ascribes to one Gillian Kent, group marketing manager for MSN, on the occasion of the recent howling security bug in Hotmail: "Someone hacked into an older server carrying older code. We resolved the problem immediately, but it is the case that wherever software is in the world, someone will find a way into it." (9/1/99, page 17)

If you're thinking rationally then you'll find this statement surprising. Microsoft is going on record as stating that networked services such as its own are inherently insecure. But you are not supposed to interpret this statement with your rational mind. Instead, you are supposed to interpret it with your lizard mind, the one that knows only vague associations.
From that perspective, the strategy of the statement is clear: the associative bond between "Microsoft" and "security problems" is being split open using two crowbars. One crowbar is time: that association, they're saying, belongs in the past. It was "older" code on an "older" server that was hacked into. By way of comparison, I once saw a quote from a spokesman for a mining company who said that associations between mining and pollution may have represented the past, but nowadays mining companies are completely different and use clean technologies such as computers. Do you see how he was trying to break apart an existing bond and create a new one? In the Microsoft case, you might be asking why they put an insecure "older" server online that has access to all of its customers' personal mail. But that would be your rational mind acting up again.

The second crowbar is space: the system that the hackers got into, they're saying, could have been anywhere, and has no special association with us. It's code on the network that poses the problem, not Microsoft. This sort of thing might not work on computer industry insiders who have specialized knowledge of the matter, not to mention mental antibodies against the spokesbabble of Microsoft. But in the long run, Microsoft doesn't live or die on the opinion of such people. It lives or dies on the thinking of normal people, people who have better things to do and don't yet have settled ideas on these topics, and so it is following the advice of its media advisors about how to shape the thinking of those unsuspecting folks. Multiply this kind of neurolinguistic programming by hundreds of thousands and you get the corrosion of reason that is intrinsic to public relations as it is currently practiced in commerce and politics alike.

These days it's fashionable to hate college professors. I don't like this, and I don't have much respect for the college professors such as myself who are too busy with serious work to speak out about it. To give some idea of just how routine and unquestioned the nasty stereotyping of college professors has become, let us consider one simple example, which I have deliberately selected from the editors of a reputedly liberal publication, Salon:

    After a century of communist atrocities, why do American academics still worship Karl Marx?

Notice the main grammatical device here: an equivocation between "some" and "all". Which way should the factual presupposition within this question be interpreted: as "why do some academics worship Marx", or "why do all academics worship Marx"? Well, if it were "some" then the answer would be, "because academia finds it valuable to include a great diversity of opinions, some of which are extreme". But that's not what the author meant. Did the author mean "all"? Well, that assertion would obviously be false. Rather than choose between the triviality and the lie, therefore, the author chose the equivocation that insinuates the lie while reserving the triviality as a fallback position for deniability. This is a common technique.

Now some people, reading this discussion, will be up in arms about the massive numbers of Marx-worshippers who supposedly fill the universities. They've heard all about them on the radio. But it's easy to create such an impression if you have the will and resources to do so. Just set up a stereotype -- dirty Jew, communist college professor, whatever -- and then feed your audience a steady diet of examples.
A certain small percentage of Jews happen to be dirty people, and a certain small percentage of college professors happen to be communists. But if you control the media, or if you are simply willing and able to be more brazen than others in your use of it, then you can serve up the dirty Jew of the week, or the communist college professor of the week, until everyone who lacks first-hand knowledge of the situation, or who is for some reason predisposed to agree with you, has quite firmly associated "dirty" with "Jew" and "communist" with "college professor" in their heads. Polemical vocabulary items like "atrocities" and "worship" support this psychological process by replacing clear thought with strong feeling. Does a scholar who writes about Marx necessarily worship him? Does a department whose students must read Marx alongside the other major figures of Western thought necessarily worship him? The intensity of the words short-circuits any such making of distinctions. It is then only a couple more steps until the purges.

Recommended: Lowell Bryan, Jane Fraser, Jeremy Oppenheim, and Wilhelm Ball, Race for the World: Strategies for Building a Great Global Firm, Boston: Harvard Business School Press, 1999.

This is the latest book by Lowell Bryan, who was also the lead author of "Market Unbound", the book on global financial markets that I recommended a couple of years ago. Bryan and his colleagues are consultants at McKinsey, and their book has all of the usual limitations of books by consultants: it is not well footnoted, it drives home one single idea relentlessly on page after page, it overgeneralizes its analysis, it puts fancy names on old ideas, and it is basically (as Rob Kling says) a book-length business card that aims to sell you McKinsey's consulting services. That said, it does offer a compelling picture of the world.

According to this picture, almost all industries are going to integrate on a global basis. "Integrate" means that one company will dominate the whole business in every country. Companies will merge across national boundaries. Every company will have to decide: buy your opposite numbers in other markets, get bought by them, or (the most usual case) break yourself into several parts, sell the ones that have no chance of expanding into a dominant global position, and then establish such a position for the part that remains. Now, this is a view that you have already heard before on this very list, and it is present in my article from last year in the Times Literary Supplement. And I doubt if I'm the first person to have thought of it. But if it is true then it will be an amazing phenomenon to watch, and it will have profound consequences for everyone. An economy of monopolies, global in nature but relatively narrowly defined. Is this healthy? Does it bear any relation to the theories of the market economy that economists teach, newspapers preach, and governments pretend to be guided by?

In any case, the strength of the book is analytical. It is a particular style of analysis -- it doesn't feel scholarly, and in fact the authors misuse some technical words. Being consultants, their method is to generalize from the analyses of their present and prospective customers, and in so doing they retain a sense of the fine grain and intensive cultivation of their customers' analyses of their own lines of business. They have particularly interesting things to say about the role of intangibles in the new economy.
This theme is not new in itself, of course; it was central to Kevin Kelly's book among others. What's new is what they do with it. Authors like Kelly can't bring themselves to face the centralizing, monopoly-creating implications of their own logic. They really believe that information technology is inherently a force for decentralization. But McKinsey's customers would like nothing better than learning how to become a monopoly, and Bryan and company are not squeamish about the matter at all. They throw around phrases like "unfair advantage" with no ideological pretenses or moral difficulties.

By intangibles they mean a broad and somewhat shaggy range of phenomena, such as social networks, brand names, intellectual property, and employee talent. What these things have in common, so far as their argument is concerned, is that you can use them without using them up. They reward scale. The more global you get, the more money you make with them. Their most interesting argument, which I've certainly heard in Silicon Valley but have not seen generalized into a prescriptive theory before, is that companies seeking to leverage their intangibles should be "asset light", that is, they should structure their business as a relatively small firm that maintains alliances with other firms that bend metal, own things, face liability, take risk, and so on. A good example would be Intel, which does own factories, but which is renowned for its ability to structure relationships with other firms in a way that provides Intel with all of the profit and the other firms with all of the hassle. The whole personal computer industry is an example of this: your PC might be made by an alliance of several different companies, but only Intel and Microsoft, and I guess Dell though I haven't looked lately, have major profit margins. Another example is Coca Cola, which owns the secret formula and brand name but lets its captive bottlers do the heavy lifting and take the risks associated with particular markets.

This story is no doubt too simple, but it does explain a lot. For example, it explains the explosion of cross-border mergers in Europe right after the euro kicked in. It also explains a lot of the merger action within the United States over the last few years. So now think about the situation of a country like Bulgaria. Its companies have no chance of becoming dominant global leveragers of intangibles anytime soon, and so we can expect them all to be bought out by foreign firms. Is this good for the Bulgarians or not? Should they resign themselves to simply angling to get the best terms when they are bought out? Somehow these are not the exciting questions that one expects to be asking when using phrases like "global integration".

Recommended: Cristiano Antonelli, Localized technological change and the evolution of standards as economic institutions, Information Economics and Policy 6(4), 1994, pages 195-216.

This is a smart and insightful paper about the problems of changing standards in the presence of switching costs. You're using Apple, the rest of the world starts using Windows, it will cost you money and effort to change, you have some sense of the costs and benefits, all of the various vendors also have a sense of the same thing, everyone makes guesses about what everyone else is going to do, and the action unfolds from there. At the end of the analysis, Antonelli presents a list of the factors that can affect the outcome, and it is impressively both long and plausible.
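
If you want a feel for that dynamic, here is a toy sketch of my own -- emphatically not Antonelli's model, and the function name, the uniform spread of switching costs, and every number in it are invented purely for illustration. Each user sits on an old standard with a personal switching cost, the value of each standard rises with its share of users, and the new standard carries some intrinsic quality premium. Run it and you see the two canonical outcomes: below a threshold premium nobody moves and the old standard stays locked in, while above the threshold the early switchers make the next switch worthwhile and the whole market tips.

    # Toy sketch, not Antonelli's model: users on an old standard, each with
    # a personal switching cost; a standard's value grows with its share of
    # users; the new standard has some intrinsic quality premium.
    import random

    def final_share_of_new(quality_premium, n=2000, seed=1):
        rng = random.Random(seed)
        cost = [rng.uniform(0.0, 1.0) for _ in range(n)]  # heterogeneous switching costs
        on_new = [False] * n
        while True:
            share = sum(on_new) / n          # current share of the new standard
            switched = False
            for i in range(n):
                # gain = network value of new + quality premium
                #        - network value of old - my own switching cost
                gain = share + quality_premium - (1 - share) - cost[i]
                if not on_new[i] and gain > 0:
                    on_new[i] = True
                    switched = True
            if not switched:                 # nobody moved this round: stable
                return sum(on_new) / n

    for premium in (0.5, 0.9, 1.1, 1.5):
        print("premium %.1f -> new standard ends with %.0f%% of users"
              % (premium, 100 * final_share_of_new(premium)))

Antonelli's point, of course, is that the real list of factors is a great deal longer than the two knobs in this toy.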

This paper also appears in a new collection of Antonelli's papers, The Microdynamics of Technological Change, London: Routledge, 1999. I also recommend Antonelli's own contribution to a book that he edited: The Economics of Information Networks, Amsterdam: North-Holland, 1992.

Recommended: Robert E. Goodin, ed., The Theory of Institutional Design, Cambridge University Press, 1996.

This is a useful book about the design of social institutions, which has become a fashionable topic in the 1990s as the countries of central and eastern Europe rebuild their institutions with the help (not always welcome or productive) of Western scholars. The best paper in this volume is by Claus Offe, who is best known for ponderous books about the welfare state but who contributes to this book a very imaginative and useful framework for thinking about the function and legitimacy of institutions. In one especially striking image, he refers to social institutions as the "exoskeleton" of social life inasmuch as they set down formal or informal rules about what kinds of activities are supposed to take place where and when. These rules, like all institutional structures, both enable and constrain. Their constraints are obvious, but they also make life easier by letting everyone focus on particular areas of their lives and letting the other areas happen in conventional ways.

Recommended: Etienne Wenger, Communities of Practice: Learning, Meaning and Identity, Cambridge: Cambridge University Press, 1998.

The phrase "communities of practice" originates with a short book of that title by Jean Lave and her then student Etienne Wenger. It has become widely used in recent years, in large part because of the thoughtful popularizations of John Seely Brown and Paul Duguid, who have their own book on the subject coming out soon from Harvard Business School Press. The idea is that a community of practice is a social group that is organized around some activity, prototypically an occupation. The theory of communities of practice describes how knowledge and learning are embedded in the community processes, so that, for example, newcomers enter the community and are progressively socialized into its ways, including both its skills and its rituals. In recent years many people have applied these ideas to the study of online community, and the study of online communities of practice has led to the useful realization that the distinction of online versus offline does not make a big difference: most communities of practice have both online and offline dimensions to their collective lives.

So now this, at last, now that the world is good and ready for it, is Etienne Wenger's theoretical tract about communities of practice. It is not easy reading, but it is a deep and sophisticated analysis of various dimensions of working together and learning together. It will be an especially valuable resource in the future as people look for more sophisticated ways in which the Internet can support the patterns of thinking and working that are cultivated and shared by far-flung communities.

Recommended: James N. Danziger, William H. Dutton, Rob Kling, and Kenneth L. Kraemer, Computers and Politics: High Technology in American Local Governments, New York: Columbia University Press, 1982.

This unjustly overlooked book, by a group whose members were then located at UC Irvine, is perhaps the founding study in organizational informatics, the unjustly overlooked field that uses serious empirical methods to study what people in organizations do with computers. They studied a large sample of American local governments, and they asked what factors determine the path that computerization takes in the various contexts. Ethnographic field methods are strong on questions and statistical survey methods are strong on answers, so they used both methods. This is remarkable enough in itself, but what I find most remarkable is the complexity of the variables that they coded for the various governments, for example the relative political positions of the various factions in each organization.

To make a long story short, they discovered that everyone played some role in influencing the directions of computerization, but that the outcomes of computerization tended strongly to favor whichever coalition was in power. If the financial people were in charge, for example, then the financial people got the major benefits of the systems once they were in place. This may not seem like such a terribly surprising result, except that it is not what any of the standard theories would have predicted. Those theories typically predict that top managers run things, or that the demands of technological rationalization run things, or that information technology is a revolutionary force that sweeps away entrenched power centers. None of those hypotheses was remotely supported by the data.

The authors refer to their result as reinforcement politics. They do not mean to claim that reinforcement politics will be found in every workplace in every sector of society. That's not how this kind of research works. But their results do argue that anybody who studies the use of computing in organizational settings should at least check whether reinforcement politics is going on, because if they don't then they may well be falling for some kind of cover story. It is also worth wondering how their ideas translate to the much more globalized process by which packaged software and computing standards are shaped. Standards in electronic commerce, for example, can easily be biased to favor one player over another, and it is not completely inevitable that the technology will revolutionize anything.

Recommended: David G. Messerschmitt, Networked Applications: A Guide to the New Computing Infrastructure, Morgan Kaufmann, 1999.

This is a very good textbook on networked applications for non-computer people. Messerschmitt is a hard-core computer guy at Berkeley, and this text grew out of a course that he cotaught with Hal Varian for students in the Berkeley SIMS program. It provides a clear explanation in plain language of concepts like network protocols, layering, components, objects, and transactions. I am probably going to use it for a course on systems analysis and design for information studies students that I am teaching at UCLA in the spring. Such courses have historically been based on thought patterns from industrial automation, but we want to do something more current. My radical plan, which will presumably get moderated, is to focus the course on networks not mainframes, and on industrial design not industrial automation, and on portable and site-specific devices rather than applications on fixed monitors. We shall see.

Messerschmitt's book is part of a larger movement to train students who are going to be critical consumers of information technology, so that they can work with the technical people and put the technical issues in a larger context of economic, organizational, community, cultural, etc issues. It also includes a bit of useful economic analysis of the virtues of layering. Readers of this list will be familiar with the general idea, which I called the "platform cycle": new layers emerge from a mass of special-purpose "stovepipe" systems because of the economies of scope they provide through all of the applications that can be built on top of them.

Recommended: John McWhorter, The Word on the Street: Fact and Fable about American English, New York: Plenum, 1998.

Those with good memories (and bad powers of forgetting dumb things) will remember the flap that erupted at the end of 1996 when the Oakland school board tried to introduce a program called Ebonics to teach mainstream English to students who grow up speaking other dialects. The talk radio brigade used misleading language to make it sound like the people in Oakland were going to teach in these dialects, encourage their use in school, and so on, when in fact the goal was to draw explicit attention to the differences between the dialects so that the children could learn how their own dialect differed from the mainstream dialect that they would need out in the world. The school board people, for their part, engaged in a lot of imprecise rhetoric about African-American dialects constituting a separate language, thereby handing all sorts of rhetorical ammo to the permanently distraught people who believe that there is One Objectively True Right Answer To Every Question, Namely Mine.

Now, if your memory is especially excellent then you will remember that the only person who talked any sense during this controversy was a Berkeley linguistics professor named John McWhorter, who happens to be black and who happens to have applied the modern tools of creole studies to the historical analysis of African-American dialects. This is his book on the matter, and the reason that you have never heard of it is that his views are certain to make nobody happy. He affirms that African-American dialects really are distinct dialects, and that they are not (for example) just bad or sloppy language. But he also affirms that African-American dialects really are dialects of English, and not of any other language. Perhaps most disorientingly, he claims that most of the distinctive features of these dialects, for example their verbs (technically, the aspect of their verbs), descend not from African sources but from English sources -- that is, from the language of English working-class people in the 17th and 18th centuries. You can see why nobody is holding parades for the guy.

His book includes a lot of other counterintuitive or otherwise unpopular opinions, such as a continual and strong assertion of the linguists' creed that there is no such thing as "good" or "bad" language, just the empirical facts of how people actually speak. You might have heard on the radio that relativism and science are opposites, but it's not always so.

All of that being said, his book is also a little frustrating. It is not well edited, and sometimes repeats bits and pieces. It is also short on evidence. The modern tools of linguistics are quite technical, and I can imagine that he wanted to keep the notation out of it.
But the result is that you really have to believe him, rather than making up your mind for yourself. I found the effect unsatisfying. He also includes some chapters on other topics that don't fit well into the whole. Nonetheless, if you want your assumptions punctured then this book is a good way to go about it.

Recommended: Norbert A. Streitz, Shin'ichi Konomi, and Heinz-Jurgen Burkhardt, eds, Cooperative Buildings: Integrating Information, Organization, and Architecture: First International Workshop, CoBuild '98, Darmstadt, Germany, February 25-26, 1998: Proceedings, Berlin: Springer, 1998.

An awful lot of technical conferences describe gadgets that, if they worked, would be bound up tightly with the social world: gadgets that are portable, that are sewn into clothing, that support cooperative work or communities, and so on. Most of those conferences are technology-driven -- they are based on a rough idea of how the devices would be used, an idea that is drawn more from plausible incremental extensions of existing devices than from empirical investigation. This is one of the relatively few such conferences that incorporates interesting ideas about the social world. Most of the papers reach, in one way or another, for an original understanding of what it means to inhabit a building, and what therefore it might mean for a building to "cooperate" with its inhabitants. Some people will be uncomfortable at the idea that the building is being anthropomorphized, but I don't get the sense that these authors have gone overboard in that direction. I just got a sense of fresh questioning about real human issues from this conference proceedings that I don't usually get from technical work.

Recommended: Susan Leigh Star and Karen Ruhleder, Steps toward an ecology of infrastructure: Design and access for large information spaces, Information Systems Research 7(1), 1996, pages 111-134.

Even though it is written in reasonably plain language, this is a demanding and complicated paper about the deep meaning of "infrastructure". One conception of infrastructure -- one which, in fact, I falsely ascribed to these authors in one of the drafts I sent out on my list -- is that infrastructure is the technology that disappears in the background and lets real work get done. While this is sort of true, it tends to gloss over the many ways in which infrastructure structures the work that it supports. If you get your power from electricity, for example, then certain configurations of artifacts and methods are practicable, but if you get your power from gas, then different configurations will be practicable instead.

The particular case that Star and Ruhleder investigated is an Internet-based "collaboratory" that was intended to support the work of a large global community of biologists who are systematically studying a small worm in order to develop methods and concepts that might scale up to the study of larger organisms later on. The collaboratory basically failed, and this paper explores why. The basic reason is that it was too hard to integrate into the diverse multitude of local work practices in the various worm labs. This leads into the deep question of the tension between global and local that is central to the use of networks to support real activities.
As their title suggests, Star and Ruhleder take their theoretical inspiration from Gregory Bateson, whose own particular version of cybernetics was based on abstract, in-principle kinds of distinctions such as first-order learning (learning how to do something) versus second-order learning (learning how to learn). I have to say that I have never gotten very much out of cybernetics, partly I think because you can't appreciate it abstractly, but instead need to apply the concepts in a sustained way to the analysis of a series of particular cases. That's what Star and Ruhleder do here, and I'm not quite sure whether the credit for what they have noticed through their analyses should belong entirely to them or whether part of it should go (as they intend) to Bateson. In any case, they notice a lot of interesting things, such as the ways in which the designers' abstract ideas about worm science fail to correspond to reality. My favorite example concerns so-called annotation systems, which have been a favorite of computer scientists for many years, most recently with the press attention to the Third Voice system for publishing one's own annotations to other people's Web pages. Even though this seems like something that people would want, in fact nobody seems to want it. This is an interesting puzzle. What is the problem? Star and Ruhleder's answer, at least in the case of the annotation system that was provided as part of the worm collaboratory, is that the scientists need to stay on good terms with one another, and even more importantly they prefer to publish their commentaries in journals so that they can get professional credit for the effort that any serious commenting takes. Annotation, then, turns out not to be simply a technical concept, but also a social concept that is embedded in the social system of the worm scientists. Other communities, organized within the confines of other institutions, will find that annotation is embedded in some different set of relationships and incentives. Noticing that sort of thing is good social science, and it is one reason why one needs good concepts and critical thinking to do social science well.

Recommended: Mark Mazower, Dark Continent: Europe's Twentieth Century, New York: Knopf, 1999. This deservedly celebrated book provides a sweeping, compelling, and wildly original picture of the history of Europe in this century. Today we often think of Europe as the home of democracy, human rights, good living, social welfare, economic cooperation, and a certain humble if occasionally spineless preference for sweetness and light. The Nazis? Well, they were an exception. Mazower will have none of this, and his book adduces a great mass of evidence, albeit much of it synthesized from secondary sources, to suggest that Europe's current happiness was very far from inevitable. The core of the argument is that the Nazis were not an aberration in the context of early 20th century European society. They were an extreme expression of that society, to be sure, and as the inherent logic of their project spiraled into war and the wartime society spiraled out of control, things happened in the German-controlled areas that did not happen anywhere else. Nonetheless, all of the component parts -- antipathy to democracy, scientific quackery made into social control, racism, and much else -- were nearly universal throughout the continent, and the people who marched into the Nazi fold did so against a background of both reality and belief that made its evil easy to ignore.
Much the same was true of the Soviet Union as well. Mazower is especially interesting on daily life within those totalitarian regimes, and on which wild theories the people did and did not believe, and which social control projects did and did not work, and just how the downward spirals got going when they did not. Perhaps Mazower's most provocative argument is that many features of post-war Europe, once the Nazis blew their chance to reorganize Europe to their own liking through their beastly behavior as occupiers, grew naturally out of pre-war strands of European thought. The European Union, after all, is a continent-wide economic zone dominated by the Germans, just as Albert Speer had planned. Recent revelations also make clear that the European welfare states did not completely shake off their eugenicist origins until decades after the war. Much else about post-war Europe makes sense as a continuous development from pre-war Europe. I find the book particularly useful as a stimulus to clear thinking about the contemporary resurgence of anti-democratic sentiment in the United States, where merely advocating democracy on the Internet can draw flaming denunciations, and where very much the same critiques of the legislature that united such otherwise disparate figures as Carl Schmitt and Friedrich Hayek can be heard articulated with elaborately vernacular indignation by commercial parties who want to get government off their backs. America and Europe are different societies, of course, but it is interesting to reflect on the complex transformations that European ideas from Anabaptism to Enlightenment to deconstruction have undergone on their way over here.

Some URL's. I should note that my lists of URL's always start with the most recent ones. I've been gathering this list for a couple of months, and so the URL's toward the end of the list might be a little out of date. Once again most of these URL's come from RRE subscribers, and I appreciate their effort.

Cyber-Conversation Research
http://www.stolaf.edu/people/roberts/psych-121/evaluation98.html

Where the Web Leads Us
http://xml.com/pub/1999/10/tokyo.html

Anatomy of a Network Intrusion
http://www.networkcomputing.com/1021/1021ws1.html

Trusted Computing Platform Alliance
http://www.trustedpc.org/home/home.htm

An Introduction to the IP Infrastructure
http://www.frankston.com/Public/Essays/IP%20An%20Introduction.asp

Interview with John Daniel of the Open University
http://www.highereducation.org/crosstalk/ct0799/interview0799.html

Global Knowledge Partnership
http://www.globalknowledge.org/

Anarchism Triumphant: Free Software and the Death of Copyright
http://firstmonday.org/issues/issue4_8/moglen/

Library Juice
http://libr.org/Juice/

Internet Content Summit
http://www.ksg.harvard.edu/people/jcamp/munichnotes.html

Local versus Global Electronic Commerce
http://www.electronicmarkets.org/netacademy/publications.nsf/all_pk/1336

maps of the Internet
http://www.cybergeography.com/
http://www.peacockmaps.com/topframe.html
http://www.cs.bell-labs.com/who/ches/map/index.html
http://www.newsmaps.com/
http://ai.bpa.arizona.edu/Visualization/demos3_intro.html
http://www.bcpl.net/~lboot/webmap2
http://www.plexus.org/omnizone/
http://cyberatlas.guggenheim.org/home/index.html
http://invisible.net/

Defect Tolerant Molecular Electronics
Algorithms, Architectures, and Atoms
http://www.stanford.edu/class/ee380/

Technoscience, Citizenship and Culture in the 21st Century
Vienna, 27-30 September 2000
http://www.univie.ac.at/Wissenschaftstheorie/conference2000/

proceedings of CPSR domain name conference
http://www.cpsr.org/conferences/dns99/dnsconf99.htm
http://www.heise.de/tp/english/inhalt/te/5345/1.html

Intellectual Property in the Age of Universal Access
http://www.acm.org/pubs/property/

very cool eclipse photo
http://antwrp.gsfc.nasa.gov/apod/ap990830.html

compromise database bill
http://www.databasedata.org

Corporate University Xchange
http://www.corpu.com/

report on the CPSR meeting on ICANN
http://www.heise.de/tp/english/inhalt/te/5345/1.html

Everything you know about the Littleton killings is wrong
http://www.salon.com/news/feature/1999/09/23/columbine/

NY Times section on e-commerce
http://www.nytimes.com/library/tech/99/09/biztech/technology/

NSI complaint messages hacked by 2600 -- very funny
http://www.2600.com/2600new/092099-mail.html
http://www.salon.com/tech/col/rose/1999/09/21/network_solutions/index.html

East Timor sites
http://www.etan.org/
http://www.etan.org/ifet/
http://www.un.org/peace/etimor/
http://www.peg.apc.org/~asiet/
http://www.easttimor.com/

Merriam-Webster Dictionary
http://www.m-w.com/dictionary.htm

review of "The Weightless World" by Diane Coyle
http://www.santafe.edu/~shalizi/reviews/weightless-world/

The Cybercommunist Manifesto
http://www.nettime.org/nettime.w3archive/199909/msg00046.html

outtakes from Richard Nixon's resignation speech
http://www.rvi.com/nixon/nixon.ram

Current Developments in Internet Privacy
http://www.anu.edu.au/people/Roger.Clarke/DV/ICurr9908.html

The Coming Software Patent Crisis: Can Linux Survive?
http://linuxjournal.com/articles/currents/003.html
http://slashdot.org/articles/99/08/28/1610205.shtml

Burn All GIFs Day
http://burnallgifs.org/

Reputation Managers are Happening
http://www.useit.com/alertbox/990905.html

Filtering the Internet: A Best Practices Model
http://webserver.law.yale.edu/infosociety/filtering_report.html

Development Informatics
http://www.man.ac.uk/idpm/idpm_dp.htm#devinf_wp

Beyond Black Boxes
http://el.www.media.mit.edu/groups/el/papers/mres/bbb-jls/

Tomorrow's Professor Listserv
http://sll-6.stanford.edu/projects/tomprof/index.html

Your Rights Online
http://slashdot.org/article.pl?sid=99/09/09/1046218&mode=thread

Defending the Internet Revolution in the Broadband Era
http://e-conomy.berkeley.edu/pubs/wp/ewp12.html

Papyrus News: Global impact of IT on language, literacy, and education
http://www.lll.hawaii.edu/web/faculty/markw/papyrus-news.html

Computer Programming for Everybody
http://www.python.org/doc/essays/cp4e.html

Automatic Wireless Transmission of Serious Injury Probability Ratings
http://www.nhtsa.dot.gov/cars/problems/studies/acns/champion.htm

strange court decision claiming privacy invasion is free speech
http://www.kscourts.org/ca10/cases/1999/08/98-9518.htm

The Evolving World of E-Tailing
http://www.news.com/Perspectives/Column/0,176,355,00.html

end