Some notes on the history of unreason and the futurology of the
Internet, plus follow-ups and URLs.


Let me repeat my cheap pen deal: if you send me a check for US$20 (or
the equivalent in another currency) made out to Amnesty International
then I will send you an interesting cheap pen.  Here is my address:
Dept of Information Studies; UCLA; Los Angeles, CA 90095-1520; USA.
You may have heard about the arms race that's been going on between
the doctors who have made a specialty of diagnosing the signs of
torture and the torturers who have made a specialty of torturing
people in ways that the doctors can't diagnose.  I want the doctors
to win, but that'll be tough.  The big fashion in torture is to devise
situations in which the victims inflict pain on themselves.  These
techniques have spread around the world.  You can confine the victim
in a small box for a few days so that the pain is inflicted through
muscle cramps, or you can shut the victim in an outhouse or in a box
with a decaying corpse so that the pain is inflicted through nonstop
retching.  Another approach is to put the victim in a room that has
nothing but a small platform and six inches of cold water; the pain
is inflicted by sleep deprivation and the endless changing of posture
to try to avoid having all of one's body heat sucked out.  The great
virtue of these techniques is that they don't leave marks.  If you
tie someone's wrists behind their back and then hang them up that
way, you'll cause plenty of pain, but you'll also yank the nerves
hard enough that the doctors have something to testify to in court.
Electricity burns the tissues.  And so on.  But there are gray areas.
For example, the doctors are trying to figure out why victims from
some of the more advanced countries appear to have deep brain damage
that is not consistent with anything so crude as blows to the head.
So it's a serious thing.


I've now sent out all of the leftover readers from our "Information
and Institutional Change" course.  I wish I could use UCLA's course
reader publishing service to produce and sell anthologies of the work
that I recommend here on RRE, but I don't know what the law would say
about it if I did.  We do pay the copyright fees, but I'm not sure
whether that would suffice for something that I would advertise on the
Internet, even if on a non-profit basis.


The year 2000 is an arbitrary marker, yet the passing of centuries
helps us to get a distance on the past.  Looking at the last few
centuries of Western intellectual and political life, we see a war
over rationality.  In the Enlightenment the idea had arisen that
social progress would be promoted by a norm that all disputes should
be settled by reason.  This prospect of a universal civil society of
reasonable people appealed to people who had lived through religious
wars, and it is unfortunate that the losing side in the American
revolution keeps pretending that American political institutions
were founded by religious warriors and not Enlightenment philosophers.
But reason has also taken on different and stronger connotations.  The
French revolution was a revolution of reason -- in fact a religion of
reason -- and lacking the institutional conditions to actually run the
society along Enlightenment lines, it rapidly devolved into a tyranny
of those who purported to impose reason on everyone else.

Watching this calamity, conservatives throughout Europe but especially
in England were reinforced in their belief that the common people
should not be engaged in rational thought.  This terrible belief was
a commonplace of conservatism until it was driven underground by the
spread of egalitarianism in recent times.  Conservatives have looked
in horror as the religion of reason, far from being discredited by
the French experience, has taken on new and more entrenched forms.
New religions of reason arose and multiplied as the prestige of
technology faced the turbulence of the market, and all of these
religions assigned to the engineer a God-like role in discovering
a rational order and then imposing it on the world.  Even as the
religious language faded away, a thousand metaphors and conventions
of technology preserved this background assumption of the God-like
engineer.  So complete did the victory of rationalizing philosophies
become that they were found at every point in the political spectrum,
from communism to fascism to industrial management to Cold War
government policies to professions from medicine to urban planning.

In response to this alarming development, conservatives and democrats
paradoxically found themselves on the same side of the war against
rationalization.  Thus, for example, one sees in the literature of
both conservatives and the democratic left an emphasis on the tacit
knowledge that so often escapes the authoritative but superficial
representational practices of rationalization.  Of course, the two
literatures draw quite different conclusions from the prevalence of
tacit knowledge: for Michael Oakeshott, it argues that politics should
be left to the hereditary aristocrats who have handed down a body
of tacit political knowledge, whereas for union activists it argues
against the practicability of management schemes for rationalizing
work by capturing the workers' knowledge.  One also encounters in both
literatures an emphasis on decentralization; the New Left version of
decentralization was once quite prominent, but now has to be written
out of history in order to sell a simple story about liberals as the
friends of centralized big government.

All of this history having gone down, the present moment is one
of disarray.  Rationalized urban planning has been discredited by
the failures of "urban renewal" and the rise of community-based
political power, yet many other rationalizing practices continue at
full strength.  The overt religion is gone and the florid ideology
of the "one best way" has withered, but in many settings these sorts
of practices continue to shape language, thought, representational
conventions, methods of coordinating work, and the relations between
the powerholders and the people whose lives get rationalized.

Society must learn to renounce the arrogance of rationalization
without renouncing whatever is healthy about rationality.  This is not
a problem for conservatives, whose vision of a stable social order and
a culture of deference has no need for rationality of any sort, except
perhaps among the narrow councils that periodically adjust the system
in light of changing conditions.  Conservatism operates by disparaging
and destroying the capacity for rational thought among the majority
of people, notwithstanding their efforts to educate themselves and
insist on the equality of treatment that a rational society requires.
So, for example, conservatives now insist that mathematics education
return to the rote learning of algorithms, and that it be cleansed
of every element of understanding of mathematical concepts.  They
likewise insist that history be turned back into the memorization
of historical facts with no real study of the reasons for historical
processes.  Conservative publications and pundits try to popularize
styles of speaking and thinking that are irrational and arbitrary.
Of course, some people who regard themselves as conservatives --
perhaps most of them -- are unaware of what conservatism really is;
they represent themselves as followers of such deep philosophers as
Ronald Reagan while remaining oblivious to the large-scale project
of restoring the philosophy and social order of Edmund Burke.  The
"conservatism" that these people know is basically a public relations
campaign aimed at persuading them to lay down their capacity for
rational thought.  We could pity them if they weren't so dangerous.

Caught in the middle of this conflict are those sane people who
wish to retain a meaningful sense of rationality without subscribing
to the secularized religion that has long shaped technological ways
of thought.  Philosophies tend to become defined by the things they
most strongly oppose, and many democratic philosophies have allowed
themselves to be defined in opposition to rationalization.  Because
the ideology of rationalization crossed so many political lines, it
can be terribly confusing to find solid ground for the reconstruction
of public norms of reason.  Moreover, the conservative revival has
sought to confuse matters by using the rhetorical devices of reason --
accusations of double standards, for example -- in dishonest ways as
methods of undermining both the devices themselves and the democrats
who have insisted on them.  Conservative pundits have likewise made
a practice of accusing democrats of every crime that conservatives
have ever been accused of.  Thus an aristocrat such as George W. Bush
can, in upside-down fashion, throw the word "elite" at the generation
of scholarship kids who were accepted to Yale instead of "legacies"
such as himself.  Precisely because his allies know that he doesn't
mean it, his elitism can dress itself in antielitist vocabulary.

How, then, to recover a public norm of rationality without becoming,
or credibly being accused of becoming, the sort of rationalizer who
wrote so much of 20th century history?  A starting-point, it seems
to me, is to learn to recognize irrationality when one sees it.  The
whole industry of public relations, from which many of the rhetorical
devices of modern-day irrationalism descend, is organized around the
same mechanisms of irrational "primary process" thought that Freud
identified.  The alignment is not accidental, given that the theory
of public relations was first codified by a nephew of Freud's, Edward
Bernays, whose intentions were quite explicitly to use Freud's method
to manipulate public opinion so that public decisions could be made
by a narrow elite.  This is why so much public relations work tries
to dissolve public issues into simple concepts and then fashion or
break mental associations among them; that is how Freud believes the
primary process works.  Likewise, much current political discourse
exhibits a mechanism that Freud called projection -- falsely accusing
others of doing what you are in fact doing to them -- that is perhaps
the most primitive aspect of primary process.

Thus the Republican Party and its allies in the press have been waging
a campaign of character assassination against Al Gore by the simple
method of falsely accusing him of being a liar.  A recent study of
media content by the Pew Research Center found that an astonishing
76 percent of the media's coverage of Gore focused on suggestions
that he lies and exaggerates, and that he is associated with scandal.
That the accusations against Gore are false has been well-documented
here and elsewhere.  This is projection at the most primitive possible
level, and I am not sure that most people appreciate the magnitude of
the situation.  The problem is no longer that the media are printing
tendentiously selected facts fed to them by think tanks, although
that still happens.  The problem, rather, is that the media routinely
print things that are not true, and that they continue to repeat them
and elaborate on them well after they have been corrected.  The level
of outright psychosis in the media today is far beyond anything that I
have ever seen in my lifetime, and it also goes way beyond the already
overhyped accusations of "media bias" beloved of punditary projection.

I find this frightening.  Even rational people who plan to vote against
Al Gore should find it frightening.  What could possibly explain it?
The simple fact that conservatism has historically been at war with
rationality is not enough of an explanation.  What we are watching
is an institutional shift.  Confronted with the popular mobilizations
of the 1970s, a coalition of interest groups and ideologically minded
wealthy people began to build a network of think tanks with which to
train sympathetic activists and pundits.  Over the years this coalition
and its clients have grown into a considerable network whose ambition
is to regain control of every institution in society.  Their work is
coordinated not by a hierarchical board of directors, though they are
highly connected among themselves, but by an evolving strategic sense
that is periodically crystallized by movement intellectuals and spread
about by movement bulletin boards such as the editorial page of the
Wall Street Journal.

As this movement has developed critical mass, it has attracted the
same set of opportunists as any promising movement.  Those wishing to
get in on the ground floor of the new order have made their move long
ago, and those institutions wishing to retain their position in the
new order are already feeling out their accommodation to it.  In the
case of the New York Times and the Washington Post, this accommodation
first took the form of credulous and one-sided reporting of the most
incoherent accusations against Bill Clinton, and now it takes the
form of the most astoundingly biased reporting on Al Gore's campaign.
The Times and Post's correspondents on the Gore campaign, Katherine
Q. Seelye and Ceci Connolly, publish day after day the most distorted,
skewed reporting that I have ever seen.  Both reporters have fabricated
quotes from Gore, and both of them regularly frame their reporting
in terms of Gore's supposedly phony motives for even the most trivial
actions.  (This has been thoroughly documented by the Daily Howler,
and I have a large library of further examples of my own.)  Their
writing is nasty and sophomoric to a
degree that I have never seen in those newspapers, and it is backed
up by similar tactics from most of those papers' other political
reporters.  The effect is especially striking when the Times and Post
are compared with newspapers that have not signed on to the new order,
such as the Los Angeles Times and the news sections (as opposed to
the opinion pages) of the Wall Street Journal, both of which are much
more neutral in presenting and restrained in interpreting the campaign
news.  Even National Public Radio has signed on; the nastiest political
interview I have ever heard was the avuncular Bob Edwards' 7/19/00
interview with Gore, which literally consisted of nothing but loaded
questions about why nobody likes him.

The media see a profound change in the works.  You may recall
the unprecedented political dynamics in the Republican Party when
George W. Bush first emerged as a potential presidential candidate.
Politicians usually withhold their candidate endorsements until
the contest takes form; they are hoping to extract political capital
in exchange for their support.  But more importantly, they know that
they will make a long-term enemy if they endorse the wrong candidate,
and so they want to wait until they can be sure that they are going
to guess right.  With George W. Bush's candidacy, however, a vortex
developed in which politicians rushed to be on board as soon as
possible.  Positive feedback set in: once Bush's victory became a
safe bet, those who had intended to wait before endorsing him became
concerned that they would be left in the wilderness.

This coalescence of political power swept outward in successive waves,
and when the Bush coalition finally crushed John McCain in its savage
campaign in South Carolina, the Bush vortex consumed the whole party.
But it didn't stop there.  People in the media don't just see a change
in political majorities; they see a movement that plans to secure
dominion over the major institutions of society.  Put yourself in
the position of the New York Times: you are a major force in society
because of your central location in so many elite social networks,
so you push the issues that keep this elite consensus together: free
trade, for example, but also cultural liberalism.  If a different elite
network should capture the major institutions of society, however,
you risk being an outsider.  Conservatives would rather talk to the
Washington Times than the New York Times -- or Washington Post --
but so long as conservative networks don't control the institutions
then the Times and Post will live.  But if that changes then, given
the economics of newspaper publishing, both papers' very survival
might be in doubt.

When Bush won the primary in South Carolina, the press swung violently
away from McCain -- the safe conservative choice from their point of
view -- to Bush.  The before-and-after contrast in coverage is amazing.
Bush's evil campaign tactics did not change after South Carolina; what
changed is the media's willingness to print the Republicans' attack
messages as truth.  So now Bush can pretend to be above the "politics
of personal destruction", and he can express disgust whenever someone
reveals his running mate's voting record, even though the same politics
persists and even escalates.  Polls show that voters agree with the
Democrats on the issues, but they are willing to consider a vote
for the party of tobacco, guns, impeachment, and the religious right
because of "character" -- because of the slime they hear in the media.
We are not talking about a matter of interpretation and judgement.
If Gore were truly an evil person then we would have heard some factual
evidence of it by now.  But we have heard only lies -- literally dozens
of them.  The people who are ambitious enough to succeed in the media
are also ambitious enough to see which way the wind is blowing and get
on the side that's winning, and they clearly see something that most
of us do not.

What are sane people to do?  First, I think, they must prepare to go
into opposition.  The purges have hardly begun, and the same pundits
who make a living slandering Al Gore will seek out new targets as long
as they live.  The elderly can remember a time when people like that
ran the world, but the rhetoric that they confronted in their day was
bright sunshine compared to the professionalized nonsense that fills
the media today.  In order to function as an opposition for the long
haul, sane people must protect their sanity.  They must learn names
for the varieties of craziness that fill the airwaves today and the
lawbooks and schools soon enough.  This is the truth: no matter how
hard you ignore it, no matter how hard you dismiss it, the insanity
will invade your mind if you can't name it -- if you can't explain
concisely and in plain language what is wrong with it.  You might
think you're immune, but you're not.  Rational thought will disappear
from the world unless you put down what you're doing and equip yourself
to preserve it.


Let us continue our examination of futurological predictions for the
year 2000 with the most successful prediction that I know about.  It
comes from the late J.C.R. Licklider's "Libraries of the Future" (MIT
Press, 1965), and I think the original date of the report was 1961.
I'm quoting from the excerpts in Mark Stefik's book "Internet Dreams"
(MIT Press, 1996), starting on page 27:

  Economic criteria tend to be dominant in our society.  The economic
  value of information and knowledge is increasing.  By the year
  2000, information and knowledge may be as important as mobility.
  We are assuming that the average man of that year may make a capital
  investment in an "intermedium" or "console" -- his intellectual
  Ford or Cadillac -- comparable to the investment he makes now
  in an automobile, or that he will rent one from a public utility
  that handles information processing as Consolidated Edison handles
  electric power.  In business, government, and education, the concept
  of a "desk" may have changed from passive to active: a desk may
  be primarily a display-and-control station in a telecommunication-
  telecomputation system -- and its most vital part may be the cable
  ("umbilical cord") that connects it, via a wall socket, into the
  procognitive utility net.  Thus our economic assumption is that
  interaction with information and knowledge will constitute 10
  or 20 percent of the total effort of the society, and the rational
  economic (or socioeconomic) criterion is that the society be more
  productive with procognitive systems than without.

This is pretty impressive.  The main problem, apart from the "average
man" thing, is that he was not optimistic enough: most people spend
more like a tenth as much on their personal computer as they do on
their car, and so there's little need to rent them out.  And instead
of public utilities we have monopolies.  But he was darn close.

He later provides a series of 25 criteria for "procognitive systems".
This list is fascinating.  Most of the criteria require only a little
charity to be considered true about the Internet today:

  1. Be available when and where needed.

  3. Permit several different categories of input, ranging from
     authority-approved formal contributions (e.g., papers accepted by
     recognized journals) to informal notes and comments.

  5. Facilitate its own further development by providing tool-building
     language and techniques to users and preserving the tools they
     devise and by recording measures of its own performance ...

 12. Provide flexible, wide-band interfaces to other systems, such as
     research systems in laboratories, information-acquisition systems
     in government, and application systems in business and industry.

 16. Evidence neither the ponderousness now associated with
     overcentralization nor the confusing diversity and provinciality
     now associated with highly distributed systems.

One criterion that is arguably not yet true is surprising: coming
from a data-centered world, Licklider failed to foresee the
document-centered nature of the Web, and the concomitant slowness of
reintroducing structured data to that model using XML.

  2. Handle both documents and facts.

Several of the criteria do fit many parts of the Web, but nonetheless
miss the spirit by importing a "library" model.  In fact the Web is
much less structured, controlled, standardized, organized, etc. than a
library, and digital library services -- though surely needed -- have
yet to be implemented and used on a large scale on the Web.

  9. Permit users to deal either with meta-information (through which
     they can work "at arm's length" with substantive information),
     or with substantive information (directly), or with both at once.

 13. Reduce markedly the difficulties now caused by the diversity of
     publication languages, terminologies, and "symbologies".

 14. Essentially eliminate publication lag.

 15. Tend toward consolidation and purification of knowledge, instead
     of, or as well as, toward progressive growth and unresolved ...

In fact, criteria 18-23 are all "criteria that are now appreciated
more directly by librarians than by users of libraries": cataloguing,
keeping track of users' interests, bookkeeping and billing, tools for
knowledge organization, facilities for implementing system policies.
All of these are coming, but none of them are here on a large scale.

Recall that Licklider wasn't offering "predictions" but "criteria"
for "procognitive systems", and one really can argue that the relative
lack (so far) of library-related functionalities implies that the Web
is not yet adequately procognitive.  But one cannot say the same for
a final category of criteria that do not fit the Web at all.  Whereas
the library functionalities are all definitely coming, this last set
of completely absent criteria are precisely the ones that require the
network to exhibit intelligence of any kind:

  5. ... and adapting in such a way as to maximize the measures.

  7. Converse or negotiate with the user while he [sic] formulates
     requests and while responding to them.

  8. Adjust itself to the level of sophistication of the individual
     user ...

 17. Display desired degree of initiative, together with good
     selectivity, in dissemination of recently acquired and "newly
     needed" knowledge.

 25. Handle heuristics (guidelines, strategies, tactics, and rules of
     thumb intended to expedite solution of problems) coded in such a
     way as to facilitate their association with situations to which
     they are germane.

Licklider was a prominent figure at ARPA, which was also funding the
original research on AI at the time, and so it is understandable that
such ideas show up in his list.  Yet although AI has found its uses in
particular industrial niches, it has become nowhere near as pervasive
as people in that place and time had assumed.  The problem with the
AI criteria is partly the user-interface need for what Alan Kay called
the "user illusion": people hate it when computers try to be smart,
because (among other things) routine work requires the computers to be
predictable, and thus to seem like they are under the user's control.
But another part is that, to really be useful, most AI methods require
large amounts of structured data, including structured representations
of the intentions, goals, actions, knowledge, etc. of the user.  This
is not the direction that the technology has taken, for several reasons:
(1) it's hard to devise a data model that captures the full complexity
of human life; (2) even if you had a realistic data model, it's hard
to capture this sort of information in a structured way; (3) many of
the methods require complete representations of activities that people
want to conduct in many media where the practicalities of capture are
inconsistent; (4) metadata standards have taken longer than expected
to define and institutionalize; and (5) as mentioned above, the Web's
document model has meant that structured data has generally not been
a high priority.  There are some settings in which large amounts of
data are captured about human work activities, for example in workflow
systems, and in those contexts it's easier to imagine the AI methods
being useful.  But those are also the most regimented of settings, and
it's entirely unclear whether the methods will also be useful in the
less regimented settings that computers are also supposed to facilitate.


The issue of the month has been the etiquette of using one's cell
phone in a public place.  Let's explore a technological solution to
the problem.  With Bluetooth, devices will soon be able to talk to one
another on the basis of proximity.  So we could equip all cell phones
with a Bluetooth device.  This device automatically talks to devices
in the walls that explain the "house rules".  A Bluetooth device on
an airplane, for example, could instruct all the cell phones to shut
themselves off as soon as the plane pushes away from the gate.  In
a theater, all cell phones could be instructed to turn their ringers
off, and the smarter phones would automatically shift to vibration
mode.  Quiet restaurants could instruct phones differently from noisy
restaurants.  Phones could be shut off on dangerous streets or even
dangerous traffic conditions.  In exchange for these constraints,
the phones could also be provided with "location-based" services,
such as closed-captioning or an on-screen display in a theater of the
approximate time until intermission (assuming anyone would want such a
thing).
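For concreteness, here is a minimal sketch of how a phone might apply
such "house rules".  The message format and field names are invented
purely for illustration; nothing like this exists in the Bluetooth
specification:

```python
from dataclasses import dataclass

# Hypothetical "house rules" message that a venue's beacon broadcasts.
# Both field names are invented for this sketch.
@dataclass
class HouseRules:
    ringer_allowed: bool        # may the phone ring audibly?
    radio_allowed: bool = True  # may the phone transmit at all?

class Phone:
    def __init__(self):
        self.mode = "ring"
        self.powered = True

    def apply(self, rules: HouseRules):
        # An airplane's beacon would clear radio_allowed entirely; a
        # theater or quiet restaurant would clear ringer_allowed, and
        # a smarter phone falls back to vibrate rather than silence.
        if not rules.radio_allowed:
            self.powered = False
        elif not rules.ringer_allowed:
            self.mode = "vibrate"
        else:
            self.mode = "ring"

phone = Phone()
phone.apply(HouseRules(ringer_allowed=False))  # entering a theater
print(phone.mode)     # vibrate
phone.apply(HouseRules(ringer_allowed=True, radio_allowed=False))  # airplane
print(phone.powered)  # False
```

The point of the sketch is only that the policy decision lives in the
phone, which interprets the venue's declared rules, rather than in the
venue itself.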

It sounds nice, but the problems are obvious.  One is spoofing: it's
already trivially easy to build a cell-phone jammer, and it wouldn't
be hard to capture the shut-off signals and play them back on a street
corner.  You'd need some kind of challenge protocol where the cell
phones send coded queries to the device that's trying to shut them
down to make sure it's real.  But even then someone could simply buy
such a device and hide it somewhere.  (It wouldn't need any displays,
moving parts, switches, or connectors.  And because its broadcast range
would be so small, that being the whole point, its batteries probably
wouldn't need changing very often.)  So maybe we'd have to outlaw the
use of such devices in public places.  They'd be easy enough to find,
so maybe the law would pretty much enforce itself.  But already we're
getting into multiple layers of hair, as the hackers used to say.


In response to my notes on journalistic cliches, one reader observed
that most of the verbs that I wished to exterminate were actually good
solid Anglo-Saxon one-syllable words.  He accused me of taking sides
in the ancient class war between the short-word-speaking Anglo-Saxons
and the long-word-speaking Normans.  I have to admit that I hadn't
noticed the pattern.  But his argument can't be right.  Writing can
be bad in more ways than the number of syllables in its words.  Even
so, his argument does suggest an explanation for the peculiar norms
of modern journalistic writing: perhaps they were originally formed
by an ideology of "one syllable good, four syllables bad" that evolved
into a rigid system of cliches.  If someone objects to the cliches,
says the ideology, well, that's just because they want to write in
Latin and not English.  I have nothing against reporters as a class.
In fact I've generally found the ones I've dealt with to be serious
people.  It's just that I read five newspapers a day, so I find the
onslaught of cliches oppressive.


In response to my exercises in literary analysis of both technical and
political texts, several people have asked for a list of accessible
readings about the methods of literary analysis.  Alas this is easier
said than done.  Literary critics, like every group, are a world
of their own, and they write largely for one another, presupposing
a shared background knowledge of past movements and their fights.
I am not a part of that world and don't care about its fights.  Nor
do I make any claim to being at the cutting edge of literary analysis,
or even knowing where the cutting edge is.  It's a tool, and I care
about what I can find with it.

That said, a little history.  What makes the study of language
hardest and most interesting is that all language has two faces.
One is formal: the structures of grammar and poetics that are entirely
internal to the text.  The other is material: the way the text is
embedded in the real world, including the life of the author, the
institutions and political economy of the production and consumption
of the text, the real things in the world (if any) that the text was
about, and so on.  Schools of criticism tend to make strong claims
about the proper object of literary study, usually in reaction to the
shortcomings of the school that had reigned before them.  As a result,
literary study tends to lurch between formalist and materialist
extremes.

In mid-20th century United States, the New Criticism was a reaction
against an especially woolly school of literary study that subsisted
on dim, untheorized tales about the author's life, and on equally
vague and ungrounded aesthetic judgements.  The New Criticism was
therefore stoutly formalist, concerning itself in a quasi-scientific
way with the text alone and in isolation from all material things.
This was actually a good thing in the long run, because it raised the
standards of rigor in the field.  The New Critics' methods of "close
reading" are clear ancestors of the kinds of reading I do here on this

Come the 1980s, various influences came together to start an explosion
of very creative critical work in the United States (and not just in
the United States, but that's what I know).  This work can be arrayed
all along the formalist-to-materialist spectrum, but it all had that
sense of being "technical", driven at all times by close and sustained
attention to the actual workings of the text.  Whatever this movement's
excesses might have been, it represented humongous progress over the
woolliness of the former establishment.  That sense of rigor fell away
in the 1990s as attention turned more toward substantive themes in the
texts and away from, ahem, textuality as such.  So it is the work of
the 1980s and its lineage back into the New Criticism that I draw on.
Here are some relatively accessible readings that may give some idea:

William K. Wimsatt, Jr. and Cleanth Brooks, Literary Criticism: A
Short History, New York: Knopf, 1957.  This is the New Criticism's
history of the world.  It is not brief, but it is a compelling story
of the history of close attention to the way texts work.

Jane P. Tompkins, ed, Reader-Response Criticism: From Formalism to
Post-Structuralism, Baltimore: Johns Hopkins University Press, 1980.
If you had to pick one school that my close-reading methods most draw
on, it would be reader-response criticism, which as the name suggests
interprets texts in terms of the responses they elicit in readers.
This can involve an empirical investigation of readers, or it can
reconstruct the "implied reader" that the text itself seems to call
for; other methods are possible.  Tompkins' is the standard anthology
on the subject.  The main problem with reader-response criticism is
probably obvious: which reader?

Frank Lentricchia and Thomas McLaughlin, eds, Critical Terms for
Literary Study, Chicago: University of Chicago Press, 1990.  This
is a book of great smartness, a clear and compact summation of the
best work of the 1980s by the people who did it.  Each brief article
explains concepts and applies them to an actual text.  There was a
second edition in 1995 but I haven't read it.

Imre Salusinszky, Criticism in Society: Interviews with Jacques
Derrida, Northrop Frye, Harold Bloom, Geoffrey Hartman, Frank Kermode,
Edward Said, Barbara Johnson, Frank Lentricchia, and J. Hillis Miller,
New York: Methuen, 1987.  This is an interesting set of interviews
with many of the prominent critics of the period.  An amusing feature
of the interviews is that each critic is set to analyzing the same
text.  It's interesting to see what they notice, though they aren't
given enough time to get into it.

Jonathan Culler, On Deconstruction: Theory and Criticism after
Structuralism, Ithaca: Cornell University Press, 1982.  For those
who are motivated to study Derrida, this is the best place to start;
Christopher Norris' more polemical books are widely recommended as
well, and Geoffrey Bennington's book is good at an advanced level.
Culler has the advantage of including a long chapter on reader-
response criticism as well.  (Note that deconstruction and reader-
response criticism are often confused in the opinion pages.  They
are different things.)

Jacques Derrida, Positions, translated by Alan Bass, Chicago:
University of Chicago Press, 1981.  I don't recommend that beginners
even try to read Derrida.  Everything he does is quite rigorous, but
if you don't understand what on earth he's doing then it will all seem
like complete mishegas.  But if you do want a starting place, this set
of interviews is it.

Mark C. Taylor, ed, Deconstruction in Context: Literature and
Philosophy, Chicago: University of Chicago Press, 1986.  Derrida was
such a huge success in the United States (as opposed to France, where
he was merely a well-known outsider) because Americans were generally
not familiar with the long line of philosophers and social theorists
that he evolved from.  This anthology provides a striking fast-forward
through the intellectual history.

A last word about literary analysis.  All human things are complicated,
and texts are especially so.  Neither these theorists nor their methods
have any authority; the argument isn't "Derrida says it so it's true",
and deconstruction (to take one method among many) offers no hard-and-
fast generalizations.  You just have to apply it to each text and see
where it takes you.  At the end of the day, if you don't have an actual
argument in which conclusions follow defeasibly from evidence then you
have nothing.  And once you do have such an argument, it doesn't matter
whether you got your methods from Derrida or Stanley Fish or Cleanth
Brooks or Aristotle: you should give the people credit if your logic
parallels theirs, but it's the text that's doing the work.  You're
just the one pointing it out.


A while back I asked if anyone could identify the locus classicus of
the theory that the United States' industrial leadership originated in
large part in its large, homogeneous domestic market, which provided
American industry with the earliest experience using scale-dependent
production technologies.  Discussion with various online friends of
friends made clear that this theory is false through the nineteenth
century, but that it becomes true with the consolidation of a national
political and technical infrastructure around 1900.  Here is at least
one version of the story:

  During the later decades of the nineteenth century, the completion
  of the new transportation and communication systems based on steam
  power and electricity led to a recurrent wave of technological
  innovations in the industrial processes and their products.
  The potential of these technologies for growth rested on the
  unprecedented volume of production and distribution made possible by
  the new railroads, steamships, telegraphs, and cable networks. ...

  Because the United States was favored by an abundance of raw
  materials and a large and growing population, it took the lead in
  commercializing the new technologies as its railroad and telegraph
  networks neared completion.

  Alfred D. Chandler, Jr., Franco Amatori, and Takashi Hikino,
  eds, Big Business and the Wealth of Nations, Cambridge: Cambridge
  University Press, 1997.  Introduction, pages 8 and 9.

The core message of this book is that big business played a central
role in economic growth by providing a nexus for the interaction of
capital, technology, and organizational knowledge.  In particular,
the new technologies both enabled and required economies of scale and
scope, so that only a large firm could invest the capital needed to
use them, and they also required large amounts of technology-specific
organizational learning -- not just how to make the stuff but how to
sell it, coordinate it, adapt it to new purposes, and so on -- that
then created important barriers to entry for the largest successful
firms.  The message is summarized at the end of Chandler's own chapter
on the United States:

  This essay has reviewed the role of the large industrial enterprise
  in making the United States the leader in technological change
  during the twentieth century.  Its firms did not necessarily so
  become by inventing or even pioneering in the new technologies.
  During the first wave most of the initial innovations came from
  Europe, in the interwar years from both Europe and the United
  States, and after World War II more from the United States.
  The US industries became worldwide technological leaders ...
  by commercializing these innovations on a scale made possible by
  the size of national and global markets.  It was this large-scale
  commercialization that created the initial learning base and then
  the core of a larger industrial nexus, each of which, in turn,
  became a dynamic element in continuing learning and growth.  But,
  as the rise and decline of companies and even industries such as
  consumer electronics indicate, in these high-tech industries, and
  also in medium technology ones such as motor vehicles and steel,
  the initial advantages did not insure continued strength.  Learned
  product-specific organizational capabilities had to be maintained
  and enhanced.  Once capabilities disintegrated, competitive power
  rarely returned (pages 99-100).

Along the way, Chandler draws on Flamm and Cortada, among others, to
remind us of the important role of US government antitrust action in
creating the conditions for competition and technical progress in the
computer industry.

  IBM's stiffest competition in the 1970s came from its own products
  made by other companies, some under license, but more often not.
  (Under a US Department of Justice antitrust consent decree in 1956
  IBM had agreed to license its innovations to all comers) (page 92).

  The transformation in software from customized to standardized
  application packages soared after IBM "unbundled" its software
  from its hardware in 1969 under antitrust pressure.  By selling
  it to any user, IBM saw its software quickly become the worldwide
  standard to which applications software makers shaped their packages.
  Thus in 1968, of the $400 million in the US revenues in software,
  $300 million came from customized programming.  But in 1978 when
  the total was $1.5 billion, $1 billion of it came from packaged
  software and only $500 million from customized products (page 94).

This history is worth recounting in the context of the Microsoft trial
appeals, as critics compare the government's antitrust suit against
Microsoft to the one against IBM that the Reagan administration
abandoned in the early 1980s.  The truth is that, even despite IBM's
years of delay, government antitrust action had managed to extract
concessions from the company that produced long-term benefits for
the industry as a whole.


My close readings of professional nonsense have brought the sorts
of reactions you would expect, though not so many of them.  The most
common objection from the left is my characterization of the current
jargon as "new".  After all, I'm told, "these tactics" go way back.
The fallacy here is that anything can be made to seem "old" if you
define it vaguely enough.  Propaganda, sophistry, doubletalk, and
all-around confusion go way back.  But I'm describing something much
more specific.  It has a lineage: public relations.  But it is far
more developed and systematic than the public nonsense of even ten
years ago, and I have tried to enumerate some of its properties.

The most common objection from the right is actually similar; I am
told that my failure to identify any examples of "these tactics" from
liberals marks me as "partisan".  Liberals have certainly said dumb
and even twisted things, but I am not aware of any liberals who use
the specific repertoire of rhetorical devices that I have described.
In fact, I have provided a couple of examples of what that would
be like, and I think I've shown that it would be most unfamiliar.
Should liberals learn these techniques?  Yes, they should, but only
to immunize themselves, not to try to match the slickness of the pros.


My 6/12/00 notes quoted the following passage from Paul Gigot's 6/9/00
Wall Street Journal column on the Microsoft antitrust decision a little
out of context:

  The Microsoft chairman didn't bow enough to bureaucrats, or canoodle
  enough with politicians.  He gave away too little money too late,
  when it only looked cynical.  His e-mails were too tart, or too
  honest, or something.

Gigot was attributing these views to unspecified others, embracing at
least their spirit without quite saying so.  This, together with the
sarcastic tone, is a common device on the Journal's editorial pages.
But my overall characterization of his column is unaffected: he views
the trial as a game, and not as something governed by law.


My comments on the predictions from The Futurist claimed that mass-
produced prefabricated housing was something that society had never
even expressed much interest in making true.  Architects have told me
that because of transportation costs, "stick-built on the site" still
beats prefabrication for most purposes.  But maybe that's just wishful
thinking on the part of the architects, most of whose buildings are so
heavily constrained by economics and zoning that the architect serves
mostly as a decorator who gives an artistic facade to an otherwise raw
expression of political economy (see, for example, Carol Willis, Form
Follows Finance: Skyscrapers and Skylines in New York and Chicago,
Princeton Architectural Press, 1995).  I'm told that there are plenty
of developments being constructed using pre-fab homes.  They build the
walls in the factory and truck them in.  It is not a happy thought.


On the subject of cheap pens, I received a message from a real, live
cheap pen magnate.  He tells me that his company imports ten million
pens a year to the US, mostly brass pens from Taiwan but also plastic
pens from China and Germany.  He wrote me to share the information
that the Beifa pen that I bought in Sofia was actually made in Ningbo,
Zhejiang, China, where (he tells me) they actually have a sizeable
assortment of small and medium factories.


Some URL's.


EU accepts US "safe harbor" regime

Politics and the English Language

Search Warrants for Online Data Soar

an especially bad MS Outlook security vulnerability

Workplace Monitoring and Surveillance

Privacy Sleuthing Goes Pro

UK Passes E-Mail Snooping Bill into Law

Kitchen: Selecting Blendolini Causes Choco-Banana Shake Hang

Voters Unmoved by Media Characterizations of Bush and Gore

Suit Says Police Violated Protesters' Rights

Phone Firms Accused of Deception