Some more notes, a little less ponderous this time, on subjects such
as scholarly publishing, propaganda, community networks, interactive
design, transaction costs, and Privacy Chernobyl.

Does anybody have a stack of unloved back issues of Wired sitting
around?  I somehow don't have issues 6.09 (September 1998), 7.01
(January 1999), and 7.05 (May 1999), and I'd like to keep a complete
set for research purposes -- starting next week, those fabulous
cyber-predictions will be testable.  So if you felt like sending me
your own copies of those issues then that would be great.

By popular demand I have added maybe 15 pages of material on the
subjects of dissertation-writing and job-hunting to "Networking on
the Network".  You can find the new stuff in sections 7 and 8 at:

I've also made small revisions elsewhere, including some conservative
advice for constructing a home page, but sections 7 and 8 are the
major new additions.  They're still in alpha test, so please don't
publicize them yet.  Instead, please read them and offer comments.
When they're done, I'll send an advertisement for the whole article
to the list, and invite you to forward it to every graduate student
in the world.

Have a look at the bottom half of the RRE home page:

In dribs and drabs over the last several months I have gathered links
to all (I think) of the longer essays by other people that I have
circulated on RRE, something like two dozen of them.  I've mentioned
them before, but the list is more complete now.  Let me know if I've
missed anything.  I think that these essays would be excellent for
any reading packets that you might be putting together for college
courses.  Check with the individual authors about the copyright deal.

You may also conceivably find some interest in the following:

It is a draft of a long report that I wrote for a foundation on the
subject of the place of the Internet in the making of foreign policy.
I don't claim to know much about the formal processes by which foreign
policy gets made, so I have defined the topic widely in terms of the
broader social dynamics around public policy and how they may or may
not be globalizing.  Lacking the evidence that one would need to make
an overall judgement on the matter, I have aimed simply to clarify
the questions.  I am hardly the world's greatest social theorist, but
social theory has been too content with average ideas about technology,
for example equating technology with bureaucratic rationalization.
By bringing to the discussion some possibly better-than-average ideas
about the Internet's place in social processes, I can hope to make
some small contribution.  Even if you're not interested in the policy
process, may I recommend Section 2, "Preliminary cautions", in which
I warn against a batch of unwarranted assumptions that people often
fall into.

Bonnie Kaplan is gathering a directory of
people who study information and whose graduate degrees are in social
sciences or humanities, or other disciplines (such as education) that
use field-based ethnographic methods or textual analysis.  If that's
you, do send her a note.  She'll be happy to send you the names that
she has so far.

My essay about (de)centralization and the Internet ended with the
triumphal conclusion that ...

  All of this and more becomes conceptually possible once we start
  dividing the world along simple centralized-versus-decentralized

... except that, duh, I meant "stop", not "start".  That's what I
meant about these notes being first drafts.

Every day, smart people tell us that the Internet creates a conflict
between individuals and institutions, and that the individuals win.
In particular, we are promised that the Internet is a way to destroy
institutions.  You have an institution you don't like?  The Internet
will destroy it for you.  Don't like the government?  The Internet
is shutting it down.  Sick of the phone company?  It's toast.  Hate
college professors?  The Internet is going to put them on the street.
It's a happy thing, this discourse, because everyone has an institution
that they want to destroy.  Me, I want to destroy academic journal
publishers.  It's a slight ambition, I know, but it would give me real
pleasure.  The job of an academic journal publisher is to take my work
for $0, add almost no value to it, and sell it to my library for way
too much money.  Why can they do this?  Well, I have to give my work
away to these parasites because the resulting publications are how
the university knows that I've been working.  The parasites then have
a monopoly on my work, because it's intellectual property and because
it is not substitutable -- buying someone else's article would not
serve the same need as buying mine, that having been the whole point
of the refereeing process that caused my article to be accepted by
the journal in the first place.  And the poor librarians are caught
in the middle, since the people who edit and consume the journals are
not the ones who have to pay for them.  If I were a librarian then I
would engage in a lot more guerrilla theater to dramatize the severity
of the problem, but then most librarians are a lot nicer than I am.

The right answer is for researchers to publish their journals online
in a cooperative fashion, either on their own or through universities
and professional organizations.  Some of this is happening, of course,
but not enough, and not very fast.  I contribute my two cents when I
am invited to serve on an editorial board, but that's only two cents.
What I can do, though, is resist the parasites at the point of the
copyright grab.  When I have an article accepted for publication in an
academic journal, the journal publisher sends me a legal form in which
I sign over the copyright to my article.  This is reasonable in the
case of a newspaper or magazine, since they buy intellectual property
for real money and then sell it to pay their bills.  But it is not
reasonable for a journal publisher, which does not pay its authors and
which contributes nothing that a cooperative publishing organization
could not do much more cheaply on the Web.  So whenever I get one of
those form contracts from a journal publisher, I always get out a pen
and edit it.  Mostly I write in the margin, "I reserve the right to
post the paper on my Web site".  That's why all of the papers that I
have published since I got tenure are available on my home page, where
they are much more widely read than they would be in a journal alone,
and where they publicize the journal as well.  None of the publishers
has ever refused, although I gather that a couple of them might have
complained to the journal editors.  But just to make sure, I sit on
the form for a while so that the journal's production process would
be fouled up if the publisher did not accept my terms.  If a publisher
actually did reject my terms, I would just say "tough" and walk away.
There are plenty of other journals out there, and I have plenty of
other publications in the pipe.  Besides, that's what tenure is for.

I have a hokey economic theory of this situation.  The academic world
would save tons of money if it moved to a cooperative publishing
model, so why doesn't it?  Some people claim that the real problem is
irrational prejudice against online journals, but I don't believe that.
I have published in a perfectly respected online journal, the Journal
of Artificial Intelligence Research, whose editors saw a political
opportunity -- a generational shift, really -- to create a new center
of gravity in their field to compete with the establishment journal,
Artificial Intelligence, and whose publisher, Morgan Kaufmann, was
enough of a tech-oriented newcomer to be interested in pioneering
a smart new publishing model.  Now, JAIR is kind of a special case,
given that AI's formative history of centralized management by ARPA
left it with a highly developed, highly standardized infrastructure,
whose universal adoption makes it easy to build things like online
journals.  Still, I don't believe the irrational-prejudice theory.

Instead, start with coordination costs -- the money and effort that
it would take to get all of the players at all of the universities to
move to a new model.  The real problem is that the existing journals
are more prestigious because they've been around for a long time and
have accumulated the best editors and the longest history of quality
work.  A new journal might have an incentive to go online in order
to be cheap and promote sales.  But the editors of the established
journals don't suffer from the high cost of their journals, so they
have no incentive to move.  Their own universities benefit more from
having control of the prestigious journal than they do from the cost
of subscribing to it, so no institutional lever can make them move.
And even those editors who really want to move to a cooperative model
(including some on this list) face massive first-mover disadvantages
because the necessary software and institutional routines have not
developed.  So here's my hokey theory: the journal publishers are able
to extract rents -- that is, profits that would not exist in a genuine
competitive market -- that are equal to the coordination effort that
would be required to move to a cooperative model, minus one dollar.
In other words, the publishers can turn up the pain to just below the
level that it would take to compel the editors and universities and
professional associations to finally get moving and get organized and
solve the problem.

Lest this sound too depressing, Hal Varian points out that, according
to this model, the rents extracted by journal publishers ought to
go down as the technology for online cooperative publishing improves.
What this suggests to me is that a university or foundation that
wanted to make a long-term investment in solving the problem could
sponsor the creation of an open-standard software platform for solving
all of the coordination problems in academic publishing.  This would
include submitting and refereeing articles, managing the editorial
workflow, maintaining up-to-date contact information for everyone,
implementing site licenses, etc., if money changes hands, selling printed
and bound copies of the journal for people who want them, and so on.
It would also include legal and institutional arrangements to ease the
coordination problems of moving existing journals to the new systems.
The idea would be to enable the editors of existing journals to move
over to the emerging cooperative mechanisms just by hitting a switch.
The good news is, if Varian's argument is right, the mere existence
of this infrastructure would reduce the magnitude of the problem --
even if nobody ever used it!  Again, I realize that a lot of this is
already happening.  But perhaps if awareness of the central role of
coordination problems became more general, more activity and resources
could be focused on building consensus and writing code in this area.
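The coordination-cost story can be put in toy numerical form.  This is
only a sketch of the "hokey theory" above, and the dollar figures are
invented for illustration:

```python
def max_sustainable_rent(coordination_cost):
    """The 'hokey theory': publishers can extract rents equal to the
    coordination effort required to move to a cooperative model,
    minus one dollar."""
    return max(coordination_cost - 1, 0)

# Varian's corollary: as the technology for cooperative online
# publishing improves, the coordination cost falls, and the
# extractable rent falls with it.  (Figures are invented.)
for cost in [1_000_000, 100_000, 10_000, 1]:
    print(f"coordination cost ${cost:,} -> max rent ${max_sustainable_rent(cost):,}")
```

The point of the sketch is only that the rent tracks the coordination
cost: lower the cost of moving, and the sustainable rent falls with it,
even if nobody actually moves.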

I see a much bigger lesson here.  Many of the efficiencies that become
possible with cheap, pervasive networking cannot be captured by a
single organization.  Instead, they cut across the analogous parts
of all organizations.  Thus a single university would save almost no
money -- indeed, it would probably lose huge amounts of it -- if it
tried to reform journal publishing just for its own people.  Thus the
savings will only come if a critical mass of universities moves in a
coordinated fashion in the same direction.  Likewise, imagine if all
the universities merged their bursars' offices into one organization
that handled tuition payments for everyone according to a uniform set
of practices.  Of course, that kind of standardization could be a bad
thing if there were an educational advantage to running one's bursar's
office in a distinctive way, but let's put that issue aside right now.

The question is, assuming that important efficiencies can be captured
by a coordinated institutional transition, by what mechanism will the
necessary transition be organized?  One option is that the various
higher education professional associations will house the coordination
activity.  There is certainly a university presidents' association,
and I assume that there is a bursars' association as well.  But what
if no individual staff person or member of the association has
the necessary incentives to dedicate ten years of their career to that
particular coordination project?  It would be like herding cats, after
all, getting all of the various universities to agree on anything.

There is another option: outsourcing.  An entrepreneur could see an
opportunity to approach universities and say, "I can save you $XXX per
year if you simply let me run your bursar's office".  This would only
work if a sufficiently large number of universities moved at once, but
predicting in advance how many customers will buy your proposition is
what entrepreneurship is all about.  So now some very large number of
universities all decide to outsource their bursar's office.  Then they
decide to outsource something else, like perhaps Chemistry 101, which
the faculty usually aren't enthusiastic about teaching and where huge
opportunities exist for economies of scale.  Then something else, and
then something else, and so on.  It should be said that outsourcing
is hardly a new phenomenon in the university.  After all, that's what
textbooks are: outsourcing the reading materials for one's course
(albeit having the students buy them and not the university) rather
than writing all of the necessary reading materials oneself.  And it's
hard to see what on balance is lost by outsourcing the food service in
the student union, which was never very good anyway.

The question is how far this trend will go.  Will the Internet (and
a dozen other important information and communications technologies)
facilitate a major wave of university outsourcing, and if so should
we care?  Perhaps while we're all off worrying about the university
collapsing in the face of no-frills online competitors, the real
danger to the university institution and the values it serves will be
a hollowing-out from outsourcing, like being pecked to death by ducks.
It will be important, in my view, for the university community and
the broader society to decide now which kinds of efficiencies through
outsourcing serve the public good by making education more accessible,
which kinds (e.g., scholarly journal publishing) should be resisted in
favor of a cooperative model organized by the universities themselves,
and which kinds should be resisted altogether because of the social
harm that large-scale standardization would bring.  It is not a simple
question.  Standardization can be bad without anything being wrong
with the standard itself; as a result, uncoordinated institutions that
make outsourcing decisions may, in violation of Kant's categorical
imperative, focus too much on their own interests and not enough on
the collective harm that would result if everyone followed their lead.
That's why, even when outsourcing does appear to be an economically
viable mechanism of institutional transition, it's important for the
universities' own coordination mechanisms to keep working, both at the
level of the administration and that of the faculty.

Although I haven't been able to monitor the LA Times' coverage of the
Seattle post mortem in the detail that I would like, I gather from the
12/16/99 article on the Times' Web site that it is improving, and has
reached the level of crude propaganda.  The 12/16/99 article, while
not as bad as the 12/8/99 article that opened with outright falsehoods
about the actions of the police and protestors, is still quite odd.
It begins by twisting language to blur the entirely clear distinction
between the legitimate protesters and the nutcase vandals, and then
toward the end, what is probably a couple of column-feet later, it
backs off.  The blurring between legitimate protesters and vandals
starts in the first sentence:

  When anarchy takes over the streets, the streets become the
  anarchist's toolbox.

The vandals are already known to everyone, correctly or incorrectly,
as anarchists.  So this sentence blurs the matter by referring to
the legitimate, nonviolent protests as "anarchy".  This is not quite
false, just because it's so vague, but it does start to create an
association between the legitimate protesters and the anarchists.
The article then quotes from an "Internet communique" from a group
that, it asserts, "liberated a host of downtown Seattle stores from
their windowpanes" during the protests.  Then it says this:

  Operating in small cells known as affinity groups and aided by
  masses of people who flooded into the streets in support, a new
  style of urban activists presented themselves in Seattle, employing
  civil disobedience techniques honed in the Pacific Northwest timber
  wars and an increasingly militant style of old-fashioned anarchism.

If you read this one-sentence paragraph out of context, you could
think that the article was referring to individuals who were engaged
in legitimate civil disobedience, and who just happened to be
gratuitously described in vague terms as manifesting "an increasingly
militant style of old-fashioned anarchism".  But in fact that sentence
appears directly after an inflammatory glorification of vandalism
by the "black-hooded anarchists".  The result is to make it sound as
though the civil disobedience and the vandalism were one and the same,
and that the "masses of people" in the streets of Seattle were there
"in support" of the anarchist-vandals.  The article then continues:

  Their success in disrupting the first day of WTO talks on Nov. 30
  -- despite months of official planning -- stunned even protest
  organizers.  "That night, hundreds of people went to bed on futons
  ... who had been planning on being in jail.  They were surprised,"
  said Mike Dolan of Public Citizens' [sic] Global Trade Watch, whose
  protests had legal permits.

The first word, "Their", is an anaphor.  Does it refer backward -- to
the "new style of urban activists" who were described in the previous
paragraph -- or does it refer forward -- to "protest organizers" --
or does it imply that the two groups -- urban activists and protest
organizers -- were the same?  Reading this two-sentence paragraph out
of context, one would probably infer that Mike Dolan of Global Trade
Watch was one of the "protest organizers" who was responsible for
the protesters who expected to go to jail.  Even then it's not clear
because of the final clause incongruously asserting that Mr. Dolan's
group had legal permits.  Read in context, the effect of these
several successive paragraphs is to create a chain of associations:
from the anarchist-vandals to the practitioners of civil disobedience
to the "protest organizers", who had legal permits.  At each step in
the chain, a grammatical device blurs two distinct groups.  But at
each step, the blurring is done deniably, through juxtaposition and
ambiguity rather than through clear assertion.

Having thus cleared its throat all over innocent people, the article
proceeds to its claimed news:

  ... [Police] were vastly outnumbered by organized protesters who
  used their bodies, wittingly or unwittingly, to shield about 200 or
  300 anarchist vandals rampaging behind demonstrators' locked arms.

Later on, Rich Odenthal, who is identified as the "veteran Los Angeles
Sheriff's Department officer who oversaw deployments during ... the
1992 riots that followed the Rodney G. King beating trial" (oh, great),
is quoted as saying:

  "The police force physically could not move through the protesters
  to do anything with those people who were breaking windows."

Now this doesn't make a lot of sense to me, given that the vandals
were breaking windows at Starbucks and Niketown, not at the WTO
meeting sites that the protesters were blockading.  Eye-witnesses
tell me that the vandalism happened at a considerable remove in both
time and space from the legitimate protests.  But even Dolan agrees
the vandals were able to use the legitimate protesters as "cover".
So maybe it happened once and someone is trying to make it sound like
the norm.  We can never tell from reading the article, which says
essentially nothing about the geography of the events it describes.

As the article starts to wind up, a coherent picture finally emerges:
The protesters who were engaged in civil disobedience had warned the
police long in advance of their plans to shut down the meeting, and
the police had failed to get the help they would need to arrest them.
Anybody who hasn't been living in a cave for the last thirty years
is aware of the way this is supposed to work: protesters sit down and
make their moral statement, police arrest them, people get charged
with crimes like trespassing, they sing songs in prison, they get
sentenced to probation or something, and everyone goes home.  But the
police evidently weren't aware of this plan, to the complete surprise
of the legitimate protesters, and that (says the article) is why the
vandals were able to operate.  A couple of inches from the bottom of
the long article, one finally reads that "[l]eaders of the peaceful
demonstrations have lashed out at the anarchists", and that "[t]he
anarchists in turn accused the Seattle protesters of protecting the
same private-property interests that the WTO represents".  In other
words, the legitimate protesters and the anarchists were on different
teams and hated each other's guts.  The possibility of collusion that
is held out in the earlier phrase "organized protesters who used their
bodies, wittingly or unwittingly, to shield about 200 or 300 anarchist
vandals" turns out to be completely unfounded.  That's the opposite of
the impression that one would have gotten by reading the first half of
the story.

Bad as it is, this doubletalk is not the most basic problem with the
Times' article.  The most basic problem is back toward the top, right
after the chain of associations that links the legitimate protesters
to the vandals.  Here it is:

  How did Seattle manage to get shut down despite months of warnings
  that tens of thousands of protesters would seek to block the WTO
talks?  How did police end up having to tear-gas the entire downtown
  core ... ?

This is called agenda-setting, and the problem is what questions are
not being asked.  Try this:

  How did large numbers of innocent citizens end up getting savagely
  beaten and hosed with noxious chemicals by out-of-control Seattle
  police officers who rioted in the face of protests that they had
  been informed about for months in advance?

A democratic country should want to know the answer to this question.

The recent wave of alliances between online services companies and
retailers is further evidence that the parallel distinctions between
"online" and "offline", "virtual" and "real", and "cyberspace" and
"meatspace" are a transient artefact of the Internet's early days.
The Internet is not someplace else; it is part of the regular world.
The possibilities for integrating the Internet with the rest of the
world are incredible.  Let's take the case of the most celebrated of
the alliances, that between AOL and Wal-Mart.  The initiatives they've
announced so far as part of this alliance are pretty generic: AOL ads
in Wal-Mart, Wal-Mart links on AOL.  Not very exciting.  But let's
look more deeply at one small example of the complementarity between
the two firms.  In an incredibly heads-up marketing move, Wal-Mart
has made a policy of welcoming RV'ers who want to park in its
lots overnight.  (Non-Americans may need it explained that RV stands
for "recreational vehicle", a hulking aluminum home on wheels that
many American retirees make a hobby of driving around the country.
Americans regard RV's as a little strange, not to mention inefficient,
especially when one is stuck behind one of them on a mountain road.
But they make sense in an expansive country with good roads, bad
railroads, and expensive hotels.  RV's would make no sense in Europe,
much less Japan.  Do they even exist there?)  RV'ers are a community
unto themselves, and the Wal-Mart parking lots have become part of
that community's culture.  It's free, for one thing, and you know that
you will meet others like yourself.  RV'ers need to buy all sorts of
stuff that Wal-Mart sells, of course, and so everyone is happy.  Some
free coffee might even be involved.

Now let's wire this picture.  Wireless packet networking, which lots
of people want very badly, isn't taking off because of the costs of
blanketing a geographic area with specialized infrastructure.  Providers
can focus on high-intensity regions like the San Francisco Bay area,
but that's not enough to give them the economies of scale they need.
So how about focusing on place-specific networking?  A single
wireless packet transponder in a Wal-Mart parking lot could serve a bunch
of people, and it could be plugged into the same wire that connects
each store's inventory computers to the mother ship.  RV'ers can
probably afford laptops, or the necessary computing power can simply
be built into the dashboard, and they have huge needs for Internet
access while they're on the road.  AOL could then build a community
of RV'ers, and a marketplace could get going to provide them with
additional services.  RV'ers would make friends in Wal-Mart parking
lots, and they would keep in touch.  The RV world would become one
huge floating party as the bonds among its members deepened and
persisted.  You'd see a whole new subculture growing up.  Maybe they'd
start forming RV convoys.  And the marketing opportunity for Wal-Mart
and AOL is that they'd become to the RV'ers what Harley-Davidson is
to bikers.

The model could spread to other user communities whose members happen
to circulate among a finite set of spaces.  Business travelers are an
obvious next step; Manuel Castells' "The Rise of the Network Society"
(Blackwell, 1996) provides a funny/scary description of the global
network of homogenized places that business travelers shuttle through.
Build scale for wireless packet networking by putting transponders in
all large airports, and then form alliances between ISP's and hotel
chains to install transponders in them as well.  (Data ports on hotel
room phones are not good enough, not least because of the overhead
of plugging in and getting the darn things working every time you need
to move any data.  This is bad enough for running a web browser on a
laptop, but it's hopeless for a PDA or other device that's only useful
if it can exchange packets in small numbers on no notice.  The economics
of hotel phones stink as well.)

Once this model got established, it would spread.  Each user community
would develop a collective cognitive map of which branded places are
bathed in Internet rays and which are not.  Of course, the advantage
of such a system to a Wal-Mart would be transient, because it wouldn't
work economically on an exclusive basis.  Truck stops, for example,
would probably be eager to sign on as well, both for the RV'ers and
for the even bigger market of truckers.  So Wal-Mart would have to
find a way to create a lasting benefit from the effort it has spent
in pioneering the service.  But that should not be hard.  Once it has
really focused attention on the RV'er community, Wal-Mart should be
able to come up with a dozen other ways to institutionalize Wal-Mart
in that community through specialized services, events, publications,
and so on.  After all, the RV'ers will be right there in the parking
lot.  For example, each Wal-Mart store could net-broadcast an alert
every time it brews a fresh pot of coffee; RV's that are part of the
AOL/Wal-Mart community and that happen to be in the lot or otherwise
nearby and in transponder range could get an alert on their computers.
It's a small thing, sure, but it gets people into the stores.  And it
builds community by getting them in there at the same time.  Wal-Mart
could plug its famous inventory computers into the AOL link as well,
so that RV'ers out in the parking lot can know that the supplies they
regularly need are available and how much they cost.  The interface
for this service would be a nice design problem.
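The alert mechanism described above could be sketched as a tiny
publish/subscribe hub.  Everything here -- the class, the event names,
the message format -- is hypothetical; no actual AOL or Wal-Mart system
is implied:

```python
import json

class LotBroadcaster:
    """Toy publish/subscribe hub for a store's parking lot: the store
    publishes events, and computers in transponder range subscribe to
    the ones they care about.  (All names here are hypothetical.)"""

    def __init__(self):
        self.subscribers = {}   # event type -> list of callbacks

    def subscribe(self, event_type, callback):
        self.subscribers.setdefault(event_type, []).append(callback)

    def publish(self, event_type, **details):
        message = json.dumps({"event": event_type, **details})
        for callback in self.subscribers.get(event_type, []):
            callback(message)

hub = LotBroadcaster()
received = []                            # one RV's inbox
hub.subscribe("fresh_coffee", received.append)
hub.subscribe("inventory", received.append)

# The store brews a pot and answers a standing stock query.
hub.publish("fresh_coffee", where="deli counter")
hub.publish("inventory", item="propane canister", in_stock=True, price=2.49)
```

The interesting design work is in what the RV'er sees, not in the
plumbing: which alerts to subscribe to, and how to present them without
being a nuisance.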

The lesson generalizes.  You will recall Chip Steinfield's article a
couple of months back about local retailers and electronic commerce.
An awful lot of Ed's Hardware Stores have been sold on the idea that
they can sell their goods globally by investing $50,000 or more in a
Web site that's integrated with their inventory system.  Unfortunately
it's not true.  Very few local stores are making any money this way
unless they genuinely have some global distinctiveness and the budget
they would need to communicate that distinctiveness on a global basis.
The firms -- IBM, for example -- that have promoted this false idea
should be ashamed of themselves, and Ed should kick himself for buying
it.  What's most unfortunate about this situation, however, is not the
wasted money but the wasted opportunity.  As Chip points out, and as
common sense points out to anyone not befuddled by the online/offline
dichotomy, local businesses have huge opportunities to deepen their
relationships with their geographic communities.  They will certainly
lose some business to online firms.  But they can compete by using
both Internet and face-to-face interaction to maintain relationships
with customers, build the local community, and generally to integrate
themselves into the social system.  Ed could help organize clubs and
lessons around hardware-intensive activities such as DIY home repair.
He could involve himself in the local high school's shop classes.
Think of all the people whose jobs require them to buy their own
tools.  (This makes sense in economic and safety terms because it
creates an incentive to take care of the tools.)  There's a whole
industry of tool trucks, like lunch wagons but for tools, that show
up on work sites and sell tools to the workers who need them.  What
else could work like this if people could submit orders over the net?
Those online firms need local distribution systems that require an
incredible amount of fixed costs and local knowledge, and people like
Ed could provide them, like the FTD system for florists.
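The FTD-style routing idea can be sketched as a nearest-store lookup.
The store names and coordinates here are invented, and straight-line
distance stands in for real routing:

```python
import math

# Hypothetical FTD-style order router: an online order is filled by
# the nearest participating local store.  Store names and coordinates
# are invented; straight-line distance stands in for real routing.
STORES = {
    "Ed's Hardware":   (34.05, -118.25),
    "Valley Tools":    (34.20, -118.45),
    "Harbor Hardware": (33.75, -118.28),
}

def nearest_store(customer_lat, customer_lon):
    def distance(store):
        lat, lon = STORES[store]
        return math.hypot(lat - customer_lat, lon - customer_lon)
    return min(STORES, key=distance)

print(nearest_store(34.06, -118.24))    # prints "Ed's Hardware"
```

The hard part, of course, is not the lookup but the local knowledge and
fixed costs behind it -- which is exactly why the online firms would
need people like Ed.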

It really is a new world -- it's just a different new world from the
one we've been hearing about.  In the boring and false new world of
yesterday's tomorrow, we were invited to destroy existing institutions
and replace them with all-digital new ones.  In the real new world,
we get to think about the melting boundaries among things.  How can
we describe the architecture of the interfaces between Ed's Hardware
Store and everything else around it?  How can we amplify the lives-
together of ten thousand interest communities, from RV'ers to stamp
collectors to DIY home-repair hobbyists to shop teachers to single
moms to urban planners?  How can we use information technology to
deepen the values of place instead of automatically trying to tear
those values down?  Where are the dividing lines -- within an Ed's
Hardware, for example -- between the (rather few) things that can be
done on an all-virtual basis and the (much more common) things that
cannot?  As the physical and informational dimensions of activities
are increasingly codesigned, how will the architecture of physical
buildings change?  How will people get together?  Less to exchange
information, most likely, but more for every other purpose.  Getting-
together will not have to be so thoroughly planned.  Declare a party,
and ten seconds later the news will spread to all of the like-minded
potential party animals who happen to be in the neighborhood -- think
of the system for announcing raves, radicalize it, and then make it
available to everyone.  Once your neighbors all have home pages, you
won't have any more excuse for not saying hello.  Boundaries -- a
better word than privacy for most purposes -- will be a major issue.

The cultural impact of these developments will be enormous, although
I think we can see it foreshadowed in the growth (in the US anyway)
of very large churches with 10,000 or more members.  If I had to list
the top five forces affecting American culture, those churches would
be number one.  (Up until a couple of years ago I would not even have
put the Internet on the list.  By now it might be number four.)  These
churches are moderate-scale metacommunities that have enough members
to assemble a wide variety of interest communities within themselves.
Organizations such as the auto repair ministry are not at all unusual.
They resemble the Internet in this way, and as the Internet acquires a
critical mass of users in each geographic community, the two analogous
effects can start to multiply.  Internet-based mechanisms may take
over some of the people-aggregating function of the large churches,
many of whose members find them a little too impersonal despite their
enormous scale and network advantages, producing a shift back toward
smaller, more intimate congregations.  These large churches address a
deep question: how much potential association does not happen because
people have no good mechanism for hooking up with one another?  Ads
in the newspaper are a very poor way to put an association together in
any context (nearly any) where trust is important.  The membership of
a large church self-selects for people who share fundamental values --
a crucial ingredient for successful association -- and the density of
their social network connections serves a social-regulatory function
that makes many situations safe that would be unsafe among people who
have no other mutual bonds.  Purely online Internet communities serve
some of these functions, but only up to a point.  Hybrid institutions
in a local community that cross online/offline boundaries could serve
them much more effectively.  This is a matter of technical design, of
course, but the technical issues are only one dimension of the larger
institutional design problem.  It's an area where we will see a great
deal of experimentation, and it will be important to watch all of the
experiments and spread analytical stories about what works and under
what conditions.

Where did the association between computer networks and decentralized
markets come from?  With some authors, such as George Gilder, little
in the way of a coherent argument joins them: they observe that modern
digital networks put the computer power on your desk, not in the guts
of the network, and so they suppose that social power will therefore
become equally distributed (see, for example, Life After Television,
Norton, 1992, pages 47-48, 126).  But this hardly follows; while the
basic architecture of the network is surely important in political
terms, the architecture of the applications that run on it is more
so (see, for example, Larry Lessig, Code and Other Laws of Cyberspace,
Basic Books, 1999).

A more common, more serious argument is economic.  According to this
argument, computer networks reduce the costs of doing business, this
will improve the efficiency of markets, and as markets become more
efficient hierarchical firms will wither away from competition and
governments will become both unnecessary and impractical.  In more
technical terms, computer networks are said to reduce transaction
costs: the costs of buying and selling things in the market.  These
transaction costs are likened to friction, and computer networks
are supposed to reduce that friction to zero, thereby perfecting
the market and dissolving all of the remaining "islands of conscious
power" into a great sea of freely contracting individuals.

This argument sounds compelling in the abstract, but it is actually
quite false.  To see this, it will help to return to the origin of
the concept of transaction costs, Ronald Coase's paper "The nature of
the firm" (Economica NS 4, 1937, pages 385-405; reprinted in Oliver
E. Williamson and Sidney G. Winter, The Nature of the Firm: Origins,
Evolution, and Development, Oxford University Press, 1991).  Coase's
paper asks a deep question: if the market is the most efficient way
to organize economic activity, why do firms exist?  Why aren't the
activities that take place within the firm coordinated by markets?
Why aren't these zones of hierarchical command obliterated through
competition?  The reason, Coase suggests, is that the mechanisms of
the market have costs of their own.  Buyers and sellers expend time,
effort, and resources to find one another, negotiate a deal, enforce
the deal, and so on.  Coase refers to these costs as marketing costs,
but they have come to be called transaction costs.  And he suggests
that firms arise when the costs of coordinating activity through them,
which might be called organizing costs (the concept tellingly does not
yet have a single widely accepted name), are less than the transaction
costs of coordinating activity through the market.  (For another view
of the nature of the firm, one wholly compatible with my own argument
below, see John Seely Brown and Paul Duguid, Organizing knowledge,
California Management Review 40(3), 1998, pages 90-111.)  Transaction
costs are both numerous and diverse, and Coase's most sophisticated
follower, Douglass North, has estimated that they amount to nearly
half of the economy.  This is amazing, given that the neoclassical
theory of economics presupposes perfect markets and thus assumes that
transaction costs do not exist.

Framed in this way, Coase's theory would seem to suggest that markets
would work better if transaction costs were reduced.  And indeed a
large and vigorous school of economic and legal theory has grown up
around the idea.  This "law and economics" school is predominantly
conservative in its politics, inasmuch as reduced transaction costs
would seem to obviate the need for government intervention to correct
defects in the market.  Law and economics scholars have been immensely
influential, and they include some of the most prominent contemporary
judges and legal scholars.

Unfortunately for the law and economics school, and for the country
whose legal system they have helped to shape, their entire project
is based on a logical fallacy and a misinterpretation of Coase.  The
argument in Coase's paper is not that lower transaction costs imply
a greater reliance upon the market and a lesser reliance on the firm.
It is true that, on Coase's argument, an economy with zero transaction
costs would behave according to the neoclassical ideal.  But a world
of zero transaction costs is surely impossible (not least because of
the irreducible problems of market information), and when transaction
costs are not zero, Coase's argument is much more complicated.  We do
see spectacular effects when new technologies enable intermediaries
to operate on a wider geographic scale, thus reducing the transaction
costs of long-distance trade and increasing the efficiency of markets.
EBay is a good example.  But as Burt Swanson once pointed out to me,
and as Coase himself observes, technologies that reduce transaction
costs usually reduce organizing costs as well.  According to Coase,
the size of firms is determined by a balance between the costs of
coordinating activities in the market and the costs of coordinating
the same activities within a firm.  It is the comparison that matters,
not the magnitude of one side or the other.  Let us listen to Coase:

  Changes like the telephone and the telegraph which tend to reduce
  the cost of organizing spatially will tend to increase the size of
  the firm.  All changes which improve managerial technique will tend
  to increase the size of the firm.

There then follows an important footnote:

  It should be noted that most inventions will change both the costs
  of organizing and the costs of using the price mechanism.  In such
  cases, whether the invention tends to make firms larger or smaller
  will depend on the relative effect on these two sets of costs.
  For instance, if the telephone reduces the costs of using the price
  mechanism more than it reduces the costs of organizing, then it will
  have the effect of reducing the size of the firm.

So if we want to know what will happen to the economy as the Internet
becomes widely adopted, we must compare the effects of Internet use
on transaction costs to its effects on organizing costs.  How can we
make such a comparison?  It would seem impossible, given the diversity
of both kinds of costs and the difficulty of comparing them.  But I
would argue that the global economy is now experiencing extraordinary
reductions in organizing costs, and that this effect swamps anything
relating to transaction costs.
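Coase's footnote amounts to a simple decision rule: compare the two sets of costs and see which fell further. A minimal sketch of that comparison, with all figures invented for illustration:

```python
# A toy restatement of Coase's footnote: an invention changes both the
# cost of coordinating through the market (transaction costs) and the
# cost of coordinating inside the firm (organizing costs).  Whether
# firms grow or shrink depends on the *relative* change, not on either
# magnitude alone.  All numbers here are hypothetical.

def firms_tend_to(transaction_cost, organizing_cost):
    """Direction in which firm boundaries move, per Coase's comparison."""
    if organizing_cost < transaction_cost:
        return "grow"        # cheaper to bring the activity inside the firm
    elif organizing_cost > transaction_cost:
        return "shrink"      # cheaper to buy the activity on the market
    return "stay the same"

# Before the invention: coordinating a task costs 10 either way.
print(firms_tend_to(10.0, 10.0))              # stay the same

# The invention cuts transaction costs 40% but organizing costs 60%:
print(firms_tend_to(10.0 * 0.6, 10.0 * 0.4))  # grow

# The same invention, had it cut transaction costs more, would shrink firms:
print(firms_tend_to(10.0 * 0.4, 10.0 * 0.6))  # shrink
```

The same invention, depending only on which side it helps more, pushes firm boundaries in opposite directions -- which is why the magnitude of the transaction-cost reduction alone settles nothing.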

Why?  Consider the benefits to a firm from a cheap, pervasive digital
communications network.  These benefits are numerous, of course, but
we need only focus on a single category: benefits associated with
economies of scale.  A computer network enables a firm to distribute
the same information (product designs, policies, marketing materials,
training materials, and so forth) to a vast number of locations for
almost no marginal cost.  It also enables a firm to gather information
in a standardized format (sales figures, market information, customer
surveys, and so forth) into a centralized organization for analysis
and action.  As the world becomes more homogeneous -- shared tastes,
language, infrastructure, weights and measures, currency, regulations,
accounting standards, and so on -- the benefits of this informational
circulatory system increase.  A business can double the number of its
branches with little or no change in the size of the central office.
A competing firm with fewer branches will suffer a disadvantage; it
will experience the same central-office costs, but will have fewer
transactions over which to distribute them.  The Internet might play
a different role in a world of infinite diversity.  But in the world
we actually inhabit, the forces of homogeneity are operating at full
fury.  These include the globalization of media and culture, regional
and global treaty organizations, the advantages of compatibility of
technical standards, and the mutually reinforcing effects of economies
of scale in a hundred industries.  Thus, when the European Union
introduced the euro as a common currency across several countries,
the costs of running a business in several markets at once dropped
substantially, leading to an unprecedented wave of cross-border
mergers.  In fact, I am not aware of any evidence that transaction
costs as a proportion of the overall economy are falling *at all*.
(Indeed, North suggests that they have been rising.  See Douglass
C. North and John J. Wallis, Integrating institutional change and
technical change in economic history: A transaction cost approach,
Journal of Institutional and Theoretical Economics 150(4), 1994,
pages 609-624.  They also observe that "the firm is not concerned
with minimizing either transaction or transformation costs in
isolation: the firm wants to minimize the total costs of producing and
selling a given level of output with a given set of characteristics"
(page 613).)  Transaction costs are like highway safety: when highways
become safer, people drive faster; when transaction costs are lowered,
people engage in more complicated transactions.  But when a firm
doubles its number of branches, one can readily compute the reduction
in organizing costs: the costs of running the central office are now
distributed over twice as many transactions.
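The arithmetic behind that last sentence is worth making explicit; here is the computation with hypothetical figures:

```python
# Central-office cost per transaction when a firm doubles its branches.
# The central office is (roughly) a fixed cost; each branch handles some
# number of transactions.  All figures here are hypothetical.

central_office = 1_000_000.0       # annual cost of the central office
transactions_per_branch = 5_000

for branches in (100, 200):        # the firm doubles its branch count
    per_transaction = central_office / (branches * transactions_per_branch)
    print(f"{branches} branches: ${per_transaction:.2f} of central-office "
          f"cost per transaction")
# Doubling the branches halves the central-office cost borne by each
# transaction -- the scale economy that disadvantages a smaller rival.
```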

If decreased organizing costs through economies of scale are the most
important factor influencing changes in the economic role of firms,
what follows?  First, because economies of scale require a strong
element of homogeneity, it follows that reduced transaction costs
have a chance of making a firm smaller when dissimilar activities are
being coordinated.  This is one reason why firms sometimes break apart
when they find themselves spanning two or more dissimilar markets.
It is also where outsourcing comes from.  An obstacle to outsourcing
is the complexity of the contractual relationship, and communications
technologies can be used to help manage the complexity, for example
by connecting the parties' computerized accounting systems.  Yet in
many cases the main reason for outsourcing is not these transaction
costs, but rather the economies of scale that the outsourcing firm
itself can achieve.

This leads us to the second consequence: far from conforming to Adam
Smith's idealized market of individual artisans, the networked economy
is organized mainly by large firms that enjoy vast economies of
scale.  These firms are then supplied by large numbers of small firms
in conditions that approach monopsony, that is, competitive sellers
facing a single buyer with overwhelming market power (see generally
Bennett Harrison, Lean and Mean: The Changing Landscape of Corporate
Power in the Age of Flexibility, Basic Books, 1994).  Market power
will flow to whichever firms control the intangible resources --
intellectual property, for example -- that fuel economies of scale
in a given industry.  As economies of scale become globalized, the
result is a breathtaking concentration of power in a small number of
firms, each of which controls a certain "slice" through the global
economy (see Lowell Bryan, Jane Fraser, Jeremy Oppenheim, and Wilhelm
Rall, Race for the World: Strategies to Build a Great Global Firm,
Harvard Business School Press, 1999).  Firms will not tend to be
involved in diverse activities.  Instead, they will choose one single
activity and manage all of that activity throughout the world.  This
picture is emerging very rapidly and very clearly, and it is visible
to anyone who reads the business pages and ignores the abstractions of
the enthusiasts.

Let's invent something.  A new kind of cookbook.  Most cookbooks are
minor variations or permutations of existing recipes, and it is most
frustrating that they disclose so little about what can be substituted
for what, or which measurements are crucial and which are approximate.
Some recipes work just fine if you throw in all of the vegetables you
have in your refrigerator, and others do not.  Recipes are superficial
in these ways, among others, and we need a new kind of cookbook whose
recipes are a lot deeper.  A good starting place would be Bob Scher's
fabulously wonderful but unknown and out-of-print "Fear of Cooking"
(Houghton Mifflin, 1984).  Though it has recipes, "Fear of Cooking" is
not a cookbook but rather a book about how to cook on your feet: how
to improvise, how to taste things as you go along, how to reason about
what goes with what, how to vary a recipe, and how to cook without a
recipe.  Its central dictum is, "It's right in front of you".  You can
look at it, taste it, do whatever you want with it.  No great culinary
deity is going to whack you on the head if you are a bad boy or girl
who does not follow the recipe exactly.  Do you want to be sure that
it tastes good?  Taste it!  This approach makes a lot of sense.  It
probably increases the sales of cookbooks in the long run, sort of
the way that public libraries increase the sales of other kinds of
books.  And it certainly produces better cooking than would result
from the habit of following the recipes in a timidly superficial way.

So let's radicalize Scher's insight, and see if we can invent a genre
of interactive cookbooks that dance with you rather than dictating.
The starting point is to acknowledge that recipes follow certain broad
schemas.  There are only so many ways to cook meat, only so many ways
to cause meat to taste like anything, only so many ways to transfer
heat to food, and so on, and the system would explicitly represent
this finite space of possibilities.  Likewise, some things go together
(e.g., chicken and tarragon), many other things do not (e.g., beef
and strawberries), and the world will not end if we write down a list
of these pairings in computer-readable form.  Yes, of course the cook
should be tasting things to see what else might go together -- Scher
has a whole chapter on this -- but it's simple elitism not to write
down the five hundred taste pairings that the great majority of actual
recipes are based on.
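Writing those pairings down in computer-readable form is genuinely trivial. A sketch, with a hypothetical handful of entries standing in for the five hundred:

```python
# A tiny hypothetical sample of the taste-pairing table, stored as a
# set of frozensets so that the order of the two ingredients does not
# matter.  A real table would hold the five hundred or so pairings
# that most actual recipes are built on.

PAIRINGS = {
    frozenset({"chicken", "tarragon"}),
    frozenset({"lamb", "rosemary"}),
    frozenset({"tomato", "basil"}),
    frozenset({"pork", "apple"}),
}

def goes_together(a, b):
    """Does the pairing table endorse combining these two ingredients?"""
    return frozenset({a, b}) in PAIRINGS

print(goes_together("tarragon", "chicken"))   # True
print(goes_together("beef", "strawberries"))  # False
```

None of this stops the cook from tasting and discovering new pairings; the table only records what the existing corpus already knows.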

The first step should be what the cook has in the refrigerator.  This
could all be done with voice recognition, not least because it's next
to impossible to type while you're cooking: your fingers are messy
and you need to move around.  You start with the refrigerator because
that is where the major perishables are, and all real cooks have stuff
in the refrigerator that needs to be cooked today or else it will rot.
Lots of people who know nothing about cooking have proposed that your
refrigerator should automatically know what it has in it, but that's
ridiculous.  Maybe it's somewhat useful for the interactive cookbook
to know the oven's settings in real time.  But forget that Buck Rogers
stuff.  It's not the essential thing.  Instead, start with basics:
the cook opens the fridge, looks inside, and says "we need to cook the
asparagus".  Then to the freezer and, "we have some frozen chicken".
The system, having kept track of which staples, spices, and condiments
the cook appears to keep in stock, might respond, "you have onions and
curry powder, no?".  Or it might say, "are we going French or Indian
tonight?".  Or perhaps "we did that cold chicken/asparagus salad thing
last time; shall we go with that again?".  Or "Oprah's favorite would
be ...".  Or maybe the organizing metaphor is not a conversation but
a diagram: the system could display a nice, bold, graphically-designed
diagram of the whole enormous space of recipes, and as constraints
become known it could demonstratively narrow down this diagram in a
way that makes clear what possibilities are left and what else needs
to be known.  Cook and system can narrow down their strategy in a lot
of different ways, and it will be up to the cookbook author to build
and implement a strategy that a particular sort of cook will find
congenial.  Our task here is not to invent the particular cookbooks,
but to provide the platform upon which such cookbooks can be built.
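The narrowing-down that the diagram metaphor describes is, at bottom, constraint filtering over a tagged recipe space. A minimal sketch of that mechanism, with the recipes, tags, and constraints all hypothetical placeholders:

```python
# Narrowing the space of recipes as constraints accumulate, in the
# spirit of the diagram described above.  The recipe names and tags
# here are invented placeholders; a real platform would hold a vast
# indexed corpus.

RECIPES = {
    "chicken curry":           {"chicken", "indian", "stovetop"},
    "coq au vin":              {"chicken", "french", "braise"},
    "chicken asparagus salad": {"chicken", "asparagus", "cold"},
    "roast lamb":              {"lamb", "oven"},
}

def narrow(space, constraint):
    """Keep only the recipes consistent with one more known constraint."""
    return {name: tags for name, tags in space.items() if constraint in tags}

space = dict(RECIPES)
for constraint in ("chicken", "asparagus"):   # "we need to cook the asparagus"
    space = narrow(space, constraint)
    print(f"after '{constraint}': {sorted(space)}")
```

Each utterance from the cook becomes one more call to the filter, and whatever remains in the space is what the diagram would still show.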

Another important ingredient would be a vast corpus of recipes.  You
can cook pretty darn well from first principles if you have mastered
Scher's philosophy, but first principles do take heavy thinking and
only get you so far.  Having established an overall strategy for the
dish, the system can then look up all of the recipes that are in the
same general space.  The recipes would need to be indexed by something
deeper than the usual superficial indicators -- ingredients and steps.
The world will not end if recipes are indexed by categories such as
"this is one of those recipes where you cook meat slowly in liquid
for a long time until it falls apart" or "this is one of those recipes
where you put something hot or messy in the middle of a bunch of
starch so you can pick it up".  Having retrieved a batch of recipes
based on the available ingredients and the general cooking strategy,
the recipes can then be compared for fine details.  Is the quarter-cup
of currants optional?  Well, none of the others have it.  It must be
something that this one particular author included in a sorry attempt
to differentiate his own recipe from a hundred others pretty much like
it.  Is it crucial that the sauce seem just a little bit dry at this
point because of the liquid that the mushrooms are going to release
later?  Well then for heaven's sake say so -- although this is an area
where existing recipes are often pretty good.
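The currants question can be answered mechanically: count how often each ingredient appears across the retrieved batch, and flag the singletons. A sketch over hypothetical recipes:

```python
# Comparing a batch of retrieved recipes for fine details: an
# ingredient that appears in only one of a set of otherwise similar
# recipes (the quarter-cup of currants) is probably optional.  The
# recipes here are hypothetical.

from collections import Counter

retrieved = [
    {"lamb", "onion", "stock", "thyme"},
    {"lamb", "onion", "stock", "thyme", "currants"},  # the odd one out
    {"lamb", "onion", "stock"},
    {"lamb", "onion", "stock", "thyme"},
]

counts = Counter(i for recipe in retrieved for i in recipe)
likely_optional = [i for i, n in counts.items() if n == 1]
print(likely_optional)   # ['currants']
```

Frequency across the corpus is a crude proxy for importance, but it is exactly the comparison a thoughtful cook makes by laying four cookbooks side by side.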

Then comes the crucial matter of the ordering of steps.  The greatest
creativity in these new interactive cookbooks will probably lie in the
conversation you have with them as you go along.  Maybe you say, "now
I'm mixing the whatever and the whatever".  The vocabulary is going to
be restricted, both because of the limits of speech recognition in a
noisy kitchen environment and because of the machine's limited ability
to understand telegraphic utterances in context.  This conversation
could easily become a pain.

  Machine: "Add the sugar."
  Cook: "Adding the sugar."
  Machine: "Stir in the sugar."
  Cook: "Stirring in the sugar."

Argh!  This is where interface is crucial.  Each cookbook will have
to develop a visual and auditory vocabulary for communicating its own
sense of what state things should be in.  Simply marching step-by-step
through the usual superficial kind of recipe would be a disaster.
(Introductory textbooks of computer science -- Knuth, for example,
in his "Art of Computer Programming" -- always seem to introduce the
concept of an algorithm by explaining that an algorithm is like a
recipe.  Not!)  Instead, the conversation should be organized by the
general strategy that was selected at the outset.  It's that strategy,
and not the numbered steps of the recipe, that will let everyone keep
track of where they are.  Sometimes the ordering of steps matters and
sometimes it doesn't.  Sometimes the ordering of steps in a recipe is
completely unrealistic in a context where you're cooking three other
recipes in a kitchen with only one oven and one cook.  Each cookbook
author will develop a distinctive interactional language for dealing
with this problem.  Reminders would be nice: if the sauce is supposed
to simmer for ten minutes, it might actually help for the machine to
wait for nine minutes and say, "looking after the sauce?".  You'd need
to be able to turn that off.
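The reminder itself is the easy part; the design questions are the lead time and the off switch. A minimal sketch, assuming speech output (print stands in for it) and an arbitrary one-minute lead:

```python
# A minimal sketch of the simmer reminder: if the sauce should simmer
# for ten minutes, prompt the cook shortly before time is up.  The
# one-minute lead time and the prompt wording are arbitrary choices,
# and a real system would speak rather than print.

import threading

def schedule_reminder(step, minutes, lead_minutes=1.0, speak=print):
    """Fire a gentle prompt lead_minutes before the step's time is up."""
    delay = (minutes - lead_minutes) * 60
    timer = threading.Timer(delay, speak, args=(f"looking after the {step}?",))
    timer.start()
    return timer             # keep a handle so the cook can turn it off

timer = schedule_reminder("sauce", minutes=10)
timer.cancel()               # the "turn that off" switch
```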

This brief sketch hardly begins to survey the difficulties that real
interactive cookbooks would face.  In her influential book "Plans
and Situated Actions", which recently came out in a second edition
from Cambridge University Press, Lucy Suchman describes some of the
many problems that confronted a system at Xerox that was supposed to
instruct photocopier users in the step-by-step use of the copier's
more advanced features.  The system was a dreadful flop for lots of
interesting reasons, not least the programmers' theory that human
activity is organized by plans, when in fact activity is fundamentally
a moment-to-moment improvisation that might occasionally draw on
plans for some heuristic sense of how things are supposed to turn
out.  Tremendous effort has gone into AI systems for automatically
understanding human utterances in context, and some of that work
is interesting for the dynamics of its struggles with the gnarliness
of the actual phenomena of real human meaning-making in context.
My own sense is that it's a mistake to anthropomorphize these things.
So I am assuredly not recommending that the design of an interactive
cookbook follow the metaphor of a helpful "agent" with a name (e.g.,
"Bob") and a schmaltzy personality.  I think we need to accept the
artificiality of these things, and instead of following tired old
stereotypes we should invent the discipline of interactive information
design.

Lots of good people are out there approaching this problem from
various disciplinary directions, such as the information designers
who are transferring their expertise from three-fold brochures to
Web pages.  This work has hardly begun, and little of it yet reflects
in its basic interactional strategy the kind of deep model of the
information that I'm talking about here.  The good news is that the
narrowly technical obstacles to this kind of densely interactional
system are rapidly falling away.  It will soon be entirely feasible
to equip an ordinary kitchen with a ruggedized four-square-foot flat
panel display -- maybe a wireless, flexible one that can be rolled
up and stored in the closet when it's not being used, or several
smaller displays that can be spread around the kitchen so that they're
always in sight as you move around -- and to devote a gigahertz micro-
processor to speech recognition -- an area where brute computational
force seems to pay off.  It is also becoming feasible to construct
and render quite complicated informational displays fast enough to
be useful as part of a flowing interaction.  We can get rid of the
tired interactional vocabulary of the Mac and Windows, and instead we
can render shaded objects, rotate things in space, integrate dynamic
typography with moving pictures, integrate photography with animation,
and so on.

Heaven knows I'm not an artist, so I have limited ability to evaluate
the various proposals in this huge design space.  What interests
me is how the design process can analyze the structure of a design
problem and relate it systematically to the structure of the design
space.  Analysis of design problems for system design has historically
been pretty thin, not least because the design space afforded by the
available technology has been thin as well, and even the more
adventuresome proposals have not gone far into the social organization
of the activity.  That was the point of my paper -- really a manifesto
for a class -- on "Designing Genres for New Media".  That paper didn't
really explain how to design genres, and I wish I had a more creative
title for it.  But it did draw on a wide variety of social-sciences
literatures to provide a good set of analytical tools for debunking
shallow designs and inspiring others.  I'm hoping to do more of this.

Many people have remarked on the discrepancy between the very high
levels of concern about electronic privacy invasion that Americans
express in opinion polls and the lack of any great political movement
to protect privacy.  Various explanations have been offered for this
phenomenon.  A favorite of the information traffickers is that privacy
regulations would not fit with the laissez-faire individualism of
American culture.  This deep fact about American culture must explain
why 70%+, and in some cases 95%+, of Americans opine in favor of
such regulations when someone actually asks their opinion.  Privacy
activists, for their part, often point to the nebulous character of
the problem.  With environmental pollution you can at least see the
smoke and oily seabirds, but with invasions of privacy the information
flows silently, out of sight, and then you can't figure out how they
got your name, much less which opportunities never knocked because
of the bad information in your file.  But get a couple beers in them
and they will fantasize about what they call "Privacy Chernobyl" --
the one privacy outrage that will finally catalyze an effective social
movement around the issue.

My candidate for Privacy Chernobyl is the widespread deployment in
public places of automatic face recognition.  Industrial-strength
face recognition is almost here.  Digital cameras are almost free,
many public spaces are already festooned with them, the control rooms
to monitor them already exist, and the bureaucracies that prosper by
multiplying them are already in place.  State governments have built
digitized databases of photographs of people's faces -- captured for
purposes of creating drivers' licenses and then spread around at US
government expense for other purposes (Washington Post, 1/22/99 and
2/18/99).  The needed computational power will become cost-effective
over the next few years.  What's to stop it?  Once the possibilities
become clear, reporters will call college professors on the phone and
say, "I want to write about the privacy aspects of this, but my editor
says it isn't a story until someone gets hurt".  College professors
will reply, "once people start getting hurt by this it will be too
late to stop it".  They will resolve to revisit the issue in a year.

The consequences are so vast that they need to be explained slowly.
Any organization that has access to the database of face photos will
be able to stick a camera anyplace it wants, and it will know the
names and identifiers of all the people who go past.  Gaze in the
window of the jewelry store and you'll get personalized junk mail
for diamonds.  Go to a part of town where your kind isn't thought
to belong and you'll end up on a list somewhere.  Attend a political
meeting and end up on another list.  Walk into a ritzy boutique and
the clerk will have your credit report and purchase history before
even saying hello.  People you don't know will start greeting you by
name.  Lists of the people you've been seen with will start appearing
on the Internet.  The whole culture will undergo convulsions as taken-
for-granted assumptions about the construction of personal identity
in public places suddenly become radically false.  Pundits will take
note.  Doonesbury will devote a Sunday comic to the problem.  Yet
it will be argued that nothing has really changed, that public places
are public, that your face has always been visible to anyone with
a camera, and that you can hardly prevent someone from recognizing
you as you walk down the street.  Besides, what do you have to hide?

And that's just the start.  Wait a little while, and a market will
arise in "spottings": if I want to know where you've been, I'll have
my laptop put out a call on the Internet to find out who has spotted
you.  Spottings will be bought and sold in automated auctions, so that
I can build the kind of spotting history I need for the lowest cost.
Entrepreneurs will purchase spottings in bulk to synthesize spotting
histories for paying customers.  Your daily routine will be known to
anyone who wants to pay five bucks for it, and your movement history
will determine your fate just as much as your credit history does now.
Prominent firms that traffic in personal movement patterns will post
privacy policies that sound nice but mean little in practice, not
least because most of the movement trafficking will be conducted by
companies that nobody has ever heard of, and whose brand names will
not be affected by the periodic front-page newspaper stories on the
subject.  They will all swear on a stack of Bibles that they respect
everyone's privacy, but within six months every private investigator
in the country will find a friend-of-a-brother-in-law who happens to
know someone who works for one of the obscure companies that sells
movement patterns, and the data will start to ooze out onto the street.

Then things will really get bad.  Personal movement records will be
subpoenaed, irregularly at first, just when someone has been kidnapped,
but then routinely, as every divorce lawyer in the country reasons
that subpoenas are cheap and not filing them is basically malpractice.
Then, just as we're starting to get used to this, a couple of people
will get killed by a nut who has been predicting their movements using
commercially available movement patterns.  Citizens will be outraged,
but it will indeed be too late.  The industry will have built a public
relations machine which will now swing into action by communicating
the benefits of movement tracking (which will have gotten a new name
that connotes safety and intimacy rather than spying and deceit).
All celebrities who could give the movement-tracking industry a hard
time will quietly be assured that their faces have been removed from
the database.  Forbes will publish an article, based on material from
an industry-funded think tank, that denounces "hysteria" and mocks the
neurotic paternalism of the privacy weirdos.

Campaign contributions will pour in.  A member of the Federal Trade
Commission will express concerns on the matter, build a network among
industry executives, give speeches at industry association meetings,
call elaborate hearings, generate a lot of publicity, wait a couple
of months, issue a report that whitewashes the hearings and recommends
voluntary self-regulation, step down from the commission, and found a
lobbying group that represents the movement-tracking industry.  Masks
will become fashionable, pioneered at first by "face-hackers" but then
moving into the mainstream.  Facial piercings will now be useful for
something, and quick-release mask fasteners will go through several
rounds of product recalls.  A third-rate fashion designer will become
famous for about two minutes by putting masks on his runway models.
Bill Gates will be spotted walking into saloons and road houses from
coast to coast.  Conspiracy groups will go bonkers.  President McCain
will release a hundred pages of his movement history.  The editors of
Mondo 2000 will claim to have predicted the whole thing back in 1988.
And at long last, normal Americans, their patience having been worn
clean out, will get pissed off at the whole stupid thing.  Privacy
Chernobyl will now finally have arrived.  The revolution will begin.

Happy Y2K.