Some notes on site-specific information services and the year 2000,
plus follow-ups and URL's.


With 2001 coming up, let's do the numbers on a seriously strange year.
In the year 2000, RRE distributed about 215 messages, which is toward
the low end historically.  But it also distributed 900 URL's about
the elections, 1000 URL's about everything else, references for 1300
books, about 300 pages' worth of notes, and 10 excerpted book chapters.
All of those numbers are much higher than any other year of the list.
The list still has about 5000 subscribers, with about a third of that
total having turned over in the course of the year, which is typical.
So there you go.


Last week, in writing about public design, I recommended that you go
looking for ideas about institutional forces in the world, and then
imagine what it would be like for information technology to amplify
those forces.  The ideas should feel like levers: they should feel
like they pry the lid off a whole area of social life.  I gave one
example of such a lever-idea, the role of peer review in research
and other professions.  Information technology might amplify the role
of peer review in two ways: by intensifying the dynamics of existing
professional communities, and by generalizing the institutions of peer
review to other kinds of communities.  Of course, it's not information
technology as such that effects these changes.  In an existing
peer-review profession, the incentives created by the institution
motivate people to appropriate the new tools to do more of the things
they already want to do.  In a newly created peer-review community,
some kind of institutional design has to happen, whether by visionary
leadership, top-down lawmaking, or bottom-up organizing.  Whether the
new institution takes hold will depend on a lot more than technology.

Several other institutional lever-ideas are found in the short piece
called "Imagining the Wired University" that I sent in early October.

One such idea is economies of scale.  It's an old idea, of course,
but it has always been the shadow side of modern economics, given the
tendency of economies of scale to produce monopolies.  Most industries
would be monopolistic in a homogeneous world, but the world has been
heterogeneous enough that monopolies are not a problem in most areas.
Economists often talk as if a market with completely uniform products
and customers could be stably organized as large numbers of small
suppliers, but in most real cases different companies serve different
markets, or overlapping ones.  Information technology amplifies
economies of scale in two ways: by making it easier to monitor and
control uniform activities across a wide geographic area, and by
helping to standardize the world.  It does not logically follow that
every industry will become a monopoly, but it's certainly a matter of
some concern.

The "wired university" paper also summarized an earlier argument about
modularity: as electronic distance education programs enable students
to assemble their own degree programs from courses offered by various
different universities, pressure grows to standardize the courses and
the frameworks into which they fit.  This force for modularization is
already present, and the University of California goes to some lengths
to allow students to transfer from community colleges, bringing their
credits with them.  But transfers are nothing compared to the kind of
modularization that is envisioned by distance education enthusiasts.
The consequences for institutional and intellectual diversity are not
promising.  This argument is different from the one about economies of
scale, which is indifferent to the granularity of units over which the
global systems of monitoring and control operate.  Economies of scale
depend on uniformity of substance; modularization is about uniformity
of the framework within which the substance is organized.  The force
that drives modularity also drives unbundling in several other areas,
for example banking.  Because no organization can excel at everything,
the technology-enhanced ability of individuals to shop around for each
element of the bundle separately creates a pressure toward a uniform
framework for assembling modules, which then creates a pressure --
maybe largely unintended -- for homogeneity of the modules themselves.

Another institutional lever-idea is found in the draft paper about
context-aware computing that I sent out the other day: the mapping
between institutions and the buildings that accommodate them.  In
the case of context-aware computing, information technology has its
effect in a different way, by allowing the logic of each institution
to spill into places that had not been built for it, and conversely by
frustrating each institution's efforts to protect against incursions
into its territory by other social logics.  Institutions bind us all
together whether we are actively tending to them or not; we all have
bank accounts even when we're in the gym, and families even when we're
at work, and careers even when we're at the movies.  Architecture and
urban design had formerly more or less partitioned these institutional
concerns, but now all of them can happen everywhere all the time.
My paper sketched some of the consequences for the design of computing
systems, but consequences obviously follow as well for architecture.

Following on the theme of architecture, I want to talk about another
institutional force affecting the structure of the built environment
and the ways that information technology can help to amplify it.  The
story begins with what sociologists Walter Powell and Paul DiMaggio
call "institutional isomorphism".  This is a range of institutional
forces that encourage all the organizations in a given institutional 
field to converge toward a single model of operation.  Some of these
forces are legal (regulations that apply to everyone in the industry).
Another force is the tendency of organizations to hire one another's
employees, which makes sense because an employee who learns all of the
knowledge available in a given organization becomes much more valuable
to a competitor, and so changing sides is often a good career move.
Employees cycle among the organizations in an institutional field all
the time for this reason, and this tends to average out the knowledge
and skills, and thus the practices and strategies across the field.

Something important follows from the various dynamics of isomorphism.
It is often remarked that the manufacturing departments of competing
companies can usually communicate with one another more readily than
either can communicate with its own marketing department, and the same
goes for most of the specialities in any
organization.  Isomorphism, in other words, creates a kind of matrix
structure, with professions along one axis and organizations along the
other.  Some theories treat organizations as organic wholes, but in
fact (as Powell and DiMaggio point out) most organizations are loosely
assembled from more or less standardized elements, each corresponding
to one of the professions within it.  The finance people have their
way of doing things, as do the research people, the technical people,
the legal people, and so on.  A strong organization will have a kind
of culture that cross-cuts the disciplinary cultures, but that is only
a matter of degree.  In no way is even the strongest organization any
sort of holistic organism.

What does this have to do with architecture?  Recall that architecture
has historically been driven in large degree by a mapping between
institutions and places.  An architect's job, much like the business
executive's, is to make each building into some kind of organic whole.
And since it's mostly architects who tell the history of architecture,
we tend to assume that they have succeeded.  But another way to look
at it is quite different: that buildings, like organizations, are in
practice loose assemblies of standardized elements that are embedded
in different institutional logics.  Now, to some degree this problem
is overcome by the simple expedient of putting different institutions
in different buildings: the hospital in one building, the market in
another, the bank in another, and so on.  And this does mean that
Powell and DiMaggio's point applies to cities, despite the best (usually not
very effective) efforts of urban planners.

But Powell and DiMaggio's point also applies to a great extent even
within individual buildings.  Even though we speak of a hospital (for
example) as "an institution", any given hospital-building will contain
quite a number of diverse activities, each with its own connections to
its counterparts in other hospitals.  The medical records department
is a whole different deal than the neonatal unit.  Why mix all of the
different functions together in one building?  Several reasons, none
of them terribly compelling on its own, together with the lack of any
strong reason not to.  For example, even though the various medical
specialities are institutionally somewhat separate from one another,
medical training emphasizes cross-training doctors in several fields, 
not least so they can postpone their decision as to which field to
specialize in.  The specialties can also share support services that
are not specialized, and the medical fact is that patients (who refuse
to conform to institutional categories of any sort) are often sick in
multiple ways at once.  The hospital building, in short, reflects the
loose articulation of a variety of sub-institutions.

To understand what information technology "does" to the buildings
that support institutions such as hospitals, the right place to start
is with the forces already operating within those institutions.  Thus,
for example, the forces that push the medical system toward a degree
of unbundling, such as specialized oncology centers, are facilitated
by technologies that make medical records more portable, or that allow
paperwork-related activities to be done at more of a distance from the
treating of patients.  The general point, as articulated by Mitchell's
new book, "E-topia", is that information technology loosens some of
the bonds that had formerly kept different activities in nearby places,
thus allowing other bonds to be strengthened, in this case bonds such
as proximity to patient populations or parking facilities.  Of course,
one could not begin to understand the medical system or the uses it
makes of information technology without looking at the dynamics of
institutions like Medicaid and HMO's.  That's the point.

Instead, I want to look at the problem from another angle.  We have a
kind of matrix structure, organizations on one axis and professions on
another.  We have a mapping within each institution between activities
and places.  We have a tendency for the corresponding activities
within different organizations, and different buildings, to be related
to one another.  And we have the tendency of information technology
to amplify all of this.  I want to try out the radical suggestion that
the notion of place changes as a result.  We need a new word: let us
say that a "site" consists of all of those places where a particular
institutionally defined activity customarily takes place.  (So I am
talking about physical places, not Web sites, much less places in
virtual reality.)  All of the hospital reception desks in the world
are a single site, as are all of the gas stations, flower shops,
college physics labs, train stations, or mayors' offices.  The radical
suggestion is that in a wired world every site -- that is to say,
every set of places where analogous activities take place -- becomes
steadily more interconnected.
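As an illustrative sketch only (the class and field names are my own
invention, not standard terminology), the definition of a "site" can be
written down as a tiny data structure: an institutionally defined
activity, plus the set of physical places where it customarily happens.

```python
from dataclasses import dataclass, field

@dataclass
class Site:
    """All of the physical places where one institutionally defined
    activity customarily takes place."""
    activity: str
    places: set = field(default_factory=set)

    def add_place(self, place: str) -> None:
        self.places.add(place)

# The site is defined by the activity, not by any single building.
reception = Site("hospital reception")
reception.add_place("UCLA Medical Center front desk")
reception.add_place("Massachusetts General front desk")
```

On this picture, "wiring a site" means adding interconnections among
the members of one such set of places.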

This is a fancy suggestion, and it should be qualified.  First, the
"interconnection" among the various places that comprise a site will
differ from one institution to the next.  The places might get audio
or video links, or perhaps the connections will all be mediated by
a headquarters or other central location.  Think, for example, about
the site that consists of the world's train stations.  These places
are connected in the sense that they are places for the practice
of a profession, or perhaps several professions.  Whenever you have
a profession, you have knowledge being shuffled back and forth; you
have benchmarking, notes being compared, experiments being watched,
disasters learned from, and so on.  The people who design and run
the stations presumably travel among the stations every chance they
get for these purposes.  The people who sell things to train stations
probably make directories of them, and they want to stay connected
to them for the same reason any business wants (to the extent it's
economically possible) to keep a continual eye on the market.  The
stations are also interconnected for operational purposes; trains
that run late at one station will have consequences for people and
activities in other stations.

All these kinds of interconnection are driven by the incentives of
the institution, and as the technology of interconnection improves,
the same incentives will naturally amplify the interconnections.
Of course, the interconnections won't increase by magic; there will
presumably be politics and consensus-building and early experiments
that fail and late adopters who wait around for standards.  Each of
the experiments in advanced interconnection will run into unexpected
kinds of heterogeneity that nobody had thought about before -- for
example, differences of language, climate, organizational culture,
or legal environment.  Turf wars will presumably shape the trajectory
of the interconnection architecture as well.  "Interconnection" can
mean a million different things, and it will be the fine details of
history that determine what form the interconnection will take.  The
only generalization we can make is that it will happen.

Train stations are obviously an easy case; railway people have been
coordinating their activities for centuries, and were early adopters
of the telegraph and computer.  But similar dynamics apply to a wide
range of sites -- transfer of knowledge, surveillance of the market,
operational coordination, and so on -- and you can easily spell out
similar analyses for any other institutionally defined place you can
imagine.  In some cases the results may not seem compelling, but that
might simply be the result of our own ignorance.  How much knowledge
can there possibly be to transfer between different gas stations or
hospital reception desks?  Probably a lot more than we know.

But I would suggest that the idea is unsatisfactory for a different
reason.  Notice the following analogy between sites and communities of
practice: a site consists of all those places where certain activities
in an institution take place, and a community of practice consists of
all those people who occupy a certain role in an institution.  What's
important about a site is largely that a community of practice lives
there.  Train station managers spend most of the day in their train
stations, but the station managers' community happens in other places
as well, such as the hotels where they have their annual conferences.
The community is analytically related to the site, but it is not stuck
there.  So the question arises: when should we think of places being
interconnected, and when should we think of people being interconnected?
In particular, when should information be delivered to a place, and
when should it be delivered to a person?  The information displayed
on train station arrival/departure boards is meant for individuals who
have specific relations to those trains: intending to depart on them
or meeting someone arriving on them.  Perhaps we should forget about
the arrival/departure board, and instead identify the individuals who
play those specific roles in the institution.  Likewise, the station
managers might want to remain in real-time contact wherever they are,
whether for operational purposes, or to share knowledge, or to gossip.

Okay, then, why *should* information ever be delivered to places?  Why
not deliver all information to personal information devices attached
to individual people?  I can think of several reasons.  Because of
the mapping between institutions and buildings, and between activities
and places, your current location narrows down the information you are
likely to need.  Once you are standing in the train station, putting
the departure information on an overhead board versus your PDA is not
that big a decision.  It's an interface question.  The information may
not be simple, and the overhead board has the advantage of being big.
Perhaps the information will be routinely available in both forms.
Your PDA need only display your own train's departure time, but that
assumes that your PDA knows what train you are on, either because you
have an e-ticket (and the e-ticket system is interoperable with the
appropriate piece of software on the PDA) or because you have entered
the information by hand.  On the other hand, you have reasons to want
to know about your train when you are located in places other than the
train station, for example when you're traveling to the station and
want to know if your train is going to be delayed, or indeed earlier
in the day when information becomes available about track conditions.
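The place-versus-person question can be framed as a small routing
decision.  This is a speculative sketch with invented names; it encodes
only the cases discussed above (the overhead board serves whoever is
standing in the station, and the personal device can filter down to
your own train only if it knows which train is yours):

```python
def delivery_targets(at_station: bool, pda_knows_train: bool) -> list:
    """Where should a train's departure information be delivered?
    A toy model of the place-vs-person cases discussed in the text."""
    targets = []
    if at_station:
        # Delivered to the place: the big overhead board, visible
        # to everyone whose location implies they might need it.
        targets.append("overhead board")
    if pda_knows_train:
        # Delivered to the person: possible anywhere, but only if an
        # e-ticket or hand-entered itinerary identifies the train.
        targets.append("personal device")
    return targets

# En route to the station with an interoperable e-ticket:
away = delivery_targets(at_station=False, pda_knows_train=True)
# Standing in the station with no e-ticket:
present = delivery_targets(at_station=True, pda_knows_train=False)
```

The point of the sketch is that neither channel subsumes the other:
location and identity each license a different kind of delivery.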

Information about the scheduled and expected departure time of your
train, then, is only loosely coupled to a particular place: you will
need that information at the station, but you will also need it along
a whole trajectory from place to place.  Other information, however,
is more strongly rooted in a particular place.  I need to take apart
the starter motor of my car this week, but I won't need information
about how to do that until I have my tools out and the hood open.
Then it would be nice to have an augmented reality scheme to project
the necessary diagrams onto the actual motor.  The Haynes manual with
its step-by-step photographs is a good thing, but the motor will be
covered with grime, partly obstructed by hoses, and generally hard to
reconcile with the photographs in the Haynes manual.  I do need some
information before I get to the car, such as the list of tools that
will be required.  Strictly speaking, I wouldn't need that information
until I'm at the car, except that I don't have a hardware store within
walking distance.  So I'd prefer to have it at some point before the
repairs begin.

The same analysis applies to my (ahem, apprentice) membership in the
community of auto mechanics; if I could have another mechanic looking
electronically over my shoulder while I was working on the car then
that would be great, maybe pointing to things in my augmented-reality
display.  That would be a hybrid: an informational connection between
community members that is predicated on one of us being located in a
certain place.  On the other hand, informational connections among the
cars themselves can be useful.  Some junk yards make a fleet of dead
cars available for cannibalism by mechanics, but it would be nice to
know ahead of time whether the parts you're looking for are available.
A teleoperated Webcam in each car wouldn't be as hard as all that, or
else an inventory of the available parts like they have at Wal-Mart.
And we could have a club: every Saturday at 10am Pacific Time, every
Honda Accord owner who's planning repairs can get on the party line.
Football games and carburetors can be discussed, and anybody who needs
help can ask the others to dial up real-time video of what they're
seeing.  The whole world's fleet of Honda Accords -- model year by
model year -- is thus an excellent example of an interconnected site.

Notice the institution: the amateur auto mechanics' club.  This is
not the sort of institution that's going to evolve a large bureaucracy
or inspire a lot of dues-paying.  The train station managers have had
their professional association for a long time (Al Chandler observes
that it was the railroad occupations who started the modern wave of
professionalization in the late 19th century), and they can mobilize
the organization and resources they need for cooperative projects.
Amateur auto mechanics are another story.  The necessary institution
might grow out of enthusiast car clubs, which have historically been
limited to the really hard-core hobbyists, or out of the Auto Club.
But if the component technologies are all standardized and universally
deployed, then the Saturday-morning amateur mechanics' club could be
a much more spontaneous affair, much like the online discussion groups
that are made tolerably easy by AOL or egroups.  Institution-building
is thus related to standards dynamics: the more standards you have,
the less of a special-purpose stovepipe the institution has to build
and keep alive.  But this also means that site-specific interconnection
technologies can't be as site-specific as all that.  The analysis will
need to be done separately for every site, given the specific activities,
kinds of information, and community dynamics that are happening there.

This kind of analytical framework has many advantages, and one of them
is heuristic.  Start with a list of institutions (research university,
medical system, kids' sports leagues, investment banking, labor unions,
treaty organizations, emergency dispatch services, contract law, music
industry, technical standards), then spell out the various roles that
people play within them (student, nurse, coach, borrower, shop steward,
office manager, paramedic, appellate judge, A&R, committee chair, etc),
list the various activities that people in those roles engage in, list
the places where they customarily engage in them, identify the genres
of documents and interactions that they use to exchange information,
investigate the cognitive workings of the communities of practice that
go with each role (the world's students, nurses, coaches, borrowers,
and so on), watch how they fulfill the various generic purposes that
community members can have for interacting with one another (transfer
of knowledge, surveillance of the market, operational coordination),
and pretty soon you will have a long list of quite specific proposals
for new systems and services that wire the communities and sites more
thoroughly, together with plausible explanations of why these proposals
ought to work in practice, including a general indication of the kinds
of standards that each of them will require before they are practical.
If you spell out a hundred such proposals, then one of them might work.
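The heuristic can be run quite mechanically as a nested enumeration.
The sketch below crosses a few institutions, roles, and generic
purposes to generate candidate proposals; the entries are just samples
drawn from the lists in the paragraph above, and the output strings are
of course only prompts for design work, not designs.

```python
from itertools import product

# A few institution/role pairs and generic purposes from the text.
institution_roles = {
    "the research university": "student",
    "the medical system": "nurse",
    "kids' sports leagues": "coach",
}
purposes = [
    "transfer of knowledge",
    "surveillance of the market",
    "operational coordination",
]

# Cross the lists to generate candidate proposals.
proposals = [
    f"a service helping the world's {role}s in {inst} with {purpose}"
    for (inst, role), purpose in product(institution_roles.items(), purposes)
]

# 3 institution/role pairs x 3 purposes = 9 candidate proposals.
```

A fuller version would also enumerate activities, places, and document
genres, multiplying the list accordingly.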


Gosh darn, I haven't seen *one* conference assessing the predictions
for the year 2000.  Not one!  What do you suppose happened?  Maybe not
enough advance notice?

Well, we can't let the year 2000 go past without considering the most
important of all the year-2000 prediction projects:

  Herman Kahn and Anthony J. Wiener, The Year 2000: A Framework for
  Speculation on the Next Thirty-Three Years, Macmillan, 1967.

As with the issue of The Futurist that I described back in July, the
Kahn/Wiener book was very much part of the state-centered right-wing
culture of the Cold War.  As such it seems impossibly anachronistic
today.  It is a huge and ambitious book, and assessing it as a whole
would be an equally ambitious project.  Suffice it to say that among
all of their numerous scenarios they couldn't even imagine the fall
of the Soviet Union.  They've got holographic television (page 104)
and all of the other yesterday's tomorrows as well.  I just want to
focus on a small part of their book, the discussion of information
technology over a couple of small subsections totalling ten pages.
This discussion is fascinating, obviously, for the window it provides
onto a very different time.  But more important is the window it
provides onto our own time.  We've gotten beyond some of the more
superficial misunderstandings that plagued that long-gone era, but
I want to argue that we are still plagued by some deeper assumptions
that the technocratic fundamentalism of the Cold War took for granted.

Kahn and Wiener first ask how powerful computers are likely to be
in 2000, and they extrapolate from the improvements in the technology
to date.  They clearly understand that the technology was improving
exponentially, and they ask about the rate of exponential growth
and how long it will last.  (We think of these as modern assumptions,
but they are already clearly present here in 1967, though the precise
formula of Moore's Law comes later.)  Here are some quotes:

  If one uses as a standard of measurement the size of the memory
  space divided by the basic "add time" of the computer ... then over
  the past fifteen years this basic criterion of computer performance
  has increased by a factor of ten every two or three years (this is a
  conservative estimate) (page 88).

I want to remark in passing on their formulation of the exponential
growth law.  People without mathematical training will be puzzled by
the quotient.  What is memory space divided by add time supposed to
measure?  But anybody who works with math every day will understand
what they are doing.  They are really using two separate performance
measures, increase of memory density and decrease of the time it takes
to execute an operation.  To combine these measures, they take the
inverse of the "add time", in other words number of math operations
per second -- this is the thing we measure in "flops".  They believe
that both measures will increase exponentially, and so it makes
sense to multiply them.  By formulating their measure in this way,
they assumed that their reader was part of the Cold War technocratic
culture centered on natural science and engineering.  No culture of
information technology had arisen yet as distinct from that larger
world -- computer science uses discrete math, formal language and
automata theory, and that sort of thing -- so those natural-science
mathematical ways of thinking were taken for granted.
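Their quotient can be made concrete with a toy calculation.  The
machine specifications below are invented for illustration, not
figures from the book:

```python
# Kahn and Wiener's composite measure: memory space divided by the
# basic "add time", which equals memory size times additions per
# second.  Both machine specs here are hypothetical placeholders.

def kahn_wiener_metric(memory_words: float, add_time_seconds: float) -> float:
    """Memory space divided by the basic add time."""
    return memory_words / add_time_seconds

# A hypothetical early-1950s machine: 1K words, 1 ms add time.
early = kahn_wiener_metric(1_000, 1e-3)
# A hypothetical mid-1960s machine: 64K words, 2 microsecond add time.
later = kahn_wiener_metric(64_000, 2e-6)

# The ratio over that span is roughly 32,000 -- consistent with
# "a factor of ten every two or three years".
growth = later / early
```

Both inputs improve exponentially, so their product does too, which is
why it makes sense to track the two measures as a single quotient.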

In any case, here we have the authors' clearest statement of their
predictions for the increasing power of computers:

  If computer capacities were to continue to increase by a factor
  of ten every two or three years until the end of the century ...
  then all current concepts about computer limitations will have
  to be reconsidered.  If we add [advances in other areas] these
  estimates of improvement may be wildly conservative.  And even if
  the rate of change slows down by several factors, there would still
  be room in the next thirty-three years for an overall improvement
  of some five to ten orders of magnitude (page 89).

This is pretty good, considering the information they had available.
They were not right about the factor of ten every two or three years.
It was more like every five years, which in exponential terms across
thirty-three years makes a big difference.  But their low-end estimate
of five to ten orders of magnitude by the year 2000 was correct.  They
also had a decent sense of the factors that would affect the rate of
technological improvement, but they had a poor understanding of which
factors would end up making a difference.  Like most people at that
time, they greatly overemphasized the kind of data-parallelism used on
the ILLIAC, an error that I attribute to the kind of systems thinking
that assumes that all problems from aircraft design to urban planning
can be reduced to matrix manipulation.  They were also overconfident
to a humorous degree about "large-scale improvement in the present
'soft-ware crisis' in programming language" (page 88).  Improvements
in precisely defined hardware functionality measures are a whole
different problem, it turns out, than improvements in semantics,
management, and politics.  The "soft-ware crisis" is still very much
with us, except that now it's an everyday nuisance for everyone.
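The arithmetic behind that difference is quick to check: at a factor
of ten every n years, 33 years gives 33/n orders of magnitude.

```python
# Orders of magnitude of improvement over the 33 years from 1967
# to 2000, assuming a factor of ten every `period_years` years.

def orders_of_magnitude(years: float, period_years: float) -> float:
    return years / period_years

optimistic = orders_of_magnitude(33, 2)   # 16.5 orders of magnitude
middling = orders_of_magnitude(33, 3)     # 11 orders of magnitude
actual = orders_of_magnitude(33, 5)       # 6.6 orders of magnitude
```

A factor of ten every five years yields about 10^6.6 overall, which
lands squarely in their hedged low-end range of five to ten orders of
magnitude.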

Having issued their quantitative predictions about the improvements in
basic computer speed, they launch without a paragraph break into this:

  Therefore, it is necessary to be skeptical of any sweeping but often
  meaningless or nonrigorous statements such as 'a computer is limited
  by the designer -- it cannot create anything he does not put in',
  or that 'a computer cannot be truly creative or original'.  By the
  year 2000, computers are likely to match, simulate, or surpass some
  of man's most 'human-like' intellectual abilities, including perhaps
  some of his aesthetic and creative capacities, in addition to having
  some new kinds of capabilities that human beings do not have.  These
  computer capacities are not certain; however, it is an open question
  what inherent limitations computers have.  If it turns out that
  they cannot duplicate or exceed certain characteristically human
  capabilities, that will be one of the most important discoveries of
  the twentieth century (page 89).

Needless to say, none of this happened -- not even the part about it
being one of the most important discoveries of the twentieth century
that it didn't.  We didn't so much disprove these predictions as get
over them.  But here is where the bad assumptions still hang around.
Implicit in this paragraph is the idea that intelligence is simply a
matter of processing capacity: the computer is the same sort of thing
as the mind, and if you just turn up the knob sufficiently then it
becomes as smart as we are.  Of course, the argument is a little bit
more sophisticated than this: the mind is an information processing
device, and so once we figure out how the mind works, all we have to
do is wait for the hardware to get fast enough, program a simulation
of the mental software, and presto.

But with all respect to the smart people who have spent their lives on
this problem, it hasn't worked out that way.  Our computers may have
been improving exponentially, but our knowledge of the mind has not.
Indeed a major discovery of AI, openly admitted by people like Marvin
Minsky (who, don't get me wrong, is a genius), is that even though
we know a little bit about how to reproduce a few of the advanced
intellectual kinds of thinking, the ones that can be formalized
and constrained in a very tight way, we haven't the slightest idea
how people do things that six-year-olds can do.  Neurophysiologists
also know some things, and they are now publishing impressive books
(e.g., Antonio Damasio, The Feeling of What Happens: Body and Emotion
in the Making of Consciousness, Harcourt Brace, 1999).  But we have
an awfully long way to go.  Another problem is that we have learned
just how deeply human intelligence is embedded in the social world
and in embodied activity; it's not just a matter of constructing a
disembodied software mind.  Yet the emphasis on disembodiment was one
of the defining features of Cold War culture.  Those people lived in
their heads.

On the other hand, they also say this:

  In addition to this possibility of independent intelligent
  activities, computers are being used increasingly as a helpful
  flexible tool for more or less individual needs -- at times in
  such close cooperation that one can speak in terms of a man-machine
  symbiosis.  Eventually there will probably be computer consoles
  in every home, perhaps linked to public utility computers and
  permitting each user his private file space in a central computer,
  for uses such as consulting the Library of Congress, keeping
  individual records, preparing income tax returns from these records,
  obtaining consumer information, etc.

They got this from J. C. R. Licklider, whose generally very accurate
predictions I have already quoted.  They clearly don't foresee the
rise of personal computers, but then I'm not so sure that the rise of
personal computers was such a good thing.  If you read the paragraph
again, you can see them predicting the rise of so-called application
service providers that do most of the heavy storage and processing on
Web sites, and not on your personal computer.  Despite some premature
hype, ASP's make all kinds of sense for small business computing if
not other kinds, and I expect we'll see more of it.

But then you have this:

  Computers will also presumably be used as teaching aids, with one
  computer giving simultaneous individual instruction to hundreds
  of students, each at his own console and topic, at any level from
  elementary to graduate school; eventually the system will probably
  be designed to maximize the individuality of the learning process
  (page 90).

As with Licklider's predictions, we see here the tendency to combine
accurate predictions involving ubiquitous information services with
false predictions about artificial intelligence.  People in these
authors' culture simply assumed that computers would become more
intelligent in some human sense, and that the future functionality of
computers would accordingly map onto traditional human tasks.  This
hasn't happened.  If we factor out the AI aspects of this prediction,
we get the universal online university, which has been predicted in
pretty much its currently hyped form for almost forty years.  And we
see here the characteristic shortcomings of these predictions: the
lack of emphasis on education as socialization into a professional
culture, the neglect of topics such as dance and experimental science
that have to be learned in person, the desire to automate teachers
completely rather than providing teachers with advanced tools, the
individualistic picture of the learning process, and so on.  On the
other hand, this prediction has certainly come true in some specific
niches, and there are real corporate training systems today that fit
the authors' description, provided that we are willing to describe
those systems in anthropomorphic terms as "providing instruction".

Their next topic is the impact of automation on manufacturing, and
especially on employment.  They observe that the enthusiasts' hype on
the subject appears to have no basis in fact, but they also provide
a lame argument about the persistence of unskilled jobs.  The reality
here is that neither the enthusiasts' nor the skeptics' predictions
have come true.  Automation has not produced massive unemployment,
and neither has it produced a paradise of leisure.  Unskilled jobs
really are disappearing, both because of automation and because of
the technologies and political conditions that allow them to be moved
offshore.  The United States today experiences both labor shortages
and unemployment, but that's largely a failure of the educational
system and not a problem with technology.  Assessing the hype about
automation, however, the authors have something interesting to say:

  This seems to be one of those quite common situations in which
  early in the innovation period many exaggerated claims are made,
  then there is disillusionment and a swing to overconservative
  prediction and a general pessimism and skepticism, then finally
  when a reasonable degree of development has been obtained and
  a learning period navigated, many -- if not all -- of the early
  "ridiculous" exaggerations and expectations are greatly exceeded.
  It is particularly clear that if, as suggested in the last section,
  computers improve by five, ten, or more orders of magnitude over the
  next thirty-three years this is almost certain to happen (page 93).

The rhetorical trick in this passage may not be clear until you read
it a few times.  They begin by staking out a stand of common sense,
setting themselves against both the extremists of exaggerated claims
and the extremists of exaggerated skepticism.  They place themselves
outside of this fray by claiming to identify a recurring historical
pattern of alternation between the extreme views.  But then, somehow,
by the end of the sentence -- all of this action happens in a single
sentence -- they have repositioned themselves as being more extreme
than the extreme optimists.  These neo-optimists regard even the most
ridiculous predictions as nigh-unto-inevitable even under their most
conservative estimates of improved computing.  Clearly they are
talking through their hats here, because they had enough information
to realize that their most ridiculous claims would not come true if
computers improved by only five orders of magnitude.

Moving along, the authors proceed to date themselves something awful
with this passage:

  Automation and cybernation may be extended to the home also.  The
  idea of moderately priced robots doing most of the housework by
  the year 2000 may be difficult to accept at first.  According to
  an enthusiast, Meredith Wooldridge Thring, professor of mechanical
  engineering at Queen Mary College in London, "within 10 to 20
  years' time we could have a robot that will completely eliminate
  all routine operations around the house and remove the drudgery
  from human life".  [long quote from Thring] His description of robot
  household capabilities and widespread use seems most reasonable by
  the year 2000.  We shall have to ponder the *double* impact within
  the next thirty-three years of widespread industrial and household
  automation (page 94).

This hasn't happened either.  The problem here is slightly different.
Industrial and household automation, it has turned out, are different
problems.  Lacking common sense, automation requires a controlled
environment, and that's what a factory is.  You have probably seen
videotapes of anthropomorphic robot "arms" doing factory work; what
you don't see is the elaborate system of jigs and fixtures that hold
the parts in known locations to enable those arms to do their work,
or the fancy special-purpose machines that orient the parts correctly.
You don't see the structured light that is required for almost all
industrial vision systems.  And you don't see the constraints that
automated manufacturing has placed on the design of the products
themselves.  All of that is missing in household environments: you
have unpredictable children and guests, you have unstandardized stuff
lying around, you have animals, you have soft things like carpets and
pillows whose movements are hard to model mathematically and that do
not lend themselves to interacting with robots whose sensors are very
crude.  You don't get to train the people, and you have a different
liability regime for potential accidents.  In short, the household is
the worst possible environment for autonomous computer-driven machinery.

This is why it matters so much that the authors' casual predictions
for AI didn't come true: AI isn't just an extra set of functionalities
or a philosophical discussion-topic.  "Intelligence" is the difference
between computers that can function in uncontrolled environments and
computers that cannot.  And blurring the element of control in their
plans for the world is precisely the ideology of systems rationalism.
The need for control, moreover, is not just a matter of the physical
environment.  It's also about data.  If a computer is going to add
two numbers and get a meaningful result, then those numbers need to be
commensurable.  They need to have a uniform semantics.  And it's not
just two numbers, of course, but the thousands, millions, or trillions
of numbers that high-end computers require.  If the results of that
processing are going to be meaningful, then the world needs to be
made out of uniform sorts of things, and it has to be represented and
measured in uniform ways.  This condition is satisfied in some natural
sciences contexts, for example in space.  And in fact AI systems work
great in space, where the environment is just as uniform as it could
possibly be.  But when the meanings represented in the computer come
from the human world, then you start to have a problem.  The problem
is obvious enough when computers are supposed to participate in human 
conversations, but it is also serious when the scientific community
brings together numbers that scientists in different fields gather
in different parts of the world according to different practices and
under different assumptions; see Geof Bowker's paper "Biodiversity
Datadiversity".
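
The commensurability point can be made concrete with a small sketch.
The following toy Python class (my illustration, not anything from
Kahn and Wiener or Bowker) tags each number with a unit standing in
for its semantics, and refuses to add numbers whose semantics differ:

```python
# A minimal sketch of commensurability: two numbers can only be added
# meaningfully if they share a semantics, represented here, crudely,
# as a unit tag.

class Quantity:
    def __init__(self, value, unit):
        self.value = value
        self.unit = unit

    def __add__(self, other):
        if self.unit != other.unit:
            # Incommensurable data: the sum would be meaningless.
            raise ValueError("cannot add %s to %s" % (self.unit, other.unit))
        return Quantity(self.value + other.value, self.unit)

# Commensurable: the sum means something.
total = Quantity(3, "degrees C") + Quantity(4, "degrees C")

# Incommensurable: one reading in Celsius, one in Fahrenheit.
try:
    Quantity(3, "degrees C") + Quantity(40, "degrees F")
except ValueError as e:
    print(e)   # the mismatch is caught before a nonsense sum is produced
```

All of the data-world machinery described below -- data models,
standardized measurement, administrative procedures -- exists to
guarantee, at scale, what this toy unit tag guarantees for two numbers.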

So let's reconceptualize the history of computing.  Maybe factors
such as the size of the memory and the power of the processor, while
important, are secondary.  Maybe the crucial factor in the growth of
computing is the construction of niches in the real world where large
amounts of commensurable data can be found.  That means data models
to make explicit the semantics of the data; it means standardized
procedures for identifying, classifying, and measuring the things in
the world that the data represents; it means technologies for moving
all of the results of those identifications, classifications, and
measurements into one place; it also means standardizing the things
themselves so that they can be described in consistent terms despite
the potentially great differences in their contexts; and it means
administrative procedures to manage all those standards.  Maybe,
in short, 90% of the history of computing lies outside of computers,
in what we might call the data-world.  The data-world is not simply
the world as we happen to stumble across it.  Quite the contrary, it
is a highly administered place -- like the controlled environment of
the factory, except that the methods of control are adapted to the
needs of a particular context, a particular profession and culture,
particular institutions, and particular goals.  The great unwritten
history of computing is the construction of the data-world, and yet
for Kahn and Wiener, that whole history does not even exist.  Enabling
them to ignore that history is the function that AI plays in their
predictions.  Now I don't imagine that they covered up 90% of the
problem of computing on purpose; in these ten pages about computing
in their 600+ page book they were simply recording the conventional
wisdom of their places and times.

The issue becomes significant right away in their discussion of the
future of information storage and retrieval:

  The problems of putting something like the Library of Congress
  conveniently at the fingertips of any user anywhere are dependent
  on our understanding and simulation of the ways in which people make
  associations and value judgements.  ... Humans must identify those
  records that deal with a common area of interest, and must analyze
  each record or document to decide what specific topics it covers,
  and the other areas with which it should be associated.  ... It is
  impossible to predict how far along we will be by the year 2000 in
  simulating in computers analytical abilities that require decades and
  vast amounts of experience for humans to acquire, but thirty-three
  years of continuing work should be enough to surpass any current
  expectations that have been seriously and explicitly formulated
  (page 95).

They were certainly correct to anticipate that the society of 2000
would use networked computing to retrieve large amounts of information
from library-like databases.  They got this from Licklider as well.
Note that they assume that people are getting their online information
from the Library of Congress.  This reflects their government-centric
Cold War vision, but it also reflects their neglect of (in Bowker's
term) datadiversity: the diverse ways in which data, including natural
sciences data of course, but also documents, are gathered and stored.
They are aware that the cataloguing of documents requires human effort,
but they are confident that increasing computer power will automate
this effort to a degree that is not even worth trying to imagine.

In this case as well, sheer increases in computing power have turned
out to be an orthogonal issue from the diversity of data.  Documents
are diverse.  They are generated by diverse institutions by authors
with diverse worldviews.  They employ many genres, and document genres
turn out to be very different sorts of things than data types.  For
example, genres are made to be flouted, and even the most genre-bound
detective novel will be found to get much of its effect by playing
off against the expectations that the detective-novel genre creates,
as well as taking up complex stances such as irony toward other genres.
Because of the datadiversity of documents, automation has not played
an important role in the cataloguing of documents, except in the
sense that the catalog itself and the editing tools that people use to
create and maintain the catalog are all computerized.  The cataloguing 
itself is done by people.  Now, some authorities (such as Bill Arms)
maintain that cataloguing is history because ultrapowerful computers
can search and sort huge collections of documents in brute force ways
based on crude textual searches (like Web search engines) and advanced
document comparison schemes (find me the documents that are most like
this one, summarize this document, etc).  Whether that's true or not,
it's not what Kahn and Wiener are predicting.
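
For what it's worth, the brute-force document comparison that Arms
has in mind is easy to sketch.  The following toy Python code (my
illustration; real search engines are vastly more elaborate) treats
each document as a bag of words and ranks a small collection by cosine
similarity to a query:

```python
# A rough sketch of brute-force document comparison: represent each
# document as a bag of words and rank documents by cosine similarity
# to a query.  No cataloguing, no human judgment, just arithmetic.

import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

docs = {
    "catalog": "library catalog records organized by human cataloguers",
    "search":  "brute force search over huge collections of documents",
    "recipe":  "whisk the eggs and fold in the flour",
}
query = "search huge document collections by brute force"
ranked = sorted(docs, key=lambda k: cosine_similarity(query, docs[k]),
                reverse=True)
print(ranked[0])   # "search" ranks highest
```

Notice what the sketch does not do: it knows nothing about genre,
irony, or institutional context, which is exactly the datadiversity
that human cataloguers deal in.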

One element of their prediction was basically true:

  Within about fifteen years, data-transmission costs are likely to be
  reduced so much that information storage and retrieval will become a
  runaway success (page 96).

Useful information storage and retrieval systems did exist by 1980,
although I'm not sure that the "runaway success" part was unambiguously
true until 1995.  In the prediction business, though, being off by a
factor of two is not so bad.

Cold War assumptions of centralized data are also on display here:

  The current proposal for a National Data Center being debated in
  Congress, in which the records of some twenty or more government
  organizations will be amalgamated in one place is an example
  of things to come.  [Concerns have been raised about abuse]
  but almost all agree that the system or one like it will soon be
  operating (page 96). 

This proposal, of course, was the object of a major civil liberties
fight, one that established the precedent that the federal government
would not centralize its dossiers about individual citizens.  The
pressure toward such centralization always exists, of course, and
the Census Bureau negotiates it every day.  But the authors' bland
note of inevitability did not match the history.  It should be said
that civil liberties concerns were not the only reason: amalgamating
all data into a centralized database is easily said, but because of
datadiversity it is not easily done.

To really remind us of what was going on in 1967, and following on
from the social concerns that have begun to creep into their text,
the authors proceed as follows:

  ... a future President of the United States might easily have
  command and control systems that involve having many television
  cameras in a future "Vietnam" or domestic trouble spot.  Since he
  would be likely to have multiple screens, he would be able to scan
  many TV cameras simultaneously.  In order to save space, some of
  the screens would be quite small and would be used only for gross
  surveillance, while others might be very large, in order to allow
  examination of detail.  Paul Nitze, the present Secretary of the
  Navy, has made the suggestion that similar capabilities might be
  made available to the public media or even the public directly.
  This would certainly be a mixed blessing.  Obviously such
  capabilities can give a misleading psychological impression of a
  greater awareness, knowledge, and sensitivity to local conditions
  and issues than really exists.  This in turn could lead to an
  excessive degree of central control.  This last may be an important
  problem for executives generally (page 97).

The peculiar focus on the President, and the equally peculiar absence
of the whole bureaucratic system that provides a command structure
with its information, is vintage Cold War thinking.  As they observe
themselves, remote video cameras provide the illusion of knowledge --
at least knowledge about complex social phenomena -- rather than the
real thing.  Yet this image of walls of video screens used to monitor
a social or political crisis has become a lasting part of the culture.
The film "Enemy of the State" portrays the NSA (whose real appearance
is, of course, unknown to anyone except its employees) as a cavernous
room that is dominated by video screens whose displays of mayhem the
film never needs to explain.  What is more, the idea that the network
of video cameras would be made available to everyone is also found
in David Brin's "Transparent Society" book, among other places.  The
authors claim to understand the illusions of immediate representation,
and yet they completely fall prey to them as well.  They are aware of
the inherently problematic nature of data, but they do not understand
the problem well enough to avoid reproducing it.

The civil liberties concerns grow apace with this:

  There are also important possibilities in the field of law
  enforcement.  New York State has already tried an experiment in
  which the police read the license plates of cars going over a bridge
  into Manhattan and had a computer check these licenses against its
  files of scofflaws.  The police were able to arrest a number of
  surprised drivers before they got to the other side of the bridge.
  (Some of the drivers seemed to feel that this was like shooting
  quail on the ground.)

  Such systems could be made completely automatic.  Indeed it would be
  no trick at all, if the license plates were written in some suitable
  computer alphabet, to have them read by a television camera that
  was mounted on an automatic scanner.  We can almost assume that toll
  booths or other convenient spots will be so equipped at some later
  date.  It would then not be difficult to place these records of
  automobile movements in a computer memory or permanent file as an
  aid to traffic-flow planning, crime detection, or other investigations
  (as is done with taxicab trip reports in many cities today).  One
  can even imagine fairly large-scale records of all license plates
  (or other identification) passing various checkpoints being kept for
  many streets and routes (page 97).

This scenario, too, has become part of the culture.  In working on
the privacy issues associated with highway toll collection, I encounter
these scenarios all the time, both in the planning documents of the
relevant authorities and in the folklore that the technology brings to
people's consciousness.  It is both useful and scary to realize just
how little the scenarios have changed in 33 years.

A final commonplace scenario in Kahn and Wiener's book, itself deeply
part of the culture, is the automatic electronic monitoring of all
telephone conversations for key words.  The discourse of "terrorism"
did not exist at that time, and so their list of sample words is
instructively dated: "bet", "horserace", "subvert", "revolution",
"kill", "infiltrate", "Black Power", "organize", and "oppose" (pages
97-98).  Being quite aware of the civil liberties concern, but being
unable to imagine the technology producing those concerns as anything
but inevitable, they observe that "[n]ew legal doctrines will need
to be developed to regulate these new possibilities" (pages 97-98).
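
The keyword-scanning scheme itself is trivial to sketch, which is
part of what made it so easy to imagine in 1967.  A toy Python version
(my illustration, using their own sample words) flags transcripts
against a watch list:

```python
# A toy version of the keyword-flagging scheme Kahn and Wiener imagine:
# scan a transcript for watch-list words and report any matches.

WATCH_LIST = {"bet", "horserace", "subvert", "revolution",
              "kill", "infiltrate", "organize", "oppose"}

def flag(transcript):
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return sorted(words & WATCH_LIST)

print(flag("We should organize a protest and oppose the measure."))
```

A phrase like "Black Power" already defeats this word-at-a-time
matcher, of course, and the scheme has no notion of context at all --
the same blindness to meaning that runs through the rest of their
predictions.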

So there we have it: the definitive 1967 imagination of computing in
the year 2000.  We have the faster processors and bigger memories that
the experts predicted, and we have more or less the access to large
collections of online documents that they predicted as well.  But we
do not have any of the functionalities that require computers to be
the slightest bit intelligent.  And the problem of intelligence turns
out, on examination, to be a stand-in for something deeper.  We still
live in a world that is out of control: that is inherently diverse
and whose order is inherently local.  The authoritarian tendencies of
the Cold War technocratic mindset are certainly still with us, as the
inherent logic of the conventional practice of computing continually
encourages one organization or another to institute a world of total
surveillance and control.  Yet that world has not come.  The reason
for that postponement is partially political -- we don't believe in
the centralized world of the Cold War.  But the major reason has to
do with the world itself: that's just not how reality is.  Yet still
we have decisions to make, as ten thousand forces push new kinds
of standardization on us.  We can't wish those forces away by the
intransigence of our politics.  We have to reinvent the technology
that abets them, and the imagination that makes them seem inevitable.


Back in July I observed that Texas executes more people than any other
state, and I said that as far as I knew, Texas led the world.  In fact
the worldwide death penalty record is held, and by a large margin, by
China.  Don't tell them in Texas.


One indignant individual denied that the Supreme Court's decision
in Alden vs. Maine represented a significant change in citizens'
abilities to sue their state governments.  For any who might be
in doubt on the matter, here are a couple of explanations:

Erwin Chemerinsky, High court wrongly lets states off hook, Los
Angeles Times, 25 June 1999, page B9.

Edward P. Lazarus, Court plays the "states' rights" card, Los Angeles
Times, 27 June 1999, page M1.


I complained the other day that the public schools in LA only have
students who speak 140 different languages.  I have been assured,
however, that that number simply represents the limits of the school
system's ability to differentiate among languages.  I am also assured
that the students who speak Spanish and Chinese also speak plenty of
other first languages.  I am relieved.


I also misspelled the names of Tom Daschle and Slobodan Milosevic.
The first was brain fog; the second fast typing.


Some URL's.


Absentee Ballot Fraud in Six Florida Counties

US Election Systems Subject to De Facto Regulatory Cartel

Right-Wing Coup that Shames America

Now It's Unofficial: Gore Did Win Florida

A Different Florida Vote in Hindsight

Errors Plagued Election Night Polling Service


International Conference on Supporting Group Work, Boulder, October 2001

European Conference on Computer Supported Cooperative Work, Bonn, Sept 2001

Libraries in the Digital Age, Dubrovnik, 23-27 May 2001


Scandinavian Design: On Participation and Skill

Urban Design Quarterly

Scandinavian Journal of Information Systems

User Participation and Democracy

Design, Deliberation, and Democracy


Malicious Code Moves to Mobile Devices

highly useful wireless system for real-time bus arrival information

Quantum Computation with Calcium Ions in a Linear Paul Trap

The Physics of Quantum Information

Secret Project May Be Microsoft's Next Big Deal

everything else

cool composite image of earth lights from space

War of Resistance Rises in Cuba

Learning from Social Informatics

Ole Hanseth's publications on corporate information infrastructure

Boxmind: The Online Library

papers from Constitutional Design 2000

Canada Strengthens Internet Privacy

Understanding AOL's Grand Unified Theory of the Media Cosmos

Privacy Heats Up but Doesn't Boil Over

Today's Political Ideals in Historical Perspective

Global Film School

Virginia Beach Sends Microsoft a Check for Software Use