--------------------------------------------------------------------
T H E N E T W O R K O B S E R V E R
VOLUME 3, NUMBER 7 JULY 1996
--------------------------------------------------------------------
"You have to organize, organize, organize, and build and
build, and train and train, so that there is a permanent,
vibrant structure of which people can be part."
-- Ralph Reed, Christian Coalition
--------------------------------------------------------------------
This month: Responding to arguments against privacy
--------------------------------------------------------------------
Welcome to TNO 3(7).
This issue of TNO was delayed as I finished my book. The book,
whose title is "Computation and Human Experience" (Cambridge
University Press), should be out in late spring 1997, and I'll
send a notice when it appears. Meanwhile, this issue of TNO is
devoted to another collection, hopefully the last, of arguments
against privacy protection. The previous collections appeared in
TNO 1(10) and TNO 3(2), and I have also rebutted single arguments
against privacy in TNO 1(6) and TNO 2(8). I'm hoping to collect
all of these arguments and rebuttals into a single document in
both paper and Web form, and I'll send out a separate notice
when that's ready. If I'm counting correctly, I've responded to
55 arguments in total, two thirds of them in this issue of TNO.
Perhaps new arguments will spring up once these arguments lose
their effectiveness. In that case, I guess we'll just have to
bite the bullet and organize another collection of rebuttals.
--------------------------------------------------------------------
Even more bad privacy arguments.
This is the third and (I hope) last installment in my collection
of bad arguments against privacy. I have been collecting these
arguments and writing rebuttals against them for two reasons.
The first, obvious reason is that I want to help people who are
working to protect privacy. The second, not-so-obvious reason
derives from my personal theory of political debate in the public
sphere. The standard theories of the public sphere are basically
individualistic; they presuppose that people have opinions and
arguments, and they ask about the conditions under which those
opinions and arguments get included or excluded in the decision-
making processes of society. But these theories do not correspond
to my experience, which is that every issue movement -- whether
organized by an industry association, an activist organization, a
social movement, a political party, or whatever -- needs channels
through which it can distribute arguments to its supporters.
An issue movement is effective only when these channels exist
and operate effectively, reaching the movement's supporters
in a timely way with arguments that are likely to sway the people
that the movement needs in its coalition.
This fact is usually suppressed because everybody has an interest
in maintaining the individualistic myth. Everybody will say
something like: "our opponents certainly distribute propaganda
to their sheep-like supporters, but *our* supporters think
perfectly well for themselves and don't need anyone feeding
them a predigested party line". Both halves of this opposition,
however, are nonsense. Of course people think for themselves.
The problem, rather, is cognitive: nobody in the world has
infinite rhetorical capacities. Just because someone supports a
given position (protecting the environment, increasing the status
of women in society, returning to traditional moral values, etc.),
it doesn't follow that they can spontaneously invent suitable
counterarguments to all of the arguments of their opponents.
Everyone, therefore, needs a source of arguments that support
their position. When an argument is offered, individuals can
determine whether they agree with it or not, and they can just
ignore the arguments that don't make sense. Having filed away
the arguments that *do* make sense, they no longer have to be
caught flat-footed when one of their opponents says something
that *sounds* false but whose underlying fallacy is not obvious
right away.
This collection of responses to bad arguments against privacy,
then, exemplifies a method of using the Internet to support
democratic values. This method has two parts: first, broadcast
a call to the supporters of a position asking for arguments that
they have encountered against that position; then, assemble those
arguments and suitable responses and broadcast those as well.
Lobbyists and other professional advocates have always done this
sort of thing. With the Internet, everyone can do it.
I would like to express sincere thanks to the numerous Internet
users who submitted arguments for this project. I have used
almost all of them, collapsing some variants and omitting a
few that seemed too weak to bother with. Some of the arguments
originated in public discussions that I have conducted with
others on privacy issues, for example in the question periods of
talks, and if anybody recognizes themselves in this list, I hope
they aren't offended. My point is not that the people who offer
these arguments are doing so in bad faith, much less that they
would agree with any of the *other* arguments. I do disagree
with these arguments, but in most cases I respect and learn from
the people who make them.
Also, note that the arguments that I've put in quotation marks
are usually composites and paraphrases, not direct quotations.
Someone might suggest that other phrasings of these arguments
would be stronger than the ones I've provided. That is indeed
possible. If anyone has actual examples in mind, they are most
welcome to tell me about them.
Here, then, are the arguments and responses:
* "If you have nothing to hide then you should have no concern
for your privacy."
The word "hide" presupposes that nobody can have a good motive
for wishing to protect information about their lives. This is
obviously false. People have a legitimate interest in avoiding
disclosure of a wide variety of personal circumstances that
are none of anyone's business. If you are raped, would you want
the full details published in the newspapers the next day? I
don't think so. People also have a broader (though obviously
not unbounded) interest in regulating how they are represented
in public. If someone doesn't like you, they will dredge up
all sorts of facts and portray them in a bad light. If only for
this reason, it is reasonable to avoid giving everyone unlimited
access to your life.
* "Privacy advocates oppose national ID cards, but they don't
talk about the benefits of such cards. In a country with a
foolproof, unambiguous means of determining identity, nobody
has to suffer cases of mistaken identity, for example from
arrest warrants issued for people with similar names and dates
of birth."
When someone points to a benefit of a privacy-invasive policy,
the first question to ask is whether other policies could provide
the same benefit. Cases of mistaken identity, including identity
theft, could be greatly reduced by cleaning up the information-
handling practices of a remarkably small number of organizations,
particularly departments of motor vehicles. National ID cards
carry definite risks, not the least of which is the proliferation
of privacy-invasive technologies that will occur as it becomes
easier to identify and track individuals in a wide variety of
contexts.
* "Privacy must be balanced with many other considerations."
Privacy rights are not absolute. In particular concrete cases,
privacy interests will naturally be found to conflict with
other interests. For example, I personally have no problem
with compulsive sex criminals being sentenced to long-term use
of electronic tracking mechanisms. But many arguments against
privacy are bad because they weigh down the scales, exaggerating
the case against privacy rights, and we should be alert for these
arguments.
* "Attempts to prevent companies from distributing personal
information are censorship, and that sort of censorship is no
more likely to succeed than any other sort in the new world
of the Internet."
If laws regulating the sale of personal information constitute
censorship, then so do copyright, trademark, and trade secret
laws, to name a few. Selling someone's personal information
without their permission is stealing, and decent societies still
try their best to outlaw stealing, even when technology makes it
difficult. Besides, companies that sell information are unlikely
to put that information on the Internet until they feel certain
that others cannot take that information without paying for it.
Companies that traffic in personal information therefore cannot
reasonably complain about being censored, given their need to
"censor" others who would grab their own illicit copies of the
information.
* "We hear a lot of hoopla against companies that place existing
databases of personal information on the Internet or publish
them on CD-ROM's. But all those companies are doing is making
access to information available to everybody, not just to the
powerful. This is a force for equality and democracy, and it's
ironic to hear supposed civil liberties advocates opposing it."
Civil liberties advocates do not favor a world in which the
powerful get illicit access to personal information and the
common people do not. They favor a world in which *nobody*
gets illicit access to personal information. If someone takes
a previously expensive database of personal information and
publishes it on a cheap CD-ROM, an existing harm is multiplied,
and that multiplies the argument for getting that data off the
market altogether. And if it is politically impractical to force
the powerful to cease their misbehavior, that is no argument for
allowing the lowly to misbehave in the same way.
* "People who collect welfare and other government benefits
have no grounds for complaint about fingerprinting and other
measures intended to suppress fraud. They need to learn that
rights come with responsibilities, and in particular the right
to free handouts comes with the responsibility to cooperate
with fraud prevention. Surely it's not too much to ask that
people should identify themselves before getting free money."
People who are experiencing hard times deserve the same dignity
that everyone else enjoys. The whole process of getting welfare
has already been designed to be as difficult and demeaning as
possible, and it's really not reasonable to kick these people
gratuitously while they are down. Of course it is reasonable
to identify people who want to receive welfare payments. But
the welfare system should be analyzed in the same way as any
other organizational system to determine which technology enables
people to be identified with the least indignity while reducing
fraud and other identification-related problems to reasonably
low levels. Objective evidence that fingerprinting significantly
reduces fraud is very hard to come by, and we shouldn't let
stereotypes of welfare recipients get in the way of rational
analysis.
* "Privacy prevents the marketplace from functioning efficiently.
When a company knows more about you, it can tailor its
offerings more specifically to your needs."
This is a non sequitur. Few proposals for privacy protection
involve preventing people from voluntarily handing information
about themselves to companies with which they wish to do
business. The problem arises when information is transferred
without the individual's knowledge, and in ways that might well
cause upset and alarm if they became known.
* "We don't need more laws about privacy. Our operations are
already governed by dozens and dozens of laws, each creating
its own layer of red tape."
This argument works by shifting attention away from the specific
issues of right and wrong to the abstract category of "laws".
In most industrial sectors in the United States, at any rate, it
is totally misleading to suggest that privacy is regulated by
numerous laws. In most cases it is not regulated by anything
beyond the notoriously weak privacy torts. Some industries,
such as credit reporting, are governed by weak laws that
basically legitimize the practices of the largest players
while protecting them against competition from smaller firms
who do not have the resources necessary to comply with the
laws. Indeed, part of the problem is that, whereas most of
the industrial world has a coherent and unified system of data
protection regulation, the United States has a ragged patchwork
of laws, all of which could be repealed or rewritten in a more
rational fashion if the United States conformed to the system
employed everywhere else.
* "The constant insistence on privacy encourages incivility".
This argument is commonly heard from people with an interest
in defending bothersome invasions of privacy such as spam, junk
phone calls, and the like. These people often encounter irate
objections from the individuals they bother, and naturally the
least civil of these objections stick in their minds. Thus,
when privacy advocates agitate for privacy protection, it sounds
to these people like advocacy of incivility, vigilantism, and
other bad things. But the argument is a non sequitur. It's like
saying that opposition to any other form of obnoxious behavior
encourages incivility. People sometimes shoot burglars, but that
doesn't make burglary okay. Likewise, even if some people get
irate when their privacy is invaded, that doesn't make it okay
to invade anyone's privacy. It is equally arguable that what
causes incivility is obnoxious behavior that has not yet been
outlawed. Lacking formal recourse, someone who has been wronged
by an invader of privacy can only choose between complaining
and remaining silent. Most, in fact, remain silent, and the
illogical nature of the argument is proven when other invaders
of privacy turn around and say that this preponderance of
silence proves that people don't really mind having their privacy
invaded, and that the complaints represent only a small faction
of hysterics.
* "The closing of voter registration lists, drivers' license
records, and the like in the name of privacy is part of a
dangerous trend away from our tradition of openness, toward
making it more difficult for the public to access public
information."
This is an example of a common, insidious PR tactic. The tactic
has two steps: classifying an obnoxious practice under a broad,
vague, positive-sounding category, then painting opponents of
the practice as opponents of that broader category. The category
here is "openness": those who oppose the sale of drivers' license
records are made into enemies of that broader value. It is like
arguing that someone who opposes bullfighting is against sports,
or that someone who opposes abortion is against medicine. Those
who support a "tradition of openness" should not feel obliged to
make common cause with people who abuse it. The purpose of open
government is to permit citizens to participate in the processes
of a democracy, and to know whether the government is doing what
the people want it to do. Reselling voter registration lists
and drivers' license records does nothing to promote the effective
workings of a democratic government. Reasonable, civic-minded
people have a right to be offended when self-interested parties
attempt to drain the meaning from a word like "openness", given
the significant role that word plays in civic discourse.
* "The information is already public because it concerns things
that happened in a public place."
This argument is commonly used to suggest that people do not
have a privacy interest in information about things they have
done in public places, for example their use of public roads
and sidewalks. It depends on two fallacies. The first fallacy
concerns the word "information". If someone happens to see
me walking down Main Street on Sunday, I should not be able
to prevent that person from telling others what they have seen.
That's "information" in one sense of the term -- someone's
personal knowledge about me. But if someone creates a database
of information about where various people have been walking, then
they have created "information" in a different sense -- data that
is stored on a computer. That data may represent something that
happened in a public space, but it does not automatically follow
that the resulting data should be regarded as "public". If that
were really true then anybody at all would have a right of access
to such databases -- they would be "public data". Nor does it
follow that I -- as a person whose activities are represented in
the database -- have no moral interest in that data. The second
fallacy is the automatic conclusion that people have no privacy
interest in things that happen in a public place. If two people
have a whispered conversation on a park bench, having looked
around to make sure that nobody is close enough to overhear them
by accident, then most people will probably agree that they have
a reasonable expectation of privacy, and that it would be wrong
to set up a sophisticated recording device to pick up their
conversation at a distance, or to install a hidden microphone on
the park bench. The question, then, is precisely what privacy
interests people *do* have in activities that occur in public
places. Consider the case of a database that records my travels
over several months, using data collected from sensors that
have been installed along highways in several states. Even if
we agree that each individual observation constitutes public
information -- we could hardly prevent someone from standing
along a roadway and happening to notice me driving along --
it does not follow that it is acceptable behavior to set out
deliberately to gather such information systematically, much
less to use the information in ways that affect individuals'
lives. The word "public" needs to be treated with more respect.
* "We are just selling access to information from public sources.
If we can gather this information, so can anybody else."
Just because certain information is available from a public
source, it doesn't follow that it's *right* for that information
to be available in that way. Nor does it follow that it is
okay to further propagate the information in any particular way.
Maybe it *is* right, but that conclusion requires a separate
argument beyond simply saying that the information came from a
public source.
* "The right to privacy is an elitist concept because it provides
an excuse for the powerful to keep their secrets while they go
ahead and invade the rest of our lives."
If the law protects people unequally, it does not follow that it
should not protect anyone. If the elites are invading the rest
of our lives, then they should be stopped. Furthermore, even if
it is impractical to prevent the elites from invading our lives,
it does not follow that the concept of privacy is elitist. The
concept might be perfectly egalitarian, even if the structures of
society prevent it from being implemented in an egalitarian way.
* "If you think you're being persecuted then you're probably just
flattering yourself. Big organizations don't really care about
you as an individual, just about their own narrow goals."
It is true that some people with privacy concerns are paranoid
schizophrenics with delusions of reference -- people who
interpret every rustling leaf as a sign of a vast conspiracy
aimed specifically at them. But most people with privacy
concerns do not understand them in that way. Most of the harm
done to personal privacy by big organizations does not depend on
them singling anybody out. Quite the contrary, it depends on the
organizations' capacity to gather, process, and act on personal
information on a mass-manufacturing basis. The danger derives
precisely from the organization's focus on its own narrow
goals, to the exclusion of the goals, rights, and interests of
everyone else. At the same time, big organizations do in fact
sometimes persecute individuals. Whistle-blowers, for example,
have often been subjected to investigation and smear campaigns.
Perhaps the classic case was General Motors' campaign against
Ralph Nader, which resulted in one of the few civil actions for
private surveillance that have led to significant damages in
the United States. The United States government, for its part,
has run huge, well-documented campaigns of surveillance and
sabotage against nonviolent dissidents for most of this century,
and there is little reason to believe that it has stopped.
* "New technologies will make it possible to protect privacy."
This doesn't sound like an argument against privacy protection
on the surface, and often it is not. It is sometimes used that
way, however. The context is usually that someone is proposing
a new technical arrangement that seems to invade privacy; when
you object to the privacy invasion, they will observe that new
technologies will make it possible to protect privacy, leaving
it unclear whether they actually plan to use those technologies.
An analogy would be the use of double-hulled oil tankers. When
it was first proposed to open Alaska to oil-drilling, people
concerned about the environment objected that a giant oil spill
could easily happen if a tanker hit a rock in the complex
waters near the Alaskan coast. Not to worry, the lobbyists
said, double-hulled oil tankers will make it unlikely that
much oil would be spilled in an accident. But no laws were
passed requiring that double-hulled tankers be used, and they
were in fact used rarely if at all. Never let anybody get
away with presenting any technological advance as inevitable --
particularly when it would not be in their interest to use it.
* "You're right, we do have privacy problems. People are
understandably upset when they assume that they have certain
privacy rights and then later find out that they do not. We
must communicate with people so that they understand the actual
situation in advance."
This is something that managers often say. On a policy level,
the problem is that it pretends that notification of information
handling procedures constitutes an adequate privacy policy
all by itself. On a rhetorical level, it attempts to redefine
the nature of "privacy problems". For most of us, the phrase
"privacy problems" refers to invasions of privacy. For people
with a manipulative orientation, however, "privacy problems"
refers to situations where people object to having their privacy
invaded. These people would prefer to make those situations
go away by making complaints illegitimate ahead of time. Note
the pretense of empathy for the distressing experience of having
your privacy violated without having been told ahead of time that
your privacy would be violated. What's missing is any empathy for
the distressing experience of having your privacy violated, period.
* "If you ask people in a poll whether they're concerned about
privacy then of course they'll say yes. But if people really
cared about their privacy then we would see them flocking to
debit cards, which are much more similar to cash than credit
cards. The fact is that they get a benefit from credit cards,
namely the float on their money, and they are evidently willing
to surrender some control over their personal information in
exchange for that benefit."
This is a fairly sophisticated argument, but it doesn't work.
The basic idea is that privacy can be understood as a commodity
that is bought and sold in the market. Just as people who want
cheese on their hamburger pay more for it, likewise people who
want privacy with their business transactions should expect
to pay more for it. Some people will object that it is simply
immoral to turn rights into commodities. But even if that
premise is accepted, the argument only works if privacy markets
operate correctly. One can demonstrate that a market in privacy
protection exists, but that is not the same as demonstrating that
this market does what markets are supposed to do: allocate scarce
goods according to the relative values that various people put on
them. Markets in privacy protection are in fact quite seriously
dysfunctional, not least because in most cases it is just about
impossible for any normal consumer to assess the value of a given
increment of privacy protection. It is possible in principle
that such markets can be made to function halfway correctly, but
a substantial burden of proof should be placed on the promoters
of such strange market mechanisms to demonstrate how. In the
particular case of debit cards, the contrast is greater than
just a matter of float. Many people in the United States do not
use debit cards because their liability is unlimited when the
card is stolen.
* "We have to weigh the interests of the individual against the
interests of society as a whole."
This is one of those arguments that proceeds by constructing huge
abstractions and positing a conflict between them. When framed
in such an abstract way, this argument sure does seem to caution
us against letting privacy rights get out of control. But when
actual specific issues are raised, this sort of argument is most
often meaningless or irrelevant. Once the actually available
options are rationally assessed, it almost invariably turns out
that privacy protection does not have to conflict with much of
anything. And when conflicts do occur, they can be weighed and
judged on much more concrete grounds, without being reduced to
huge abstractions.
* "Fear of extensive merger of databases is misplaced because in
actual practice it is extremely difficult to merge databases.
Two databases that arose in different organizations, or for
different purposes, will probably be incompatible in many ways,
for example through the different meanings they assign to data
fields that sound superficially the same. Organizations that
maintain personal data have their hands full just maintaining
the accuracy of the databases they have, without trying to
create the one gigantic Big Brother hyperdatabase that privacy
advocates are always warning us against."
This argument asks us to doubt the power of technical progress.
Merging databases is a huge research topic right now, not least
because of the significant business opportunities that would
arise if the problem were to be solved. Markets have always
grown through standardization, and standardization of data is no
different -- a hard problem but no harder than a thousand others
that have gone before. In many industries, merged databases
may arise through industry standard data models, for example the
standard categorizations being developed in medicine. If the
databases are created in a standardized way in the first place,
then merging them will be easy. Also, it is true that companies
that own large databases of personal information must invest
large sums in maintaining them. But this is hardly a zero-sum
proposition. Investment capital flows easily to wherever it
can turn the best profit, and if extra profit can be gained by
both maintaining existing databases and merging them with other
databases, the necessary funds will be available to do both.
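To make the point about standardization concrete, here is a minimal
sketch in Python, with invented field names and sample values, of
how two record sets that share a standardized identifier can be
joined in a few lines; it illustrates the general technique, not
any actual system.

  # Two hypothetical databases that share a standardized key,
  # "person_id".  All names and values are invented for illustration.
  dmv_records = [
      {"person_id": "A100", "name": "J. Doe", "vehicle": "sedan"},
  ]
  pharmacy_records = [
      {"person_id": "A100", "prescription": "example-drug"},
  ]

  # Index one source by the shared key, then join the other against it.
  by_id = {rec["person_id"]: rec for rec in dmv_records}
  merged = [
      {**by_id[rec["person_id"]], **rec}
      for rec in pharmacy_records
      if rec["person_id"] in by_id
  ]
  print(merged)
  # [{'person_id': 'A100', 'name': 'J. Doe', 'vehicle': 'sedan',
  #   'prescription': 'example-drug'}]

Once the key is standardized, the merge itself is trivial; the hard
part is agreeing on the key and the field meanings, which is exactly
what industry standardization efforts aim to do.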
* "Privacy advocates are crying wolf. We have been hearing these
predictions of some kind of privacy disaster scenario for 20+
years and it hasn't happened yet."
This argument gets its force from the stereotype of Big Brother.
Our society does not yet resemble George Orwell's dystopia, the
argument goes, so the warnings are all hype. Big Brother is a
convenient metaphorical handle, but like all metaphors it is only
intended to apply in certain aspects. Also, the word "disaster"
suggests that nothing really bad is happening unless there occurs
some single, well-defined, horrible event by analogy to a nuclear
power plant meltdown. But few problems work like this, and it
is more accurate to see privacy as being eroded from a thousand
directions at a steady pace. Privacy has been very significantly
eroded over those 20 years. Privacy advocates have, if anything,
underestimated the number and variety of threats to privacy, for
the simple reason that privacy advocates are few in number and
the threats are much more numerous than those few individuals can
keep track of.
* "AVI toll-collection systems don't really identify the person
who is driving the car, just the car itself. It's not clear
what weight that kind of circumstantial evidence would have in
court, and if it's no good in court then it's not clear to me
what we're supposed to be worrying about."
Note the transition from "it's not clear" to "it's no good", from
raising doubt about a problem to asserting that the problem does
not exist. Circumstantial evidence carries weight in court all
the time. And if you live alone and have no obvious reason to
be lending your car, any rational jury will regard your car being
spotted somewhere as strong evidence that you were there.
* "Attacks on direct mail under the guise of privacy concerns are
really attacks on free speech. Mail is a democratic medium,
available to all. When newspapers and television stations
publicize attacks on mail from the tiny handful of self-styled
privacy activists, their real agenda is to suppress competition
to their centralized control of communication in our society."
This is an actual argument; I am not making it up. It employs
a standard PR move, redefining attacks on unsolicited commercial
mail as attacks on mail as such. When attention is focused
on the specific category of unsolicited commercial mail, this
argument only carries weight in the context of mail that is
demonstrated to have political value for a democratic society.
That is surely a small proportion of the total. Given the
increasingly common practice of mailing negative attack ads
to voters on the eve of an election, making it impossible for
an opponent to reply, the proportion of defensible mail is even
smaller. But forget all that. Nobody is proposing to outlaw
unsolicited commercial mail, not least because of the free speech
issue. The problem is not unsolicited commercial mail as such;
it is the use of personal information to generate commercial mail
without the permission of the person targeted. No reasonable
person has a problem with direct mail that is solicited by its
recipient.
* "These issues about computers and privacy aren't really new and
aren't really about computers. Everything you can do with a
computer could be done before with paper files."
This is false. With paper files, it is literally impossible to
perform data mining with terabyte databases. Now, mathematicians
recognize various abstract senses of words according to which
things are possible even though it would take millions of years
to do them. But in normal language, things like data mining are
only possible if large organizations can do them in a lifetime.
Besides, the argument turns on a more elementary fallacy. Every
problem can be portrayed as "not new" if it is characterized in
a vague enough way. And problems frequently become qualitatively
worse with a large enough quantitative increase in one or more of
their contributing factors. This is a simple point.
* "Computer technology isn't bringing us into some scary new era
that we can't understand. Quite the contrary, it is returning
us to the old-time village where everybody knew everybody
else's business. That's the normal state of people's lives
-- the state that was lost as modern society and technology
caused us all to be separated into cubicles. Privacy is thus
a distinctly modern obsession, and an unhealthy one too."
Large organizations knowing everybody's business is not the
same as "everybody" knowing everybody's business. The village
metaphor suggests a degree of equality and reciprocity that does
not describe individuals' relationships to the organizations that
maintain databases of personal information about them. Now, some
people imagine science fiction worlds in which ordinary people
know as much about Equifax as Equifax knows about them. I'm not
placing my bets on the emergence of such a world. And even if
it existed, it would differ from an old-time village in ways too
numerous to count.
* "The problem isn't privacy per se. The problem is the
invention of a fictional "right to privacy" by American courts.
This supposed "right", found nowhere in the Constitution,
has been running amok, providing the courts with excuses for
inventing artificial rights, such as the right to abortion,
and interfering in people's lives in other ways, for example
by restricting the questions that employers can ask potential
employees in interviews. Ironic but true, the real agenda
behind this supposed "right to be let alone" is actually a
power-grab by which courts extend their control over society.
The best guarantee of privacy is freedom -- the freedom of
people to negotiate their relationships among themselves by
themselves, without government interference."
Starting with the last point, if the efficacy of regulation is
understood as an empirical issue and not a matter of dogma, then
it is empirically false that lack of regulation causes privacy
to be protected. The cases of systematic abuse of privacy in
unregulated markets are innumerable. Returning to the first
point, it is true that the word "privacy" does not appear in the
Constitution. The Constitution was written by people who had
never heard of corporations or computers, and so it necessarily
takes intellectual work to understand how it should be applied
to a world that has been profoundly reorganized through the use
of such novelties. Reasonable people can disagree about how
this should be done, but simply observing that a given word does
not appear in the document is not very helpful. It is not as
though the argument is unfamiliar: the Constitution is supposed
to be interpreted and applied as a coherent whole, not as a
disaggregated series of unrelated phrases, and the First, Fourth,
Fifth, and Fourteenth Amendments, among other passages, together
aim very clearly at a strong protection for individual autonomy.
Such a principle is always found at the center of any liberal
theory of society, such as that held by the framers, and the
Constitution makes no sense as a normative political theory
unless it includes such protections. Of course this principle
can conflict in particular cases with other, equally important
principles, but weighing such conflicts is what the law is for.
If legal decisions are to be made simply by observing which words
appear in the text, it would be impossible to achieve rational
and consistent outcomes -- much less outcomes that are just and
supportive of democratic values.
* "It is too costly to implement such elaborate safeguards."
This assertion is usually just false. Even when it is true,
the reason is usually that it is difficult to change systems
once they have been implemented. The system could most likely
have been designed originally with a whole variety of inexpensive
but effective privacy safeguards. Privacy concerns are not
exactly new, and hardly any systems today were designed before
these concerns were first articulated in detail. An organization
should not be absolved of its responsibility to protect privacy
just because it fell down on that responsibility in the original
design of its systems. What is more, any organization whose
information systems are so outdated that they do not incorporate
privacy safeguards could almost certainly profit from a thorough
review and reengineering of its information handling practices.
Any number of highly experienced consultants could help them
with this, and the benefits would probably go far beyond privacy
protection.
* "Technology has changed the very ontological category of the
person, who is no longer just a flesh-and-blood organism but
also a far-flung digital entity as well. In this context, when
people's identities are distributed across cyberspace, concepts
of privacy don't even make sense any more. In that sense we
should accept that we have entered the post-privacy age."
This argument depends on a simple fallacy: just because your
identity is "distributed", it doesn't follow that anybody needs
to have any access to it outside your control. Note that the
fallacy depends on the "space" part of cyberspace. Normally
we expect to have little control over objects and events that
exist far away from us in space, and so if our identities are
distributed across cyberspace, it would seem to follow that
parts of our identities are far away from us, and that therefore
we can expect to have little control over them. But the premise
is false. The whole point of cyberspace is that it collapses
distance and makes it possible to maintain relationships with
people and information across vast geographic distances in real
time. It is technically quite feasible to provide individuals
with control over the use of numerous geographically distributed
components of their electronic identity. In that way, concepts
of privacy make even *more* sense than they used to, not less.
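As a purely hypothetical sketch of that feasibility claim: a
remotely stored component of someone's electronic identity can be
released only after checking a consent record that the individual
controls, regardless of where the data physically resides. The
data structures and names below are invented for illustration.

  # Hypothetical consent registry controlled by the individual:
  # (component, purpose) pairs that the person has approved.
  consent = {
      ("address", "delivery"): True,
      ("purchase_history", "marketing"): False,
  }

  # Geographically scattered components of the person's identity.
  identity_components = {
      "address": "123 Example St.",
      "purchase_history": ["book", "groceries"],
  }

  def release(component, purpose):
      """Return a component only if consent exists for this purpose."""
      if consent.get((component, purpose), False):
          return identity_components[component]
      return None  # withheld: no consent on record

  print(release("address", "delivery"))            # released
  print(release("purchase_history", "marketing"))  # withheld -> None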
* "I don't care about privacy."
You are not obliged to care about your own privacy. The point
is that other people have a right to care about their privacy.
Their concerns are legitimate, and it is reasonable for society
to make some provision for addressing them.
* "Those same technologies that cause privacy concerns also
provide positive social benefits."
While true as a simple assertion, interpreted as an argument this
statement is a non sequitur. Even if some particular technology
produces both benefits and privacy invasions, it is altogether
likely that some *other* technology provides the same benefits
while posing less danger to privacy. The rapid emergence of
privacy-enhancing technologies will make this even more likely
in the future.
* "All this talk of Panopticons is ridiculously overblown.
We are not living in any sort of totalitarian prison society
just because we get too many magazine subscription offers in
the mail. Let's be sensible grown-ups and weigh the costs
and benefits of the technology, rather than exaggerating with
dramatic but misleading metaphors from trendy philosophers."
Magazine subscription offers make people angry because they are
invasive and visible. The most serious threats to privacy are
the least visible, and sensible grown-ups evaluate arguments
based on the strongest case, not the weakest. It may be that
some people are misled by the metaphors, but sensible grown-ups
understand that metaphors are metaphors, and that only certain
of their implications are intended. The pervasiveness of
surveillance in industrial societies has been well documented.
* "Privacy advocates claim that Caller ID is an invasion of
privacy. The other point of view is that nuisance phone calls
are an invasion of privacy, which Caller ID allows people to
take some control over."
Most privacy advocates are not opposed to Caller ID as such.
Caller ID, if it is implemented correctly, provides a mechanism
by which people can negotiate their privacy. It ought to be easy
for callers to decide whether to send their phone numbers, and it
ought to be easy for call recipients to decide whether to answer
calls for which phone numbers have not been sent. The switch
manufacturers who want to sell Caller ID services to marketing
firms, however, have fought tooth and nail to make it difficult
for callers to choose whether to send their phone numbers with
their calls. And this is what privacy advocates objected to.
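A small sketch, again with invented names, of the two-sided
negotiation described above: the caller chooses per call whether to
send a number, and the recipient chooses whether to answer calls
that arrive without one.

  # Illustrative model of the Caller ID negotiation; the classes and
  # fields are hypothetical.
  class Call:
      def __init__(self, number, send_number):
          # The caller decides, per call, whether to transmit the number.
          self.number = number if send_number else None

  def recipient_accepts(call, accept_anonymous):
      # The recipient decides whether to answer number-less calls.
      return call.number is not None or accept_anonymous

  call = Call("310-555-0100", send_number=False)
  print(recipient_accepts(call, accept_anonymous=False))  # False
  print(recipient_accepts(call, accept_anonymous=True))   # True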
* "Credit reporting agencies provide a service that people want.
Indeed, people regularly go to great lengths to cause a record
of their credit history to be created in a credit reporting
agency's database, precisely because they want to enjoy the
benefits of having a credit record."
This is all true but it is not relevant to debates about privacy.
Credit reporting serves a useful social function, and it is
possible that no other means of serving that function exists
now. That's not the issue. The issue is ensuring that consumers
are able to know and control what happens to information about
them. Among other things, they need effective rights of access
and correction (copies of their report that are easy to get,
corrections that actually get made); they need effective controls
over secondary use of their information (not just obscure opt-
outs); and they need an effective means of redress when they are
harmed by the spread of false or incomplete information.
* "If you look hard enough at who is really agitating about
privacy, you start finding a lot of tax resisters, cult
members, and other marginal characters with something to hide.
It really makes you wonder about the motives of the high-minded
people who get quoted in the newspaper issuing Chicken Little
predictions about Big Brother."
It is true that some lowlifes have been vocal about protecting
their privacy. But in a rational society, things are decided
based on whether the arguments work, not on who is making them.
And to lump the honest privacy advocates with the lowlifes is
the lowest type of smear. The fact is that ordinary citizens,
who presumably include only a small percentage of lowlifes,
consistently express high levels of privacy concern in polls and
high levels of outrage when told in focus groups about actual
information-handling procedures, and that they consistently express
very high levels of support for specific privacy-protection
proposals that have nonetheless been rendered unthinkable by our
distorted political system.
* "The technology to create electronic healthcare networks is
here, but its spread has been slowed by court rulings on the
privacy of medical records. This is clearly an area where
Congressional action is needed. If the issues are looked
upon as providing modern healthcare rather than an invasion
of privacy, such an act will probably fly."
This argument calls for privacy issues to be simply ignored. It
is more a statement of political strategy than a real argument.
* "We provide these access tools for our customers' convenience.
When we set up cumbersome barriers between our customers and
the information they need, we get complaints."
This argument often arises in situations where organizations
make information on their customers available in some public
way, for example over the phone, without adequate security.
The complaints of people who are made to identify themselves
adequately are held out as arguments against adequate privacy
protection. But the argument is a non sequitur. Just because
some category of people dislikes the mechanisms that are
necessary to protect privacy, it does not follow that all other
categories of people should have their privacy placed at risk.
Your privacy has a higher moral status than my convenience.
* "Organized crime poses a more serious threat to society than do
government and corporate snooping. Privacy protection disables
a key weapon that law enforcement presently uses to keep
organized crime under control."
This argument routinely arises in contexts where law enforcement
is asking for extremely broad powers that have little rational
connection to organized crime, or whose likely application is
vastly greater than just organized crime. The argument should
not be accepted in its abstract form, but only concretely, in
application to specific proposals.
* "Epidemiologists need broad access to medical records in order
to detect patterns of disease. Without these patterns, the
disease might go uncured, even undiagnosed."
Epidemiologists rarely need to know the individuals' identities.
For almost all purposes, they can work just fine with anonymized
databases. Yes, occasionally epidemiologists do need access to
specific identified individuals. But these powers can easily
be abused, and they should not be granted in a general way.
Individual identities should only be disclosed to epidemiologists
on a case-by-case basis when a very high standard of necessity is
established and appropriate due process protections are observed.
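To illustrate why identities are rarely needed, here is a small
sketch with made-up fields showing how direct identifiers can be
dropped and replaced with random pseudonyms before records reach
researchers; real de-identification involves much more than this,
but the pattern analysis still works.

  import uuid

  # Hypothetical patient records; the names and fields are invented.
  records = [
      {"name": "J. Doe", "zip": "90095", "diagnosis": "influenza"},
      {"name": "R. Roe", "zip": "90024", "diagnosis": "influenza"},
  ]

  def anonymize(record):
      """Drop direct identifiers and substitute a random pseudonym."""
      return {
          "pseudonym": uuid.uuid4().hex,
          "zip": record["zip"],            # coarse data kept for analysis
          "diagnosis": record["diagnosis"],
      }

  anonymized = [anonymize(r) for r in records]

  # Researchers can still count cases by area and condition:
  flu_by_zip = {}
  for r in anonymized:
      if r["diagnosis"] == "influenza":
          flu_by_zip[r["zip"]] = flu_by_zip.get(r["zip"], 0) + 1
  print(flu_by_zip)   # {'90095': 1, '90024': 1}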
* "When people talk about the need to protect privacy, it usually
turns out that they are only talking about individual privacy.
But we need to balance individual privacy with organizational
privacy."
Organizations do not have privacy rights. The argument that
they do always depends, so far as I am aware, on an overly broad
application of the legal idea that corporations resemble persons.
The ascription of human status to corporations has always been
limited. It is useful to distinguish two senses of the word
"rights". Human individuals have natural rights. Individuals
and other entities also have rights that are created by statute.
There exist expedient political and economic reasons for society
to recognize other kinds of organizational interests, but these
are not matters of natural right. They have a strictly lower
moral status than the rights of natural individuals, and their
exact scope will vary historically as technological and economic
conditions change. It does sometimes happen that individuals'
privacy rights conflict in specific cases with the legitimate
interests of other entities, and it may sometimes happen that
particular privacy rights are outweighed by particular interests
of organizations. But this is not because the organizations
possess rights that outweigh those of individuals, but because
the existence of certain organizational interests serves certain
societal values that happen, in particular cases, to outweigh
certain personal privacy rights. These conflicts are real, but
they can be discussed rationally without being misrepresented as
a clash of conflicting privacy rights.
--------------------------------------------------------------------
Phil Agre, editor pagre@ucla.edu
Department of Information Studies
University of California, Los Angeles +1 (310) 825-7154
Los Angeles, California 90095-1520 FAX 206-4460
USA
--------------------------------------------------------------------
Copyright 1996 by the editor. You may forward this issue of The
Network Observer electronically to anyone for any non-commercial
purpose. Comments and suggestions are always appreciated.
--------------------------------------------------------------------