Computing as a Social Practice

Philip E. Agre
Department of Information Studies
University of California, Los Angeles
Los Angeles, California 90095-1520
USA

pagre@ucla.edu
http://polaris.gseis.ucla.edu/pagre/

This is the introduction to Philip E. Agre and Douglas Schuler, eds, Reinventing Technology, Rediscovering Community: Critical Explorations of Computing as a Social Practice, Norwood, NJ: Ablex, 1997.

Please do not quote from this version, which may differ slightly from the version that appears in print.

7000 words.

 

1 Introduction

The papers in this volume represent a broad range of theoretical and practical work over the last decade on the relationship between computer technology and society. They report numerous exemplary interventions within the technology, institutions, and policies around computing. This introduction will offer a general analysis of their shared premise, which is that computer professionals can work within society for responsible applications of computer technology.

The very concept of working for socially responsible computing implies several things. It implies, first of all, that a special kind of work is actually necessary. Computer people bring an ordinary degree of responsibility to the daily practice of their profession, of course, and outside social mechanisms such as laws and markets promote and regulate the use of computing in their own ways. Yet these factors together have not produced all the potential social benefits of applied computing, and they have not prevented certain institutional pathologies. Another implication is that computer professionals can, by departing from usual ways of doing things, actually ameliorate these problems. Doing so, whether as part of one's paid employment or on one's own time, amounts to a type of social activism whose relation to existing practices may not be simple.

One might approach the concept of socially responsible computing in a variety of ways, for example through ethical philosophy (Johnson and Snapper 1985), but my own analysis will be primarily sociological in nature. Some of the relevant questions include:

* What, in social terms, is computing technology, and how does this technology condition the ways people use it in society?

* How does the historical development of the computer profession shape attitudes and approaches towards social activism in the present day?

* What are the possibilities and limits of activism for socially responsible computing, and how are these influenced by the technology itself and by the institutions that have defined it?

These are large questions, and it will not be possible to provide final answers to them. Nonetheless, a brief consideration of them may provide some orientation to a new generation of computer professionals who wish to connect theory with practice in actively shaping the future of computing.

It is not obvious to many people that social activism aimed at shaping the use of computing by society is even feasible. Many skeptics have asked how it is even possible to predict the future of computing and its place in society. After all, the technology continues to change at a tremendous rate, and the interactions between emerging technologies and evolving social institutions are difficult to understand after they have happened, much less years beforehand. Moreover, as Winner (1986) points out, scenarios that connect technical advances to anticipated political changes have frequently relied upon facile equations between information, knowledge, power, and democracy. This is a serious objection, and it should provoke reflection upon the nature and goals of social activism around computing. Yet the business of prediction seems less overwhelming when placed in the context of the many continuities across four decades of computing technologies. Some of these continuities are technical, for example the von Neumann register-transfer model of computer processing. Others pertain to the design process, for example the use of special languages to represent data flows in the dominant traditions of systems analysis. And others are institutional, such as the basic structures of society and the legal and economic underpinnings of the industries that employ computing technology. All of these things are of course capable of change, but their historical development and persistence seem amenable to investigation. Social activism that intervenes in the institutions of technology can be guided by explanations of such things, and can provide an occasion for testing and refining explanations of them as well.

Other arguments are more concrete. Social activism by computer people has had considerable influence, though the stories are not often told. Lee Felsenstein, for example, originated some of the basic concepts of both personal computers and community networking as part of a project to mechanize the maintenance of telephone-based bulletin boards by political activists (Freiberger and Swaine 1984: 100ff). In doing so he may not have predicted in detail the profound changes in the computer industry that personal computing would later bring, but his philosophy of democratic computing has had a lasting influence nonetheless, and the Community Memory Project (see the paper by Farrington and Pine) was an early model for an explosion of community networking activity in the last few years. Another example might be found in the participatory design movement, which originated in Scandinavia as a collaboration between unionists and academic computer scientists with an interest in democratic processes for developing technical systems (Bødker 1991). It is not certain whether this movement will spread in the United States beyond universities and industrial research centers (Schuler and Namioka 1993), but the difficulty of predicting technology is at most a minor determinant of the movement's success.

Even beyond these examples, though, the project of socially responsible computing is aligned with much larger trends in the use of computing in society. So long as computing was confined to laboratories or to the automation of existing work tasks, nobody had to give much consideration to the interactions between computing technology and social institutions. As Friedman (1989) points out, computer systems development researchers spent their first few decades working out basic algorithms, learning to define technical specifications for programs, and fashioning good user interfaces. Only now that those three problems have been provisionally solved has the even greater problem of integrating computers into the workings of organizations begun its difficult movement into the mainstream of information technology research (Kling 1993). This latter-day emphasis upon the human dimensions of computing draws upon many sources, some of which have more fully articulated visions of socially responsible technology than others (Kling 1980, Weizenbaum 1976). Nevertheless, the broadening of vision that social activists have promoted -- from a technology focus to a technology-and-society focus -- has proven necessary across a broad range of institutions and professions, as the difficulties of integrating technology with the human problems of organization have become more obtrusive.

Despite their diversity, the papers in this volume are united by their common pursuit of this broadened vision of technology as a social phenomenon. The authors' approaches can be neatly classified into the categories of criticism and construction. Critical papers (in Part One of the book) use theories from sociology, economics, law, and other fields to describe computing as it is now, in the fullness of the institutionally structured situations in which it is actually encountered. Constructive papers (in Part Two) report projects to create computer systems that operate on alternative premises, including the lessons learned in submitting these systems to use by people in complex human environments. The purpose of this introduction, though, is not to encourage this distinction between critical and constructive styles of research, but rather to sketch a wide range of considerations that affect efforts to integrate these two styles. How can system designers take critical theories of computing into account in constructing and fielding real systems? And how can theorists paint conceptual pictures of computing in society that are responsive to the needs of practitioners while also challenging the implicit assumptions in the practices inherited from technical tradition?

We are only beginning to appreciate the magnitude of these questions. Analyses like those of Kling, for example, are exposing the many layers of unreflective habit that shape contemporary discourses on computing, including some discourses that are motivated by critical social projects. What is more, the widespread deployment of distributed computer technology is starting to challenge some of the most basic categories of social thought. For example, as Davis and Dandekar point out, it is not at all obvious what it means to speak of information as property. As a result, one of the basic concepts of common law is suddenly up for renegotiation. Concepts of community and knowledge are also challenged by the ideas and practices around computing. As these concepts expand their scope, do we necessarily lose our appreciation for the rich human phenomena to which they once exclusively applied? Clearly the issues at stake are social and cultural as well as technical.

The remainder of this introduction explores these issues in the following way:

Section 2 sketches some of the large-scale economic issues that provide the background for work on socially responsible computing, starting with the notion that computerization is the occasion for another industrial revolution.

Section 3 recounts two historical traditions of social action in science and technology. Each of these traditions has had a considerable influence over attitudes toward social issues among scientists and technologists to the present day.

Section 4 describes certain aspects of the institutional organization of computer work, and particularly the predicament of the computer professional. These structural factors also play important roles in shaping computer professionals' understandings of the political issues that bear on their work.

Section 5 presents some of the critique of computer work that research has developed over the last decade. The orientation to technical problems, it has been claimed, constantly threatens to foreshorten computer professionals' awareness of the human environments in which their systems will be used.

Section 6 moves into these contexts of computer use, tracing the difficulties inherent in attempts to understand these contexts in ways that acknowledge the complexity and specificity of their technical and social aspects alike.

Section 7 builds on these analyses by describing the situation of the activist for socially responsible computing, who must simultaneously confront some deeply contested social boundaries and build the alliances that become possible as computing begins to affect the lives of numerous social groups. Yet, although the future remains as hard to predict as ever, the increasing pervasiveness of computing in society means that activism around technology increasingly becomes coextensive with social change work in general. No longer a specialized concern, computing becomes both an indispensable tool and a facet of every particular site of human practice.

 

2 The economic background

It is commonly asserted that computer technology is bringing a new industrial revolution or the rise of an information economy whose rules differ qualitatively from those of the economy that preceded it (Drucker 1993). Though perhaps hyperbolic, these assertions do challenge us to articulate more precisely the nature of the structural changes that the application of information technology is making possible. Claims for a new industrial revolution are mistaken in at least one sense: except for the computer industry itself and certain parts of banking and finance, the United States is not witnessing anything like the tremendous growth of productivity that the rise of national distribution systems made possible in the late nineteenth century (Chandler 1977, Kling and Dunlop 1993, Strassman 1990). Rapidly expanding use of computer technology has not led to spectacular improvements in the price or quality of food, clothing, housing, or transportation. Nonetheless, computing has become remarkably pervasive across functions and industries throughout the economy. Compared with a milling machine, which makes a readily measurable contribution to the efficiency of specific types of manufacturing, computers are extraordinarily plastic, capable of being fitted to the needs of an enormous range of productive activities. As a result, it is difficult to make definite statements about the role that information technology is playing in the undeniably profound changes that are taking place in the global economy.

Davis begins his analysis of these questions by noting the special properties of information. Unlike sandwiches and steel, information that has been created and captured in some digital medium can be replicated in unlimited quantities for a negligible cost. This fact has a wide variety of implications, depending on what the information in question represents. Robot programming that captures the movements that human beings perform in building a car, for example, can easily be put to work in factories around the world. Likewise, information about new techniques of efficient production can just as easily be transferred to all of the other sites where it is applicable. Finally, software that might have been written by a large team at great expense can be distributed to ten users or ten million for only a small increment in cost, so that its market price bears little systematic relationship to its costs of production. As a result, participants in a market economy are never certain what to do with digitized information. For this reason, Davis argues, information actually tends to subvert a market economy. At a minimum, as Dandekar points out, the subtleties of information call for a fundamental reexamination of the nature of property and ownership.
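A stylized calculation, not drawn from Davis's chapter but consistent with its argument, makes the point concrete. Suppose a program costs a fixed amount F to develop and a negligible marginal amount m to copy; the average cost of supplying n users is then

    c(n) = \frac{F + m n}{n} \approx \frac{F}{n} \qquad (m \approx 0)

so that with, say, F = $10 million, the cost per user is $1 million if ten users buy the program but only $1 if ten million do. Nothing on the production side determines n, which is why no cost-based price suggests itself.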

Davis also joins other scholars (e.g., Reich 1991) in arguing that the pervasive application of computing is contributing to a stratification of the workforce, with a growing economic gap between those who manipulate information and those who manipulate physical materials or interact with people in routinized service encounters. Information technology contributes to this type of social division in several ways. By making it easy to move information instantaneously across large distances, for example, technology makes it easier for firms to relocate industrial activities that produce or consume information. Information technology is also contributing to increasingly sophisticated logistical systems that permit activities to be coordinated globally while increasing centralized control of far-flung operations. Each of these trends helps intensify wage competition among geographically dispersed groups of workers. Some types of information work, such as highly structured computer programming, are also subject to these changes. But many kinds of information work consist of unique and complex activities that are hard to routinize, and the people who have the skills to engage in this kind of work have been relatively successful at maintaining their prosperous position in the labor market.

The scope and magnitude of these phenomena pose considerable challenges to would-be activists for socially responsible computing. Yet the easy replication of digitized information is a powerful resource for social activism as well. If a computer program is useful to ordinary people, for example, then distributing it for free on global computer networks can affect many people's lives in a short period with little effort. As Davis notes, many programmers have experimented with alternative means of distributing their work, for example with remuneration depending on good faith and the ability to pay. Another common practice is to distribute a basic version of a program for free and to provide more advanced features and support services to customers who are willing to pay. Activists have also put tremendous effort into creating free and low-cost global computer networks, for example the simple store-and-forward mechanisms employed by BITNET, Usenet, and Fidonet (Quarterman 1990) and the local community networks described by Schuler, Resnick and King, and Farrington and Pine. In each case, the goal is to put the easy replicability of information to work for ordinary people by creating a cheap digital medium of communication.
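To make the phrase "simple store-and-forward mechanisms" concrete, the sketch below shows the general idea in Python: each relay holds messages in a local queue and passes them along only when its intermittent link to the next hop happens to be available. This is a minimal, hypothetical illustration of the mechanism, not a reconstruction of the actual BITNET, Usenet, or Fidonet protocols; the node names and methods are invented.

    from collections import deque

    class Node:
        """A store-and-forward relay: messages wait in a local queue
        until the link toward their destination is available."""
        def __init__(self, name, next_hop=None):
            self.name = name
            self.next_hop = next_hop  # downstream Node, or None if this node is the destination
            self.queue = deque()      # messages stored locally, awaiting a connection
            self.delivered = []       # messages that terminate here

        def accept(self, message):
            # Store now; forwarding happens later, when the link comes up.
            self.queue.append(message)

        def flush(self, link_up):
            # Move queued messages one hop, but only while the link is up.
            while self.queue:
                message = self.queue.popleft()
                if self.next_hop is None:
                    self.delivered.append(message)
                elif link_up:
                    self.next_hop.accept(message)
                else:
                    self.queue.appendleft(message)  # link down: keep waiting
                    break

    # A three-hop chain, like hobbyist sites exchanging mail over
    # occasional dial-up connections.
    site_c = Node("site-c")
    site_b = Node("site-b", next_hop=site_c)
    site_a = Node("site-a", next_hop=site_b)

    site_a.accept("hello from a")
    site_a.flush(link_up=False)  # link down: the message simply waits at site-a
    site_a.flush(link_up=True)   # link up: the message moves one hop
    site_b.flush(link_up=True)
    site_c.flush(link_up=True)
    print(site_c.delivered)      # ['hello from a']

The essential property is that delivery tolerates links that are down most of the time; a message simply waits at each relay until the next connection, which is what made such networks cheap to run over ordinary telephone lines.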

 

3 Scientists, engineers, and social responsibility

Although the metaphysics of information in the global economy defines the background for any attempts at social activism, it does little to explain the actual history of activism within the computer profession. Computing is an engineering profession (Johnson 1990), and certain historical continuities between computing and earlier engineering disciplines are readily apparent. As professions go, engineering stands out for its relatively low degree of autonomy, and the computer profession is particularly extreme in this regard. Physicians and attorneys, for example, make their own rules and police their own ranks to a considerable extent, but the professional societies of engineering have done little to define the conditions of engineering work. Noble (1977) argues that the reasons for this can be found in the historical origins of the engineering professions in the early 20th century. Influenced more by the values of their employers than by guild traditions like those of medicine and law, engineers were generally willing to apply to themselves the same instrumentalism that they brought to the technical problems of their daily work (cf Meiksins 1988). If computer professionals in particular have largely avoided the experience of systematic deskilling depicted by Kraft (1977), the reason, as Friedman (1989: 356) makes clear in his critique, is not to be found in the strength of their professional organizations but in the disruptions brought by continual changes in the technology itself.

But engineers have not lacked conceptions of society and their place in it. Quite the contrary, the founding of the American engineering professions during the Progressive Era was a deeply and self-consciously political process. Frederick Taylor, for example, viewed time studies of work as a program of social amelioration in the context of serious conflicts over the organization of work (Nelson 1980). If the engineer were to discover, by scientific means, the "one best way" to organize a given industrial process, he reasoned, then surely nothing would be left to fight about. Although Taylor himself was hardly regarded as a spokesman by the engineering profession as a whole (Noble 1977: 273), the engineer's position as an objective referee standing outside of social conflicts was an important theme in American culture throughout the first half of the century (Banta 1993).

The foremost philosopher of this movement was Veblen (1921). In a period when the very practicability of capitalism was uncertain, Veblen contrasted two principles of economic organization: the "price system" run by financiers who did no real work and much damage, and the rational operation of industry by engineers. Although drawing upon -- indeed, creating -- popular caricatures about the rich and their speculation and consumption, Veblen was by no means a populist or socialist. His focus was upon the productive infrastructure, and his utopia was based not upon property or class but upon reason. This philosophy, which was by no means limited to Veblen himself (Layton 1971), took a whole series of cultural forms, eventually including the strange Technocracy movement with its fetishistic symbolism of technical order (Ross 1991). Social activists often wonder why it is so difficult to involve engineers in their campaigns; one reason is surely that engineering is already a utopian social project, in however misguided a way.

Although engineering no longer plays the same role in American culture that it played during the 1920's, the events of this period shaped engineering in many ways. In particular, engineering came to be understood within a series of binary oppositions: order versus chaos, reason versus emotion, mathematics versus language, technology versus society, and so forth. As a result, engineers have often viewed moral appeals to social responsibility as a disruptive force, alien to the nature of their work (Florman 1978). In place of ideals of individual choice and the tacit conventions of tradition, engineering has sought to institute a form of reasoning that is objective because it is external; the rationale behind a technical design can be laid out on paper and argued through in a public way, at least within the community of engineers and their expertise. This reasoning is instrumental; starting with a problem to be solved, it does not question the problem but simply seeks the demonstrably most efficient means of solving it. Its claims to social authority lie not in the choice of problems but in the methods for their solution.

A different tradition of social concern about technology grew out of the Second World War, most especially the development of the atomic bomb. Many of the bomb's inventors were disturbed by its use on civilian populations, and they set about articulating an ethical protest that had a broad influence throughout the Cold War. By the 1980's this tradition had led to a relatively small but highly visible movement of scientists and engineers who refused to work on projects related to the military, either in general or on weapons research in particular.

This ethical movement understands its dilemma and its responsibilities in a specific way. Its focus is upon dangerous technologies or upon evil uses of technology, and its question is whether to participate in the development of such technologies and "where to draw the line". The technologies themselves are given, having been defined by others, and are beyond individual control. The primary focus is upon individual choices rather than larger collective movements, upon public statements rather than broad-based organizing, and upon the refusal to do harm rather than the positive obligation to do good.

This type of ethical concern about technology has been applied widely and made into a framework for formulating social choices about technologies such as nuclear power. Marx's paper is a good representative of this tradition. Although not specifically concerned with military technologies, his starting point is the framing of ethical choices about a given technology, and his aim is a rigorous and searching clarification of thinking, diagnosing fallacies and ensuring that the broadest range of considerations is taken into account. Similar ideas were influential in the founding in 1981 of Computer Professionals for Social Responsibility, whose first major campaigns concerned the use of computer technology in controversial military research and development projects such as the Strategic Computing Initiative and Strategic Defense Initiative.

Yet despite its honorable inheritance, the ethical choice framework fails to appeal to many computer professionals. The connections between individual professionals' work projects and the claimed evil consequences are often distant and diffuse, and the negative notion of refraining from work does not appeal to practical-minded engineers nearly as much as a positive agenda might. Another, deeper factor is the conception of engineering reason as something apart from, or above, social conflicts. It is difficult to make a case for refraining from engineering when engineering is understood in this way.

 

4 Computer professionals

The ideology that computer professionals have inherited from the broader engineering profession, then, has both a majority and a minority tradition. The majority tradition views engineering not simply as socially neutral but as a positively millenarian creation of order from social chaos. The minority tradition views certain kinds of engineering work as complicit with evils that engineers cannot control. But neither of these traditions suffices to describe the experience of contemporary computer professionals. This experience is profoundly shaped by the dynamics of the labor market.

The most important feature of the market for computer work is the continual, rapid change in the technology. Computer professionals do well financially in good times, but they are aware that their continued success depends on maintaining a range of up-to-date skills. Employers, by contrast, have an interest in hiring employees whose skills run deep in a particular area. This tension has pervasive consequences for the culture of the computer industry. Lacking a guild-like system of collective control over the market supply of technical skill, computer professionals must individually place continual "bets" on the direction that the industry is taking, choosing jobs and undertaking training courses that will position them adequately in the labor market of a few years hence. The size and complexity of the computer industry, as well as the microspecialization of computer skills in the marketplace, reinforces a sense of the industry as autonomous and uncontrollable when viewed from any given individual's perspective -- something to be predicted and accommodated rather than collectively chosen.

An enormous and highly developed discourse spins around this endless project of prediction. Much of this discourse is esoteric in nature, as engineers debate which technical standard (for wireless data transmission, for example) is "the way to go". On one level these debates are matters of pure engineering reason, as the trade-offs inherent in different technical approaches are compared and weighed. But on another, deeper level, often not sharply distinguished from the first, these debates are about the direction of something much larger -- a market whose interacting elements include as much finance and marketing and law as technical rationality. In addition to their predictive function, these debates produce self-fulfilling prophecies, inasmuch as the success of a technical standard depends on the emergence of a critical mass of parties willing to adopt it (Davidow 1986). Entrepreneurs, as one might expect, are masters of this discourse of market prediction, but it is also a well-cultivated necessity throughout the industry. As technical possibilities grow and multiply, this discourse begins to draw on a wide range of resources, particularly science fiction, and whole discursive worlds of psychological and sociological speculation develop. The process is remarkably volatile, and a technological and discursive fashion such as "virtual reality" can develop from novel coinage to enormous metaphorical elaboration to wholesale dismissal as yesterday's fad to modestly successful occupant of niche markets in a period of a few years. In the meantime, sizable companies will rise and possibly fall, considerable investments will be made and perhaps liquidated, and numerous bets about tomorrow's marketable skills will be placed and won or lost.

In this context it is easy to understand Kling's observations concerning the proliferation of utopian writing about information technology. As business managers have been compelled to face the complexities of organizational computing, though, this hyperbole has become mixed in many ways with genuinely sophisticated ideas. This can be seen, for example, in the proliferation of books by management consultants about the novel organizational forms made possible by emerging technologies for distributed computing (Davis 1987, Quinn 1992, Walton 1989). These books increasingly attend to the communicative dimension of system implementation, inasmuch as resistance by users is often a real threat to the systems' success (Agre in press). At the same time, the profound sense of technics-out-of-control (Winner 1977) has also given rise within computer culture to a proliferation of dystopian narratives about electronic surveillance and other types of repression. It is striking, for example, that the full range of these narratives can routinely be found, all equally hyperbolic in their rhetoric despite their superficially opposite politics, in publications such as Wired magazine. The ideological libertarianism reflected in Wired is a recent development in the computer profession, but it is wholly understandable in the context of the profession's narratives of individualism, rational progress, technological determinism, and the autonomous development of the market.

Of course, the professional ideologies that have been shaped by the market for computer skills are not completely hegemonic. The Association for Computing Machinery, for example, has developed a Code of Ethics and Professional Conduct that encourages the socialization of computer professionals within various canons of professional responsibility (Anderson, Johnson, Gotterbarn, and Perolle 1993). Moreover, numerous individuals have acquired computer skills because of the relative freedom that many kinds of computer work provide; contract programming pays the bills for independent projects by numerous artists, musicians, and political activists. The point, rather, is that the computer profession as such has not experienced itself as capable of collective choice or action. Instead, it has more commonly identified itself with an impersonal historical process that it cannot predict but which it nonetheless basically trusts.

 

5 Computer work

In their work, then, computer people are specialists, building the particular types of hardware and software that lie within their domain of expertise. In order to apply their specialized expertise to real situations in the world, computer people need those situations to be packaged into discrete problems -- problems that their particular techniques can expect to solve. To be a computer person is to possess a certain repertoire of specialized hammers and to be constantly looking out for nails to hit with them. This need not reflect any narrowness of mind or other such deficiency; it simply reflects the institutional organization of the skills. The institutions of computer work are arranged to match computer professionals with problems they can solve, and much of the skill of computer science research is to discover or devise a series of problems that permit existing techniques to be extended, one step at a time, into new territory.

As a result of these phenomena, the specialization of computer work poses significant challenges to computer professionals who would wish to apply their skills in socially beneficial ways. Given the fragmentation of technical work and the distance between individual engineers and the consequences of their work, it is extremely difficult to translate a large social concern such as poverty or inequality or gender relations into a technical problem that demands one's own specific skills (cf Ladd 1982). Calls for social responsibility sound unreasonable or arbitrary to many computer people because they seem to bear no clear relationship to the definition of technical problems, or to the institutions that match problems to the people who can solve them. Moreover, when computer people do present examples of system design as social activism, such as several of the chapters in this book, the technologies involved often turn out to be relatively simple -- far from the "cutting edge" where computer professionals must invest their efforts in order for their skills to remain marketable. This is a significant obstacle to efforts to broaden the socially responsible computing movement. The most significant exception lies in the subfield of human-computer interaction, whose core value of usability is readily framed as both a social justice issue and an imperative of the mass market for personal computers (Adler and Winograd 1992, Nardi 1993, Shneiderman 1992).

Other aspects of computing work have led to criticism by social scientists and activists. Most large computer systems are built by teams of these specialists, and computer people and their managers have developed numerous methods of coordinating all of this complex activity. These structured relationships among computer professionals are an important part of their daily work. But computer professionals have another set of significant relationships as well -- to the people who will use their systems. These relationships have been deeply influenced by the history of engineering. The task of a systems analyst, for example, is to investigate some existing work practices, say in a billing office, and represent these practices in a way that can be replicated in the programming of a computer. The methods of systems analysis are thus continuous, historically, with other forms of work automation practiced by earlier generations of engineers. But these methods differ in that computer software can easily be replicated in computers and robots across the world. As Davis points out, this replicability may have profound effects on the economics of work.

Computer system design has certainly grown more complex and varied since the early days of systems analysis. The larger point, though, is that system design requires computer professionals to make representations of people's activities. These representations might be largely informal, for example when designers talk among themselves about what users are likely to do with a system. But even when informal methods are used, at some point it is necessary to make technical decisions and actually start building things. As a result, every system incorporates certain assumptions about the users and about the larger network of human activities within which the system will be used.

Critiques of computer work have often focused upon these built-in assumptions. Gray, for example, identifies a long list of assumptions that were built into the US military's Aegis missile defense system. Although the system seems to have performed correctly according to its technical specification, it can nonetheless be viewed as incorporating ideas about battle situations that did not correspond to reality on the day that an American ship shot down an Iranian civilian airliner in the Persian Gulf. Sheridan and Zeltzer demonstrate the contrast between the transparent immersion promised by the metaphors of "virtual reality" and the many opaque features of actual systems. Suchman and Jordan take this critique further, arguing that systems analysts who study women's work often overlook much of its complexity since they restrict their observations to aspects of the work that can be readily articulated in formal terms, leaving out all of the tacit knowledge that local communities of workers have built up over time. This trend can lead to misguided decisions about which activities to automate (just because an activity goes smoothly doesn't mean it can be mechanized) and to poorly designed systems that do not stand up to the complexities and variations of the actual tasks. These authors all argue that problematic assumptions built into a system lead to trouble later on, and that detailed study of the trouble can lead to insights about the design processes and their limitations.

 

6 Contexts of use

As these examples illustrate, the empirical study of actual contexts of computer use has played an important role in the development of theory and activism around socially responsible computing. These contexts of use can be analyzed on several levels. Bromley, for example, directs attention to the institutions that adopt computers. Observing a series of "stubborn tendencies" in the use of computers, he asks where these tendencies come from. They are probably not inherent in the machinery itself, as is evident from the innovative uses of computing reported in this book. Yet they are not wholly independent of the machinery itself, since computing machinery has been shaped in many ways by the special styles of thinking and working of its creators. Rather than explain the stubborn tendencies of computer use in purely social or purely technical terms, Bromley offers ways of thinking about the historical role of institutions in shaping both the technology and the ways it is used. This approach is congruent with a broad tradition in the social studies of science (Latour 1987, MacKenzie and Wajcman 1985) which seeks to erase the distinction between "technology" and "society" and replace them both with useful hybrid concepts.

Kling offers another way of thinking about the social embedding of computer use, tracing the social "webs" that connect computer users with fellow users, managers, designers, maintainers, and many others. The design of any complex computer system will reflect an equally complex social process of alliance-building, frequently organized within a larger social movement for computerization. Both Bromley and Kling emphasize that this technosocial style of research goes significantly beyond the neat opposition between technical utopianism, with its visions of social good produced by new technologies, and technical dystopianism, with its equally simple visions of social evil produced by the same technologies. The point is not to split the difference but rather to erase the distinctions and replace them with concepts that are more responsive to the reality one discovers in actual, empirically studied contexts of computer use.

Empirical studies of actual contexts of computer use have also been central to computer professionals' attempts to redefine computing in socially responsible ways. This trend is illustrated by several of the papers in this volume -- those concerned with communities and networking, whether in local community networks or in on-line network communities. Computer networking has been a natural technology for activists, inasmuch as it permits people to organize new kinds of relationships without the fixed structures and procedures that have been imposed by many computer systems developed for use in particular work sites. But beyond their concern with computer networking, the projects reported here share a commitment to learning through use -- permitting groups of users to explore the system's possibilities, watching what they do, and interacting with them. The results of these projects are both technical and social. They include insights about which mechanisms are most useful, but these insights are meaningless except in the larger social context: which policies, which conflict-resolution tactics, and which ways of involving the users in the whole process are most useful. Schuler's paper in particular attempts to summarize these insights in a way that can allow others to replicate the early successes. If it is difficult to articulate the specifically technical results of these projects, then that is precisely the point.

The chapters that report these projects are helpful both in their anecdotes and in their ability to distill generalizations and advice from a mass of accumulated experience. But this experience has often been difficult to obtain. Computer professionals who wish to assist particular social groups, as for example in the case of Resnick and King's work in an economically depressed Boston neighborhood, must overcome all of the barriers that have historically kept those groups away from a deeper involvement in technology. The Berkeley Community Memory Project, likewise, placed its terminals in public places where non-technical people go, and this meant that its designers had to face numerous issues that do not arise when computer systems are deployed in controlled work environments. In each case, the systems' success depended upon the designers' evolving knowledge of how they might fit into the larger fabric of the people's lives.

Experiments with on-line communities drew on a population of users who already had some involvement with computing, but they had to face an unprecedented complexity of interactions among the users as the collective life of the community unfolded. Concepts of identity, civility, and community were suddenly transformed beyond recognition -- and not just in a theoretical way, but in a way that the system maintainers and the users themselves had to work with daily. System maintainers like Coate and Curtis have been, in many ways, rediscovering the basics of democracy as they negotiate the social contract that balances individual freedom and social harmony while confronting a whole range of social distinctions and divisions.

All of these studies point the way toward an enormous intellectual and practical challenge: reformulating ideas about the interactions between computing machinery and social processes in ways that depend on non-trivial ideas about both of them. Schuler, for example, argues that a community networking project requires attention both to computer architecture and to the social architecture around it. Likewise, Resnick and King present a rationale for their telephone-based bulletin-board system that is grounded in community activists' ideas about knowledge and community that predate computing.

How can this integration of social and technical ideas be extended, and how can it influence both social and technical practices? For example, if we believe that various communities differ significantly in their cultures and social structures, then it might follow that these communities require equally disparate types of computing machinery. How deep should these differences run? Do different communities need different interfaces, or different underlying system metaphors, or different operating systems, or different hardware architectures? The answers to these questions are important because they determine the extent to which hardware and software developed for one context can usefully be shared with people in other contexts. But the questions themselves cannot be explored abstractly. Instead, system designers need to retain a living sense of their technical options, including options not normally valued or explored.

 

7 Activism

As these considerations make clear, activists for socially responsible computing are forever traversing difficult borders. These borders consist precisely of the dichotomies that have defined engineering, and even though these dichotomies are rarely tenable they are often well defended: emotional vindications of reason, chaos induced by attempts to impose order, solidarity in defense of individualism, and so forth. Durlach makes these barriers the object of his artistic investigations, and they have forced many others to engage in constant improvisations in their professional relationships. Other barriers are institutional: between universities and city streets, laboratories and work sites, technical and non-technical disciplines, and so forth again. These institutional barriers present opportunities as well as hazards: the trouble they cause is real and not abstract, and effective means of bridging them will often have broader applications. The community-building projects reported in this volume, for example, have been extraordinarily influential, as their experiences have contributed to rapidly growing movements both in commercial systems and in systems built by other activists for other purposes.

Perhaps the most fundamental challenge for computer activism is the creation of broader coalitions with people outside the computer profession. These coalitions must seek issues that articulate common interests across a range of geographical or professional communities, and they must identify styles of collective action that bridge the gaps between technical complexities and people's lives. The atmosphere for this kind of coalition-building has improved considerably in the last several years, simply through the spread of computers. As Agre points out, the broad social distribution of computing technology immediately creates common interests, if only the interest in keeping the computers working. These common interests are expressed in various kinds of collective action. Some, like local computer societies and e-mail discussion groups, are relatively organized, while others are more informal. As computing technology becomes more deeply consequential for the organization of work in various professions, such as librarianship, education, nursing, social work, and law, other types of common interests arise as well.

It is commonly held that much of our society's life will someday soon be conducted through the mediation of computing and networking. If this is true, then a fundamental common interest arises, which has often been formulated in terms of equity of access to technology. While this formulation has important limitations, it has nonetheless served as the basis for impressive coalition-building. In the United States the Communications Act of 1994, which died in the final days of the Senate's calendar despite broad bipartisan support, contained a number of provisions that had brought together a wide range of organizations. Many of these groups coordinated their political efforts on a national level through the Telecommunications Policy Roundtable (1994), and similar coordination groups have been formed in some local areas as well.

Other projects, too varied to enumerate here, are proceeding on other fronts. These include projects to build women's spaces on the Internet and other network venues, as well as efforts to explore and articulate a distinctive women's approach to network-based interactions (Shade 1994). They also include remarkably successful movements to make public information available on the net. Research originating in Scandinavia, already mentioned above, has sought to democratize the process of system design, discovering along the way the profound interconnections between technology activism and other types of social action. The authors in this volume have contributed in a wide variety of substantive ways to projects like these. But they have also contributed in less tangible ways by providing models of socially concerned critical and constructive research on the place of technology in society. A great deal remains to be done, but these projects represent a start.

 

Acknowledgements

This introduction has benefited from comments by Harold Driscoll, Alexander Glockner, Rob Kling, and Doug Schuler.

 

References

Paul S. Adler and Terry A. Winograd, eds, Usability: Turning Technologies into Tools, New York: Oxford University Press, 1992.

Philip E. Agre, Conceptions of the user in computer systems design, in Peter Thomas, ed, Social and Interactional Dimensions of Human-Computer Interfaces, Cambridge University Press, in press.

Ronald E. Anderson, Deborah G. Johnson, Donald Gotterbarn, and Judith Perolle, Using the new ACM Code of Ethics in decision making, Communications of the ACM 36(2), 1993, pages 98-107.

Martha Banta, Taylored Lives: Narrative Productions in the Age of Taylor, Veblen, and Ford, Chicago: University of Chicago Press, 1993.

Susanne Bødker, Through the Interface: A Human Activity Approach to User Interface Design, Hillsdale, NJ: Erlbaum, 1991.

Alfred D. Chandler, Jr., The Visible Hand: The Managerial Revolution in American Business, Cambridge: Harvard University Press, 1977.

William H. Davidow, Marketing High Technology: An Insider's View, New York: Free Press, 1986.

Stanley M. Davis, Future Perfect, Reading, Mass: Addison-Wesley, 1987.

Charles Derber, Professionals as Workers: Mental Labor in Advanced Capitalism, Boston: G. K. Hall, 1982.

Peter F. Drucker, Post-Capitalist Society, New York: HarperBusiness, 1993.

Samuel C. Florman, Moral blueprints: On regulating the ethics of engineers, Harper's 257(10), 1978, pages 30-33.

Paul Freiberger and Michael Swaine, Fire in the Valley: The Making of the Personal Computer, Berkeley: Osborne/McGraw-Hill, 1984.

Andrew L. Friedman, Computer Systems Development: History, Organization and Implementation, Chichester, UK: Wiley, 1989.

Deborah G. Johnson, The social responsibility of computer professionals, The Journal of Computing and Society 1(2), 1990, pages 107-118.

Deborah G. Johnson and John W. Snapper, eds, Ethical Issues in the Use of Computers, Belmont, CA: Wadsworth, 1985.

Rob Kling, Social analyses of computing: Theoretical perspectives in recent empirical research, Computing Surveys 12(1), 1980, pages 61-110.

Rob Kling, Organizational analysis in computer science, The Information Society 9(2), 1993, pages 71-87.

Rob Kling and Charles Dunlop, Controversies about computerization and the character of white collar worklife, The Information Society 9(1), 1993, pages 1-30.

Philip Kraft, Programmers and Managers: The Routinization of Computer Programming in the United States, New York: Springer-Verlag, 1977.

John Ladd, Collective and individual moral responsibility in engineering: Some questions, IEEE Technology and Society Magazine 1(2), 1982, pages 3-10.

Bruno Latour, Science in Action: How to Follow Scientists and Engineers Through Society, Cambridge: Harvard University Press, 1987.

Edwin T. Layton, Jr., The Revolt of the Engineers: Social Responsibility and the American Engineering Profession, Cleveland: Case Western Reserve University Press, 1971.

Donald MacKenzie and Judy Wajcman, eds, The Social Shaping of Technology: How the Refrigerator Got Its Hum, Milton Keynes: Open University Press, 1985.

Peter Meiksins, The ``revolt of the engineers'' reconsidered, Technology and Culture 29(2), 1988, pages 219-246.

Bonnie A. Nardi, A Small Matter of Programming: Perspectives on End User Computing, Cambridge: MIT Press, 1993.

Daniel Nelson, Frederick W. Taylor and the Rise of Scientific Management, Madison: University of Wisconsin Press, 1980.

David F. Noble, America by Design: Science, Technology, and the Rise of Corporate Capitalism, New York: Knopf, 1977.

John Quarterman, The Matrix: Computer Networks and Conferencing Systems Worldwide, Bedford, MA: Digital Press, 1990.

James Brian Quinn, Intelligent Enterprise: A Knowledge and Service Based Paradigm for Industry, New York: Free Press, 1992.

Robert B. Reich, The Work of Nations: Preparing Ourselves for 21st-Century Capitalism, New York: Knopf, 1991.

Andrew Ross, Getting out of the Gernsback continuum, in Strange Weather: Culture, Science, and Technology in the Age of Limits, London: Verso, 1991.

Douglas Schuler and Aki Namioka, eds, Participatory Design: Principles and Practices, Hillsdale, NJ: Erlbaum, 1993.

Leslie Regan Shade, Gender issues in computer networking, in Alison Adam, Judy Emms, Eileen Green, and Jenny Owen, eds, Women, Work and Computerization: Breaking Old Boundaries, Building New Forms, Amsterdam: Elsevier, 1994, pages 91-105.

Ben Shneiderman, Designing the User Interface: Strategies for Effective Human-Computer Interaction, second edition, Reading, MA: Addison-Wesley, 1992.

Paul A. Strassman, The Business Value of Computers: An Executive's Guide, New Canaan, CT: Information Economics Press, 1990.

Telecommunications Policy Roundtable, Renewing the commitment to a public interest telecommunications policy, Communications of the ACM 37(1), 1994, pages 106-108.

Thorstein Veblen, The Engineers and the Price System, New York: Viking Press, 1921.

Richard E. Walton, Up and Running: Integrating Information Technology and the Organization, Boston: Harvard Business School Press, 1989.

Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation, San Francisco: Freeman, 1976.

Langdon Winner, Autonomous Technology: Technics-Out-of-Control as a Theme in Political Thought, Cambridge: MIT Press, 1977.

Langdon Winner, The Whale and the Reactor: A Search for Limits in an Age of High Technology, Chicago: University of Chicago Press, 1986.