Conceptualizing Users and Uses

by
Marcia J. Bates

Talk given at the American Society for Information Science and Technology
Annual Meeting, Nov. 18, 2002, Philadelphia, PA.
Copyright © 2002 by Marcia J. Bates

I’d like to approach this question of conceptualizing users and uses from an historical perspective. I’ve been around long enough that my own career path has traversed much of the history of "user studies," or information seeking behavior research.

Both in the early years and currently, there has been what I would call a certain methodological provincialism in our field and in many of the other social sciences. This provincialism has limited our openness to what may actually be learned by studying users.

When I started out, research methods theory was dominated by an objectivist methodology, often misleadingly, even inaccurately, called quantitative methods. The intention was to isolate the factors or variables of interest in a social situation, control for all other variables, then test hypotheses to determine the nature of the relationships between variables. This approach was dominated by a fundamental assumption that the human mind and real-world situations could fool or mislead the scientist very easily. The entire apparatus of a scientific experiment was devised in order to prevent this fooling of the scientist, to eliminate as many alternative explanations for the results as possible, and so be left with a confirmation or disconfirmation of precisely and only the hypothesis being tested.

Consequently, under the umbrella of the objectivist view of science, if a researcher did not provide all the elaborate structure of an experiment, then the research was not controlling for alternative explanations and so could not validly claim that a result was due to the causes or associations that had been hypothesized. Anyone who did not use these scientific designs for research, who took a more subjective approach to social science questions, was seen as unscientific, and their research was dismissed.

In fact, much of this approach to doing social science research was borrowed, without a lot of modification, from the natural sciences.

In the time that I have been around as a researcher, this objectivist perspective has been turned inside out. The pendulum has swung all the way to the other end of scholarship--to the humanities. In the subjectivist methodological paradigm that dominates the social sciences currently, the techniques of the humanities have essentially been applied to the social sciences. So-called qualitative researchers are frequently derisive of the earlier methods as being borrowed from the sciences and so inadequate for social studies. What is less often admitted, however, is that much of the current fashion in subjectivist social science research is taken from the humanities perspective--and so also may only be partially applicable to the social sciences.

Whereas the scientific perspective was driven by a fundamental assumption that we can be easily fooled by our own perceptions, and so had to place intervening controls between ourselves and the thing observed, the qualitative approach revels in the ingenuity and uniqueness of insight of the individual who is doing the research. By stating that everyone has a unique perspective, that everyone subjectively constructs experience and understanding, the qualitative researcher champions the individual perspective and argues that core ideas are established by consensus--not, in the most extreme form of this position, by some reasonably valid observation of the world. This is the classic position of humanities research.

If the earlier model was the natural sciences applied to the social sciences, then the current, "qualitative," approach is essentially the humanities applied to the social sciences.

So when are we going to develop and support a truly social scientific research methodology--neither the natural sciences nor the humanities? This is not to say that we learned nothing from either approach. The more experience social scientists had with natural science methods, the more modifications and the more novel methods were developed to study the things distinctive to the social sciences. Likewise, some of the subjectivist methods popular now are being adapted, with great ingenuity, to the distinctive needs of the social sciences.

So why have I gone on about this rather philosophical issue?

(It may look like I’ll never get to users and uses, but I will.)

I’ve done so because each wave of methodology (and its associated philosophy) has de-legitimated other methodological approaches, and has narrowed its vision regarding what methods may be employed in "good" research. That's what I mean by "provincialism."

As this story has played out through time, it has been one-directional; that is, each era derides the prior era. The extreme scientism found in the social sciences in the 50’s and 60’s was a response to the perceived sloppiness of subjective methods used earlier in the century. The current wave of humanistic approaches is a reaction to the scientism of earlier decades.

I appreciate that each of these developments represents a kind of growth in the thinking overall in the social sciences. For example, the subjectivity of current research methods does not replicate (at least, not usually) some of the worst features of the subjectivist research of the early 20th century. And no doubt, when objectivist approaches return to the social sciences--and they will return--they will most likely return without the worst features of the methods used in the 50’s and 60’s.

So these shifts in methodological approaches do represent growth in understanding. At the same time, as I mentioned earlier, these fashions in methodology also de-legitimate earlier research, because that work used methods now out of fashion.

I would like to propose another way of looking at these various methods. In my leeeeeengthy (!) experience, what I have found is that it is not the method per se that makes research good or bad, but rather it is the quality of thought behind the design, the appropriateness of the method to the question asked, and the care with which the work is carried out that determines whether the results are valid, meaningful, or useful.

To make that determination, one must abandon cheap and easy methods of assessment. You should no longer be able to say that any work more than five years old is passé. You should no longer be able to say that all work produced by one method is bound to be good because the method is the hot new thing, or that work produced by another method or philosophy is automatically inadequate.

In my many years, I have observed that good research and good researchers can transcend almost any methodological fashion, and poor researchers will do a bad job almost no matter what method they choose.

So, what does this have to do with use and users? I would say that the long arc of research in this area has moved with the methodological times. The early research tended to use more objectivist methods, but the work was by no means as monolithic as is generally assumed.

People were often attempting to determine some pretty basic information about users and use of information. It may be hard to realize today, but it was a surprise to early researchers to learn what a large percentage of the information people used--whether the general public or researchers--came from other people. When interest in this area first developed, researchers had to do what you always have to do at the beginning of work in an area--discover the basics. Not unreasonably, they had previously thought of information use in terms of the formal, paper-based information system, and only after some research did they realize how important other people were as information sources.

Thus, in that early period researchers did start with studies of uses of various types of resources. So the people doing the using were the "users" and the actions they carried out were the "uses." I think we can see the progression of work in subsequent years as developing an ever more subtle and complex vision of human beings in relation to information. In recent years, researchers presenting at the Information Seeking in Context conferences in Europe have been viewing human beings as completely submerged in and integrated into an information environment. It is assumed that the full picture can never be understood until the rich human and infrastructural environment is taken fully into account. I think this is an entirely appropriate progression in our thinking. As we have understood more and thought more about human beings in relation to information, we have enriched and deepened our vision of the subject matter of this field. If we feel that "use" and "users" evoke that earliest era, when we were first finding out about information seeking, then by all means change the names.

To me, however, what we are doing has always been about human beings in our information environment. We knew much less then, but that did not preclude a larger vision even in that earlier era. And here we come full circle to the point I was making earlier about good researchers transcending any method or any model popular in a particular era.

Just because we knew less then does not mean that what we found out was not useful--and often still is. My favorite article in this regard is a study done by Herbert Menzel and published in the proceedings of the International Conference on Scientific Information in 1959. In the study, he interviewed scientists about the various circumstances under which information came to them serendipitously. It was a beautifully done study, sensitively interpreted. Menzel argued that many of these serendipitous encounters came about because people were working in close proximity or met at conferences. He suggested in another publication that the transmission of scientific information could be accelerated simply by creating an environment in which people with related interests would easily bump into each other.

Now several studies have recently looked at serendipitous experiences as if Menzel had never existed. Our understanding is thus impoverished. The human being, as a model, has not changed nearly as rapidly as the models of technology have.

One more example, then I’ll quit. Brenda Dervin and Michael Nilan published what we might call a "call-to-arms" annual review article on user research in 1986, urging that a subtler, more subjectivist approach be taken to studying people and information. This article has been widely cited, and I don’t know how many times I’ve heard people speak as though sensitive, understanding research on human information seeking only began in response to that article.

It does not detract in any way from Dervin and Nilan’s achievement with that article to point out that 18 years earlier, in 1968, William Paisley wrote an annual review article in which he suggested a nuanced way of thinking about scientist information users. He wrote of the scientist being at the heart of a series of social reference groups--his work team, his organization, his professional specialty, his professional society, his nation, his culture, and so on. I’ve heard of work recently that sounds a lot like this, but which doesn’t cite Paisley.

Study the work for what it has to say and for the quality of the research, not for whether it uses fashionable jargon or methods, and don’t limit yourself to the last five years. In research on human beings in relation to information, whatever you call it, take the trouble to make your own assessment of the work, and seek out the quality research, whatever its method or era.