How things (actor-net)work: Classification, magic and the ubiquity of standards
Geoffrey C. Bowker
Susan Leigh Star
Graduate School of Library and Information Science
University of Illinois at Urbana-Champaign
November 18, 1996
to appear in a special issue of Philosophia
INTRODUCTION
"A classified and hierarchically ordered set of pluralities, of variants, has none of the sting of the miscellaneous and uncoordinated plurals of our actual world." (Dewey, 1989: 49)
"We do many things today that a few hundred years ago would have looked like magic". We all know versions of this banal assertion - we've probably all made it ourselves at some point or another. And if we don't understand a given technology it looks like magic: we are perpetually surprised by the mellifluous tones read off our favorite CDs by (we believe) a laser. Star (1995b) notes that even engineers black box and think of technology `as if by magic' in their everyday practical dealings with machines. A common description of a good waiter or butler (one thinks of Jeeves in the Wodehouse stories) is that she clears a table `as if by magic'. Are these two kinds of magic or one or none?
The following paper is an attempt to answer this question, which can be posed more prosaically as:
* What work do classifications and standards do? We want to look at what goes into making things work like magic: making them fit together so that we can buy a radio built by someone we have never met in Japan, plug it into a wall in Champaign and hear the world news from the BBC.
* Who does that work? We want to explore the fact that all this magic involves much work: there is a lot of hard labor in effortless ease[1]. Such invisible work is often not only underpaid - it is severely underrepresented in theoretical literature (Star and Strauss, in press). We will discuss where all the `missing work' that makes things look magical goes.
* What happens to the cases that don't fit? We want to draw attention to cases that don't fit easily into our created world of standards and classifications: the left-handers in the world of right-handed magic, chronic disease sufferers in the world of allopathic acute medicine, the onion-hater in McDonald's (Star, 1991b) and so forth.
These are issues of great epistemological, political and ethical import. It is easy to get lost in Baudrillard's (1990) cool memories of simulacra. The hype of our times is that we don't need to think about the work any more: the real issues are scientific and technological - in artificial life, thinking machines, nanotechnology, genetic manipulation... Clearly each of these is important. However, we endeavor to demonstrate that there is rather more at stake - epistemologically, politically and ethically - in the day to day work of building classification systems and producing and maintaining standards than in these philosophical high-fliers. The pyrotechnics may hold our fascinated gaze; they cannot provide any path to answering our questions.
Through looking at classification systems and standards, we will move towards an understanding of the stuff which makes up the networks of actor network theory. Latour, Callon and others within the actor-network approach have developed an array of concepts in order to describe the development and operation of technoscience. Their valuable concepts include: regimes of delegation; the centrality of mediation; and the position that nature and society are not causes but consequences of human scientific and technical work. The position that a fact may be seen as a consequence, and not as an antecedent, is axiomatic to the American pragmatist approach as well, particularly in the work of John Dewey (e.g., Dewey, 1929). As he noted in his Experience and Nature:
For things are objects to be treated, used, acted upon and with, enjoyed and endured, even more than things to be known. They are things HAD before they are things cognized....the isolation of traits characteristic of objects known, and then defined as the sole ultimate realities, accounts for the denial to nature of the characters which make things lovable and contemptible, beautiful and ugly, adorable and awful. It accounts for the belief that nature is an indifferent, dead mechanism; it explains why characteristics that are the valuable and valued traits of objects in actual experience are thought to create a fundamentally troublesome philosophical problem. (1989 [1925]: p. 21)
We draw attention here to the places where the work gets done of assuring that delegation and mediation will work: to the places where human and non-human are constructed to be operationally and analytically equivalent. And following both Dewey and Latour, we also question the indifference -- of nature, and of machines. So doing, we explore the political and ethical dimensions of actor-network theory, restoring the interlinked and webbed relationships between people, things, and infrastructure.
TWO DEFINITIONS
We will take a `classification' to be a spatial, temporal or spatio-temporal segmentation of the world. A `classification system' is a set of boxes, metaphorical or not, into which things can be put in order to then do some kind of work - bureaucratic or knowledge production. We will not demand of a classification system that it has properties such as:
* the operation of consistent classificatory principles (for example, being solely a genetic classification, classifying things by their origin (Tort, 1989));
* mutual exclusivity of categories;
* completeness (total coverage of the world being described).
No working classification system that we have looked at meets these `simple' requirements and we doubt that any ever could (Desrosières and Thévenot, 1988).
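To make these requirements concrete, the following is a minimal sketch of our own (the scheme and the items in it are entirely hypothetical, not drawn from any actual system). It expresses two of the properties above - mutual exclusivity and completeness - as checks that a toy classification fails, just as the working systems we discuss do.

```python
# Minimal sketch (our illustration, hypothetical categories and items):
# two of the 'simple' requirements - mutual exclusivity and completeness -
# expressed as checks over a toy classification scheme.

from itertools import combinations

# Hypothetical scheme: category name -> set of items it claims to cover.
scheme = {
    "acute": {"influenza", "appendicitis"},
    "chronic": {"diabetes", "arthritis", "influenza"},  # overlaps with "acute"
    "other": {"unclassified complaint"},
}

# The world being described: what the scheme is supposed to cover.
world = {"influenza", "appendicitis", "diabetes", "arthritis",
         "chronic fatigue", "unclassified complaint"}

def mutually_exclusive(scheme):
    """True if no item is claimed by two categories."""
    return all(not (a & b) for a, b in combinations(scheme.values(), 2))

def complete(scheme, world):
    """True if every item in the described world falls into some category."""
    covered = set().union(*scheme.values())
    return world <= covered

print(mutually_exclusive(scheme))   # False: 'influenza' sits in two boxes
print(complete(scheme, world))      # False: 'chronic fatigue' is left over
```

The interest, of course, lies not in the checks themselves but in how routinely real systems fail them.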
For example, consider the International Classification of Diseases, which will be one of our major examples throughout this paper. The full title of the current (10th) edition of the ICD is: "ICD-10 - International Statistical Classification of Diseases and Related Health Problems; Tenth Revision". Note that it is designated a `statistical' classification. By this is meant that only diseases which are statistically significant are to be entered (it is not an attempt to classify all disease). It calls itself a `classification', even though many have said that it is a `nomenclature', since it has no single classificatory principle (it has at least four, which are not mutually exclusive) (Bowker and Star, 1994). In many cases it represents a compromise between conflicting schemes: "The terms used in categories C82-C85 for non-Hodgkin's lymphomas are those of the Working Formulation, which attempted to find common ground among several major classification systems. The terms used in these schemes are not given in the Tabular List but appear in the Alphabetical Index; exact equivalence with the terms appearing in the Tabular List is not always possible" (ICD-10, 1, 215). However, it presents itself clearly as a classification scheme and not a nomenclature. Since 1970, there has been an effort underway by the World Health Organization to build a distinct International Nomenclature of Diseases, whose main purpose will be to provide "a single recommended name for every disease entity" (ICD-10, 1, 25). The point here is that we want to take a broad enough definition so that anything that is consistently called a classification system can be included. If we took a purist view, the ICD would be a nomenclature and who knows what the IND would be. With a broad definition we can look at the work that is involved in building and maintaining a family of entities that people call classification systems - rather than attempt the Herculean, Sisyphean task of purifying the (un)stable systems in place. Howard Becker makes the point here: "Epistemology has been a ... negative discipline, mostly devoted to saying what you shouldn't do if you want your activity to merit the title of science, and to keeping unworthy pretenders from successfully appropriating it. The sociology of science, the empirical descendant of epistemology, gives up trying to decide what should and shouldn't count as science, and tells what people who claim to be doing science do..." (1996: 54-55).
We will take a `standard' to be any set of agreed-upon rules for the production of (textual or material) objects. There are a number of histories of standards which point to the development and maintenance of standards as being a key to industrial production. Thus, as David Turnbull points out, it was possible to build a cathedral like Chartres without standard representations (blueprints) and standard building materials (regular sizes for stones, tools etc.) (1993). However it is not possible to build a modern housing development without them: too much needs to come together - electricity, gas, sewer, timber sizes, screws, nails and so on. The control of standards is a central, often underanalyzed (but see the work of Paul David - for example David and Rothwell, 1994 - for a rich treatment) feature of economic life. They are key to knowledge production as well - Latour (1987) speculates that far more economic resources are spent creating and maintaining standards than in producing `pure' science. Key dimensions of standards are:
* They are often deployed in the context of making things work together - computer protocols for Internet communication involve a cascade of standards (cf. Abbate and Kahin, 1995) which need to work together well in order for the average user to gain seamless access to the web of information (a simplified sketch in code follows this list). There are standards for the components to link from your computer to the phone network, for coding and decoding binary streams as sound, for sending messages from one network to another, for attaching documents to messages and so forth;
* They are often enforced by legal bodies - be these professional organizations, manufacturers' organizations or the State. We can say tomorrow that Volapük (a universal language that boasted some 23 journals in 1889[2]) or its successor Esperanto shall henceforth be the standard language for international diplomacy; without a mechanism of enforcement we shall probably fail.
* There is no natural law that the best (technically superior) standard shall win - the QWERTY keyboard, Lotus 123, DOS and VHS are often cited in this context. Standards have significant inertia, and can be very difficult to change.
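The sketch promised above is deliberately simplified and entirely of our own devising; it does not describe any actual protocol stack. It is meant only to make the idea of a cascade visible: even a toy message passes through several distinct, separately agreed-upon layers, and delivery appears seamless only if every one of them is honored at both ends.

```python
# Illustrative sketch only: a toy 'cascade of standards'. Each layer below
# stands in for a distinct, separately maintained agreement (character
# encoding, binary-to-text encoding, message framing). The message arrives
# intact only if sender and receiver honor every one of them.

import base64
import json

def send(text: str) -> bytes:
    payload = text.encode("utf-8")                    # standard 1: character encoding
    attachment = base64.b64encode(payload).decode()   # standard 2: binary-to-text encoding
    envelope = json.dumps({"version": 1,              # standard 3: agreed message framing
                           "body": attachment})
    return envelope.encode("utf-8")

def receive(wire: bytes) -> str:
    envelope = json.loads(wire.decode("utf-8"))       # every layer must be undone
    payload = base64.b64decode(envelope["body"])      # in the same agreed order
    return payload.decode("utf-8")

message = "world news from the BBC"
assert receive(send(message)) == message              # the 'magic' of seamless delivery
```

Change any one layer on one side only - a different character encoding, a different framing convention - and the cascade breaks; which is precisely why such standards need maintaining and enforcing.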
Classifications and standards are two sides of the same coin. The distinction between them (as we are defining them) is that classifications are containers for the descriptions of events - they are an aspect of organizational, social and personal memory - whereas standards are procedures for how to do things - they are an aspect of acting in the world. Every successful standard imposes a classification system.
UNDERSTANDING CLASSIFYING AND STANDARDIZING
This paper will offer four major themes for understanding classifying, standardizing (and the related processes of formalizing) and their politics and histories. Each theme operates as a gestalt switch - it comes in the form of an infrastructural inversion (Bowker, 1994). Inverting our commonsense notion of infrastructure means taking what have often been seen as behind the scenes, boring, background processes to the real work of politics and knowledge production[3] and bringing their contribution to the foreground. The first two, ubiquity and material texture, speak to the space of actor-networks; the last two, the indeterminate past and the practical politics, speak to their time. Taken together, they sketch out features of the historical creation of the infrastructure which (ever partially, ever incompletely) orders the world in such a way that actor-network theory becomes a reasonable description.
The first major theme is seeing the ubiquity of classifying and standardizing. Classification schemes and standards literally saturate the worlds we live in. This saturation is furthermore intertwined, or webbed together. While it is possible to pull out a single classification scheme or standard for reference purposes, in reality none of them stand alone. So a subproperty of ubiquity is interdependence, if not smooth integration.
The second major theme is to see classifications and standards as materially textured. Under the sway of cognitivism, it is easy to see classifications as properties of mind and standards as ideal numbers or settings. But both have material force in the world, and are built into and embedded in every feature of the built environment (and many of the borderlands, such as with genetically engineered organisms). When we think of classifications and standards as material, we can avail ourselves of what we know about material structures, such as structural integrity, enclosures and confinements, permeability, and durability, among many others. We see people doing this all the time in describing organizational settings, and a common way to hear people's experience of this materiality is through metaphors. So the generation of metaphors is closely linked with the shift to texture.
The third major theme is to see the past as indeterminate[4]. This is not a new idea to historiography, but is important in understanding the evolution of ubiquitous classification/standardization and the multiple voices that are represented in any scheme. No one classification orders reality for everyone -- e.g. the red light-green light-yellow light categories don't work for blind people or those who are red-green color blind. In looking to classification schemes as ways of ordering the past, it is easy to forget those who are overlooked in this way. Thus, the indeterminacy of the past implies recovering multi-vocality; it also means understanding how standard narratives that seem universal have been constructed (Star, 1991a).
The fourth major theme is uncovering the practical politics of classifying and standardizing. There are two aspects of these politics: arriving at categories and standards, and, in the process, deciding what will be visible within the system (and of course what will thus then be invisible). The negotiated nature of standards and classifications follows from indeterminacy and multiplicity: whatever appears as universal or, indeed, standard is the result of negotiation or conflict. How do these negotiations take place? Who determines the final outcome in preparing a formal classification? Visibility issues arise as one decides where to make the cuts in the system, for example, down to what level of detail one specifies a description of work, of an illness, of a setting. Because there are always advantages and disadvantages to being visible, this becomes crucial in the workability of the schema.
Ubiquity
In the built world we inhabit, thousands and thousands of standards are used everywhere, from setting up the plumbing in a house to assembling a car engine to transferring a file from one computer to another. Consider the canonically simple act of writing a letter longhand, putting it in an envelope and mailing it. There are standards for (inter alia): paper size, the distance that lines are apart if it is lined paper, envelope size, the glue on the envelope, the size of stamps, their glue, the ink in the pen that you wrote with, the sharpness of its nib, the composition of the paper (which in turn can be broken down to the nature of the watermark, if any; the degree of recycled material used in its production, the definition of what counts as recycling). And so forth.
Similarly, in any bureaucracy, classifications abound -- consider the simple but increasingly common classifications that are used when you dial an airline for information now ("if you are traveling domestically, press 1"; "if you want information about flight arrivals and departures, press 2...."). And once the airline has hold of you, you are classified by them as a frequent flyer (normal, gold or platinum); corporate or individual; tourist or business class; short haul or long haul (different fare rates and scheduling apply); irate or not (different hand-offs to the supervisor when you complain).
A systems approach would see the proliferation of both standards and classifications as a matter of integration -- almost like a gigantic web of interoperability. Yet the sheer density of these phenomena goes beyond questions of interoperability. They are layered, tangled, textured; they interact to form an ecology as well as a flat set of compatibilities. There ARE spaces between (unclassified, non-standard areas), of course, and these are equally important to the analysis. A question: it seems that increasingly these spaces are marked as unclassified and non-standard. How does that change their qualities?
It is a struggle to step back from this complexity and think about the issue of ubiquity broadly, rather than try to trace the myriad connections in any one case. We need concepts for understanding movements, textures, shifts that will grasp larger patterns in this. For instance, the distribution of residual categories ("not elsewhere classified" or "other") is one such concept. "Others" are everywhere. The analysis of any one instance of a residual category might yield information about biases or what is valued in any given circumstance; seeing that residual categories are ubiquitous offers a much more general sweep on the categorizing tendencies of most modern cultures. Another class of concepts which are found ubiquitously, and which speak to the general pervasiveness of standards and classification schemes, concerns those which describe tangles or mismatches between subsystems. For instance, what Strauss calls a "cumulative mess trajectory" is a useful notion (Strauss, et al., 1985). In medicine, this occurs when one has an illness, is given a medicine to cure the illness, but incurs a serious side effect, which then needs to be treated with another medicine, etc. If the trajectory becomes so tangled that you can't return and the interactions multiply, "cumulative mess" results. We see this phenomenon in the interaction of categories and standards all the time -- ecological examples are particularly rich places to look.
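To give one small illustration of the first of these concepts, the distribution of residual categories, here is a sketch of our own using entirely hypothetical records; it shows the kind of tally from which such an analysis might start, not any actual coding scheme.

```python
# A small sketch (hypothetical data) of tracking the distribution of residual
# categories. Tallying how often records fall into 'other' or 'not elsewhere
# classified' says something about what a scheme values enough to name - and
# what it leaves as a remainder.

from collections import Counter

# Hypothetical coded records, as they might appear on forms.
records = ["fracture", "other", "influenza", "not elsewhere classified",
           "other", "fracture", "other", "diabetes"]

RESIDUAL = {"other", "not elsewhere classified"}

counts = Counter(records)
residual_share = sum(counts[c] for c in RESIDUAL) / len(records)

print(counts)
print(f"share of records in residual categories: {residual_share:.0%}")  # 50%
```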
Texturing Classification and Standardization
How do we "see" this densely saturated classified world? We are commonly used to casually black-boxing this behind-the-scenes machinery, even to the point, as we noted above, of ascribing a casual magic to it. All classification and standardization schemes are a mixture of physical entities such as paper forms, plugs, or software instructions encoded in silicon and conventional arrangements such as speed and rhythm, dimension, and how specifications are implemented. Perhaps because of this mixture, the web of intertwined schemes can be difficult to "see." In general, the trick is to question every apparently natural easiness in the world around us and look for the work involved in making it easy. Within a project or on a desktop, the seeing consists in seamlessly moving between the physical and the conventional. So when a computer programmer writes some lines of C code, she moves within conventional constraints and makes innovations based on them; at the same time, she strikes plastic keys, shifts notes around on a desktop, and consults manuals for various standards and other information. If we were to try to list out all the classifications and standards involved in writing a program, the list could run to pages. Classifications include types of objects, types of hardware, matches between requirements categories and code categories, and meta-categories such as the goodness of fit of the piece of code with the larger system under development. Standards range from the precise integration of the underlying hardware to the 60Hz power coming out of the wall through a standard size plug.
Merely reducing the description to the physical aspect such as the plugs does not get us anywhere interesting in terms of the actual mixture of physical and conventional. A good operations researcher could describe how and whether things would work together, often purposefully blurring the physical/conventional boundaries in making the analysis. But what is missing there is a sense of the landscape of work as experienced by those within it. It gives no sense of something as important as the texture of an organization: is it smooth or rough? Bare or knotty? What is needed is a sense of the topography of all of the arrangements -- are they colliding? co-extensive? gappy? orthogonal? One way to begin to get at these questions is to begin to take quite literally the kinds of metaphors that people use when describing their experience of organizations, bureaucracies, and information systems (Star, in press). So, for example, when someone says something simple like "things are running smoothly," the smoothness is descriptive of an array of articulations of people, things, work and standards. When someone says, "I feel as though the whole project is moving through thick molasses," it points to the opposite experience. These are not merely poetic expressions, although at some level they are that, too. As Schon pointed out in his seminal book, Displacement of Concepts, a metaphor is an import, meant to illuminate aspects of a current situation via juxtaposition (1963). It is also a rich and often unmined source of knowledge about people's experience of the densely classified world.
The Indeterminacy of the Past
There is no way of ever getting access to the past except through classification systems of one sort or another - formal or informal, hierarchical or not... Take the apparently unproblematic statement: "In 1640, the English Revolution occurred; this led to a twenty year period in which the English had no monarchy". The classifications involved here include:
* The current segmentation of time into days, months and years. Accounts of the English revolution generally use the Gregorian calendar, which was adopted some hundred years later - so causing translation problems with contemporary documents;
* The classification of `peoples' into English, Irish, Scots, French and so on. These designations were by no means so clear at the time - the whole discourse of national genius really only arose in the nineteenth century;
* The classification of events into revolutions, reforms, revolts, rebellions and so forth (cf. Furet, 1978 on thinking the French revolution). There really was no concept of `revolution' at the time; our current conception is marked by the historiographical work of Karl Marx.
* And then, what do we classify as being a `monarchy'? There is a strong historiographical tradition which says that Oliver Cromwell was a monarch - he walked, talked and acted like one after all. Under this view, there is no hiatus at all in this English institution; rather a usurper took the throne.
There are two major schools of thought with respect to using classification systems on the past - one saying that we should only use classifications available to actors at the time (authors in this tradition warn against the dangers of anachronism - Hacking (1995) on child abuse is a sophisticated version) and the other that we should use the real classifications that progress in the arts and sciences has uncovered (typically history informed by current sociology will take this path - for example Tort's (1989) work on `genetic' classification systems, which were not so called at the time, but which are of vital interest to the Foucauldian problematic). Whichever we choose, it is clear that we should always understand classification systems according to the work that they are doing - the network within which they are embedded.
When we ask historical questions about the deeply and heterogeneously structured space of classification systems and standards, we are dealing with a 4-dimensional archaeology - some of the structures it uncovers are stable, some in motion; some evolving, some decaying. An institutional memory, about, say, an epidemic, can be held simultaneously and with internal contradictions (sometimes piecemeal or distributed and sometimes with entirely different stories at different locations) across [a given institutional] space.
In the case of AIDS, for example, there are shifting classifications over the last 20 years, including the invention of the category in the first place. There is then a backwards look at cases which might have been AIDS before we had the category (a problematic gaze to be sure, as Bruno Latour (forthcoming) has written about tuberculosis; see also Star and Bowker, 1997). There are the stories about collecting information about a shameful disease, and a wealth of personal narratives about living with it. There is a public health story and a virology story, which use different category systems. There are the standardized forms of insurance companies and the categories and standards of the census bureau; when an attempt was made in the 80s to combine them in order to prevent young men living in San Francisco from getting health insurance, the resultant political challenge stopped the data from being used in this way. At the same time, the blood banks refused for years to employ HIV screening, thus refusing the admission of another category to their blood labeling -- as Shilts (1987) tells us, with many casualties as a result.
Practical Politics
Someone, somewhere, often a body of people in the proverbial gray suits and smoke-filled rooms, must decide and argue over the minutiae of classifying and standardizing. The negotiations themselves form the basis for a fascinating practical ontology -- our favorite example: when is someone really alive? Is it breathing, attempts at breathing, movement....? And how long must each of those last? Whose voice will determine the outcome is sometimes an exercise of pure power: we, the holders of Western medicine and of colonialism, will decide what a disease is, and simply obviate systems such as acupuncture or Ayurvedic medicine. Sometimes the negotiations are more subtle, involving questions such as the disparate viewpoints of an immunologist and a surgeon, or a public health official (interested in even ONE case of the plague) and a statistician (for whom one case is not relevant) (Neumann and Star, 1996).
Once a system is in place, the practical politics of these decisions are often forgotten, literally buried in archives (when records are kept at all) or built into software or the sizes and compositions of things. In addition to our archaeological expeditions into the records of such negotiations, we provide here some observations of the negotiations in action. Finally, even where everyone agrees on the way the classifications or standards should be established, there are often practical difficulties about how to craft their architecture. For example, a classification system with 20,000 "bins" on every form is practically unusable. (The original International Classification of Diseases had some 200 diseases not because of the nature of the human body and its problems but because this was the maximum number that would fit the large census sheets then in use). Sometimes the decision about how fine-grained to make the system has political consequences as well. For instance, describing and recording the tasks someone does, as in the case of nursing work, may mean controlling or surveilling their work as well, and may imply an attempt to take away discretion. After all, the loosest classification of work is accorded to those with the most power and discretion, who are able to set their own terms.
These ubiquitous, textured classifications and standards help frame our representation of the past and the sequencing of events in the present. They can best be understood as doing the ever-local, ever-partial work of making it appear that science describes nature (and nature alone) and that politics is about social power (and social power alone). Consider the case discussed at length by Young (1995) and Kirk and Kutchins (1992) of psychoanalysts who, in order to receive reimbursement for their procedures, need to couch them in a biomedical language (the DSM) that is anathema to them, but is the lingua franca of the medical insurance companies. There are local translation mechanisms that allow the DSM to continue to operate and to provide the sole legal, recognized representation of mental disorder. A `reverse engineering' of the DSM or the ICD reveals the multitude of local political and social struggles and compromises which go into the constitution of a `universal' classification.
INFRASTRUCTURE AND ACTOR NETWORK THEORY
We have, then, looked briefly at the space and time of the infrastructures that subtend actor-networks. Our position is that through due attention to these infrastructures, we can achieve an understanding of how it is that actor-network theory comes to be a useful way of describing the nature of scientific knowledge on the one hand and the (increasing) convergence of human and non-human on the other.
The converging sameness of humans and non-humans, and in general the construction of a world in which actor-network theory is true, is a political and ethical question. Work by scholars such as Joan Fujimura (1991), Valerie Singleton and Mike Michael (1993) and Leigh Star (1991b; 1995) has pointed to the fact that actor-network theory can be read as an uncritical celebration of the power of modern science and technology. There are certainly readings of Latour's Science in Action or The Pasteurization of France which could support such an assertion. Through our concentration on the work of standardization and classification - a concentration fully consonant with the analysis of Latour and Callon - we are pointing to a place where actor-network theory can be further developed; and to a place where its political side meets its philosophical underpinnings.
In order to clarify our position here, let us take an analogy. In the early nineteenth century in England there were a huge number of capital crimes - starting from stealing a loaf of bread and going up... . However, precisely because the penalties were so draconian, few juries would ever impose the maximum sentence; and indeed there was actually a drastic reduction in the number of executions even as the penal code was progressively strengthened. There are two ways of writing this history - one can either concentrate on the creation of the law; or one can concentrate on the way things worked out in practice. This is very similar to the position taken in Latour's We have never been modern, where he says we can either look at what scientists say that they are doing (working within a purified realm of knowledge) or at what they actually are doing (manufacturing hybrids). Actor-network theory has looked in detail at the role of relatively black-boxed hybrids in creating the discourse of pure science as endpoint; we are advocating a development of the theory that pays more attention to the classification and standardization work that allows for hybrids to be manufactured and so explores the terrain of the politics of science in action.
The point for us is that both of these are valid kinds of account. Early actor-network theory concentrated on the ways in which it comes to seem that science gives an objective account of natural order: trials of strength, enrolling of allies, cascades of inscriptions and the operation of immutable mobiles. It drew attention to the importance of the development of standards (though not to the linked development of classification systems); but did not look at these in detail. We were invited to look at the process of producing something which looked like what the positivists alleged science to be. We got to see the `Janus face' of science. In so doing we `followed the actors'. We shared their insights (allies must be enrolled, translation mechanisms must be set in train so that, in the canonical case, Pasteur's laboratory work can be seen as a direct translation of the quest for French honor after defeat on the battlefield).
However, by the very nature of the method, we also shared their blindness. The actors being followed did not see what was excluded: they constructed a world in which that exclusion could occur. Thus if we just follow the doctors who create the International Classification of Diseases at the World Health Organization in Geneva, we will not see the variety of representation systems that other cultures have for classifying diseases of the body and spirit; and we will not see the fragile networks these classification systems subtend. Rather, we will see only those actants who are strong enough, and shaped in the right way, to impact the fragile actor-networks of allopathic medicine. We will see the blind leading the blind.
We subscribe to Latour's (1987) definition of reality as `that which resists' (again, a concept with strong American pragmatist resonances, see e.g. Dewey, 1916). The actor-network will be changed by the resistances that it encounters. We have suggested that the work of dealing with resistance is twofold:
* Changing the world such that the actor-network's description of reality becomes true. Thus if all diseases (of the mind and body) are classified purely physiologically and systems of medical observation and treatment are set up such that the physical manifestations are the only manifestations recorded and physical treatments are the only treatments available, then it is of course possible that the world will be such that schizophrenia, say, results purely and simply from a chemical imbalance in the brain. It will be impossible to think or act otherwise. We have called this the principle of convergence (Star and Bowker, 1994; Neumann, Bowker and Star, in press).
* Distributing the resistance in such a way that it becomes marginalized and can be overlooked.
A good example of responses to resistances comes from the nursing administrators we are studying at present. We will see how they are producing a classification of nursing work whose political edge is in the technical work of meshing this classification system with those already operating within the sociotechnical framework of the hospital. There is a play of resistances around this politics of representation.
The Iowa Intervention Team are producing a classification of all nursing work - a nursing interventions classification (NIC) (McCloskey and Bulechek, 1996). NIC itself is a fascinating system. Those of us studying it see it as an ethnomethodological nirvana. Some categories, like bleeding reduction - nasal, are on the surface relatively obvious and codable into discrete units of work practice to be carried out on specific occasions. But what about the equally important categories of hope instillation and humor? Hope instillation includes the subcategory of `Avoid masking the truth'. This is not so much something that nurses do on a regular basis as a standing injunction: masking the truth is something they should never do. It also includes: `Help the patient expand spiritual self'. Here the contribution that the nurse is making is to an implicit lifelong program of spiritual development. With respect to humor, the very definition of the category suggests the operation of a paradigm shift: "Facilitating the patient to perceive, appreciate, and express what is funny, amusing, or ludicrous in order to establish relationships"; and it is unclear how this could ever be attached to a time line: it is something the nurse should always do while doing other things. Further, contained within the nursing classification is an anatomy of what it is to be humorous, and a theory of what humor does. The recommended procedures break humor down into subelements. One should determine the types of humor appreciated by the patient; determine the patient's typical response to humor (e.g. laughter or smiles); select humorous materials that create moderate arousal for the individual (for example `picture a forbidding authority figure dressed only in underwear'); encourage silliness and playfulness and so on, to make a total of fifteen sub-activities, any one of which might be scientifically relevant. A feature traditionally attached to the personality of the nurse (being a cheerful and supportive person) is now attached through the classification to the job description as an intervention which can be accounted for.
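As an illustration of what such coding captures and what it leaves out, the following sketch is our own hypothetical rendering, not the NIC's actual data model; the activity lists are abridged from the examples quoted above, and the definition wording and duration field are invented for the purpose of the illustration.

```python
# Hypothetical rendering (ours, not the NIC's own data model) of an
# intervention coded as a label plus a list of discrete activities. Note what
# the structure can and cannot hold: 'Humor' acquires a checklist of
# sub-activities, but the expectation that it runs continuously alongside all
# other work has no field to live in.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Intervention:
    label: str
    definition: str
    activities: List[str] = field(default_factory=list)
    duration_minutes: Optional[int] = None   # invented field; blank for work with no time line

bleeding_reduction_nasal = Intervention(
    label="Bleeding reduction - nasal",
    definition="Limitation of blood loss from the nasal cavity",  # illustrative wording
    activities=["Apply pressure", "Apply ice pack", "Monitor amount of bleeding"],
    duration_minutes=15,   # hypothetical value
)

humor = Intervention(
    label="Humor",
    definition=("Facilitating the patient to perceive, appreciate, and express "
                "what is funny, amusing, or ludicrous in order to establish "
                "relationships"),
    activities=["Determine types of humor appreciated by the patient",
                "Determine the patient's typical response to humor",
                "Encourage silliness and playfulness"],
    duration_minutes=None,   # 'always, while doing other things' cannot be coded here
)

for item in (bleeding_reduction_nasal, humor):
    print(item.label, "-", len(item.activities), "coded activities")
```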
Within the context of the hospital's sociotechnical system, nursing work has been deemed irrelevant to any possible future reconstruction; it has been canonically invisible, in Star's (1991a) term. The logic of NIC's advocates is that what has been excluded from the representational space of medical practice should be included. The Iowa group, the kernel of whom were teachers of nursing administration, made essentially three arguments for the creation of a nursing classification. First, it was argued that without a standard language to describe nursing interventions, there would be no way of producing a scientific body of knowledge about nursing. NIC in theory would be articulated with two other classification systems: NOC (the nursing sensitive patient outcomes classification scheme) and NANDA (the nursing diagnosis scheme). The three could work together as follows. One could perform studies over a set of hospitals employing the three schemes in order to check if a given category of patient responded well to a given category of nursing intervention. Rather than this comparative work being done anecdotally as in the past through the accumulation of experience, it could be done scientifically through the conduct of experiments. The Iowa Intervention project made up a jingle - NANDA, NIC and NOC, to the tune of Hickory, Dickory, Dock - to stress this interrelationship of the three schemes. The second argument for classifying nursing interventions was that it was a key strategy for defending the professional autonomy of nursing. The Iowa nurses are very aware of the literature on professionalization - notably Schon (1983) - and of the force of having an accepted body of scientific knowledge as their domain. (Indeed Andrew Abbott, taking as his central case the professionalization of medicine, makes this one of his key attributes of a profession [1988].) The third argument was that nursing, alongside other medical professions, was moving into the new world of computers. As the representational medium changed, it was important to be able to talk about nursing in a language that computers could understand - else nursing work would not be represented at all in the future, and would risk being even further marginalized than it was at present.
However, there is also a danger in representing. It is more difficult to hive off aspects of nursing duties and give them to lower-paid adjuncts if nursing work is relatively opaque. The test sites that are implementing NIC have provided some degree of resistance here, arguing that activities should be specified - so that, within a soft decision support model, a given diagnosis can trigger a nursing intervention constituted of a single, well-defined set of activities. As Marc Berg (in press) has noted in his study of medical expert systems, such decision support can only work universally if local practices are rendered fully standard. A key professional strategy for nursing - particularly in the face of the ubiquitous process re-engineer - is realized by deliberate non-representation in the information infrastructure. What is remembered in the resulting formal information systems is attuned to professional strategy and to the information requisites of the nurses' take on what nursing science is.
Further, there is a brick wall that they come up against when dealing with nurses on the spot: if they overspecify an intervention (that is, break it down into too many constituent parts), then it gets called, in the field, an NSS classification - where NSS stands for `No shit, Sherlock' - and is not used (Bowker, Star and Timmermans, 1996). It is assumed that any reasonable education in nursing or medicine should lead to a common language wherein things do not need spelling out to the ultimate degree. The information space will be sufficiently well pre-structured that some details can be assumed. Attention to the finer-grained details is delegated to the educational system, where it is overdetermined.
These NIC-related strategies - dealing with overspecification, and the political drive to relative autonomy by dropping things out of the representational space - are essential for the development of a successful actor-network system that includes nursing. These two forms of erasure of local context are needed in order to create the very infrastructure in which nursing can both appear as a science like any other and yet nursing as a profession can continue to develop as a rich, local practice. The ongoing erasure is guaranteed by the classification system: only information about nursing practice recognized by NIC can be coded on the forms fed into a hospital's computers or stored in a file cabinet.
Nursing informaticians agree as a body that in order for proper health care to be given and for nurses to be recognized as a profession, hospitals as organizations should code for nursing within the framework of their memory systems: nursing work should be classified and forms should be generated which utilize these classifications. However, there has been disagreement with respect to strategy. To understand the difference that has emerged, recall one of those forms you have filled in (we have all experienced one) which do not allow you to say what you think. You may, in a standard case, have been offered a choice of several racial origins; but may not believe in any such categorization. There is no room on the form to write an essay on race identity politics. So either you make an uncomfortable choice in order to get counted, and hope that enough of your complexity will be preserved by your set of answers to the form; or you don't answer the question and perhaps decide to devote some time to lobbying the producers of the offending form to reconsider their categorization of people. The NIC group has wrestled with the same strategic choice: fitting their classification system into the Procrustean bed of all the other classification systems that they have to articulate with in any given medical setting in order to form part of a given organization's potential memory; or rejecting the ways in which memory is structured in the organizations that they are dealing with. We will now look in turn at each of these strategies.
Let us look first at the argument for including NIC within the information infrastructural framework of the hospital's sociotechnical system. Proponents of this strategy argue that NIC has to respond to multiple important agendas simultaneously. Consider the following description of needs for a standard vocabulary of nursing practice:
It is essential to develop a standardized nomenclature of nursing diagnoses in order to name without ambiguity those conditions in clients that nurses identify and treat without prescription from other disciplines; such identification is not possible without agreement as to the meaning of terms. Professional standards review boards require discipline-specific accountability; some urgency in developing a discipline-specific nomenclature is provided by the impending National Health Insurance legislation, since demands for accountability are likely both to increase and become more stringent following passage of the legislation. Adoption of a standardized nomenclature of nursing diagnoses may also alleviate problems in communication between nurses and members of other disciplines, and improvement in interdisciplinary communication can only lead to improvement in patient care. Standardization of the nomenclature of nursing diagnoses will promote health care delivery by identifying, for legal and reimbursement purposes, the evaluation of the quality of care provided by nurses; facilitate the development of a taxonomy of nursing diagnoses; provide the element for storage and retrieval of nursing data; and facilitate the teaching of nursing by providing content areas that are discrete, inclusive, logical, and consistent . (Castles, 1981, 38)
We have cited this passage at length since it unites most of the motivations for the development of NIC. The development of a new information infrastructure for nursing, heralded in this passage, will make nursing more `memorable'. It will also lead to a clearance of past nursing knowledge - henceforth prescientific - from the textbooks; it will lead to changes in the practice of nursing (a redefinition of disciplinary boundaries) - a shaping of nursing so that future practice converges on its representation.
Many nurses and nursing informaticians are concerned that the profession itself may have to change too much in order to meet the requirements of the information infrastructure. We murder, they note, to dissect. In her study of nursing information systems in France, Ina Wagner (1993) speaks as follows of the gamble of computerizing nursing records:
Nurses might gain greater recognition for their work and more control over the definition of patients' problems while finding out that their practice is increasingly shaped by the necessity to comply with regulators' and employers' definitions of 'billable categories.'
Indeed, a specific feature of this 'thought world' into which nurses are gradually socialized through the use of computer systems is the integration of management criteria into the practice of nursing. She continues: "Working with a patient classification system with time units associated with each care activity enforces a specific time discipline on nurses. They learn to assess patients' needs in terms of working time." This analytic perspective is shared by the Iowa nurses. They argue that documentation is centrally important; it not only provides a record of nursing activity but structures that activity:
While nurses do complain about paperwork, they structure their care so that the required forms get filled out. If the forms reflect a philosophy of the nurse as a dependent assistant to the doctor who delivers technical care in a functional manner, this is to some extent the way the nurse will act. If the forms reflect a philosophy of the nurse as a professional member of the health team with a unique independent function, the nurse will act accordingly. In the future, with the implementation of price-per-case reimbursement vis-à-vis diagnosis related groups, documentation will become more important than ever. (Bulechek and McCloskey, 1985, 406).
As the NIC classification has developed, observes Joanne McCloskey, the traditional category of `nursing process' has been replaced by `clinical decision making plus knowledge classification'. And in a representation of NIC that she produced, both the patient and the nurse had dropped entirely out of the picture (both were, she said, located within the `clinical decision making box' on her diagram) (Iowa Intervention Project meeting, 6/8/95). A recent book about the next-generation nursing information system argued that the new system:
Cannot be assembled like a patchwork quilt, by piecing together components of existing technologies and software programs. Instead, the system must be rebuilt on a design different from that of most approaches used today: it must be a data-driven rather than a process-driven system. A dominant feature of the new system is its focus on the acquisition, management, processing, and presentation of 'atomic-level' data that can be used across multiple settings for multiple purposes. The paradigm shift to a data-driven system represents a new generation of information technology; it provides strategic resources for clinical nursing practice, rather than just support for various nursing tasks. (Zielstorff et al., 1993, 1).
This speaks to the progressive denial of process and continuity through the segmentation of nursing practice into activity units. Many argue that in order to `speak with' databases at a national and international level, just such segmentation is needed. The fear is that unless nurses can describe their process this way (at the risk of losing the essence of that process in the description), then it will not be described at all. They can only have their own actions remembered at the price of having others forget, and possibly forgetting themselves, precisely what it is that they do.
Some nursing informaticians have chosen instead to challenge the informational framework existing in the medical organizations they deal with. They have adopted a Batesonian strategy of responding to the threat of the new information infrastructure by moving the whole argument up one level of generality and trying to supplant `data-driven' categories with categories that recognize process on their own terms. Thus the Iowa team pointed to the fact that women physicians often spend longer with patients than male doctors, but they need to see patients less often as a result: they argue that just such a process-sensitive definition of productivity needs to be argued for and implemented in medical information systems in order that nursing work gets fairly represented (Iowa Intervention Project meeting, 6/8/95). They draw from their secret (because unrepresented) reservoir of knowledge about process in order to challenge the data-driven models from within.
Within this strategy, the choice of allies is by no means obvious. Since with the development of NIC we are dealing with the creation of an information infrastructure, the whole question of how and what to challenge becomes very difficult. Scientists can only, willy-nilly, deal with data as presented to them by their information base, just as historians of previous centuries must, alas, rely on written traces. When creating a new information infrastructure for an old activity, questions have a habit of running away from one: a technical issue about how to code process can become a challenge to organizational theory (and its database). A defense of process can become an attack on the scientific world view. One of the chief attacks on the NIC scheme has been made by a nursing informatician, Susan Grobe, who believes that rather than standardize nursing language, computer scientists should develop natural language processing tools so that nurse narratives can be interpreted. Grobe argues for the abandonment of any goal of producing: "A single coherent account of the pattern of action and beliefs in science" (1992, 92); she goes on to say that: "philosophers of science have long acknowledged the value of a multiplicity of scientific views" (92). She excoriates Bulechek and McCloskey, architects of NIC, for having produced work: "derived from the natural science view with its hierarchical structures and mutually exclusive and distinct categories" (93). She, on the other hand, is drawing from cognitive science, library science and social science (94). Or again, a recent paper on conceptual considerations, decision criteria and guidelines for the Nursing Minimum Data Set cited Fritjof Capra against reductionism, Stephen Jay Gould on the social embeddedness of scientific truth and praised Foucault for having developed a philosophical system to "grapple with this reality" (Kritek, 1988, 24). Nurse scientists, it is argued, "have become quite reductionistic and mechanistic in their approach to knowledge generation, at a time when numerous others, particularly physicists, are reversing that pattern" (p. 27). And nursing has to find allies amongst these physicists:
Nurses who deliver care engage in a process. It is actually the cyclic, continuous repetition of a complex process. It is difficult, therefore, to sketch the boundaries of a discrete nursing event, a unit of service, and, therefore, a unit of analysis. Time is clearly a central force in nursing care and nursing outcomes. Nurses have only begun to struggle with this factor. It has a centrality that eludes explication when placed in the context of quantum physics. (Kritek, 1988: 28)
The point here is not whether this argument is right or wrong. It is an interesting position. It can only be maintained, as can many of the other possible links that bristle through the NIC literature, because the information infrastructure itself is in flux. When the infrastructure is not in place to provide a `natural' hierarchy of levels, then discourses can and do make strange connections between themselves.
If the NIC nurses want to prove a case within a given hospital for the opening-up of a new nursing position, they need to demonstrate that nursing is cost-effective according to the dominant accountancy paradigm. Now they in fact disagree with this paradigm (arguing, for example, that `quality of care' is not quantifiable but is still significant); and yet they feel that they must act as if they accept it - or else their voice will not be heard at all. There is a group of radical accountants who argue for the kinds of position that the NIC nurses are taking; however, these accountants are tied in to a different series of local battles about classification and standardization. The resistance to such cost accounting might be large in the aggregate while its impact, because of effective distribution, is minimal.
In order to not be continually erased from the record, nursing informaticians are risking either modifying their own practice (making it more data driven) or waging a Quixotic war on database designers. The corresponding gain is great, however. If the infrastructure itself is designed in such a way that nursing information has to be present as an independent, well defined category, then nursing itself as a profession will have a much better chance of surviving through rounds of process re-engineering and nursing science as a discipline will have a firm foundation. The infrastructure assumes the position of Bishop Berkeley's God: as long as it pays attention to nurses, they will continue to exist. Having ensured that all nursing acts are potentially remembered by any medical organization, the NIC team will have gone a long way to ensuring the future of nursing.
What actor-network theory has to offer in its approach to resistance is a reading of where and how political work is done in the world of technoscience; and how such work can be problematized and challenged. Donald MacKenzie's wonderful study of `missile accuracy' furnishes the best example of this approach. In a concluding chapter to his book, he discusses the possibility of `uninventing the bomb', by which he means changing society and technology in such a way that the atomic bomb becomes an impossibility. Such change, he suggests, can be carried out in part at the overt level of political organizations. However, and crucially for our purposes, he also sensitizes the reader to the development and maintenance of technical standards as a site of political decisions and struggle. Standards and classifications, however dry and formal on the surface, are suffused with traces of political and social work.
CONCLUSION
It is difficult when discussing any theory to adopt the appropriate degree of reflexivity. Actor-network theory tells us quite clearly that a theory should not be judged according to an absolute set of indicators, but according to the work that it does in the world. How does the theory itself stand up against this criterion?
We have argued that it can do a good job in drawing our attention to the real political work that is being done in the development of technoscience; and can provide us with some useful concepts for analyzing that work. We have not in this paper argued, but would maintain (in accordance especially with Michel Serres' corpus; and to an extent Latour's We have never been modern and Dieux Faitiches) the symmetrical position that there is real philosophical and scientific work being done in the realm traditionally seen as the purely political. The central point is that technoscientific societies are powerful precisely because they are so good at delegating and distributing; and that actor-network theory is well positioned to track and describe the work of delegation and distribution.
Does this mean that actor-network theory is the theory for our times? Indeed not. However, it is a theory which takes the work of classification and standardization seriously; and so provides one way of understanding the development of a master narrative (Western science) which is not a master narrative (because it frequently breaks down locally, as postmodernists would remind us) and yet which acts like one (in that it enacts the very exclusions and silencing that allow it to appear to be true). The magic of modern technoscience is a lot of hard work.
References:
Abbate, J. and Kahin, B. (eds) (1995). Standards policy for information infrastructure. Cambridge, MA: MIT Press.
Abbott, A. (1988). The system of professions: An essay on the division of expert labor. Chicago: University of Chicago Press.
Baudrillard, J. (1990). Cool memories. New York: Verso.
Becker, H.S. (1996). `The epistemology of qualitative research' in R. Jessor, A. Colby and R.A. Shweder (eds), Ethnography and human development: Context and meaning in social inquiry (Pp. 53-71). Chicago: University of Chicago Press.
Berg, M. (forthcoming 1996). Rationalizing medical work: Decision support techniques and medical problems. Cambridge, MA: MIT Press.
Bowker, G. (1994). Science on the run: Information management and industrial geophysics at Schlumberger, 1920-1940. Cambridge, MA: MIT Press.
Bowker, G. and Star, S.L. (1994). Knowledge and infrastructure in international information management: Problems of classification and coding. In L. Bud-Frierman (ed), Information acumen: The understanding and use of knowledge in modern business (Pp. 187-216). London: Routledge.
Bowker, G., Star, S.L. and Timmermans, S. (1996). "Infrastructure and organizational transformation: Classifying nurses' work" in W. Orlikowski, G. Walsham, M. Jones and J. DeGross (eds), Information technology and changes in organizational work (Proceedings IFIP WG8.2 Conference, Cambridge, England) (Pp. 344-370). London: Chapman and Hall.
Bulechek, G. and McCloskey, J. (1985). Future directions. In G.M. Bulechek and J.C. McCloskey, Nursing interventions: Treatments for nursing diagnoses (Pp. 401-408). Philadelphia, PA: Saunders.
Callon, M. (1986). Some elements of a sociology of translation. In J. Law (Ed.), Power, action, and belief: A new sociology of knowledge? (Pp. 196-233). London: Routledge and Kegan Paul.
Castles, M.R. (1981). Nursing diagnosis: Standardization of nomenclature. In H.H. Werley and M.R. Grier (eds), Nursing information systems (Pp. 36-44). New York: Springer.
Cody, W. (1995). Letter from William K. Cody. Nursing outlook, 43(2), 93-94.
David, P. and Rothwell, G.S. (1994). Standardization, diversity and learning: Strategies for the coevolution of technology and industrial capacity. Stanford, CA: Center for Economic Policy Research, Stanford University.
Desrosières, A. and Thévenot, L. (1988). Les catégories socio-professionnelles. Paris: Découverte.
Dewey, J. (1916). Logic: The theory of inquiry. New York: Holt, Rinehart and Winston.
Dewey, J. (1929). The quest for certainty. New York: Open Court.
Dewey, J. (1989). [Originally published in 1925]. Experience and nature. La Salle, IL: Open Court Press.
Fujimura, J. (1991). On methods, ontologies, and representation in the sociology of science: Where do we stand? In D. Maines (ed), Social organization and social process: Essays in honor of Anselm L. Strauss. Hawthorne, NY: Aldine de Gruyter.
Furet, F. (1978). Penser la Révolution française. Paris: Gallimard.
Grobe, S. (1992). Response to J.C. McCloskey's and G.M. Bulechek's Paper on Nursing Intervention Scheme. In The Canadian Nurses Association, Papers from the Nursing Minimum Data Set Conference, October 27-29, 1992, Edmonton, Alberta: The Canadian Nurses Association.
Hacking, I. (1995). Rewriting the soul: multiple personality and the sciences of memory. Princeton, N.J.: Princeton University Press.
Huffman, E. (1990). Medical record management. Berwyn, IL: Physicians' Record Company.
Hutchins, E. (1995). Cognition in the wild. Cambridge, MA: MIT Press.
ICD-10. (1992). ICD-10. International Statistical Classification of Diseases and Related Health Problems, tenth revision, Volume 1. Geneva: World Health Organization.
Jenkins, T. (1988). New roles for nursing professionals. In M.J. Ball, K.J. Hannah, U. Gerdin Jelger and H. Peterson (eds), Nursing informatics: Where caring and technology meet (Pp. 88-95). New York: Springer.
Kirk, S.A. and Kutchins, H. (1992). The selling of DSM: The rhetoric of science in psychiatry. New York: A. de Gruyter.
Kritek, P.B. (1988). Conceptual considerations, decision criteria and guidelines for the nursing minimum data set from a practice perspective. In H.H. Werley and N.M. Lang (eds), Identification of the nursing minimum data set (Pp. 22-33). New York: Springer.
Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Milton Keynes: Open University Press.
Latour, B. (1993). We have never been modern. Cambridge, MA: Harvard University Press.
Latour, B. (1996a). Aramis or the love of technology. Cambridge, MA: Harvard University Press.
Latour, B. (1996b). Petite réflexion sur le culte moderne des dieux faitiches. Paris: Les Empêcheurs de penser en rond.
Latour, B. (forthcoming). `Did Ramses II die of tuberculosis? On the partial existence of existing and non-existing objects'. Typescript from author.
McCloskey, J. and Bulechek, G. (1996). Iowa Intervention Project: Nursing Interventions Classification (NIC), second edition. St Louis, MO: Mosby.
MacKenzie, D.A. (1990). Inventing accuracy: An historical sociology of nuclear missile guidance. Cambridge, MA: MIT Press.
Neumann, L.J. and Star, S.L. (1996). Making infrastructure: The dream of a common language. In J. Blomberg, F. Kensing and E. Dykstra-Erickson (eds), Proceedings of PDC '96 (Participatory Design Conference) (Pp. 231-240). Palo Alto, CA: Computer Professionals for Social Responsibility.
Neumann, L., Star, S.L. and Bowker, G.C. (in press). `Information convergence'. Submitted to Journal of the American Society for Information Science (JASIS).
Proust, M. (1989). À la recherche du temps perdu, tome IV. Paris: Pléiade.
Schon, D. (1963). Displacement of concepts. London: Tavistock Publications.
Schon, D. (1983). The reflective practitioner: How professionals think in action. New York: Basic Books.
Serres, M. (1993). Les origines de la géométrie. Paris: Flammarion.
Singleton, V. and M. Michael. (1993). Actor-networks and ambivalence: General practitioners in the UK Cervical Screening Programme. Social studies of science 23: 227-64.
Star, S.L. (1989). Regions of the mind: brain research and the quest for scientific certainty. Stanford, CA: Stanford University Press.
Star, S.L. (1991a). The sociology of the invisible: The primacy of work in the writings of Anselm Strauss. In D. Maines (ed), Social organization and social process: Essays in honor of Anselm Strauss (Pp. 265-283). Hawthorne, NY: Aldine de Gruyter.
Star, S.L. (1991b). "Power, technologies and the phenomenology of standards: On being allergic to onions," in J. Law (ed), A sociology of monsters? Power, technology and the modern world (Pp. 27-57). Sociological Review Monograph No. 38. Oxford: Basil Blackwell.
Star, S.L. (1995a). Introduction. In S.L. Star (ed), Ecologies of knowledge: Work and politics in science and technology. Albany, NY: SUNY Press.
Star, S.L. (1995b). The politics of formal representations: Wizards, gurus and organizational complexity. In S.L. Star (ed), Ecologies of knowledge: Work and politics in science and technology (Pp. 88-118). Albany, NY: SUNY Press.
Star, S.L. (In press). Leaks of experience: The link between science and knowledge. To appear in Thinking Practices, eds. Shelley Goldman and James Greeno. Hillsdale, NJ: Lawrence Erlbaum Associates.
Star, S. L. and G. C. Bowker. (1997). Of lungs and lungers: The classified story of tuberculosis. Mind, Culture and Activity.
Star, S.L. and A. L. Strauss. (In press.) Layers of silence, arenas of voice: The dialogues between visible and invisible work. In B. Nardi and Y. Engeström, eds. A Web on the Wind: The Structure of Invisible Work.
Strauss, A., S. Fagerhaugh, B. Suczek and C. Wiener. (1985). Social organization of medical work. Chicago: University of Chicago Press.
Tort, P. (1989). La raison classificatoire: Les complexes discursifs: Quinze études. Paris: Aubier.
Turnbull, D. (1993). `The ad hoc collective work of building gothic cathedrals with templates, string, and geometry'. Science, technology, & human values, 18(3), 315-343.
Wagner, I. (1993). Women's voice: The case of nursing information systems. AI and society, 7(4).
Weick, K.E. and Roberts, K.H. (1993). Collective mind in organizations: Heedful interrelating on flight decks. Administrative science quarterly, 38, 357-381.
Young, A. (1995). The harmony of illusions: Inventing post-traumatic stress disorder. Princeton, NJ: Princeton University Press.
Zielstorff, R.D., Hudgings, C.I., Grobe, S.J. and The National Commission on Nursing Implementation Project (NCNIP) Task Force on Nursing Information Systems (1993). Next-generation nursing information systems: Essential characteristics for professional practice. Washington, DC: American Nurses Publishing.