Research methods

Chapter 9 of

Technology for Nonviolent Struggle
by Brian Martin
(London: War Resisters' International, 2001)


The content of science and technology for nonviolent struggle -- that is, the fields studied, the ideas and the artefacts developed -- differs in a range of ways from the content of military science and technology, as illustrated in previous chapters. There is also another and perhaps more profound difference. For science and technology to serve the purposes of nonviolent struggle effectively, there must be fundamental changes in the method of doing science and of testing technologies.

To talk of "scientific method" immediately raises images of formulating hypotheses and undertaking experiments to test them. A common view of scientific method draws on Karl Popper's idea of conjectures and refutations, in which the constant aim is to falsify existing theories.[1] There are also many other images associated with "scientific method," including objectivity of the scientist, rejection of deceit, open publication of results, and principles such as Ockham's razor (finding the hypothesis that requires the fewest arbitrary assumptions).

It is appropriate to talk of "images" associated with "scientific method" because, on closer scrutiny, "scientific method" turns out to be a convenient myth. It is a myth because the way science actually proceeds often bears little resemblance to the official principles of "scientific method."[2] For example, scientists seldom reject an established theory because there is evidence that contradicts it, although this is what is specified by Popper's falsificationism. When careful experimenters found an aether drift that should have falsified the special theory of relativity, the results were simply assumed to be wrong and ignored for decades. The much touted trait of scientific objectivity is thin on the ground: many scientists, particularly elite scientists, are passionately committed to their pet theories and will go to amazing lengths to maintain their views in the face of disconfirming evidence.[3] The subjective aspects are quite apparent to most practising scientists.

"Scientific method" is a convenient myth because it portrays science as above and beyond the ordinary failings of normal society, in which personal biases, corruption, vested interests, and social structures are seen to play a significant role. Why should science be different? The "scientific method" promises to transmute the activities of fallible humans into Truth. Without the blessing of "scientific method," science becomes simply one more human enterprise, with all the possibilities for serving the purposes of either domination or liberation. That of course is a central theme in this book. Science can be shaped to serve either violent or nonviolent methods of struggle -- just as it can be shaped to serve commercial, democratic or other values -- and in practice it has been massively shaped to serve violent ends.

So how would the practice of science be different with priorities for nonviolent struggle? If the usual idea of "scientific method" is a myth, then it is necessary to describe what actually goes on in the doing of science. For my purposes here, only a broad description is necessary. Most scientific research is undertaken by full-time professional scientists, most of whom are employees of governments, corporations or universities. The practice of science is something that happens among these professionals in laboratories or on field trips. Very seldom are non-scientists involved in the doing of scientific research, except as the subject of experiments.

In the case of military research, the end product is usually a piece of technology or occasionally an idea such as a behavioural technique. Technologies are tested by engineers in laboratories and then by military personnel in special facilities. The ultimate test is in war. Note that in the applied end of military R&D, the process moves out of the hands of the engineering professionals and into the hands of military professionals. The rest of the population is normally not involved. There are exceptions, though, such as fallout shelters for surviving nuclear attacks. Building fallout shelters makes little sense unless people are willing and able to use them, and this requires education and training of the entire population.

There are also many cases where skills and experience are relevant to both civilian and military tasks, as in the case of pilots who can fly either civilian or military aircraft and electrical engineers who can set up either civilian or military power systems. In the case of rifles, some civilians have an indirect input into military design, since they use the weapons, or related ones, for nonmilitary purposes such as hunting. Nevertheless, as a rough generalisation it can be said that military R&D is largely an in-house process, with minimal involvement by people other than military scientists, engineers and personnel. This is because the military enterprise -- at least in the form it has taken in western high-technology professionalised forms -- does not require active participation by the rest of the population. In the case of fuel-air explosives, for example, no "members of the public" are involved, except as casualties.

Nonviolent struggle is quite a different proposition. It is founded on popular support and involvement. Although not everyone has to participate, a considerable level of participation is essential to its success. Whereas most combat soldiers are young, fit men, anyone who wants to, regardless of age, sex or abilities, can participate in some form of nonviolent action.[4] Therefore, science and technology for nonviolent struggle, if they are to be effective, must be developed with the active support and participation of the ultimate users of the ideas and artefacts. This means that the "method" of doing science needs to involve more of the population.

Testing a method of nonviolent action usually means a field test with a large cross-section of the population. This might be planting fruit- and nut-bearing trees to make communities more self-sufficient in food or designing factories so that they can be safely and easily shut down if taken over by an aggressor. The implication is that R&D for nonviolent struggle, to be effective, would require close liaison with numerous community groups, from local gardeners to factory workers. The equivalent of soldiers testing out a new rifle would be a community testing of a new communication procedure.

Consider, for example, radio systems. Military radio systems need only be tested within the military itself. Radio for nonviolent struggle needs to be tested by all who are likely to use it. If cheap, reliable and easy-to-use short-wave systems are to be introduced throughout the society, then extensive tests need to be carried out with all sectors of the population, including groups such as children and people with impaired hearing. The military can develop radio systems and then recruit or train specialists to operate them. Radio for nonviolent struggle, by contrast, needs to be useable by all. Therefore, the design and development phases require input from likely users. In other words, the development process must be responsive to a wider section of the population than is the case for military technology.

Military and nonviolence R&D are alike in that science and technology are never developed solely in the minds of intellectuals or in remote labs: there is always a process of social interaction, including the motivation, funding, training and applications for R&D. Where these alternatives differ, in this regard, is in the social groups of greatest significance to the R&D process.

The so-called scientific revolution was made possible by combining theoretical work, carried out by gentlemen philosophers, with practical skills possessed by the much lower status artisans. Modern science thrives on the theory-practice interaction. Currently it is shaped predominantly by links with the state, corporations and the military. An alternative direction would be created by forging links with grassroots social action and life. In a sense, this would be an extension of the original scientific revolution, expanding the constituency of scientific and technological production beyond professional scientists and engineers and their primary patrons to the general public.

The difference in the development process can be pictured in the following way. For military R&D, scientists, engineers and military testing are somewhat insulated from other influences. "External" social influences on military science and technology exist, to be sure -- examples include strategic policy, competition for funding, and influence of the peace movement. But a key "social influence" is actually the very organisation of the R&D as a professional, in-house enterprise.

In a more participatory process of R&D for nonviolent struggle, there would be no clear distinction between researchers and the rest of the population. Of course, some people may be much more active than others in the process of technological innovation. But in this model, such innovation depends vitally on interaction and cooperation with a wide cross-section of the population. Furthermore, this interaction and cooperation is likely to lead to contributions by others -- those who in the military model would be simply users of the technology. This participatory model of R&D undermines the special role and status of professional scientists and engineers as the exclusive holders of expertise about science and technology.[5]

There are some precedents for this sort of participatory R&D. Citizen groups in Japan -- often with participation by some scientists -- have investigated environmental problems, using simple techniques such as talking to people about local health problems and testing for the presence of radioactivity by observing specially sensitive plants. Such an approach was more successful in determining the cause of Minamata disease -- due to mercury pollution in the ocean -- than heavily funded teams of traditional scientists using sophisticated ocean sampling and computer models.[6]

Many parts of the women's health movement -- most prominently, the Boston Women's Health Book Collective -- have reassessed available evidence and drawn on their own personal experiences to provide a different perspective about women's health, one that is less responsive to the interests of drug companies and medical professionals and more responsive to the concerns and experiences of women themselves.[7]

AIDS activists in the US, concerned about the slow and cumbersome processes for testing and approving drugs to treat AIDS, developed their own criteria and procedures and tried them out with drugs, some of which were produced and distributed illicitly. Their efforts and political pressure led to changes in official procedures.[8]

These examples show that nonscientists can make significant contributions to the process of doing science, and in some cases outperform establishment approaches or bring about changes in them. However, the issue is not a competition between scientists and nonscientists, but rather promotion of a fruitful interaction between them. Scientists, to do their jobs effectively, need to bring the community "into the lab" and nonscientists need to learn what it means to do research. In the process, the distinction between the two groups would be blurred.

A good case study of the two models is the debate over encryption of digital communication described in chapter 5. The military model was embodied, literally and figuratively, in the Clipper chip, designed by the US National Security Agency so that authorised parties could decipher any encrypted messages. Clipper was designed in secrecy. It was based on the Skipjack algorithm, which remained a secret. Clipper and related systems were planned for installation in telephones and computer networks essentially as "black boxes," which people would use but not understand. If Clipper had been a typical military technology, such as a ballistic missile or fuel-air explosive, it would have been implemented in military arenas with little debate (except perhaps from peace activists) and certainly little public input into the choice of technology.

At first glance, the participatory alternative to Clipper is public key encryption, widely favoured by computer users. But rather than the alternative being a particular technology, it is more appropriate to look at the process of choosing a technology. Encryption has been the subject of vigorous and unending discussions, especially on computer conferences. Different algorithms have been developed, tested, scrutinised and debated. This has occurred at a technical level and also a social level. Various encryption systems have been examined by top experts, who have then presented their conclusions for all to examine. As well, the social uses and implications of different systems have been debated. Last but not least, lots of people have used the encryption systems themselves. The contrast to Clipper is striking.
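What it means for an encryption algorithm to be open to public scrutiny, in contrast to the secret Skipjack algorithm inside Clipper, can be made concrete with a toy sketch of public key encryption. The following Python fragment is a hypothetical illustration of the RSA scheme, using deliberately tiny primes; real systems use primes hundreds of digits long, and the whole point of open publication is that anyone can check whether the chosen numbers and algorithms are strong enough.

```python
# Toy RSA-style public key encryption with tiny primes.
# Illustration only: these numbers are trivially breakable,
# which is exactly what open scrutiny would reveal.

p, q = 61, 53            # two secret primes
n = p * q                # public modulus: 3233
phi = (p - 1) * (q - 1)  # 3120, used to derive the private key
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent: modular inverse of e

def encrypt(m, public_key=(e, n)):
    """Anyone holding the public key can encrypt a message."""
    exp, mod = public_key
    return pow(m, exp, mod)

def decrypt(c, private_key=(d, n)):
    """Only the holder of the private key can decrypt."""
    exp, mod = private_key
    return pow(c, exp, mod)

message = 65
ciphertext = encrypt(message)
assert decrypt(ciphertext) == message
```

Because the algorithm and the public key are open, any user can test the system for themselves -- the participatory counterpart of handing citizens a sealed "black box" and asking them to trust it.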

Even the more participatory process used in developing and assessing encryption is still limited to a small part of the population. This is inevitable, since not everyone can be involved in looking at every technology. The point is that the process is relatively open: there are far more people who have investigated cryptography in relation to public key encryption than could ever be the case with a government-sponsored technology such as Clipper. The other important point is that the participatory process requires informed popular acceptance of the technology, rather than imposition through government pressure. The best indicator of the participatory process is a vigorous and open debate involving both technical and social considerations.

The case of encryption shows that participatory R&D does not eliminate the role of expertise. What it does reduce is the automatic association of expertise with degrees, jobs in prestigious institutions, high rank, awards, and service to vested interests. Expertise has to be tested in practical application. Just as an athlete cannot claim current superiority on the basis of degrees or past victories, so an expert in a process of participatory R&D cannot rely on credentials, but is always subject to the test of current practice.

These comments on participatory R&D are inevitably tentative. By their very nature, participatory systems are shaped by the process of participation itself, so what they become is not easy to predict.


Notes

1. Karl R. Popper, Objective Knowledge: An Evolutionary Approach (Oxford: Clarendon Press, 1972).

2. Henry H. Bauer, Scientific Literacy and the Myth of the Scientific Method (Urbana, IL: University of Illinois Press, 1992); Paul Feyerabend, Against Method: Outline of an Anarchistic Theory of Knowledge (London: New Left Books, 1975).

3. Ian I. Mitroff, The Subjective Side of Science: A Philosophical Inquiry into the Psychology of the Apollo Moon Scientists (Amsterdam: Elsevier, 1974).

4. Obviously, not everyone is able to participate in every form of nonviolent action. For example, using a short-wave radio to send messages requires certain skills and technology. But virtually everyone can participate in petitions, rallies, boycotts, strikes and other forms of noncooperation. On participation by people with disabilities, see Brian Martin and Wendy Varney, "Nonviolent action and people with disabilities," Civilian-Based Defense, Vol. 15, No. 3, Year-End 2000, pp. 4-16.

5. There is a considerable literature on citizen participation in technological decision making. See for example Malcolm L. Goggin (ed.), Governing Science and Technology in a Democracy (Knoxville: University of Tennessee Press, 1986); Alan Irwin, Citizen Science: A Study of People, Expertise, and Sustainable Development (London: Routledge, 1995); Frank N. Laird, "Participatory analysis, democracy, and technological decision making," Science, Technology, & Human Values, Vol. 18, No. 3, Summer 1993, pp. 341-361; Brian Martin (ed.), Technology and Public Participation (Wollongong: Science and Technology Studies, University of Wollongong, 1999); James C. Petersen (ed.), Citizen Participation in Science Policy (Amherst: University of Massachusetts Press, 1984); Richard E. Sclove, Democracy and Technology (New York: Guilford Press, 1995); Leslie Sklair, Organized Knowledge: A Sociological View of Science and Technology (St. Albans: Paladin, 1973); Langdon Winner (ed.), Democracy in a Technological Society (Dordrecht: Kluwer, 1992). However, most of this writing sees citizens as involved in decision making but not actually doing research. On science by the people, see Brian Martin, "The goal of self-managed science: implications for action," Radical Science Journal, No. 10, 1980, pp. 3-17; Brian Martin, "Anarchist science policy," The Raven, Vol. 7, No. 2, Summer 1994, pp. 136-153; Richard Sclove, "Research by the people, for the people," Futures, Vol. 29, No. 6, 1997, pp. 541-549. Relevant here are the diverse experiences in participatory action research, though such "people's research" is far more likely to be in fields of social analysis than in science and engineering. See for example Stephen Kemmis and Robin McTaggart (eds.), The Action Research Planner (Geelong, Victoria: Deakin University, 3rd edn, 1988); Robert A. Rubinstein, "Reflections on action anthropology: some developmental dynamics of an anthropological tradition," Human Organization, Vol. 45, No. 3, Fall 1986, pp. 270-279; William Foote Whyte (ed.), Participatory Action Research (Newbury Park, CA: Sage, 1991); Trevor Williams, Learning to Manage our Futures: The Participative Redesign of Societies in Turbulent Transition (New York: Wiley, 1982).

6. Jun Ui, "The interdisciplinary study of environmental problems," Kogai -- The Newsletter from Polluted Japan, Vol. 5, No. 2, Spring 1977, pp. 12-24.

7. Boston Women's Health Book Collective, Our Bodies, Ourselves (Boston: New England Free Press, 1971 and several later editions).

8. Steven Epstein, "Democratic science? AIDS activism and the contested construction of knowledge," Socialist Review, Vol. 21, April-June 1991, pp. 55-64.