Cyber vulnerability

Published in Eneken Tikk and Mika Kerttunen (eds.), Routledge Handbook of International Cybersecurity (London: Routledge, 2020), pp. 111-121

Brian Martin


New Year’s Eve, 1999: people around the world have stockpiled supplies in preparation for possible computer breakdowns. The Y2K problem, also called the millennium bug, arose because old computer code was not prepared to cope with the change from 99 to 00 in the final two digits of the year. There were many predictions of disaster but, when the time came, nothing much happened. Aside from a few malfunctions, everything operated as usual.

2009–2010: Iranian centrifuges, used to enrich uranium, start spinning out of control, causing them to self-destruct. The cause is initially unknown. Eventually it is traced to a computer worm, called Stuxnet, presumably written to infect Iranian devices, which also infected other industrial computing systems. Where did it come from? Suspects include US and Israeli intelligence services.

2030? EMP weapons are unleashed against several major cities. EMP stands for “electromagnetic pulse,” which is like lightning but with a more sudden surge of energy. The pulse can short-circuit all sorts of exposed electronic devices, causing massive chaos. Transport, manufacturing and communications are jeopardised.

These are three examples of cyber vulnerability, with very different outcomes. For years before 2000, there was extensive publicity about the impending collapse of computer-based systems. There are two ways to understand the potential disaster that didn’t happen. One is to give credit to the diligent work of countless computer professionals who ensured that code was not vulnerable: by anticipating possible breakdowns and taking appropriate action, they prevented disaster. The other way to understand Y2K is that the risks were greatly exaggerated, to the benefit of firms offering to fix potentially affected systems. Cyber vulnerability is a problem, but so is unreasonable alarm about the danger. Real or imagined cyber threats can be framed in ways that change perceptions, shape policy and serve the agendas of individuals and groups (Dunn Cavelty, 2008).
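
The defect behind the millennium bug was mundane. The following minimal sketch, written in Python for readability (the systems actually at risk were mostly older COBOL and embedded code), shows how two-digit year arithmetic breaks at the century rollover, and how the common “windowing” remediation postpones the problem rather than removing it:

```python
# Sketch of the Y2K defect: years stored as two digits make date
# arithmetic fail at the century rollover. Illustrative only.

def age_in_years(birth_yy: int, current_yy: int) -> int:
    """Naive two-digit year arithmetic, common in pre-2000 code."""
    return current_yy - birth_yy

print(age_in_years(65, 99))  # in 1999: 34, as expected
print(age_in_years(65, 0))   # on 1 January 2000: -65, not 35

# A typical remediation, "windowing": treat two-digit years below a
# pivot as 20xx and the rest as 19xx. This defers the problem rather
# than eliminating it.
def expand_year(yy: int, pivot: int = 30) -> int:
    return 2000 + yy if yy < pivot else 1900 + yy

print(expand_year(99))  # 1999
print(expand_year(0))   # 2000
```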

The case of Stuxnet is entirely different (Lindsay, 2013; Zetter, 2014). It remains shrouded in secrecy, punctured by only a few exposés. What the episode revealed is that computer malware can be used in offensive mode, to interrupt and possibly control computer systems. The existence of Stuxnet shows that it is probably possible to use code to infect enemy communication, banking, medical and a host of other systems. Some would say that if this is possible, then it is likely that spy agencies are preparing to use such code and to defend against it. But because of the secrecy involved, it is unlikely that civilian operations are being protected.

The phenomenon of the nuclear electromagnetic pulse (EMP) is known due to a few high-altitude nuclear tests in the 1950s and 1960s. That was long before the full flowering of the microelectronics revolution that has made everyday operations highly dependent on sensitive equipment. Scientists can calculate the possible effects of an electromagnetic pulse (Lerner, 1981; Wik et al., 1985), but the dangers seem not to influence civilian planning. Even without EMP, the use of nuclear weapons would cause massive disruption to communication and other infrastructures. In recent years, militaries in several countries have worked on developing EMP weapons, with a smaller range, that can be carried by missiles or even in a suitcase. They are also developing and refining methods for deliberate disruption of urban infrastructure, which includes electricity grids, water supplies, sewage treatment systems, transport links, and fuel supplies; cyberwarfare is just one aspect of a wider targeting of facilities and networks vital for urban survival (Graham, 2011).

In the next section, several types of technological vulnerability are outlined. Then, communication vulnerabilities are examined with special attention to different perspectives, in particular the perspective of a repressive government and that of an opposition movement. (Cybercrime is not addressed here because it is not centrally about communication.) Following this are three sections addressing illustrative case studies: the shutdown of the Internet in Egypt in 2011; Edward Snowden’s leaks; and struggles over encryption. The final section points to radical ways of addressing vulnerabilities.

Technological vulnerability

Communication systems are vulnerable to breakdown, interruption, disruption and takeover, namely impacts that cause the systems to operate other than as designed. These vulnerabilities can generically be called technological, with “technology” interpreted broadly to include artefacts and the associated human and social systems. For example, the technological system of radio includes broadcasting equipment, receivers, personnel, operating procedures and manufacturing processes, among others. Thus, technological vulnerability can involve failures in equipment, in design and in use. Typically, in complex systems, vulnerabilities involve sets of linked weaknesses, often in combinations not foreseen (Perrow, 1984).

Although technological vulnerability is commonly multicausal, it is nevertheless useful to point to different areas in which weaknesses can occur. One classification, used in Tables 1 and 2 below, distinguishes equipment breakdowns, human failure, surveillance, information exposure, external sabotage, internal sabotage, organisational struggles and war.

There is both a cross-fertilisation and a tension between designing systems for war and for peacetime. Cross-fertilisation occurs when practices in one domain are taken up in another. A famous example is the military origins of the Internet. Tension occurs when practices in one domain are undermined by those in another, such as when encryption for civilian transactions is compromised by government-sponsored back doors.

Whose security?

In some discussions, the implicit assumption is that security is viewed from the perspective of government, the military or sometimes large companies. In addition, security is assumed to be a good thing. Both assumptions need to be questioned.

In wartime, cybersecurity — encompassing information security more generally — is paramount for each side in the conflict. Typically, enemies seek to breach or undermine their opponents’ systems. Cybersecurity for us is good; cybersecurity for them is not. This basic point has ramifications for every type of technological vulnerability. Militaries try to prevent or prepare for breakdowns of their own systems while doing what they can to foster breakdowns in the enemy’s systems. Preventing sabotage of one’s own systems is a key goal, and so is exploiting, denying and damaging enemy systems.

Cybersecurity is also important to others besides governments, militaries and large companies. Individuals — citizens, customers, patients and others — have an obvious stake in the security of communication and information systems. The main difference is that individuals have little capacity for making policy decisions about research, investment and implementation. Nevertheless, the collective choices of individuals can have consequences. For example, when more individuals choose highly secure applications, this encourages designers and companies to cater for the resulting demand, the result being that surveillance becomes more difficult.

An important perspective to consider is that of civil society groups, including clubs, professional organisations, churches, trade unions and environmental groups. Some such groups have only a limited concern about security, for example wanting to protect the privacy of members and to guard against fraud. Others, though, need to protect against adversaries. For example, trade unions may be concerned about surveillance of their members by employers, given that many employers monitor their employees’ email and use of social media. Groups seeking to expose corrupt police might need to be prepared for disruption or being framed through false messages and manufactured misdeeds.

By looking at cybersecurity from the perspectives of a number of different groups, it is possible to observe that concerns sometimes differ and occasionally are directly contradictory, in the sense that one group’s actions undermine the security of another’s. This can be illustrated by a schematic breakdown of cybersecurity issues for two directly opposed groups: a government and an opposition movement. The government might be a repressive one, such as Russia or China, or an ostensibly more liberal one, such as the United States, that has repressive elements. Table 1 lists technological vulnerabilities for the government and Table 2 those for the opposition movement.

Table 1: Communication vulnerabilities for a government with repressive elements

| Type of vulnerability | Examples | Examples of responses |
| --- | --- | --- |
| Equipment breakdowns | Electricity outages | Redundant systems |
| Human failure | Programming error | Checks; redundant systems |
| Surveillance | Surveillance by a foreign power | Firewalls; encryption |
| Information exposure | Whistleblowing; leaking | Secrecy laws; reprisals |
| Sabotage, external | Disruption by a foreign power | Firewalls; encryption |
| Sabotage, internal | Intentional disruption by an employee | Screening of employees; need-to-know protocols |
| Organisational struggles | Strike by employees | Better wages; arrest of strike leaders |
| War | Destruction by bombing | Redundant systems |

Table 2: Communication vulnerabilities for an opposition movement to a repressive government

| Type of vulnerability | Examples | Examples of responses |
| --- | --- | --- |
| Equipment breakdowns | Electricity outages | Redundant systems |
| Human failure | Programming error | Redundant systems |
| Surveillance | Surveillance by government | Encryption; non-digital communication |
| Information exposure | Media stories based on information from infiltrators or surveillance | Screening of members; media management |
| Sabotage, external | Government confiscation of devices; arrests | Encryption; decentralised leadership |
| Sabotage, internal | Disruption by infiltrating government agents | Monitoring of new members |
| Organisational struggles | Disputes between rival movement organisations | Methods for mediation |
| War | Arrests; martial law | Decentralised leadership |

The most striking difference between these two sets of vulnerabilities relates to what is called sabotage. The repressive government seeks to control and subjugate the opposition movement, and to this end tries to monitor the movement’s communications, perhaps to corrupt or destroy the movement’s information systems, or even to threaten, arrest or kill its members. Each of these actions represents a distinct vulnerability for the movement. If the government is engaged in a war with an external enemy, this provides a pretext for declaring martial law and repressing internal opposition movements.

It would be possible to make general observations about the similarities and differences in technological vulnerability for governments, militaries, companies, civil society groups and others. However, initially more insights may be gained through looking at particular episodes or issues, and this is the approach here. Several striking instances in which cybersecurity has been breached — at least from some groups’ perspectives — will be used to highlight contrasting perspectives.

The Internet in Egypt, 2011

In December 2010, a Tunisian street vendor, feeling he had been unfairly treated by the authorities, set himself on fire in protest. This action triggered an upsurge of protest against Tunisia’s repressive government, and within a few weeks the popular uprising — unarmed — overthrew the government. The Tunisian popular campaign inspired government opponents in several other Arab countries. One of them was Egypt, where dictatorial rule had been in place for decades and Hosni Mubarak was a ruthless ruler. The grassroots uprising in Egypt was in part fostered by a Facebook page, “We are all Khaled Said,” set up in memory of a young man beaten and killed by police.

The Facebook page was managed by Wael Ghonim, who hid his identity, knowing that if his name became known to the authorities, he and his family would be targets for arrest, torture and murder (Ghonim, 2012). This illustrates a common divergence in security. From the point of view of the Egyptian authorities, Ghonim was a criminal, traitor or troublemaker; the security of the regime depended on being able to identify and arrest him. From the point of view of the popular resistance to the regime, Ghonim was an inspiration, indeed a hero. Security from this perspective meant being able to express opposition without repression from the authorities.

Next consider the role of Facebook, normally seen as a social platform. In the context of a repressive government such as Egypt’s in early 2011, however, it was a convenient tool for resistance, allowing messages to be posted. Facebook subsequently introduced a policy of requiring users to verify their identity, in order to overcome the misuse of the platform by creators of fake profiles. But in Egypt in 2011, anonymity was vital to the continued role of the “We are all Khaled Said” page in the resistance to the regime. The page was rescued from takedown by a sympathiser, living outside Egypt, who put her name to it despite the personal risk (Tufekci, 2017: 141–142).

The security of the government could have been enhanced by being able to access Facebook information. So between Facebook and the Egyptian government, there were conflicting security interests. To maintain its credibility, Facebook needed to appear to be independent of government, thus enabling it to be used for challenges to the government. If Facebook were thought to be collaborating with the government, then opposition groups would simply shift to another platform.

The Egyptian government, in response to the rapid mobilisation of resistance, took a drastic step: on 28 January 2011, it shut down all electronic communications (mobile phones, the Internet, messaging services), thereby demonstrating a potential but seldom exploited vulnerability. Although the Internet shutdown hindered the capacity of regime opponents to organise, they were able to use various workarounds to communicate with each other and with supporters outside the country. The shutdown had several counterproductive effects for the government. First, it meant that everyone heard about the uprising: many previously had not, due to the regime’s control of the mass media. Second, it disrupted businesses and government operations across the country, thereby alienating wide sectors of the population that otherwise might have remained indifferent to the political conflict. Third, many citizens in Cairo and elsewhere, deprived of information about political affairs, went out on the street to find out what was happening, and some of them ended up joining the protests. Within a few days, the government restored services.

The Egyptian Internet shutdown provides a lesson for challengers to governments about the choice of communication systems. One approach is to set up a dedicated resistance channel, with its own protocols and technology. However, a dedicated channel is vulnerable to disruption that targets only the challengers, whether the disruption is engineered by government or other political opponents. A different approach is to use mainstream communication systems such as Facebook. That has the advantage that disrupting the challengers’ communication channels also disrupts many uninvolved individuals and groups, potentially alienating them. This general conclusion requires qualification: each communication channel needs to be evaluated for its own security features.

It should also be noted that repressive governments have learned from their engagements with challengers (Dobson, 2012). The Egyptian government, and others, are now more aware of the role of social media in enabling opposition mobilisation and are prepared to counter it with techniques such as overloading social media with propaganda, spreading rumours, questioning the authenticity of claims about government abuses, and harassing social media leaders (Tufekci, 2017). These methods constitute a new sort of vulnerability of communication systems.

Snowden

The US National Security Agency runs a massive data collection operation, seeking to obtain nearly all electronic communications, including phone calls and email messages. It has the capability of scanning this vast body of data for keywords, thus enabling tracking of actual or potential terrorists, and much else. The NSA surveillance operates in conjunction with agencies in Australia, Britain, Canada and New Zealand in the so-called Five Eyes arrangement.
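
The basic idea of keyword scanning can be conveyed by a toy sketch in Python. Everything in it, from the watchlist terms to the function names, is hypothetical; it says nothing about how the NSA’s systems are actually built, only why bulk collection plus simple matching is technically straightforward:

```python
# Toy sketch of keyword scanning over intercepted messages: flag any
# message containing a watchlisted term. Hypothetical and illustrative
# only; real systems operate at vastly greater scale and sophistication.

WATCHLIST = {"centrifuge", "detonator"}  # hypothetical selectors

def flag_messages(messages: list[str]) -> list[str]:
    """Return the messages that contain any watchlisted keyword."""
    return [m for m in messages
            if any(term in m.lower() for term in WATCHLIST)]

intercepted = [
    "Lunch on Friday?",
    "The centrifuge order shipped today.",
]
print(flag_messages(intercepted))  # only the second message is flagged
```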

The NSA is far larger than the better-known US Central Intelligence Agency. For many years, the NSA and its counterparts in Australia, Britain, Canada and New Zealand were so secret that even their existence was hidden from public view. The NSA’s operations are an evolutionary development from earlier monitoring efforts, including British secret preparations for rule following a nuclear war (Campbell, 1982; Laurie, 1970).

From the point of view of the NSA and related agencies, cybersecurity implies protection of its own operations (including its surveillance and disruptive operations), which are part of US government efforts to maintain security against foreign and internal threats. However, NSA monitoring and surveillance are a direct threat to the cybersecurity of every person, company and government whose communications are intercepted. This is a clear example of how security for one group means insecurity for another.

The existence of NSA operations was exposed through investigative scrutiny (Bamford, 1982). A major exposure of the details of the Five Eyes operation came from Nicky Hager, a New Zealand activist who obtained information from many workers at the country’s two eavesdropping stations, eventually extracting details such as a floor plan despite never having visited the stations. Hager’s book Secret Power, published in 1996, became well known in circles concerned about the operations of spy agencies. In 1998, Steve Wright, drawing on work by Hager and Duncan Campbell, wrote a report for the European Union that drew attention to the Five Eyes’ Echelon spying operation. This EU report brought extensive media attention to the massive state surveillance (Wright, 2005).

In 2013, Edward Snowden, an NSA contractor, released to the Guardian a large number of internal NSA documents revealing its extensive data collection system. Snowden’s revelations generated immense international publicity (Greenwald, 2014; Gurnow, 2014; Harding, 2014). Several factors explain the much greater attention to Snowden’s information than to prior exposés. First, he was an insider and thus had greater credibility. Second, he released NSA documents, which had greater credibility than inferences by outsiders like Hager. Third, he teamed up with credible journalists and mass media outlets, so that the leaked information had the greatest possible impact. The media connection had the additional effect of encouraging journalists to learn much more about government surveillance of digital communications. In the furore, the existence of earlier exposés was usually overlooked.

The Snowden leaks and prior exposés starkly show that different groups can have contrary cyber vulnerabilities. The NSA sought to maintain the secrecy and hence security of its own operations; Snowden broke through the secrecy and thereby potentially compromised the NSA’s ability to continue to gather worldwide electronic communications unhindered. Companies, foreign governments, NGOs and citizens sought to maintain the security of their communications; Snowden revealed that this security had been breached and provided the incentive, for some, to use methods to circumvent NSA surveillance.

Encryption struggles

The struggle between citizens and governments over the security of information and communication long predates electronic systems. In the early days of the British postal system, the monarch’s agents would open mail. Naturally, this was unwelcome to those wanting their messages to be confidential, and it led to pressure for a secure postal system, which shaped operating principles and expectations in countries around the world (Joyce, 1893). Nevertheless, governments have at times attempted to intercept and monitor letters, especially during wartime (Fowler, 1977). One method has been to photograph envelopes sent to particular addresses without trying to read the enclosed letters, foreshadowing the collection of metadata.
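
The parallel with metadata is worth making concrete. In the following Python sketch, whose names and structures are hypothetical, an observer records who communicates with whom, when, and how much, without ever reading the message body:

```python
# Sketch of metadata collection: record the "envelope" of a message
# (sender, recipient, time, size) while leaving the body unread.
# All names here are hypothetical.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Message:
    sender: str
    recipient: str
    body: str  # the letter inside the envelope

def log_metadata(msg: Message) -> dict:
    """Record only the envelope details; the body is never inspected."""
    return {
        "from": msg.sender,
        "to": msg.recipient,
        "seen_at": datetime.now(timezone.utc).isoformat(),
        "size": len(msg.body),  # even message size leaks information
    }

record = log_metadata(Message("alice@example.org", "bob@example.org",
                              "Meet at the usual place."))
print(record)  # no content, yet the social graph becomes visible
```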

In the cyber era, the struggle between secure sender-to-receiver communication and the desire of governments (and some others) to intercept or monitor messages has continued, with several manifestations. One method to facilitate snooping is to install backdoors in communication devices. In the 1990s in the US, there was a major struggle over the Clipper chip, a proposed microchip to be installed in communication equipment that would allow the US government access to messages (Gurak, 1999). The same struggle continues in a variety of other guises, involving public-key encryption tools such as PGP and the Tor browser, designed to prevent collection of metadata (Landau, 2017).
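
The principle at stake can be shown in a short sketch using Python’s third-party cryptography package; this illustrates the public-key idea underlying tools such as PGP, not PGP’s actual implementation. Anyone may encrypt to a published public key, but only the holder of the private key can decrypt, which is exactly the capability that escrow schemes like the Clipper chip sought to give government:

```python
# Minimal sketch of public-key encryption, the principle behind tools
# such as PGP. Requires the third-party package `cryptography`
# (pip install cryptography). Real PGP layers hybrid encryption,
# signatures and key management on top of this core idea.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# The receiver generates a key pair; the public half can be published.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# A sender encrypts with the public key; an interceptor sees only
# ciphertext and metadata.
ciphertext = public_key.encrypt(b"Meet at noon.", oaep)

# Only the private key recovers the plaintext. A backdoor or escrowed
# key would give a third party this same ability.
plaintext = private_key.decrypt(ciphertext, oaep)
assert plaintext == b"Meet at noon."
```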

The details are less important here than the general point that when one group wants to monitor messages of those they see as dangers or opponents, the targeted group may feel threatened and want to prevent this monitoring. An example is governments and militaries seeking state security by means that undermine the communication security of others. However, there are many groups seeking to monitor others, and the ethical and practical implications are wide-ranging (Wright, 2015).

Conclusion

When thinking of the vulnerabilities of information and communication systems, breakdowns often come to mind: component failures, power outages, natural disasters and unintentional human error that cause interruptions to normal service. The more common the type of failure, the more likely it is to be anticipated, so engineers put a priority on the reliability of components and on building back-up systems. Natural disasters are rare but, in many cases, taken into account. These sorts of vulnerabilities are also “common” in the sense that everyone has a stake in minimising them. The main constraints are technical and financial.

It is much more challenging to deal with other kinds of vulnerabilities that are less predictable and more dynamic. They are the ones caused by intentional human actions: sabotage by external agents or by insiders, the by-products of organisational struggles, and war. Crackers, for example, seek to break into computer systems for hostile purposes. Although the possibility of such cracking is recognised, it is hard to predict when and how it will occur, because players on both sides of this game of offence and defence are trying to outwit each other. This is also what makes cracking versus anti-cracking a dynamic process. As one set of defences is deployed, attackers explore different modes of attack.

In the case of warfare, the game of cyberattack versus cyberdefence becomes more symmetrical: each side tries to compromise enemy systems while defending its own. In practice, this apparent symmetry may be one-sided, for example in struggles between low-tech insurgents and high-tech counterinsurgency operations. In the case of drone attacks, the insurgents primarily defend, for example by avoiding electronic communication or switching phones, and have limited capacity to intercept or disrupt the communications of the attackers.

The issue of cybersecurity thus involves two different sorts of vulnerabilities: those that are accidental (breakdowns, human error) and those that are intentional (sabotage, warfare). Discussions of security typically focus on what might be called defence, namely how to make systems more secure. This draws attention away from the intentional causes of insecurity. To put it another way, threats to security are “naturalised,” namely assumed to exist as part of nature or human society, so the challenge becomes adapting to circumstances that are treated as inevitable. To question the assumption about the inevitability of threats opens the door to some other methods of promoting security.

In warfare, the enemy’s cyber attacks are the immediate source of vulnerability, so one option is to neutralise the enemy’s capacity for attack, for example by destroying equipment or killing personnel. Another option, mainly for insurgents, is to use communication methods that do not rely on digital technology. A quite different approach is to not go to war in the first place. Here, a few examples of this more structural approach to vulnerabilities are outlined.

Managers often see whistleblowing as a threat, as in the cases of Chelsea Manning and Edward Snowden. The usual responses are to make whistleblowing more difficult, to stigmatise whistleblowers and to subject them to severe penalties with the aim of discouraging others from following their example. An alternative approach is to remove the reasons for employees to blow the whistle to public audiences. This means not undertaking actions that are seen by members of the public, and by some employees, as unjust. Manning’s leaks were triggered by the wars in Afghanistan and Iraq, including killing of civilians. To prevent a Manning-type disclosure, there should have been no invasions of Afghanistan and Iraq. Given the enormous public opposition to the invasion of Iraq, and the lack of justification for it in international law, the incentive to speak out is obvious enough. Similarly, Snowden decided to leak due to his concerns about massive surveillance of citizens, concerns shared by many members of the public. So one “solution” to Manning-and-Snowden-style breaches of security is not to launch illegal wars and not to maintain massive covert surveillance of citizens.

Consider the possibility of cyber intrusions into nuclear power plants with the aim of triggering a major accident (Soltanieh and Esmailbagi, 2017). A long-term solution is to eliminate nuclear power by promoting energy efficiency and renewable energy sources.

It is important to question the assumption that cybersecurity is necessarily a “good thing.” The word “security” has a positive connotation, which is one reason why “national security” is so readily used to justify actions that may be unsavoury, dangerous and/or illegal. National security — and the associated cybersecurity — has been used as a cover for building massive military machines, curtailing civil liberties, assassinating foreign leaders and torturing opponents. So when talking about cybersecurity, it always needs to be asked, “for what purpose?” For example, the Great Firewall of China is a form of cybersecurity for the Chinese government but at the same time a gross violation of free communication for the Chinese people.

Defenders of systems of secrecy and surveillance can argue that curtailing freedoms is an unfortunate side effect of the more important task of protecting society against foreign and internal enemies. This assumes there are no alternatives. It can be argued that much secrecy is unnecessary (Horton, 2015). Rather than undertake massive spying on citizens, one option is open source intelligence, relying on information that is freely available (Stalder and Hirsh, 2002). Another, more radical, option is what is called publicly shared intelligence, in which independent intelligence enterprises receive information from members of the public and, crucially, openly publish their reports, which thus become available for scrutiny and improvement. The example of the Shipping Research Bureau, which tracked ships trying to break the embargo on trade with South Africa under apartheid, provides a precedent: the Bureau’s reports were more accurate than those of the Dutch intelligence service (de Valk and Martin, 2006).

These examples are intended not as definitive options but rather to emphasise the point that most discussions of cybersecurity do not question the need for greater security and do not explore options outside the usual template of security versus threats. Removing or reducing the threats, rather than defending against them, should be an option.

The electromagnetic pulse, noted at the outset of this chapter, is a rarely considered vulnerability of systems based on microelectronics. One mode of defence is providing protection for crucial circuits, for example via a Faraday shield. Another option is to promote disarmament, because when there are no nuclear or other EMP weapons there is no danger. Of course disarmament, like other political solutions such as fostering of human rights, will not occur overnight. But it can be put on the agenda for a long-term effort to foster cybersecurity that serves human interests.

Acknowledgements

Thanks to Mika Kerttunen and Steve Wright for valuable comments on drafts of this chapter.

References

Bamford, J. (1982) The Puzzle Palace: A Report on America’s Most Secret Agency. Boston, Houghton Mifflin.

Campbell, D. (1982) War Plan UK: The Truth about Civil Defence in Britain. London, Burnett.

de Valk, G. & Martin, B. (2006) Publicly shared intelligence. First Monday. 11 (9), http://firstmonday.org/ojs/index.php/fm/article/view/1397/1315

Dobson, W. J. (2012) The Dictator’s Learning Curve: Inside the Global Battle for Democracy. New York, Doubleday.

Dunn Cavelty, M. (2008) Cyber-Security and Threat Politics: US Efforts to Secure the Information Age. London, Routledge.

Fowler, D. G. (1977) Unmailable: Congress and the Post Office. Athens, GA, University of Georgia Press.

Ghonim, W. (2012) Revolution 2.0. London, Fourth Estate.

Graham, S. (2011) Cities under Siege: The New Military Urbanism. London, Verso.

Greenwald, G. (2014) No Place to Hide: Edward Snowden, the NSA and the Surveillance State. London, Hamish Hamilton.

Gurak, L. J. (1999) Persuasion and Privacy in Cyberspace: The Online Protests over Lotus MarketPlace and the Clipper Chip. New Haven, CT, Yale University Press.

Gurnow, M. (2014) The Edward Snowden Affair: Exposing the Politics and Media Behind the NSA Scandal. Indianapolis, IN, Blue River Press.

Hager, N. (1996) Secret Power: New Zealand’s Role in the International Spy Network. Nelson, New Zealand, Craig Potton.

Harding, L. (2014) The Snowden Files: The Inside Story of the World’s Most Wanted Man. London, Guardian Books.

Horton, S. (2015) Lords of Secrecy: The National Security Elite and America’s Stealth Warfare. New York, Nation Books.

Joyce, H. (1893) The History of the Post Office from its Establishment Down to 1836. London, Richard Bentley and Son.

Landau, S. (2017) Listening In: Cybersecurity in an Insecure Age. New Haven, CT, Yale University Press.

Laurie, P. (1970) Beneath the City Streets. London, Penguin.

Lerner, E. J. (1981) Electromagnetic pulses: potential crippler. IEEE Spectrum. 18 (5), 41–46.

Lindsay, J. R. (2013) Stuxnet and the limits of cyber warfare. Security Studies. 22, 365–404.

Muir-Wood, R. (2016) The Cure for Catastrophe: How We Can Stop Manufacturing Natural Disasters. New York, Basic Books.

Perrow, C. (1984) Normal Accidents. New York, Basic Books.

Pozen, D. E. (2013) The leaky leviathan: why the government condemns and condones unlawful disclosures of information. Harvard Law Review. 127, 512–635.

Schmid, A. P. & de Graaf, J. (1982) Violence as Communication: Insurgent Terrorism and the Western News Media. London, Sage.

Soltanieh, A. A. & Esmailbagi, H. (2017) Security of cyber-space in nuclear facilities. In Ramírez, J. M. & García-Segura (eds.), Cyberspace: Risks and Benefits for Society, Security and Development, pp. 265–274. Cham, Switzerland, Springer.

Stalder, F. & Hirsh, J. (2002) Open source intelligence. First Monday. 7 (6), http://firstmonday.org/ojs/index.php/fm/article/view/961/882

Tufekci, Z. (2017) Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven, CT, Yale University Press.

Wik, M. et al. (1985) URSI factual statement on nuclear electromagnetic pulse (EMP) and associated effects. International Union of Radio Science Information Bulletin. 232, 4–12.

Wright, S. (2005) The Echelon trail: an illegal vision. Surveillance & Society. 3 (2/3), 198–215.

Wright, S. (2015) Watching them: watching us — where are the ethical boundaries? Ethical Space: The International Journal of Communication Ethics. 12 (3/4), 47–57.

Wright, S. (2017) Mythology of cyber-crime — insecurity & governance in cyberspace: some critical perspectives. In Ramírez, J. M. & García-Segura (eds.), Cyberspace: Risks and Benefits for Society, Security and Development, pp. 211–227. Cham, Switzerland, Springer.

Zetter, K. (2014) Countdown to Zero Day: Stuxnet and the Launch of the World’s First Digital Weapon. New York, Crown.