The consequences of NSA spying could be disastrous
“Suspicious Surveillance” was originally published in the Berkeley Political Review on December 9, 2014, and authored by Berkeley Political Review staff writer Griffin Potrock.
Suspicious Surveillance is part 2 of a series on U.S. cybersecurity.
In June of last year, now-infamous National Security Agency contractor Edward Snowden began leaking classified documents to the media. Since then, the steady stream of revelatory documents has generated a firestorm of criticism against NSA practices. The NSA, which is at the forefront of U.S. cybersecurity and intelligence operations, is one of the first lines of defense against both cyber and physical attacks. Unfortunately, the ever-increasing demand for new intelligence has led the NSA to leave the U.S. potentially vulnerable.
Most often discussed are NSA efforts to tap into phone metadata and various internet services (metadata is information such as the time an email or phone call is made or sent, who sends it, and who receives it; it does not include content). Further documents have revealed that there is little the NSA has not done in search of intelligence, including breaking into U.S. companies, hacking foreign corporations, wiretapping allies, and generally building an arsenal of techniques and backdoors used to attack valuable targets. These actions raise important privacy, legal, and constitutional concerns; the justifications for and practicality of such programs are disputed, and the programs are often approved by the secretive FISA courts, which operate with limited oversight and questionable effectiveness.
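For readers unfamiliar with the distinction, a call record's metadata might look like the following sketch (the field names and values are purely illustrative, not an actual NSA or telecom schema):

```python
# Illustrative only: hypothetical call-record metadata, not a real schema.
call_metadata = {
    "caller": "+1-510-555-0142",
    "recipient": "+1-202-555-0199",
    "timestamp": "2013-06-05T14:32:07Z",
    "duration_seconds": 312,
}

# Note what is absent: the conversation itself. Metadata records who
# communicated with whom, and when, not what was said.
assert "content" not in call_metadata
```

Even without content, such records can reveal a great deal: patterns of calls expose relationships, routines, and affiliations.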
More critical, however, is the immediate impact that NSA actions have had on U.S. cybersecurity. The quest for intelligence has left the United States vulnerable on multiple fronts. In order to gather its intelligence, the NSA often makes use of unintentional flaws in code or poorly implemented cryptography to break into target systems (e.g., systems owned by telecommunications and Internet firms, foreign governments, and persons of interest). Unfortunately, in order to continue exploiting the backdoors (unauthorized routes of access into a system) that it finds, the NSA does not disclose their existence. The risk is that hackers—or other countries’ spy agencies—could also exploit such flaws. By building a library of flaws, the NSA prevents the affected software from being repaired. Some evidence suggests that the NSA may have known about the Heartbleed OpenSSL bug, widely regarded as one of the most egregious security failures in the history of the Internet, for years before it was revealed by private security researchers (the agency has vigorously denied this). Unfortunately, Heartbleed is just one of many potential backdoors. Veteran tech analyst Steven Levy notes that “It is a well-known principle of cybersecurity that any flaw will eventually be discovered and exploited. If in fact the NSA was not reporting known security holes, then it risked exposing domestic information and secrets to evildoers. It may even have allowed foreign governments to snatch high-value corporate secrets.” The continued existence of these flaws makes U.S. consumers, corporations, government facilities, and critical infrastructure inherently less secure.
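To see why an undisclosed flaw of this kind is dangerous, consider a simplified sketch of a Heartbleed-style over-read (an illustration in Python, not OpenSSL's actual C code; the secret value is invented):

```python
# Simplified sketch of a Heartbleed-style flaw, NOT OpenSSL's code:
# the server echoes back as many bytes as the client *claims* to have
# sent, without checking the claim against what actually arrived.

SECRET = b" private_key=0xDEADBEEF"  # invented stand-in for real secrets

def heartbeat(payload: bytes, claimed_length: int) -> bytes:
    # Buggy: trusts claimed_length instead of len(payload), so the reply
    # can spill adjacent "memory" (here, the secret) back to the caller.
    memory = payload + SECRET
    return memory[:claimed_length]

# An honest request simply echoes the payload.
print(heartbeat(b"PING", 4))   # b'PING'
# A malicious request over-claims its length and leaks the secret.
print(heartbeat(b"PING", 27))
```

Anyone who independently rediscovers such a bug, not just the agency hoarding it, can use it; that is why leaving it unreported weakens everyone running the software.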
The NSA has argued that it makes a judgment call in such situations—former NSA chief Michael Hayden notes that if, for technical reasons, no one but the NSA (“no one but us,” or “NOBUS”) could exploit a given flaw, there is no reason to report it rather than use it. Unfortunately, the NSA seems to apply rather lax standards in this regard. For example, the NSA and other government agencies spend tens of millions of dollars per year in grey markets to “buy” exploits found by other hackers; the U.S. government is the single biggest buyer in these markets. While taking these exploits off the market is arguably a good idea, any flaw that independent hackers have already found inherently fails the “NOBUS” test and should be reported rather than exploited. This does not always happen.
To make matters worse, when the NSA cannot find flaws in commercial software, it may attempt to introduce them itself. In 2007, the National Institute of Standards and Technology (NIST, the body that sets U.S. cryptographic standards) added an algorithm called Dual_EC_DRBG, at the NSA’s urging, to its list of approved random number generators, which are critical to effectively secured communications. Not long afterward, however, researchers demonstrated that the algorithm’s structure would allow whoever had chosen its constants to predict its output—something that could only be described as a backdoor. Despite the evidence against it, the algorithm remained in the NIST standard and was eventually adopted in hundreds of products, affecting millions of devices. The Snowden documents appear to have confirmed that Dual_EC_DRBG was indeed an attempt by the NSA to insert a backdoor; questions about the algorithm’s integrity have since prompted NIST to remove it from the list of approved algorithms.
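The danger of a rigged random number generator can be sketched in a few lines (a toy illustration, not Dual_EC_DRBG itself; in the alleged real attack, the backdoor holder could derive the generator's internal state from its public output):

```python
import random

# Toy illustration, NOT Dual_EC_DRBG: if an attacker can recover a
# generator's internal state, every "random" secret it produces is
# reproducible by that attacker.

def generate_session_key(rng: random.Random) -> bytes:
    # Draw a 16-byte key from the generator.
    return bytes(rng.randrange(256) for _ in range(16))

# The victim's generator state; in a backdoored design this is
# recoverable from the generator's visible output.
victim_rng = random.Random(0xC0FFEE)
victim_key = generate_session_key(victim_rng)

# The attacker runs an identical generator from the recovered state
# and obtains the very same "random" key—the encryption is broken.
attacker_rng = random.Random(0xC0FFEE)
attacker_key = generate_session_key(attacker_rng)

print(victim_key == attacker_key)  # True
```

Because everything downstream (session keys, tokens, nonces) is built on the generator, predicting its output quietly defeats the encryption without touching the cipher itself.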
The NIST incident is by no means isolated. Documents reveal that it was likely just one part of the $250 million Bullrun decryption program, a clandestine war the NSA has been waging for years against encryption by intentionally inserting flaws into widely used software. Encryption has meant that even when the NSA acquires data, it may be unable to read it; cracking these systems has become an agency priority. Sometimes this is achieved covertly, sometimes with corporate cooperation, and sometimes through secret court orders. While this increases the NSA’s ability to gather and analyze intelligence, the inserted flaws also present security weaknesses that could be exploited by an enterprising hacker or rival spy agency to steal private correspondence or break into key systems.
Perhaps most catastrophic of all in the long term is the effect that the NSA disclosures have had on cooperation between the government and corporations. As noted in Part I, U.S. cybersecurity—the privacy of individual communications, the integrity of intellectual property, the confidentiality of critical business, government, and diplomatic communications, and the security of key infrastructure—depends heavily on cooperation between corporations, individuals, and the government. Before the Snowden revelations, companies from big banks to Google would turn to the NSA for expertise in dealing with cybersecurity issues. Unfortunately, “you’d be crazy to ask the NSA for help now,” according to Alan Davidson, former head of public policy at Google. Even RSA, a major computer security firm that earns millions from government contracts, has grown wary of doing business with the NSA. This is a net loss for national cybersecurity: firms no longer benefit from the NSA’s technical knowledge, and the NSA will find it harder to collaborate with top engineering firms.
Furthermore, public disclosure of the NSA’s actions has led to a tech backlash. Companies like Google and Apple, subject to secret court orders under the Foreign Intelligence Surveillance Act requiring them to turn over data, have begun working to undermine the NSA’s surveillance capabilities. While unable to resist court orders outright, the tech giants (and others) appear to be doing everything in their power to actively thwart the NSA. Google has begun encrypting more of its data, and is even laying its own fiber cables, to block the NSA’s ability to tap the fiber optic connections between its data centers. Apple has taken an even harder line, building encryption into iOS 8 so that the encryption keys are stored only on the device, making it technically impossible for Apple to retrieve data from the device even if served a warrant. Microsoft, Facebook, and Yahoo are also beefing up encryption. All are increasingly fighting requests for data whenever possible, spurred by negative user reaction, damage to their businesses, and anger over privacy violations.
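The design Apple adopted can be sketched abstractly as follows (a toy illustration, not Apple's actual implementation; the XOR cipher stands in for real encryption such as AES):

```python
import hashlib
import os

# Simplified sketch of device-side key derivation, NOT Apple's design:
# the encryption key is derived on the device from the user's passcode
# plus a per-device salt, and never leaves the device. A provider
# holding only the ciphertext cannot decrypt it.

def derive_key(passcode: str, device_salt: bytes) -> bytes:
    # PBKDF2 stretches the passcode into a 32-byte key.
    return hashlib.pbkdf2_hmac("sha256", passcode.encode(), device_salt, 100_000)

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher for illustration only; real systems use AES.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

device_salt = os.urandom(16)           # stays on the device
key = derive_key("1234", device_salt)  # derived on the device
ciphertext = xor_encrypt(b"private message", key)

# Decryption is the same XOR with the same key; without the passcode
# and salt (i.e., without the device), the data is unrecoverable.
assert xor_encrypt(ciphertext, key) == b"private message"
```

The design choice matters legally as well as technically: if the provider never possesses the key, there is nothing for it to hand over in response to a court order.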
Tech companies are not the only ones losing out. It is “an unquestionable loss for our nation that companies are losing the willingness to cooperate legally and voluntarily,” according to Robert Litt, general counsel of the Office of the Director of National Intelligence. However, it is hard to argue that this reaction was unpredictable. Government data collection, particularly when it targets foreign citizens, has dealt a serious blow to these companies’ bottom lines. Some nations are considering requirements that data be physically stored in-country, a move that risks destroying the Internet as we know it by balkanizing it into a series of “splinternets” that do not communicate with each other. Unfortunately, this is also a loss for law enforcement. In the case of systems like Apple’s, law enforcement will no longer be able to access information on a device, even with a legitimate, targeted warrant. Tech companies will be wary of any requests to access user information, even for valid purposes. And many will think twice before choosing to collaborate with or work for the U.S. government.
The NSA has clearly taken its intelligence-gathering mission seriously, but at the expense of its mission to ensure U.S. cybersecurity. Its actions have also been extremely risky: it almost appears as if the NSA never expected anyone to find out what it was doing, an absurd assumption for a program involving hundreds, if not thousands, of people, many of whom were not on government payrolls. The detriments to U.S. cybersecurity, to the competitiveness of business, to the privacy of Americans, to public-private partnerships, and to foreign standing outweigh the benefits that have been presented. Intelligence gathering is only half of the NSA’s charge; the agency should refocus on its second goal of securing U.S. systems, or risk leaving the country exposed in the future.