October 21, 2013
Workshop Regarding Surveillance Programs Operated Pursuant to Section 215 of the USA PATRIOT Act and Section 702 of the Foreign Intelligence Surveillance Act
Privacy and Civil Liberties Oversight Board
July 9, 2013
Nathan A. Sales*
Chairman Medine and members of the Board, thank you for inviting me to participate in this workshop. The NSA’s recently disclosed surveillance programs raise a number of vitally important questions about the interplay between the government’s compelling need to prevent terrorist attacks and our nation’s fundamental commitment to civil liberties and privacy. Briefly summarized, my statement will address the potential national security benefits of bulk data collection; it will propose some guiding principles to help ensure that any such surveillance regime is consistent with basic privacy and civil liberties values; and it will offer some preliminary thoughts on how to modify the NSA programs to ensure that they better comport with these first principles. I understand that the Board is especially interested in policy recommendations, so my statement will focus more on policy considerations than legal analysis.
While programmatic surveillance can be an important counterterrorism tool, it also—given the sweeping scope of the data collection on which it usually relies—has the potential to raise profound concerns about civil liberties and privacy. It therefore becomes critical to establish a set of first principles to govern when and how this monitoring is to be conducted. It is especially important to think about these baseline rules now, when programmatic surveillance is still in its relative youth. This will allow the technique to be nudged in privacy-protective directions as it develops into maturity. The critical question is how to take advantage of the potentially significant national security benefits offered by programmatic surveillance without running afoul of fundamental civil liberties and privacy values. In other words, what can be done to domesticate programmatic surveillance?
This is not the place to flesh out the precise details of the ideal surveillance regime, but we can identify certain basic principles that policymakers and others should consider when thinking about bulk data collection and analysis. Two broad categories of principles should govern any such system; one concerns its formation, the other its operation. First, there are the architectural or structural considerations—the principles that address when programmatic surveillance should take place, the process by which such a regime should be adopted, and how the system should be organized. Second, there are the operational considerations—the principles that inform the manner in which programmatic surveillance should be carried out in practice.
As for the structural considerations, one of the most important is what might be called an anti-unilateralism principle. A system of programmatic surveillance should not be put into effect simply on the say-so of the executive branch, but rather should be a collaborative effort that involves Congress (in the form of authorizing legislation) and the judiciary (in the form of a FISA court order reviewing and approving the executive’s proposed surveillance activities). An example of the former is FISA itself, which Congress enacted with the executive’s (perhaps reluctant) consent in 1978. FISA’s famously convoluted definition of “electronic surveillance”1 can be seen as a congressional effort to preserve the NSA’s preexisting practice of collecting certain foreign-foreign and foreign-domestic communications without prior judicial approval. An example of the latter concerns the Terrorist Surveillance Program. After that program came under harsh criticism when its existence was revealed in late 2005, the executive branch persuaded the FISA court to issue orders allowing the program to proceed subject to various clarifications and limits.2 That accommodation eventually proved unworkable, and the executive then worked with Congress to put the program on a more solid legislative footing through the temporary Protect America Act of 2007 and the permanent FISA Amendments Act of 2008.
Anti-unilateralism is important for several reasons. For one, the risk of executive overreach is lessened if that branch must enlist its partners before commencing a new surveillance initiative. Congress might decline to permit bulk collection in circumstances where it concludes that ordinary, individualized monitoring would suffice, or it might authorize programmatic surveillance subject to various privacy protections. In addition, inviting many voices to the decisionmaking table increases the probability of sound policy outcomes. More participants can also help mitigate groupthink tendencies. In short, if we’re going to engage in programmatic surveillance, it should be the result of give and take among all three branches of the federal government (or at least between its two political branches), not the result of executive edict.
A second structural principle follows from the first: Programmatic surveillance should, where possible, have explicit statutory authorization. Congress does not “hide elephants in mouseholes,”3 the saying goes, and we should not presume that Congress meant to conceal its approval of a useful but potentially controversial programmatic surveillance system in the penumbrae and interstices of obscure federal statutes. Instead, Congress normally should use express and specific legislation when it wishes the executive branch to engage in bulk data collection. Clear laws will help remove any doubt about the authorized scope of the approved surveillance. Express congressional backing also helps bring an air of legitimacy to the monitoring. And a requirement that programmatic surveillance usually should be approved by clear legislation helps promote accountability by minimizing the risk of congressional shirking.
Of course, exacting legislative clarity may not be possible in all cases; sometimes, explicit statutory language might reveal operational details and compromise intelligence sources and methods or provoke a diplomatic row. But clarity often will be feasible, and the Protect America Act and FISA Amendments Act are good examples of what the process could look like. In both cases, Congress clearly and unambiguously approved monitoring that the executive branch previously claimed4 was implicitly authorized by a combination of FISA (which at the time made it unlawful to engage in electronic surveillance “except as authorized by statute”5), the September 18, 2001 Authorization for Use of Military Force (which authorizes the president to use “all necessary and appropriate force” against those responsible for 9/116), and the Supreme Court’s decision in Hamdi v. Rumsfeld (which interpreted the AUMF’s reference to “all necessary and appropriate force” to include “fundamental and accepted” incidents of war, such as detention7).
Next, there is the question of transparency. Whenever possible, programmatic surveillance systems should be adopted through open and transparent debates that allow an informed public to meaningfully participate. The systems also should be operated in as transparent a manner as possible. This in turn requires the government to reveal enough information about the proposed surveillance, even if at a fairly high level of generality, that the public is able to effectively weigh its benefits and costs. Transparency is important because it helps promote accountability; it enables the public to hold their representatives in Congress and in the executive branch responsible for the choices they make. Transparency also fosters democratic participation, ensuring that the people are ultimately able to decide what our national security policies should be. And it can help dispel suspicions about programs that otherwise might seem nefarious. Again, perfect transparency will not always be feasible—a public debate about the fine-grained details of proposed surveillance can compromise extremely sensitive intelligence sources and methods. But transparency should be the default rule, and even where the government’s operational needs rule out detailed disclosures, a generic description of the proposed program is better than none at all.
Finally, any programmatic surveillance regime should observe an anti-mission-creep principle. Bulk data collection should only be used to investigate and prevent terrorism, espionage, and other serious threats to the national security. It should be off limits in regular criminal investigations. And if programmatic surveillance happens to turn up evidence of low-grade criminal activity, intelligence authorities normally should not be able to refer it to their law enforcement counterparts—though there should be an exception for truly grave crimes, such as offenses involving a risk of death or serious bodily injury and crimes involving the exploitation of children. This is a simple matter of costs and benefits. The upside of preventing deadly terrorist attacks and other national security perils can be so significant that we as a nation may be willing to resort to extraordinary investigative techniques like bulk data collection. But the calculus looks very different where the promised upside is prosecuting ordinary crimes like income tax evasion or insurance fraud. We might be willing to tolerate an additional burden on our privacy interests to stop the next 9/11, but not to stop tax cheats and fraudsters.
As for the operational considerations, among the most important is the need for external checks on programmatic surveillance, whether judicial, legislative, or both. In particular, bulk data collection should have to undergo some form of judicial review, such as by the FISA court, in which the government demonstrates that it meets the Fourth Amendment standards that apply to the acquisition of the data in question. Ideally, the judiciary would give its approval before collection begins. But this will not always be possible, in which case timely post-collection judicial review will have to suffice. (FISA contains a comparable mechanism for temporary warrantless surveillance in emergency situations.) Programmatic surveillance also should be subject to robust congressional oversight. This could take a variety of forms, including informal consultations with congressional leadership and the appropriate committees when designing the surveillance regime, as well as regular briefings to appropriate personnel on the operation of the system and periodic oversight hearings.
Oversight by the courts and Congress provides an obvious, first-order level of protection for privacy and civil liberties—an external veto serves as a direct check on possible executive misconduct, such as engaging in monitoring when it is not justified or using surveillance against political enemies or dissident groups. Judicial and legislative checks also offer a less noticed but equally important second-order form of protection. The mere possibility of an outsider’s veto can have a chilling effect on executive misconduct, discouraging officials from questionable activities that would have to undergo, and might not survive, external review. Moreover, external checks can channel the executive’s scarce resources into truly important surveillance and away from relatively unimportant monitoring. This is so because oversight increases the executive’s costs of collecting bulk data—e.g., preparing a surveillance application, persuading the judiciary to approve it, briefing the courts and Congress about how the program has been implemented, and so on. These increased costs encourage the executive to prioritize collection that is expected to yield truly valuable intelligence and, conversely, to forego collection that is expected to produce information of lesser value.
Of course, judicial review in the context of bulk collection won’t necessarily look the same as it does in the familiar setting of individualized monitoring of specific targets. If investigators want to examine a particular terrorism suspect’s telephony metadata, they apply to the FISA court for a pen register/trap and trace order upon a showing that the information sought is relevant to an ongoing national security investigation. But, as explained above, that kind of particularized showing usually won’t be possible where authorities are dealing with unknown threats, and where the very purpose of the surveillance is to identify the threats. In these situations, reviewing courts may find it necessary to allow the government to collect large amounts of data without an individualized showing of relevance. This doesn’t mean that privacy safeguards must be abandoned and the executive given free rein. Instead of serving as a gatekeeper for the government’s collection of data, courts could require that authorities demonstrate some level of individualized suspicion before they access the data that has been collected. Protections for privacy and civil liberties can migrate from the front end of the intelligence cycle to the back end.
In more general terms, because programmatic surveillance involves the collection of large troves of data, it inevitably means some dilution of the familiar ex ante restrictions that protect privacy by constraining the government from acquiring information in the first place. It therefore becomes critically important to devise meaningful ex post safeguards that can achieve similar forms of privacy protection. In short, meaningful restrictions on the government’s ability to use data that it has gathered must substitute for restrictions on the government’s ability to gather that data at all; what I have elsewhere called use limits must stand in for collection limits.8
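The idea that use limits can stand in for collection limits can be illustrated with a minimal sketch: records are ingested in bulk with no individualized review, but a query runs only after an approval step modeled on back-end judicial review. All names here (selectors, records, methods) are hypothetical illustrations, not a description of any actual system.

```python
class BulkDatabase:
    """Sketch of 'use limits' substituting for 'collection limits':
    bulk intake is unrestricted, but access to stored data requires a
    prior individualized approval. Purely illustrative."""

    def __init__(self):
        self._records = []                  # bulk-collected, unreviewed on intake
        self._approved_selectors = set()    # selectors a reviewer has cleared

    def ingest(self, record):
        # Front end: no individualized showing is required to collect.
        self._records.append(record)

    def approve_selector(self, selector):
        # Back end: stands in for a finding of individualized suspicion
        # before analysts may touch the stored data.
        self._approved_selectors.add(selector)

    def query(self, selector):
        if selector not in self._approved_selectors:
            raise PermissionError("no individualized approval for this selector")
        return [r for r in self._records if r.get("selector") == selector]
```

On this design, the privacy safeguard has migrated from acquisition to access: the gatekeeping question is no longer "may the government collect?" but "may this analyst look?"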
In addition to oversight by outsiders, a programmatic surveillance regime also should feature a system of internal checks within the executive branch, to review collection before it occurs, after the fact, or both. These sorts of internal restraints are familiar features of the post-1970s national security state, and there is no reason to exempt programmatic surveillance. As for the ex ante checks, internal watchdogs should be charged with scrutinizing proposed bulk collection to verify that it complies with the applicable constitutional and statutory rules, and also to ensure that appropriate protections are in place for privacy and civil liberties. The Justice Department’s Office of Intelligence is a well-known example. The office, which presents the government’s surveillance applications to the FISA court, subjects proposals to exacting scrutiny, sometimes including multiple rounds of revisions, with the goal of increasing the likelihood of surviving judicial review. Indeed, the office has a strong incentive to ensure that the applications it presents are in good order, so as to preserve its credibility with the FISA court.

Ex post checks include such common mechanisms as agency-level inspectors general, who can be charged with auditing bulk collection programs and also making policy recommendations to improve their operation, as well as entities like the Privacy and Civil Liberties Oversight Board, which perform similar functions across the executive branch as a whole. Another important ex post check is to offer meaningful whistleblower protections to officials who know about programs that violate constitutional or statutory rules. Allowing officials to bring their concerns to ombudsmen within the executive branch can help root out lawlessness and also relieve the felt necessity of leaking information about highly classified programs to the media.
These and other mechanisms can be an effective way of preventing executive misconduct. Done properly, internal checks can achieve all three of the benefits promised by traditional judicial and legislative oversight—executive branch watchdogs can veto surveillance they conclude would be unlawful, the mere possibility of such vetoes can chill overreach, and increasing the costs of monitoring can redirect scarce resources toward truly important surveillance. External and internal checks thus operate together as a system; the two types of restraints are rough substitutes for one another. If outside players like Congress and the courts are subjecting the executive’s programmatic surveillance activities to especially rigorous scrutiny, the need for comparably robust safeguards within the executive branch tends to diminish. Conversely, if the executive’s discretion is constrained internally through strict approval processes, audit requirements, and so on, the legislature and judiciary may choose not to hold the executive to the exacting standards they otherwise would. In short, certain situations may see less need to use traditional interbranch separation of powers and checks and balances to protect privacy and civil liberties, because the executive branch itself is subject to an “internal separation of powers.”9
A word of caution. It’s important not to take these in-house review mechanisms too far. Internal oversight can do more than deter executive branch overreach. It can also deter necessary national security operations, with potentially deadly results. The pre-9/11 information sharing wall is a notorious example of an internal check gone awry—executive branch lawyers interpreted FISA to sharply restrict intelligence officials from coordinating or sharing information with their law enforcement counterparts, leading one prophetic FBI agent to lament on the eve of 9/11 that “someday somebody will die.”10 There are other examples as well. In the 1990s, executive branch lawyers vetoed CIA plans to use targeted killing against Osama bin Laden, and JAG lawyers have occasionally ruled out air strikes on policy grounds even though they would be permissible under the laws of war.11 There is no universally applicable answer to the question of how much internal oversight is enough. Too little imperils privacy; too much threatens security. The right balance cannot be known a priori, but rather must be struck on a case-by-case basis, taking account of the highly contingent and unique circumstances presented by a given surveillance program, the threat it seeks to combat, and other factors.
A third operational consideration is the need for strong minimization requirements. Virtually all surveillance raises the risk that officials will intercept innocuous data in the course of gathering evidence of illicit activity. Inevitably, some chaff will be swept up with the wheat. The risk is especially acute with programmatic surveillance, in which the government assembles large amounts of data in the search for clues about a small handful of terrorists, spies, and other threats to the national security. Minimization is one way to deal with the problem. Minimization rules limit what the government may do with data that does not appear pertinent to a national security investigation—e.g., how long it may be retained, the conditions under which it will be stored, the rules for accessing it, the purposes for which it may be used, the entities with which it may be shared, and so on. Congress appropriately has required intelligence officials to adopt minimization procedures, both under FISA’s longstanding particularized surveillance regime and under the more recent authorities permitting bulk collection. But the rules need not be identical. Because programmatic surveillance often involves the acquisition of a much larger trove of non-pertinent information, the minimization rules for bulk collection ideally would contain stricter limits on the use of information unrelated to national security threats. In other words, the minimization procedures should reflect the anti-mission-creep principle described above.
Finally, programmatic surveillance systems should have technological safeguards that protect privacy and civil liberties by restricting access to sensitive information and tracking what officials do with it. Permissioning and authentication technologies can help ensure that sensitive databases are only available to officials who need them to perform various counterterrorism functions. And auditing tools can track who accesses the information, when, in what manner, and for what purposes. These kinds of mechanisms show promise but have a mixed record at preventing unauthorized access and use of sensitive data. The use of access logs helped the State Department quickly identify and discipline the outside contractors who in 2008 improperly accessed the private passport files of various presidential candidates. But people like Edward Snowden and Bradley Manning obviously have been able to exfiltrate huge amounts of classified information from protected systems, either because access controls were not in place or because they were able to evade them. Even if technological controls are not now an infallible safeguard against abuse, the basic principle seems sound: A commitment to privacy can be baked into a programmatic surveillance regime at the level of systems architecture.
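The pairing of permissioning and audit tools described above can be sketched in miniature: a data store that checks an official’s role before returning a record and logs every attempt, allowed or denied, for after-the-fact review. All names here (roles, records, purposes) are hypothetical illustrations.

```python
from datetime import datetime, timezone

class AuditedStore:
    """Sketch of privacy baked in at the systems-architecture level:
    role-based access control plus an append-only audit trail.
    Purely illustrative, not a real system."""

    def __init__(self, records, authorized_roles):
        self._records = records                 # key -> sensitive value
        self._authorized = set(authorized_roles)
        self.audit_log = []                     # append-only access trail

    def query(self, official, role, key, purpose):
        granted = role in self._authorized
        # Every attempt -- allowed or denied -- is recorded for auditors.
        self.audit_log.append({
            "who": official,
            "role": role,
            "key": key,
            "purpose": purpose,
            "granted": granted,
            "when": datetime.now(timezone.utc).isoformat(),
        })
        if not granted:
            raise PermissionError(f"role {role!r} may not access this store")
        return self._records.get(key)

store = AuditedStore({"record-1": "metadata"}, authorized_roles={"ct-analyst"})
store.query("analyst-7", "ct-analyst", "record-1", "terrorism investigation")
```

The audit log is what enables the State Department-style response noted above: misuse can be identified and disciplined after the fact, even when the front-end permission check is satisfied.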
* * *
Bulk data collection is probably here to stay. Programmatic surveillance that aims at identifying previously unknown terrorists and spies has the potential to be an important addition to the national security toolkit. And in an era where private companies like Amazon and Google assemble detailed digital dossiers to predict their customers’ buying habits, it’s more or less inevitable that counterterrorism officials will want to take advantage of the same sorts of technologies to stop the next 9/11. That’s why it’s critical to establish a baseline set of rules to govern the creation and operation of any system of programmatic surveillance. These first principles can ensure that the government is equipped with a valuable tool for preventing terrorist atrocities while simultaneously preserving our national commitment to civil liberties and privacy.
1 50 U.S.C. § 1801(f).
2 David Kris, A Guide to the New FISA Bill, Part II, Balkinization (June 2008), http://balkin.blogspot.com/2008/06/guide-to-new-fisa-bill-part-ii.html.
3 Whitman v. Am. Trucking Ass’ns, 531 U.S. 457, 468 (2001).
4 Letter from William E. Moschella, Assistant Att’y Gen., Off. of Legis. Aff., U.S. Dep’t of Justice, to Pat Roberts, Chairman, Senate Select Comm. on Intelligence, et al. (Dec. 22, 2005), available at http://www.fas.org/irp/agency/doj/fisa/doj122205.pdf.
5 50 U.S.C. § 1809(a)(1).
6 Pub. L. No. 107-40, § 2(a), 115 Stat. 224 (2001).
7 542 U.S. 507, 518 (2004).
8 Nathan Alexander Sales, Run for the Border: Laptop Searches and the Fourth Amendment, 43 U. Rich. L. Rev. 1091, 1124-27 (2009).
9 Neal Kumar Katyal, Internal Separation of Powers: Checking Today’s Most Dangerous Branch from Within, 115 Yale L.J. 2314 (2006).
10 National Commission on Terrorist Attacks Upon the United States, The 9/11 Commission Report 271 (2004).
11 Nathan Alexander Sales, Self Restraint and National Security, 6 J. Nat’l Sec. L. & Pol’y 227, 247-56 (2012).
*Nathan A. Sales is an Assistant Professor of Law at George Mason University. He was previously the first Deputy Assistant Secretary for Policy Development at the U.S. Department of Homeland Security, and from 2001-2003 he served at the Office of Legal Policy at the U.S. Department of Justice, where he focused on counterterrorism policy and helped draft the USA PATRIOT Act.
The article has been adapted from testimony at the Privacy and Civil Liberties Oversight Board workshop on July 9, 2013.