According to press reports in December 2015, terrorist Tashfeen Malik posted her allegiance to the Islamic State of Iraq and al Sham (“ISIS”) on her Facebook account before killing 14 innocent civilians at a County Health Department gathering in San Bernardino, California. Though Facebook had removed her account as violative of internal company rules, the company did not immediately alert the government to the existence of the post — or the possibility of an attack. In a more recent example, gunman Omar Mateen checked his own Facebook posts and other social media accounts to verify that his pledge to Abu Bakr al-Baghdadi, the leader of ISIS, had been properly publicized during the standoff in the Orlando nightclub where he killed 49 people on June 12, 2016. We suggest in this Article that social media companies, like other corporate entities, should be legally required to institute compliance programs that discover and report terrorist activity at the earliest possible opportunity. Most of these companies, such as Facebook, Twitter, YouTube, and Instagram, already have in place internal rules against messages that might violate the federal prohibition against material support to terrorists or to a Foreign Terrorist Organization (FTO). Additionally, many of these companies already have both a method of internal reporting by other users against rule-breakers and computer programs that seek out key words to alert company monitors that a breach of internal rules might be occurring. Companies without such policies, such as Dropbox and LinkedIn, lesser-known and newer sites, such as Tumblr and SoundCloud, and even nonprofit organizations such as the Internet Archive in San Francisco, should be required to follow suit. We suggest two supplementary federal proposals.
The first would create a new substantive offense by criminalizing the failure of social media companies to institute programs that discover terrorism-related posts by their users and to immediately release such posts to the government. A social media company would be guilty of this new crime if it knowingly, recklessly, or even negligently failed to institute a government-approved compliance program and report any suspicious results it discovered through its program to federal authorities. This proposal is limited to public wall-postings and similar shared content; it excludes e-mails or other private communications solely between two individuals. We realize that this proposal is strong medicine. However, we believe that the danger of online terror activity warrants such a vigorous federal response. This proposal does not replicate the Requiring Reporting of Online Terrorist Activity Act recently proposed by Senator Dianne Feinstein, though we agree that her bill ought to be enacted. We are not suggesting merely that the social media companies be required to report known terrorist activity to federal law enforcement agents. Rather, we would require such companies to develop programs that monitor users for compliance with 18 U.S.C. §§ 2339 to 2339D and other terrorism offenses, on pain of criminal liability, and to report all offending posts to law enforcement officials. And rather than automatically shutting down such accounts when they are discovered, which may have adverse unintended consequences, we would shift such decisions to the Federal Bureau of Investigation (FBI) experts best suited to make them. In some cases, it might serve intelligence needs to allow the postings to continue. Moving the locus of such decision-making from private companies to the government might also allow innocent and aggrieved users to pursue avenues of redress. The second proposal is a fallback in the event that Congress does not enact our first proposal.
We recognize that Internet companies would strenuously oppose our first proposal, and that they have tremendous power on Capitol Hill. This second proposal would grant those social media companies that instituted the anti-terror compliance programs suggested in our first proposal leniency at sentencing should they be held criminally liable, under the federal doctrine of respondeat superior, for the material support crimes of their agents. Perhaps more importantly, prosecutors would consider the existence and effectiveness of such a program in their charging decisions against the social media companies. The federal government does this already with corporate sentencing, primarily in the white-collar crime arena, to assist the government in discovering who within the corporation committed the federal criminal offense, and to prevent its recurrence. The Federal Sentencing Guidelines grant corporations large sentencing discounts if they had instituted a corporate compliance program prior to the commission of the offense by their agent. As a tool against terrorism, this strategy will likely not be nearly as effective as our first proposal, because federal prosecutors have not yet attempted to charge social media companies for the crimes committed or assisted by their agents. Such a strategy works best when the corporation faces a high likelihood of criminal liability, with its attendant high dollar fines for violations. Unless federal prosecutors follow the lead of private plaintiffs now suing under 18 U.S.C. § 2333(a) and step up prosecutions of social media companies whose Internet services are used in terrorist-related posts, social media companies may not consider themselves sufficiently exposed to bother with the expense of such programs.
However, because it will be less effective at criminalizing the behavior of social media companies, and because it does not as directly or as frequently impinge on the privacy rights of social media users, this second proposal might be more politically palatable. It applies to a social media company only after there is probable cause to believe it has committed a serious federal felony, and it does not require the company to reveal offending user posts to the government until after the company has been charged. In Part I of this Article, we review the development of terror activity in today’s globalized environment, including terrorists’ heavy reliance on the Internet and mobile applications. In describing the well-known danger of terrorism, we focus on “lone wolf” terrorists and the difficulty of finding such individuals and stopping them before they attack. The Internet has made this problem all but impossible to solve, and therefore companies that make their fortunes utilizing the Internet must become part of the solution. A Brookings Institution report estimates that between 46,000 and 70,000 Twitter accounts were used by ISIS supporters from September 2014 to December 2014, and a George Washington University study counted approximately 300 American and U.S.-based ISIS sympathizers active on social media. In Part II, we respond to perceived insufficiencies in existing legislation and recent legislative proposals, and we set forth our two proposals, which would address the liability of companies and enable governmental review of, and discretion over, potential terror activity online. We also offer precedents for such governmental action, including the Federal Sentencing Guidelines pertaining to organizations, the Bank Secrecy Act, and international efforts to enforce copyright law. Once compared to these other criminal and regulatory measures, our proposals are not as unconventional as they might first appear.
In Part III, we respond to both historical and anticipated opposition, grounded in constitutional arguments, to the legislative framework proposed in Part II. We believe that neither proposal would violate the First Amendment’s protection of speech and association or the Fourth Amendment’s protection against unreasonable searches and seizures. We cannot deny the concerns of civil libertarians that when firms monitor posts for content at the behest of the government, there might be some chilling of speech that is not illegal under the material support analysis. However, the Court’s relatively recent 6-3 opinion in Holder v. Humanitarian Law Project, upholding the material support statute against First Amendment freedom of speech and freedom of association challenges and a Fifth Amendment Due Process Clause vagueness challenge, lends significant support to the validity of our proposals. A long line of precedent confirms that the Fourth Amendment offers no reasonable expectation of privacy in communications voluntarily revealed to third parties. Were either of our proposals to extend to e-mails intended to remain private between two individuals, the issue would become a much closer one.
Susan R. Klein & Crystal Flinn, Social Media Compliance Programs and the War Against Terrorism, 8 Harv. Nat’l Sec. J. 53 (2017).