Marcus Hutchins reversed WannaCry, saved the internet, and then got arrested by the FBI at DEF CON for malware he wrote as a teenager. Aaron Swartz downloaded academic papers from JSTOR and faced 35 years in federal prison. Andrew “weev” Auernheimer scraped publicly accessible AT&T URLs and got convicted under the CFAA. The line between security research and a federal felony is drawn in pencil, and the people holding the eraser aren’t security researchers — they’re prosecutors. If you work in this field, understanding the legal landscape isn’t optional. It’s self-preservation.

The TLDR

Cybersecurity law in the United States is primarily governed by the Computer Fraud and Abuse Act (CFAA), the Electronic Communications Privacy Act (ECPA), and the Wiretap Act. These laws criminalize unauthorized access to computer systems, interception of communications, and various forms of computer-related fraud. Export controls (EAR/ITAR) restrict what cryptographic tools and security research can be shared internationally. On the ethics side, the ISC2 Code of Ethics establishes four canons that all certified professionals must follow. The practical tension: security research often requires doing things that look a lot like the activities these laws were written to prevent. Understanding where the line is — and how courts interpret it — is the difference between a career and a conviction.

The Reality

The CFAA was written in 1986, back when “computer crime” meant WarGames-style dialing into government mainframes. It’s been amended multiple times but still carries language broad enough to criminalize activities that most security professionals consider routine. The key phrase — “exceeds authorized access” — has been interpreted so broadly by some courts that checking your personal email on a work computer could theoretically qualify.

The Department of Justice updated its CFAA prosecution guidance in 2022, stating that good-faith security research should not be prosecuted. That’s progress. But “good faith” is defined by prosecutors, not researchers, and the guidance is policy, not law. It can be changed by the next administration.

Meanwhile, the security industry operates in a gray zone. Penetration testers need explicit written authorization or they’re committing crimes. Bug bounty hunters rely on safe harbor provisions that vary by program. Security researchers who discover vulnerabilities face a disclosure dilemma with legal stakes. And the people most qualified to find vulnerabilities are often the people most at risk of prosecution for doing exactly that.

How It Works

The Computer Fraud and Abuse Act (CFAA)

The CFAA (18 U.S.C. § 1030) is the primary federal computer crime statute. It criminalizes:

  1. Accessing a computer without authorization, or exceeding authorized access, to obtain information
  2. Accessing national-security information or government computers
  3. Computer fraud (unauthorized access in furtherance of a scheme to defraud)
  4. Knowingly transmitting code or commands that cause damage (the malware provision)
  5. Trafficking in passwords and similar access credentials
  6. Extortion involving threats to damage computers

The penalties scale with intent and damage — from misdemeanors for first-time unauthorized access up to 20 years for repeat offenders or cases involving critical infrastructure. The CFAA also provides a civil cause of action, meaning companies can sue under it, not just prosecutors.

The “authorization” problem. The Supreme Court’s 2021 decision in Van Buren v. United States narrowed the “exceeds authorized access” provision, ruling that it applies to people who access information they’re not entitled to within systems they’re otherwise authorized to use — not people who misuse information they’re entitled to access. This was significant because it closed off the most expansive interpretations. But “without authorization” remains broadly interpreted, and courts disagree on what constitutes authorization in the context of public-facing systems.

The Electronic Communications Privacy Act (ECPA) & Wiretap Act

The ECPA governs the interception and access of electronic communications. It has three main components:

  1. The Wiretap Act (Title I): prohibits real-time interception of wire, oral, and electronic communications in transit
  2. The Stored Communications Act (Title II): protects communications at rest, such as email sitting on a provider's servers
  3. The Pen Register Act (Title III): governs collection of dialing, routing, and addressing metadata

For security professionals: running a packet capture on your own network is generally fine. Running one on a network you don’t own or administer, without consent, is a federal crime. The line is thinner than you’d think.

Export Controls on Cryptography

Cryptographic software and security tools are subject to Export Administration Regulations (EAR) administered by the Bureau of Industry and Security. Historically, strong cryptography was classified as a munition under ITAR (International Traffic in Arms Regulations) — the same category as missile guidance systems.

The rules have relaxed significantly since the crypto wars of the 1990s, but they haven’t disappeared. Publicly available open-source cryptographic software generally qualifies for License Exception TSU (15 C.F.R. § 740.13(e)), but you’re still required to notify BIS before making it publicly available. Proprietary encryption tools, especially those designed for specific security applications, may require export licenses for certain destinations.

This matters for security researchers who publish tools, contribute to open-source projects, or share research internationally. The penalties for export control violations are severe — up to $1 million per violation and 20 years imprisonment for willful violations.

Responsible Disclosure vs Full Disclosure

When you find a vulnerability, you have options — and each carries legal and ethical weight.

Responsible disclosure (coordinated disclosure): Report the vulnerability to the vendor privately, give them time to patch (typically 90 days, per Google Project Zero’s policy), then publish. This is the industry norm and the approach most likely to keep you out of legal trouble.
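The disclosure clock is just date arithmetic, but getting it wrong in either direction matters: publish early and you lose the coordinated-disclosure defense, track it loosely and vendors will run it out indefinitely. A minimal sketch (the 90-day window and the report date below are illustrative assumptions, not a universal rule):

```python
from datetime import date, timedelta

def disclosure_deadline(report_date: date, window_days: int = 90) -> date:
    """Earliest date a coordinated-disclosure report may be published,
    assuming a Project Zero-style fixed window."""
    return report_date + timedelta(days=window_days)

# Hypothetical report filed on 2024-01-15: publication window opens 2024-04-14.
print(disclosure_deadline(date(2024, 1, 15)))
```

Real programs add wrinkles this sketch ignores: grace periods for imminent patches, shorter windows for actively exploited bugs, and mutual extensions agreed in writing.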

Full disclosure: Publish the vulnerability immediately, with or without vendor notification. The argument: vendors don’t fix things until public pressure forces them to. The risk: you’ve just armed every script kiddie on the internet, and the vendor’s legal team is now very interested in how you found this.

Bug bounty programs provide a middle ground. Programs like HackerOne and Bugcrowd offer legal safe harbor — explicit authorization to test within defined scope. The DOJ’s CFAA guidance specifically references bug bounty participation as good-faith research. If a program exists, use it. If it doesn’t, document your methodology, stay within reasonable bounds, and consider whether the vendor has a history of shooting the messenger.

The ISC2 Code of Ethics

The ISC2 Code of Ethics is mandatory for all ISC2-certified professionals (CISSP, SSCP, CCSP, etc.). It has four canons, in order of priority:

  1. Protect society, the common good, necessary public trust and confidence, and the infrastructure. Society comes first. If your employer asks you to do something that harms public safety, this canon says you refuse.
  2. Act honorably, honestly, justly, responsibly, and legally. The “legally” part is explicit — certified professionals are expected to operate within the law.
  3. Provide diligent and competent service to principals. Do your job well for your employer or client.
  4. Advance and protect the profession. Don’t do things that make all security professionals look bad.

The ordering matters. If canons conflict, higher-numbered canons yield to lower-numbered ones. Protecting society trumps loyalty to your employer. Violating the code can result in revocation of certification — which for many professionals means career disruption.

Due Care vs Due Diligence

These legal concepts come up in negligence cases and liability discussions:

Due care is doing what a reasonable person would do. Implementing security controls, training staff, responding to known vulnerabilities. It’s the standard of action.

Due diligence is verifying that due care is actually working. Auditing controls, testing procedures, reviewing compliance. It’s the standard of verification.

An organization that implements a firewall (due care) but never reviews the rules or tests whether it’s working (no due diligence) may be found negligent if a breach exploits a misconfigured rule. Both are required for a defensible security posture.
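The firewall example can be made concrete. A minimal due-diligence sketch (the three-tuple rule format and the `audit_rules` helper are hypothetical, not any vendor's API) that flags overly permissive rules during a periodic review:

```python
# Due care put the firewall in place; due diligence is periodically
# verifying the rules still say what you think they say.
RISKY_SOURCE = "0.0.0.0/0"  # any-source

def audit_rules(rules):
    """Return allow-rules exposing sensitive ports to any source.
    Each rule is a hypothetical (source_cidr, dest_port, action) tuple."""
    sensitive_ports = {22, 3389, 5432}  # SSH, RDP, Postgres (illustrative)
    return [r for r in rules
            if r[2] == "allow" and r[0] == RISKY_SOURCE and r[1] in sensitive_ports]

rules = [
    ("10.0.0.0/8", 443, "allow"),
    ("0.0.0.0/0", 22, "allow"),   # flagged: SSH open to the world
]
print(audit_rules(rules))
```

The point isn't the ten lines of Python; it's that the review runs on a schedule and its findings are documented, which is exactly the evidence a negligence claim turns on.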

How It Gets Exploited

Legal threats as vulnerability suppression. Some organizations respond to security researchers with legal threats instead of patches. The chilling effect is real — researchers stop reporting to organizations with a reputation for shooting the messenger, and the vulnerabilities stay unpatched. The usual suspects find them eventually.

Overcriminalization of security research. Broad laws like the CFAA can be weaponized against researchers whose work embarrasses powerful organizations. The EFF has extensively documented cases where the CFAA was used disproportionately against researchers, journalists, and activists.

The insider threat legal gap. Employees who exfiltrate data often operate in a legal gray zone. Is downloading files you have authorized access to in order to take them to a competitor “exceeding authorized access”? Post-Van Buren, maybe not under the CFAA — but trade secret law, NDAs, and employment agreements fill some of that gap.

What You Can Do

For Security Professionals

Get written authorization before you test anything. Scope documents, rules of engagement, signed agreements — these aren’t bureaucratic overhead, they’re your legal protection. If someone tells you to test a system and you don’t have it in writing, stop.
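A scope document only protects you if you actually check it before sending packets. A minimal pre-engagement sketch using Python's standard `ipaddress` module (the scope list below is a hypothetical rules-of-engagement excerpt, using reserved documentation ranges):

```python
import ipaddress

# Hypothetical scope taken from a signed rules-of-engagement document.
AUTHORIZED_SCOPE = ["192.0.2.0/24", "198.51.100.10/32"]

def in_scope(target_ip: str) -> bool:
    """Refuse to test any address outside the written authorization."""
    addr = ipaddress.ip_address(target_ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in AUTHORIZED_SCOPE)

assert in_scope("192.0.2.55")        # inside the authorized /24
assert not in_scope("203.0.113.9")   # not in writing: do not touch
```

Wiring a check like this into your tooling turns "stay in scope" from a good intention into a hard gate.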

Know the laws in your jurisdiction. The CFAA is federal, but states have their own computer crime statutes that may be broader or narrower. International work adds another layer — the UK’s Computer Misuse Act, the EU’s NIS Directive, and country-specific laws all define “unauthorized access” differently.

If you discover a vulnerability, use coordinated disclosure. Document your methodology. Stay within scope. Report through official channels when they exist. If you’re unsure about the legal implications, the EFF’s Coders’ Rights Project provides resources and, in some cases, legal support.

For Organizations

Create a vulnerability disclosure policy. Make it easy for researchers to report issues without fear of legal retaliation. The ISO 29147 standard provides a framework for vulnerability disclosure. Organizations that welcome responsible disclosure get their vulnerabilities fixed. Organizations that threaten researchers get their vulnerabilities published on Twitter.
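One low-effort piece of this is RFC 9116's `security.txt`, a machine-discoverable file served at `/.well-known/security.txt` that tells researchers where to report. A hedged example (the contact address and URLs are placeholders, not a recommendation of specific values):

```
Contact: mailto:security@example.com
Expires: 2026-01-01T00:00:00Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

`Contact` and `Expires` are the required fields; pointing `Policy` at your published safe-harbor terms is what actually reassures a researcher deciding whether to report.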

Understand that due care and due diligence are not optional. If you know about a vulnerability and don’t remediate it within a reasonable timeframe, and it subsequently gets exploited, the negligence argument writes itself. Document your decisions, your risk acceptances, and your remediation timelines.

Sources & Further Reading