In 1975, Jerome Saltzer and Michael Schroeder, two researchers at MIT, published “The Protection of Information in Computer Systems.” In it, they laid out eight design principles for building secure systems. The paper is fifty years old. The principles haven’t aged a day. Every major breach you’ve read about in the last decade — SolarWinds, Equifax, Capital One, MOVEit — violated at least one of them. Not because the principles are obscure. Because they’re inconvenient. Because shortcuts are faster. Because “we’ll fix it later” is the most dangerous sentence in engineering.
The TLDR
Saltzer and Schroeder identified eight principles for secure system design: economy of mechanism, fail-safe defaults, complete mediation, open design, separation of privilege, least privilege, least common mechanism, and psychological acceptability. These aren’t suggestions — they’re the load-bearing walls of system security. Violate economy of mechanism, and your system is too complex to audit. Violate fail-safe defaults, and a misconfiguration grants access instead of denying it. Violate least privilege, and one compromised account takes down everything. Every security architecture decision you make either follows these principles or gambles against them.
The Reality
These principles were written for time-sharing mainframes. They apply equally to Kubernetes clusters, cloud IAM policies, and your home network. That’s because they’re not about technology — they’re about the fundamental constraints of building systems that resist attack.
NIST SP 800-160 Vol. 1 (Systems Security Engineering) explicitly builds on Saltzer and Schroeder’s work. OWASP’s Security Design Principles map them directly to modern application development. CISA’s Secure by Design guidance echoes them in every recommendation. Half a century of security engineering, and we keep coming back to the same eight ideas.
The uncomfortable truth: we know what good design looks like. We’ve known since 1975. The breaches keep happening because implementing these principles has a cost — in development time, in performance, in convenience — and organizations keep deciding that cost is too high. Until it isn’t.
How It Works
Economy of Mechanism
Keep it simple. The smaller and simpler a security mechanism, the easier it is to verify it’s correct. Every line of code is a potential bug. Every feature is a potential attack vector. The Harrison-Ruzzo-Ullman result proved that, in the general case, deciding whether an access control system can leak a permission is undecidable: you can’t verify that an arbitrarily complex system is secure, even in principle. Simplicity isn’t aesthetic — it’s mathematical necessity.
Modern example: WireGuard’s codebase is roughly 4,000 lines of code. OpenVPN is over 100,000. WireGuard has had fewer vulnerabilities not because its developers are better, but because there’s less code to contain bugs. The Linux kernel accepted WireGuard into the mainline specifically because its small codebase made it auditable.
Fail-Safe Defaults
Default to denial. If a system fails, it should fail closed — denying access rather than granting it. If a permission isn’t explicitly granted, it doesn’t exist.
Modern example: A default-deny firewall drops all traffic except what’s explicitly allowed. Compare that to a default-allow firewall that blocks only specific threats — every new attack type gets through until someone writes a rule for it. AWS Security Groups are default-deny. If you haven’t written an allow rule, the traffic doesn’t pass. That’s fail-safe design.
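The default-deny posture fits in a few lines. This is a minimal sketch, not a real firewall API; the rule set and ports are hypothetical:

```python
# Minimal sketch of a fail-safe, default-deny authorizer: traffic passes
# only if an explicit allow rule matches. Rules here are illustrative.
ALLOW_RULES = {("tcp", 443), ("tcp", 22)}  # hypothetical allow-list

def is_allowed(protocol: str, port: int) -> bool:
    """Fail-safe default: anything not explicitly allowed is denied."""
    return (protocol, port) in ALLOW_RULES

# A brand-new service on an unlisted port is denied automatically,
# without anyone having to write a block rule for it.
```

Note the inversion: the code never enumerates what to block, only what to permit. Forgetting a rule fails closed, not open.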
The violation in the wild: Capital One’s 2019 breach exploited a misconfigured WAF with overly permissive rules. The firewall allowed access it shouldn’t have because the default posture was too open. MITRE ATT&CK T1190 (Exploit Public-Facing Application) documents exactly this pattern.
Complete Mediation
Check every access, every time. Don’t cache authorization decisions. Don’t assume that because someone was authorized five minutes ago, they’re still authorized now. Every access to every resource must be validated against the current policy.
Modern example: This is the core tenet of zero trust architecture. Traditional networks checked credentials at the perimeter and trusted everything inside. Zero trust checks every request — every API call, every file access, every database query. The shift from “trust but verify” to “never trust, always verify” is complete mediation taken to its logical endpoint.
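The difference between cached and complete mediation can be shown in miniature. This sketch (all names illustrative) looks the policy up at the moment of each access rather than at login:

```python
# Sketch of complete mediation: every access re-checks the live policy
# store, and no decision is cached. All names here are illustrative.
policy = {"alice": {"read"}}  # current policy, may change at any time

def access(user: str, action: str) -> bool:
    # Look the policy up at the moment of access, not at login time.
    return action in policy.get(user, set())

first = access("alice", "read")   # currently authorized
policy["alice"].discard("read")   # permission revoked
second = access("alice", "read")  # revocation is effective on the next access
```

A cached-decision version would have returned the stale `first` result after the revocation; this one cannot.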
Open Design
Security should not depend on secrecy of the design. The system should be secure even if everything about it — except the keys — is public knowledge. This is Kerckhoffs’ principle applied to system architecture.
Modern example: AES, TLS, and every major cryptographic algorithm are fully published. Their security comes from the math, not from hiding the algorithm. Contrast this with proprietary “security through obscurity” — custom encryption schemes, hidden API endpoints, obfuscated code treated as a security boundary. Every time a secret design is reverse-engineered and found to be weak, as happened with DVD’s CSS and GSM’s A5/1, it validates this principle.
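Open design in code looks like this: a fully public algorithm (HMAC-SHA256, from Python’s standard library) where the only secret is the key. The key below is a placeholder, not real material:

```python
import hashlib
import hmac

# Open design: the algorithm (HMAC-SHA256) is completely public; security
# rests entirely on the key. This key is a placeholder for illustration.
key = b"example-secret-key"

def tag(message: bytes) -> str:
    """Compute a message authentication tag with a published algorithm."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, candidate: str) -> bool:
    # Constant-time comparison; knowing the algorithm doesn't help a forger.
    return hmac.compare_digest(tag(message), candidate)
```

An attacker can read this code, the HMAC RFC, and the SHA-256 spec in full and still cannot forge a tag without the key. That is the Kerckhoffs bar a custom scheme has to meet.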
Separation of Privilege
Require multiple conditions for access. No single key should open every door. Requiring two or more independent conditions to grant access means compromising one isn’t enough.
Modern example: Multi-factor authentication is separation of privilege. Something you know (password) plus something you have (hardware key) plus something you are (biometric). Nuclear launch requires two keys turned simultaneously. Your CI/CD pipeline should require code review approval from someone other than the author before merging to production. NIST SP 800-63 formalizes MFA requirements for exactly this reason.
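The structure of separation of privilege is a conjunction of independent checks. A toy sketch, with placeholder secrets, makes the point that one compromised factor is not enough:

```python
# Sketch of separation of privilege: access requires two independent
# conditions, so compromising one factor alone grants nothing.
# The stored values below are placeholders for illustration.
STORED_PASSWORD = "correct horse battery staple"
REGISTERED_TOKEN = "hw-key-12345"

def authenticate(password: str, token: str) -> bool:
    knows = password == STORED_PASSWORD  # something you know
    has = token == REGISTERED_TOKEN      # something you have
    return knows and has                 # both, independently, are required
```

The `and` is the principle: a phished password fails without the hardware key, and a stolen key fails without the password.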
Least Privilege
Grant only the minimum access necessary to perform a function, and nothing more. No standing administrative access. No “just give them admin, it’s easier.” Every permission beyond what’s needed is an expansion of what an attacker gets when they compromise that account.
Modern example: AWS IAM policies should follow least privilege — a Lambda function that reads from one S3 bucket shouldn’t have s3:* on *. Kubernetes pods should run as non-root with read-only file systems. The SolarWinds breach was devastating partly because the compromised Orion software had broad network access across customer environments. Least privilege would have contained the blast radius.
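A least-privilege policy names one action on one resource. The sketch below expresses a policy of that shape as a Python dict (bucket name and statement are hypothetical; the structure follows AWS’s published policy grammar) with a deliberately naive evaluator for illustration:

```python
# Hedged sketch of a least-privilege policy: one action, one bucket.
# The bucket name is hypothetical; real IAM evaluation is far richer.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],                     # one action, not s3:*
        "Resource": ["arn:aws:s3:::reports-bucket/*"],  # one bucket, not *
    }],
}

def grants(policy: dict, action: str, resource: str) -> bool:
    # Naive illustrative evaluator: allow only on an explicit match.
    for stmt in policy["Statement"]:
        if stmt["Effect"] != "Allow":
            continue
        if action in stmt["Action"] and any(
            resource.startswith(r.rstrip("*")) for r in stmt["Resource"]
        ):
            return True
    return False  # fail-safe default: no match means no access
```

Everything outside that one statement — deletes, writes, other buckets — is simply outside the blast radius of a compromised function.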
Least Common Mechanism
Minimize shared resources between components. The more mechanisms shared between different levels of trust, the more paths exist for information to leak or escalate. Shared libraries, shared databases, shared network segments — each is a potential bridge between things that should be isolated.
Modern example: Container isolation. Each container gets its own filesystem, its own process namespace, its own network stack. Microservice architectures decompose monoliths into isolated services precisely to minimize shared mechanisms. When everything runs in one process with one database and one set of credentials, a vulnerability in any component compromises all of them.
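The credential side of least common mechanism can be sketched directly: each service holds its own secret and its own datastore, so there is no shared key to bridge them. Service names and credentials below are illustrative placeholders:

```python
# Sketch of least common mechanism: per-service credentials and datastores,
# no shared secret. Names and credential strings are placeholders.
services = {
    "billing":  {"credential": "cred-billing",  "datastore": {"invoices": []}},
    "profiles": {"credential": "cred-profiles", "datastore": {"users": []}},
}

def open_datastore(service: str, credential: str) -> dict:
    # A credential opens only its own service's store; nothing is shared.
    entry = services[service]
    if credential != entry["credential"]:
        raise PermissionError(f"wrong credential for {service}")
    return entry["datastore"]
```

Compare the monolith case, where one connection string opens every table: there, the billing credential in an attacker’s hands is also the profiles credential.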
Psychological Acceptability
Security mechanisms must not make the system harder to use than it would be without them. If security is too cumbersome, people will bypass it. The most secure system in the world is useless if everyone props the door open because the badge reader is annoying.
Modern example: Passkeys are more secure than passwords and easier to use — tap your fingerprint instead of typing a 20-character string. Compare that to the old world of mandatory password rotation every 90 days, which NIST SP 800-63B now explicitly discourages because it led people to create weaker, more predictable passwords. Good security design works with human nature, not against it.
How It Gets Exploited
Complexity as the Enemy
The more complex a system, the more likely it contains exploitable flaws that nobody noticed. The Log4Shell vulnerability (CVE-2021-44228) existed in a logging library feature — JNDI lookup — that most developers didn’t know existed. The feature added complexity that expanded what an attacker could do. Economy of mechanism says: if you don’t need it, remove it.
Cached Trust Decisions
Systems that check authorization once and cache the result are vulnerable to token replay, session hijacking, and privilege escalation after role changes. If a person is terminated but their session token is still valid for 24 hours, that’s a complete mediation violation. MITRE ATT&CK T1550 (Use Alternate Authentication Material) exploits exactly this gap.
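Closing that gap means a token’s expiry is never the only check: every use also consults a live revocation set, so termination takes effect immediately instead of when the token ages out. A minimal sketch, with illustrative names:

```python
import time

# Sketch of closing the cached-trust gap: a token carries an expiry, but
# every use also checks a live revocation set. Names are illustrative.
revoked: set[str] = set()
tokens = {"tok-alice": {"user": "alice", "expires": time.time() + 86400}}

def token_valid(token: str) -> bool:
    entry = tokens.get(token)
    if entry is None or token in revoked:
        return False  # revocation wins, regardless of remaining lifetime
    return time.time() < entry["expires"]
```

The 24-hour expiry is still there as defense in depth, but it is no longer the window an attacker gets for free.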
Standing Privileges
Accounts with permanent administrative access are the highest-value targets. The Equifax breach in 2017 was amplified by the fact that the compromised Apache Struts server had access to databases containing 147 million people’s personal information. A properly least-privileged architecture would have limited the blast radius.
What You Can Do
Audit against these principles explicitly. Take each principle and ask: does our system follow this? Where it doesn’t, you’ve found your risk. This isn’t theoretical — it’s a practical checklist.
Implement default-deny everywhere. Firewall rules, IAM policies, API authorization. If it’s not explicitly permitted, it’s denied. Review any allow-all rules and replace them with specific grants.
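Finding allow-all rules is scriptable. The rule format below is made up for illustration; against AWS you would feed in Security Group rules from the API instead:

```python
# Sketch of an allow-all audit over firewall-style rules. The rule schema
# is hypothetical, standing in for whatever your firewall or cloud exports.
rules = [
    {"name": "web",    "cidr": "0.0.0.0/0",  "port": 443},
    {"name": "ssh",    "cidr": "10.0.0.0/8", "port": 22},
    {"name": "legacy", "cidr": "0.0.0.0/0",  "port": 0},  # any port, anywhere
]

def flag_overly_permissive(rules: list[dict]) -> list[str]:
    # Flag rules open to the whole internet on anything but a vetted port.
    vetted_public_ports = {80, 443}
    return [
        r["name"]
        for r in rules
        if r["cidr"] == "0.0.0.0/0" and r["port"] not in vetted_public_ports
    ]
```

Each flagged rule becomes a candidate for replacement with a specific grant — a named CIDR range and a named port.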
Eliminate standing privileges. Use just-in-time access for administrative tasks. Tools like AWS SSO with temporary role assumption, or PIM (Privileged Identity Management) in Azure, provide time-boxed elevated access instead of permanent admin.
Reduce complexity ruthlessly. Every feature, every integration, every exposed endpoint is something that must be secured. If it’s not actively needed, decommission it. The smallest system that meets the requirements is the most secure one.
Make security the path of least resistance. If the secure option is harder than the insecure one, people will choose the insecure one. Design systems where doing the right thing is also the easiest thing.
Sources & Further Reading
- Saltzer & Schroeder — The Protection of Information in Computer Systems (1975)
- NIST SP 800-160 Vol. 1 Rev. 1 — Systems Security Engineering
- OWASP Security Design Principles
- CISA Secure by Design
- NIST SP 800-63 — Digital Identity Guidelines
- MITRE ATT&CK T1190 — Exploit Public-Facing Application
- ISC2 — Security Architecture Resources