Somewhere in a Department of Defense office in the early 1970s, two mathematicians sat down and tried to answer a deceptively simple question: how do you prove, mathematically, that a computer system won’t leak classified information? The answer they came up with — the Bell-LaPadula model — became the theoretical backbone of military computing. And fifty years later, the same principles they formalized are still running underneath every access control decision your systems make, whether you realize it or not.

The TLDR

Formal security models are mathematical frameworks that define the rules for how information flows in a system. Bell-LaPadula protects confidentiality (no reading above your clearance, no writing below it). Biba protects integrity (no reading below your trust level, no writing above it). Clark-Wilson protects transaction integrity through well-formed procedures and separation of duties. These aren’t historical curiosities — they’re the theoretical foundation of MAC, SELinux, MLS systems, and every access control matrix you’ve ever configured. The CISSP tests them because understanding these models means understanding why access control works the way it does.

The Reality

Every time you set a file permission, configure an IAM policy, or assign a role in Active Directory, you’re implementing a descendant of these models. You just don’t call it that. When your firewall denies traffic that matches no rule, that’s the fail-safe defaults principle: the default-deny posture the state machine concept demands, where any transition not explicitly permitted leaves the system in its secure state. When SELinux prevents a web server process from reading /etc/shadow, that’s mandatory access control, descended directly from Bell-LaPadula.

These models were born from a specific problem: the U.S. military needed computers that could handle documents at different classification levels (Unclassified, Confidential, Secret, Top Secret) without information leaking between them. The stakes were existential. Get it wrong, and Top Secret satellite imagery ends up on an Unclassified terminal. The math had to be provable, not just reasonable.

Modern systems rarely implement these models in their pure form. But the principles — information flow control, mandatory restrictions that override discretionary choices, provable security properties — are everywhere. If you’ve ever wondered why SELinux denies things that file permissions allow, you’re looking at Bell-LaPadula’s legacy in action.

How It Works

Bell-LaPadula — Confidentiality First

David Bell and Leonard LaPadula published their model in 1973 for the U.S. Air Force. It has two core rules:

Simple Security Property (No Read Up): A subject at a given security level cannot read an object at a higher security level. A person with Secret clearance cannot read Top Secret documents. Straightforward.

Star Property (No Write Down): A subject at a given security level cannot write to an object at a lower security level. A person with Top Secret access cannot write to a Secret or Unclassified file. This prevents information from flowing downward — you can’t accidentally (or intentionally) declassify information by copying it to a lower-level container.

The combination creates a one-way valve: information flows up, never down. A Secret process can write to Top Secret but not Unclassified. It can read Unclassified and Secret but not Top Secret. The model is formally defined as a state machine — every possible state of the system is evaluated against these rules, and any transition that violates them is prohibited.
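The two rules reduce to simple level comparisons. Here’s a minimal sketch in Python, assuming a plain linear ordering of classification levels; real MLS systems add compartments and categories, making the ordering a lattice rather than a line:

```python
# Minimal sketch of Bell-LaPadula's two rules over a linear ordering
# of classification levels (compartments omitted for clarity).
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple Security Property: no read up."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """Star Property: no write down."""
    return LEVELS[subject_level] <= LEVELS[object_level]

# A Secret subject: information flows up to it via reads,
# and up from it via writes -- never down.
assert can_read("Secret", "Unclassified")       # read down: allowed
assert not can_read("Secret", "Top Secret")     # read up: denied
assert can_write("Secret", "Top Secret")        # write up: allowed
assert not can_write("Secret", "Confidential")  # write down: denied
```

Note that the one-way valve falls directly out of the two inequalities pointing in opposite directions.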

The limitation: Bell-LaPadula only addresses confidentiality. It says nothing about data integrity. A low-clearance person could write garbage into a Top Secret file — the model doesn’t prevent that because the information flow direction is acceptable.

Biba — Integrity First

Kenneth Biba addressed Bell-LaPadula’s blind spot in 1977. Biba inverts the rules to protect integrity:

Simple Integrity Property (No Read Down): A subject cannot read objects at a lower integrity level. You don’t trust data from sources less reliable than your own level.

Integrity Star Property (No Write Up): A subject cannot write to objects at a higher integrity level. Untrusted processes can’t modify trusted data.

If Bell-LaPadula is “protect the secrets,” Biba is “protect the truth.” A process running at low integrity can’t corrupt high-integrity system files. This maps directly to how Windows implements integrity levels through Mandatory Integrity Control: Internet Explorer (and later Edge) ran at Low integrity, meaning even if compromised, it couldn’t modify system files and processes running at Medium or High integrity.
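Biba is Bell-LaPadula with the comparisons flipped. A minimal sketch, assuming a linear ordering of integrity levels; the level names mirror Windows’ labels, but the code is an illustration of the model, not Windows’ actual mechanism:

```python
# Minimal sketch of Biba's inverted rules over a linear ordering of
# integrity levels (names borrowed from Windows for familiarity).
INTEGRITY = {"Low": 0, "Medium": 1, "High": 2, "System": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple Integrity Property: no read down."""
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """Integrity Star Property: no write up."""
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]

# A compromised Low-integrity browser process:
assert not can_write("Low", "High")   # can't corrupt trusted files
assert can_read("Low", "High")        # but may still read trusted data
assert not can_read("High", "Low")    # a High subject distrusts Low input
```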

Clark-Wilson — Real-World Transactions

David Clark and David Wilson published their model in 1987, and it’s fundamentally different from Bell-LaPadula and Biba. Instead of information flow rules, Clark-Wilson focuses on well-formed transactions and separation of duties.

The model defines two types of data: Constrained Data Items (CDIs) — critical data that must be protected — and Unconstrained Data Items (UDIs) — input that hasn’t been validated yet. Transformation Procedures (TPs) are the only authorized methods for modifying CDIs. Integrity Verification Procedures (IVPs) confirm that CDIs remain consistent.

The key insight: you don’t just restrict who can access data — you restrict how data can be modified. Only approved procedures can change critical records, and no single person should be able to both initiate and approve a transaction. This is why your accounting software won’t let the same person who creates a purchase order also approve it. Clark-Wilson is the theoretical basis for separation of duties in every financial system, every change management process, and every compliance framework.
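The moving parts fit together roughly like this. A toy sketch, assuming a hypothetical purchase-order workflow; the class and function names are invented for illustration:

```python
# Toy sketch of Clark-Wilson: the CDI changes only through a registered
# TP, the TP enforces separation of duties, and an IVP checks invariants.
class PurchaseOrder:
    """A Constrained Data Item (CDI): critical data needing protection."""
    def __init__(self, amount: float, created_by: str):
        self.amount = amount
        self.created_by = created_by
        self.approved_by = None

def tp_approve(order: PurchaseOrder, approver: str) -> None:
    """Transformation Procedure: the only sanctioned way to approve."""
    if approver == order.created_by:
        raise PermissionError("separation of duties: creator cannot approve")
    order.approved_by = approver

def ivp_consistent(order: PurchaseOrder) -> bool:
    """Integrity Verification Procedure: confirm the CDI's invariants."""
    return order.amount > 0 and order.approved_by != order.created_by

po = PurchaseOrder(500.0, created_by="alice")
tp_approve(po, approver="bob")        # OK: a different person approves
assert ivp_consistent(po)

try:
    tp_approve(PurchaseOrder(100.0, "alice"), approver="alice")
except PermissionError:
    pass                              # same person: blocked by the TP itself
```

The point of the sketch: the rule lives in the Transformation Procedure, not in a policy document, so there is no code path that lets one person do both halves of the transaction.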

Brewer-Nash — The Chinese Wall

The Brewer-Nash model (1989) addresses conflict of interest. In consulting or financial services, once you access data from one client in a conflict class (say, Company A’s merger plans), you’re automatically restricted from accessing data from competing clients (Company B, Company C) in the same class.

The access controls are dynamic — they change based on your access history. This model is the theory behind information barriers in investment banking and the ethical walls in consulting firms. The restrictions aren’t based on clearance or role — they’re based on what you’ve already seen.
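History-dependent access is straightforward to express in code. A sketch, assuming two invented conflict classes and company names:

```python
# Sketch of Brewer-Nash: once a subject touches one company's data in a
# conflict class, competitors in that same class become off-limits.
CONFLICT_CLASSES = {
    "banks": {"BankA", "BankB"},
    "oil":   {"OilCo", "PetroCo"},
}

class Subject:
    def __init__(self):
        self.history = set()   # companies whose data we've already seen

    def can_access(self, company: str) -> bool:
        for members in CONFLICT_CLASSES.values():
            if company in members:
                seen = self.history & members
                # Allowed only if we've seen nothing in this class,
                # or only this same company.
                return not seen or seen == {company}
        return True            # not in any conflict class: unrestricted

    def access(self, company: str) -> None:
        if not self.can_access(company):
            raise PermissionError(f"conflict of interest: {company}")
        self.history.add(company)

analyst = Subject()
analyst.access("BankA")                  # first access in the class: fine
assert not analyst.can_access("BankB")   # competitor is now blocked
assert analyst.can_access("OilCo")       # other classes are unaffected
```

Unlike the earlier models, nothing here depends on a clearance label: the decision is a function of the subject’s access history.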

Harrison-Ruzzo-Ullman — Can You Prove It’s Secure?

The HRU model (1976) asks a more fundamental question: given an access control system, can you prove that a particular right will never be granted? Their answer was sobering: in the general case, the safety problem is undecidable. You literally cannot prove, in finite time, that an arbitrary access control system won’t eventually grant unauthorized access. The implication: the simpler the access control system, the more tractable it is to prove safe. HRU showed that safety becomes decidable for suitably restricted (mono-operational) systems even though the general case is not. Economy of mechanism isn’t just good practice; it’s mathematically justified.

How It Gets Exploited

Covert Channels — Information That Escapes the Model

Bell-LaPadula’s formal model was designed to control overt information flow. But information can leak through covert channels — side effects that the model doesn’t account for. A high-clearance process that takes longer to respond when certain conditions are true is leaking one bit of information per timing variation. MITRE ATT&CK T1029 documents scheduled data transfer techniques, and covert channels exploit the gap between what the formal model controls and what the real system actually does.
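A toy illustration of how one bit per request can cross a boundary through timing alone; the delays, threshold, and secret are all invented for illustration:

```python
# Toy timing covert channel: the high side's overt response is identical
# either way, but its response time encodes one secret bit per request.
import time

SECRET_BITS = [1, 0, 1, 1]   # what the high side wants to smuggle out

def high_side_respond(bit: int) -> None:
    # Policy-compliant output, policy-violating timing.
    time.sleep(0.05 if bit else 0.005)

def low_side_observe(bit: int) -> int:
    start = time.monotonic()
    high_side_respond(bit)   # the only interaction: an authorized request
    elapsed = time.monotonic() - start
    return 1 if elapsed > 0.02 else 0

leaked = [low_side_observe(b) for b in SECRET_BITS]
assert leaked == SECRET_BITS   # the secret crossed without any "read"
```

No rule in Bell-LaPadula is violated here: the low side never reads a high object. That gap between the formal model and the physical system is exactly what makes covert channels hard to close.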

Aggregation and Inference

Individual pieces of unclassified information can be combined to infer classified information. A database of ship movements (Unclassified) combined with crew reassignment records (Unclassified) might reveal a fleet deployment (Secret). Bell-LaPadula can’t prevent this because each individual access is authorized. This is the aggregation problem, and it’s one reason modern data classification is so difficult.

Bypassing Separation of Duties

Clark-Wilson assumes that separation of duties is enforced. In practice, people find workarounds. Shared accounts, emergency access procedures that become permanent, developers with production database access “for debugging.” Every workaround is a violation of the model, and every violation is a potential path to fraud or compromise.

What You Can Do

Understand the models to understand your tools. When you configure SELinux policies, you’re implementing Bell-LaPadula concepts. When you set up separation of duties in your CI/CD pipeline, you’re implementing Clark-Wilson. Knowing the theory makes you better at the practice.

Default to mandatory controls where possible. Discretionary access control (DAC) lets people override security decisions. Mandatory access control (MAC) doesn’t. NIST SP 800-162 provides guidance on attribute-based access control, which builds on these foundations.

Enforce separation of duties in code, not policy. If your separation of duties exists only in a policy document, it will be violated. Build it into the system — the person who writes the code shouldn’t be the person who approves the merge, and the system should enforce that, not a checklist.

Audit for covert channels. If you’re operating in a high-assurance environment, timing analysis and resource usage patterns can leak information that access controls don’t cover. This is an advanced concern, but it’s exactly what these models were designed to surface.

Sources & Further Reading