
Verifying Rules with LogicMaps
Proof
One of the great benefits of formal mathematics and formal logic is that they allow us to prove things. If a result is formally proven, it has to be true, no matter what. There is no second guessing, no seeking a second opinion. Once something has been proved, opinions are irrelevant. There are vast areas of the law where this kind of certainty is unavailable, such as argumentation, default reasoning, analogical reasoning, case law and precedent application, the original intent of the founders, the public interest, degrees of evidence, and reasonableness of doubt. That's why we have judges and juries, and notions like 'preponderance of the evidence' and 'beyond a reasonable doubt.' There is no notion of proof here.
But legal rules themselves, once written down and enacted, in the case of public statutes and regulations, or written down and signed by private parties agreeing to contracts, are meant to prescribe behavior with as much logical precision as possible. That is their purpose. Their consistent enforcement depends on eliminating as much interpretation and wiggle room as can be achieved. They aim to be comprehensive and exhaustive. And because of this, we can prove what their consequences are — what sets of facts entail the positive or negative application of the rules, as a matter of logic. If you are a logician, you have access to these techniques, so you can examine alleged proofs to determine whether they are actual proofs. But humans make mistakes. Before the advent of computers and mechanical proof checkers, complicated proofs of surprising theorems could sit in the literature for years, waiting for enough colleagues to check the proof and report favorably. One had to trust the collective opinions of experts.
Mechanical proof checkers changed all of that. Once the proof is formalized, it can be checked by machine. Could the software that implements the checker have a bug, leading to an incorrect result? Yes, possibly, so we construct a much simpler proof that the checking software is correct, and check it with an even simpler proof checker, and so on. The possibility of a series of bugs in each of the sequence of checkers fortuitously lining up just right to invalidate the checkers of checkers is vanishingly small. So as a practical matter, we can trust mechanical deduction.
Fortunately for the legal profession, the proofs involved in deriving consequences from rules are not very complicated. We do not expect mechanical deduction to resolve surprising results that human experts can’t agree on. But it can be very useful in deriving consequences from existing or proposed rules that were not intended by the rules’ authors. LogicMaps are an external realization of this kind of mechanical deduction — one of the most efficient methods known for deciding the truth of a rule’s conclusion from its conditional premisses. When you walk the LogicMap from the top, you are executing the mechanical deduction. Your human discretion comes into play in deciding the truth of each of the propositional nodes, but you have no wiggle room in deciding how their combined truth values influence the conclusion. That’s the mechanical part. You become a cog in a very elegant logical machine.
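The mechanical part of walking a LogicMap can be pictured as a short procedure. Below is a minimal Python sketch using a toy tree encoding of my own (not the LogicMaps format itself): each inner node carries a yes/no question, terminal nodes carry the verdict. The human supplies the answers to the questions; the walk itself leaves no discretion.

```python
def walk(node, answers):
    """Follow yes/no answers from the root node to a terminal verdict."""
    while not isinstance(node, bool):          # terminal nodes are True/False
        question, yes_branch, no_branch = node
        node = yes_branch if answers[question] else no_branch
    return node

# A toy two-condition rule: the conclusion holds only if A and B are both true.
rule = ("A", ("B", True, False), False)

print(walk(rule, {"A": True, "B": True}))   # True  -- the conclusion follows
print(walk(rule, {"A": True, "B": False}))  # False -- the conclusion is blocked
```

Once the answers at the nodes are fixed, the verdict is forced; that is the sense in which the walker becomes a cog in the machine.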
Correctness
Because of this built-in mechanical deduction, LogicMaps can be used to verify the correctness of rules. You may be wondering, what does it mean for a legal rule to be correct (or incorrect)? For scientific rules, this concept is straightforward. Scientific rules generalize the behavior of events in the physical world. They postulate invariant relationships between certain classes of phenomena. They could be wrong about this. They are just best guesses after all.
Legal rules, on the other hand, are necessarily exact representations of the jurisdictional landscape. The authorities that promulgate the legal rules thereby define what is allowed and prohibited in their bailiwick. They have the political power to do this. These are the consequences of our rules because we say so! They are definitions of rights and obligations. Well, before the laws are drafted into legal form and enacted, the promulgators have some notion of what ought to be allowed and prohibited by their new rules. They imagine scenarios that they want to sanction and scenarios they want to prevent by their handiwork. The lawmakers, however, are forced to think abstractly when they write laws, attempting to generalize all of the permitted and prohibited behaviors from a few examples. They have trouble at the outset comprehending all of the possible consequences that follow from the rules they propose. Add in the inherent ambiguity of natural language, and you create an additional vehicle for unintended consequences. So legislators get it (somewhat) wrong on the first draft all of the time. That's why we have amendments and appellate courts.
A similar scenario transpires in the private sector when business leaders consult with their legal teams to draw up new contracts. They are not promulgating, but they are defining the rights and obligations that they expect to govern the relationships between themselves and their clients or partners. They jointly imagine a set of consequences that they intend to be covered by the contract rules.
The notion of correctness here is a self-reflective form of correctness. Do my rules mean what I intend them to mean? The problem is that the jurisdictional landscape is inside the rule author’s head, represented by a few examples. The proposed rules will define a much broader landscape out in the world. How does the author know a priori that the rules are allowing or prohibiting only the cases they intend to allow or prohibit and that they are allowing or prohibiting all of the target cases they intend? In formal logic, these two properties of a system are called soundness and completeness. They are hard to achieve. It is rare for commercial software to achieve either, but legal rules have a chance because their total sets of consequences are comfortably finite. They can be enumerated and organized into LogicMaps. Then one can assess the correctness of the rules by walking the LogicMaps from the top, verifying that every path is an intended path (soundness), and that no intended path is missing (completeness).
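The soundness and completeness checks described above can be sketched concretely. Assuming a toy encoding of a LogicMap as nested tuples of the form (question, yes-branch, no-branch), with boolean terminals (an illustration of my own, not the LogicMaps format), we can enumerate every root-to-terminal path and compare the accepting paths against the author's intended cases:

```python
def paths(node, prefix=()):
    """Yield (assignments, verdict) for every root-to-terminal path."""
    if isinstance(node, bool):
        yield dict(prefix), node
        return
    question, yes_branch, no_branch = node
    yield from paths(yes_branch, prefix + ((question, True),))
    yield from paths(no_branch, prefix + ((question, False),))

# A toy rule: the conclusion holds only if A and B are both true.
rule = ("A", ("B", True, False), False)
accepted = [a for a, verdict in paths(rule) if verdict]

# The author's intended cases, stated as fact patterns.
intended = [{"A": True, "B": True}]
sound    = all(a in intended for a in accepted)   # no unintended path accepted
complete = all(a in accepted for a in intended)   # no intended path missing
print(sound, complete)  # True True
```

The enumeration is feasible precisely because, as noted above, the total set of consequences of a legal rule is comfortably finite.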
Consider, for instance, this opening subsection of the British Nationality Act. It was first written in 1981, and later amended in 2022.
(1) A person born in the United Kingdom after commencement [F1, or in a qualifying territory on or after the appointed day,] shall be a British citizen if at the time of the birth his father or mother is—
(a) a British citizen; or
(b) settled in the United Kingdom [F2or that territory].
This rule seems simple, straightforward, and easily understood. So easily understood, one might argue, that we don’t need a LogicMap to understand its consequences. But let’s view its LogicMap anyway.
[LogicMap: Sufficient Conditions for: The person is a British citizen]
The LogicMap is typical of legal rules in that it asserts sufficient conditions for its conclusion. If there is any path conforming to a person’s situation that leads to the positive terminal node, the person qualifies for citizenship.
If this LogicMap had been available in 2022, when the “Qualifying Territory” amendments were made to the BNA, it would have revealed an oversight which makes the subsection incomplete. Suppose that the parents were not British citizens at the time of birth, but were officially settled in a qualifying territory as the drafters intended, and just happened to be visiting the UK when the baby was born prematurely. Now let’s walk the LogicMap from the top. We take the ‘yes’ path out of the first node because the person was born in the UK. We are then forced by the location of birth to take the ‘no’ paths out of the next six nodes, ending without citizenship. There is no path to citizenship if you were born in the UK and your non-citizen parents were settled in one of the territories at the time. Surely this was not intended. A walk down the paths of the LogicMap by the drafters would have revealed this oversight. The problem is that the amended 1(1) says the non-citizen parents had to be settled in “that” territory, making a backward reference to the place (a particular territory) in which the child had to be born. It’s unlikely that anyone forfeited their citizenship due to this oversight, but the offending words are still in the Act. The problem is not apparent in the rule itself — only in the LogicMap.
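The oversight can be made concrete with a small boolean model. The encoding below is hypothetical (the function name, parameters, and simplifications are mine, not the Act's); it reads "that territory", as drafted, as the territory of birth:

```python
def citizen(born_in_uk, born_in_territory,
            parent_citizen, parent_settled_uk, parent_settled_territory):
    """A toy boolean reading of amended section 1(1) of the BNA."""
    birth_qualifies = born_in_uk or born_in_territory
    parent_qualifies = (
        parent_citizen
        or parent_settled_uk
        # "settled in ... that territory" only helps if the birth was there:
        or (parent_settled_territory and born_in_territory)
    )
    return birth_qualifies and parent_qualifies

# The premature-birth scenario: parents settled in a qualifying territory,
# baby born while the parents were visiting the UK.
print(citizen(born_in_uk=True, born_in_territory=False,
              parent_citizen=False, parent_settled_uk=False,
              parent_settled_territory=True))  # False -- no path to citizenship
```

Had the drafters written "the United Kingdom or a qualifying territory" rather than "that territory", the same scenario would evaluate to citizenship.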
Trust
A common argument against expressing legal rules in some formalism more precise or more executable than natural language is that it introduces an element of unverifiable trust. Traditional legal professionals would typically not be able to verify that their natural language rules are the same as those in the new, computable language. They would have to trust this issue to an expert in the new formalism.
This overlooks the fact that this issue of unverifiable trust already exists in the typical natural-language-only processing of rules. The members of a rule-drafting team often have different levels of legal expertise. When the legislator or business leader offers up ideas about the intended consequences of a proposed set of rules — what should be prohibited, what should be allowed — the drafter (a lawyer) typically puts out a first cut of the rules in legalese. Already we have an issue of unverifiable trust. How does the subject matter author know that the legal draft faithfully conveys the intended meaning? The author’s informal notes and the legal draft are effectively in two different languages. To reach a meeting of the minds, each professional has to trust the other with something they can't themselves verify.
But when we convert the rules into LogicMaps, the pending regulations are expressed in a language that all team members can easily understand. Is this what was intended? The drafter could have been wrong about what the author intended. If so, this will become apparent in the LogicMaps. The team can then discuss and revise and regenerate until they have a set of consequences that accomplishes the subject matter author’s intent.
In the best of cases, this will not necessarily be the author’s original intent, but the revised intent. The process of going from concrete examples, to generalization via legal rules, to derivation of the new sets of consequences airs out the original intent and fills in all of the unanticipated consequences, leading to better rules coming out of the gate to begin with. Both trust and verification become anchored in a common artifact — the LogicMaps.
Verifying Revisions
The above scenario addresses verification of correctness when rules are being created. Another, somewhat different, venue for verification occurs when rules are being revised or refined. Revision occurs when amendments are proposed, and refinement when new regulations are created to flesh out the meaning, intent, or application of terms in an existing statute. The private sector analog of this is contract renegotiation. We have two natural language rule documents, but the newer one intends to preserve the desirable aspects of the older one, or at least not break its existing consequences inadvertently. As software developers can tell you, this situation is fraught with the possibility of undesirable side-effects. One has to contend not just with the standalone meanings of the old and the new, but with the possible side-effects of merging the old with the new. Since the authors are different, neither author fully anticipated the intended consequences of the other.
Software developers have no easy solution for this. They combine the two programs, then run regression tests by comparing the known outputs of the original against the outputs of the combined — on the same input. They expect differences in the output, but only desirable or intended differences. This finds some bugs, but the verification is only as good as the coverage of the original test set, which is usually far from complete because the consequences of computer programs are typically infinite.
One (very beneficial) way to look at LogicMaps is as the definitive regression test sets for rules. The LogicMaps of the old and new rules enumerate all of the consequences of each, so comparing the two LogicMaps for differences verifies the net difference in consequences.
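Under the assumption that a rule's fact space is finite, this comparison can be sketched as a brute-force diff. The two rules below are toy stand-ins of my own, not real statutes; the point is only the shape of the check:

```python
from itertools import product

def old_rule(a, b, c):
    return a and b            # the original conditions

def new_rule(a, b, c):
    return a and (b or c)     # the amended conditions

# Enumerate every combination of facts and keep those where the verdict changed.
diffs = [
    facts
    for facts in product([True, False], repeat=3)
    if old_rule(*facts) != new_rule(*facts)
]
print(diffs)  # [(True, False, True)] -- the one fact pattern the amendment changed
```

Unlike a software regression suite, whose coverage is limited by its test set, this diff is exhaustive: every changed consequence appears in the output, and the team can check each one against the amendment's intent.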
Trust but Verify
The prior two verification contexts assume that all parties to the negotiation are on the same side, so a meeting of the minds is a common goal. In adversarial lawmaking, where teams from different political parties negotiate the markup of “bipartisan” legislation, and in most cases of private sector contract negotiation, where the eventual signers are represented by opposing counsel, the parties are on opposite sides with likely competing interests. Here the obscurity of legalese can be used to great benefit by one side when opposing counsel does not have the same appreciation for the total consequences of a proposed new clause, or misses the import of a new clause hidden in the fine print. The possible partisan advantage here comes from one side not foreseeing consequences that are adverse to their interests.
Let’s just say that in these adversarial situations, verification is better than trust, and verification via LogicMaps is better than verification by reading text.