Unification — Brief ☧
Q: Imagine two witnesses at the temple describe the same man.
Witness A says: "His name is David, but I do not know which tribe he is from."
Witness B says: "He is from the tribe of Judah, but I did not catch his name."
Between them, do we have a complete picture?
A: We do. Each witness knows something the other does not. We can combine what they know:
Witness A: (tribe = ?, name = "David")
Witness B: (tribe = "Judah", name = ?)
Combined: (tribe = "Judah", name = "David")

The unknowns — written as ? — fill in from the other witness's knowledge. This process of combining partial information is called unification.

Q: That makes sense when they agree. But what if Witness A says "from Judah" and Witness B says "from Benjamin"?
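The combining step can be sketched in a few lines of Python, with None standing for the unknown ?. This is a minimal sketch; the name merge_testimony is illustrative, not from any library:

```python
def merge_testimony(a, b):
    """Combine two partial records; return None on contradiction."""
    combined = {}
    for key in a.keys() | b.keys():
        va, vb = a.get(key), b.get(key)
        if va is None:
            combined[key] = vb          # A's unknown fills in from B
        elif vb is None or va == vb:
            combined[key] = va          # B's unknown fills in, or they agree
        else:
            return None                 # Judah vs. Benjamin: unification fails
    return combined

witness_a = {"tribe": None, "name": "David"}
witness_b = {"tribe": "Judah", "name": None}
print(merge_testimony(witness_a, witness_b))
# {'tribe': 'Judah', 'name': 'David'} (key order may vary)
```

Returning None for failure mirrors the logic-programming view: a contradiction is not an error, just an answer of "no."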
A: Then we have a contradiction. Judah and Benjamin are not the same tribe. Unification fails — the testimonies cannot be reconciled. In logic programming, a failure is not an error; it simply means "this combination of facts does not hold."
Q: So unification is like solving a puzzle — matching up what we know and checking for contradictions?
A: Exactly. Here are the basic rules:
| Case | Example | Result |
| --- | --- | --- |
| Unknown meets a value | ? unified with "Judah" | "Judah" |
| Same value meets itself | "Judah" unified with "Judah" | "Judah" |
| Different values clash | "Judah" unified with "Benjamin" | FAIL |
| Two unknowns meet | X unified with Y | X and Y become linked — learn one, you learn both |
| Nested structures | [X, 2] unified with [1, Y] | X=1, Y=2 |
| Occurs check | X unified with [1, X] | FAIL (would need X to contain itself — infinite) |

Q: The nested case is interesting. It works on structures inside structures?
A: Yes. Unification walks through the structure piece by piece. Think of two genealogies written on scrolls — you compare them entry by entry, filling in blanks where one scroll has a name the other is missing, and stopping with failure if you find a disagreement. This is a form of recursion: the algorithm unifies each sub-part, then each sub-sub-part, and so on.
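The scroll-comparison described above — walking entry by entry, filling blanks, failing on disagreement — can be sketched as a recursive unifier in the miniKanren style. The names here (Var, walk, occurs, unify) follow common convention rather than any specific library; terms are Python atoms and lists:

```python
class Var:
    """A logic variable — an unknown '?'."""
    _count = 0
    def __init__(self):
        Var._count += 1
        self.id = Var._count
    def __repr__(self):
        return f"?{self.id}"

def walk(term, subst):
    # Follow the chain of bindings until we reach a value or an unbound Var.
    while isinstance(term, Var) and term in subst:
        term = subst[term]
    return term

def occurs(var, term, subst):
    # Occurs check: would binding var to term make var contain itself?
    term = walk(term, subst)
    if term is var:
        return True
    if isinstance(term, list):
        return any(occurs(var, t, subst) for t in term)
    return False

def unify(a, b, subst={}):  # the default dict is never mutated, only copied
    a, b = walk(a, subst), walk(b, subst)
    if a is b or a == b:
        return subst                      # same value meets itself
    if isinstance(a, Var):
        return None if occurs(a, b, subst) else {**subst, a: b}
    if isinstance(b, Var):
        return None if occurs(b, a, subst) else {**subst, b: a}
    if isinstance(a, list) and isinstance(b, list) and len(a) == len(b):
        for x, y in zip(a, b):            # recurse into the nested structure
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None                           # different values clash: FAIL
```

Each row of the table above falls out of this sketch: unify([X, 2], [1, Y]) binds X to 1 and Y to 2, unify("Judah", "Benjamin") returns None, and unify(X, [1, X]) fails the occurs check.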
Unification in Practice
Traditional unification works by walking chains of variable bindings — pointer chasing through a data structure called a substitution. Its time complexity is O(n) in the size of the terms being unified.
But there is a beautiful shortcut when the set of possible values is finite. Each unknown can be represented as a bitmask — a boolean vector where each bit position stands for one possible value:
Witness A's "tribe": 111111111111 (could be any of the 12 tribes)
Witness B's "tribe": 000000001000 (must be Judah — bit 3 is set)
AND them together: 000000001000 → Judah. Done in one instruction.
If both witnesses named specific but different tribes:
A: 000000001000 (Judah)
B: 100000000000 (Benjamin)
AND: 000000000000 → EMPTY. No bit survives. Contradiction. Fail.
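Both outcomes above can be sketched with plain Python integers as bitmasks. The tribe ordering and bit positions here are an illustrative assumption (Judah at bit 3, Benjamin at bit 11, matching the diagrams):

```python
TRIBES = ["Reuben", "Simeon", "Levi", "Judah", "Dan", "Naphtali",
          "Gad", "Asher", "Issachar", "Zebulun", "Joseph", "Benjamin"]

ANY_TRIBE = (1 << 12) - 1            # 0b111111111111: all 12 bits set

def mask_of(name):
    return 1 << TRIBES.index(name)   # one bit per possible value

def unify_masks(a, b):
    """Unification over a finite domain is a single AND."""
    result = a & b
    return result if result else None   # empty mask = contradiction

# Witness A knows nothing about the tribe; Witness B says Judah.
print(bin(unify_masks(ANY_TRIBE, mask_of("Judah"))))       # 0b1000 → Judah

# Judah vs. Benjamin: no bit survives the AND.
print(unify_masks(mask_of("Judah"), mask_of("Benjamin")))  # None
```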
This insight — that unification over finite domains reduces to a bitwise AND — is the heart of how constraint solving maps to hardware. Traditional unification walks through data structures, following pointers and comparing values one at a time. Bitmask unification replaces all of that with a single machine instruction. The savings are not just a constant factor — they change the character of the computation from a sequential, memory-bound traversal to a parallel, compute-bound operation that maps perfectly to dedicated hardware gates.
Unification = AND
Traditional unification: walk substitution chains, check equality, bind variables
→ O(N) pointer chasing
Bitmask unification: AND two bitmasks
→ O(1) single instruction
```mermaid
graph LR
    A[Domain A: 11001010] -->|AND| C[Result: 00001010]
    B[Domain B: 00101011] -->|AND| C
    C -->|empty?| D{Fail or Succeed}
    D -->|all zeros| E[FAIL: contradiction]
    D -->|has 1s| F[SUCCESS: narrowed domain]
```
Soli Deo Gloria