Fixpoint -- Brief ☧
Q: A refiner sits before his crucible, watching molten silver. He heats
it, skims off the dross, heats it again, skims again. How does he know
when to stop?
A: Tradition says he stops when he can see his own reflection in the
molten silver. No more dross rises to the surface. The silver is pure.
Q: OK, so "stopping when nothing changes" -- does that idea show up
in computing?
A: It does, and it has a name: fixpoint. You keep applying the
same operation, and eventually the result stops changing. You apply the
constraints, the domains narrow. You apply them again -- same result.
That stable state is the fixpoint.
Q: Can you show me that with actual numbers?
A: Let's try. We have three variables -- X, Y, and Z -- and one
constraint: X + Y = Z. Each variable starts with the domain {1, 2, 3}.
Round 0 (start):
    X: {1,2,3}   Y: {1,2,3}   Z: {1,2,3}

Round 1 (propagate X+Y=Z; this propagator narrows Z using the possible
sums of X and Y): The smallest possible sum is 1+1=2, so Z can't be 1.
The largest possible sum is 3+3=6, but Z's domain only goes up to 3, so
the sums 4 through 6 fall outside it anyway.
    X: {1,2,3}   Y: {1,2,3}   Z: {2,3}   -- Z narrowed

Round 2 (propagate again): The possible sums of X and Y still range
from 2 to 6, and intersecting {2,3} with that range leaves {2,3}.
Nothing new to eliminate.
    X: {1,2,3}   Y: {1,2,3}   Z: {2,3}   -- no change!

FIXPOINT REACHED. The refiner sees his reflection.
Q: But why does it always stop? Couldn't it keep going forever?
A: No, and here's the key insight: domains can only shrink, never grow.
Once a value is eliminated, it never comes back. A finite set that can only
get smaller must eventually stop getting smaller -- just like dross, once
removed, does not return to the silver. Mathematically, this is guaranteed
by the fact that our domains form a finite lattice.
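The rounds above, together with the shrink-only termination argument, can be sketched in a few lines of Python. This is a toy software model, not the FPGA path; the propagator here narrows Z only, matching the rounds shown, and the `bound` variable makes the "finite set can only shrink finitely often" guarantee explicit:

```python
# Toy model of the X + Y = Z rounds: the propagator intersects Z with
# the set of sums reachable from X and Y. Domains only ever shrink, so
# the loop can run at most (total values) + 1 passes before stabilizing.

def propagate_sum(dx, dy, dz):
    """One pass: intersect Z with every sum an (x, y) pair can produce."""
    return dx, dy, dz & {x + y for x in dx for y in dy}

dx, dy, dz = {1, 2, 3}, {1, 2, 3}, {1, 2, 3}
bound = len(dx) + len(dy) + len(dz) + 1   # termination bound
passes = 0
while True:
    passes += 1
    new = propagate_sum(dx, dy, dz)
    if new == (dx, dy, dz):
        break                             # no change: fixpoint reached
    dx, dy, dz = new

print(dx, dy, dz, passes)                 # Z ends as {2, 3} after 2 passes
```

The second pass does no work; its only job is to confirm that nothing changed, which is exactly how the refiner "sees his reflection".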
Single-Pass vs Multi-Pass
Now that you understand what a fixpoint is, the natural question is: how many
rounds of propagation does it take to get there? The answer depends on how the
constraints relate to each other.
| Scenario | Passes Needed | Why |
|---|---|---|
| Independent constraints | 1 | No propagator triggers another |
| Chain of constraints | N | A narrows B narrows C narrows ... |
| Pure AND (our FPGA) | 1 | All constraints applied simultaneously |
The key insight from this table is that our FPGA occupies the best-case
row. Because our hardware applies constraints as simple AND operations on
bitmask domains, and because AND is both commutative (A AND B = B AND A) and
associative ((A AND B) AND C = A AND (B AND C)), the order in which we apply
constraints does not matter. We can fire all of them at once, in parallel,
and the result is the same as applying them in any sequence. That means the
iteration count is typically just one -- a single pass through the
constraints reaches fixpoint immediately. The refiner sees his reflection
on the very first glance.
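The order-independence claim is easy to check in miniature. Below, domains are bitmasks and each constraint is an AND mask (the specific bit patterns are made up for illustration, not taken from the hardware); every application order, including one combined parallel AND, lands on the same narrowed domain:

```python
# AND is commutative and associative, so constraint order cannot matter:
# every permutation of the masks, and the all-at-once combination,
# produce the identical fixpoint.
from itertools import permutations

domain = 0b11111111                            # values 0..7 all possible
masks = [0b11110000, 0b10111111, 0b11111100]   # three AND constraints

results = set()
for order in permutations(masks):
    d = domain
    for m in order:
        d &= m                                 # apply one constraint
    results.add(d)

all_at_once = domain & masks[0] & masks[1] & masks[2]   # "fire in parallel"
assert results == {all_at_once}                # every order agrees
```

Because every order agrees, there is nothing for a second pass to discover: one sweep through the constraints is already the fixpoint.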
When constraints form a chain (X constrains Y, Y constrains Z, and so on),
multiple passes become necessary because narrowing X might reveal new
information about Y, which in turn reveals something about Z. Our FPGA handles
this through the M6B scheduler, which can re-issue intersection commands in
subsequent batches. But for the common case of domain-level AND constraints,
one pass is all you need.
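To see why a chain genuinely needs extra passes, here is a toy set-based model (again software, not the M6B scheduler itself) of the chain X < Y < Z over {1,2,3}. The first pass narrows Y and Z, which exposes a new bound on X that only a second pass can act on:

```python
def lt(d, a, b):
    """Prune the constraint a < b: keep only values with a partner."""
    d[a] = {x for x in d[a] if any(x < y for y in d[b])}
    d[b] = {y for y in d[b] if any(x < y for x in d[a])}

doms = {"X": {1, 2, 3}, "Y": {1, 2, 3}, "Z": {1, 2, 3}}
passes = 0
while True:
    passes += 1
    before = {v: frozenset(s) for v, s in doms.items()}
    lt(doms, "X", "Y")                 # pass order: down the chain
    lt(doms, "Y", "Z")
    if {v: frozenset(s) for v, s in doms.items()} == before:
        break                          # fixpoint

assert doms == {"X": {1}, "Y": {2}, "Z": {3}}
assert passes > 1                      # a chain needs more than one pass
```

Pass one leaves X as {1,2}; only after Y has collapsed to {2} can pass two shrink X to {1}, and pass three merely confirms the fixpoint. This cascading rediscovery is what the scheduler's re-issued intersection batches are for.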
Soli Deo Gloria