6.9

Tensor Networks

Tensor contraction, quantum circuit equivalence, graphical notation.

Tensor Networks — Brief ☧

"In whom are hid all the treasures of wisdom and knowledge."

— Colossians 2:3 (KJV)



Q: You are probably familiar with a simple spreadsheet — rows and columns, a 2D grid. Now imagine a 3D spreadsheet: rows, columns, and layers. Each cell is addressed by three coordinates. A regular array can be 1D (a list), 2D (a table), or 3D (a cube). What do we call these higher-dimensional arrays in general?

A: A tensor. A 1D tensor is a vector, a 2D tensor is a matrix, and a 3D tensor is a cube of numbers — but tensors can have any number of dimensions (called "indices" or "modes"). The key idea is simple: a tensor is just a multi-dimensional array of values.

In our project, each relation (like appendo(L, S, Out)) is a 3D Boolean tensor: for every possible combination of L, S, and Out, the tensor stores 1 (true) or 0 (false).

appendo(L, S, Out) = 3D sparse Boolean tensor
membero(X, List)   = 2D sparse Boolean tensor

Query: appendo(A, B, X), membero(X, [1,2,3])
  = Contract along X: sum over X of appendo[A,B,X] AND membero[X,[1,2,3]]
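In NumPy terms, this contraction can be sketched as follows. The tiny 3-element domains and the specific true entries are made up for illustration; they are not the real appendo and membero relations:

```python
import numpy as np

# Illustrative 3-element domains for A, B, and X; the particular true
# entries below are invented, not the real appendo/membero relations.
appendo = np.zeros((3, 3, 3), dtype=bool)   # appendo[A, B, X]
membero = np.zeros(3, dtype=bool)           # membero[X] for the fixed list

appendo[0, 1, 2] = True
appendo[1, 0, 2] = True
appendo[2, 2, 0] = True
membero[2] = True

# Contract along the shared index X in the Boolean semiring:
# result[A, B] = OR over X of (appendo[A, B, X] AND membero[X])
result = np.any(appendo & membero[None, None, :], axis=2)

print(np.argwhere(result))   # the (A, B) pairs satisfying both goals
```

Note that the shared index X is gone from `result`: it was consumed by the contraction, exactly as the text describes.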

Q: What does "contract along X" mean? That phrase came up quickly.

A: Good catch — let us build up to it. Imagine two tables that share a column. In a database, you would join them on that column. Tensor contraction is the same idea: you have two tensors that share an index (like X above), and you combine them by summing (or AND-ing) over all values of that shared index. The shared index disappears from the result, just like the join column gets "consumed" in a database join.
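The join analogy can be made literal in a few lines. The toy relations below are invented for illustration; the point is that a set-comprehension join and a Boolean contraction over the shared index produce identical answers:

```python
import numpy as np

# Toy relations, invented for illustration.  R(a, x) and S(x, b)
# share the column / index x.
r = {(0, 2), (1, 2), (2, 0)}
s = {(2, 1), (0, 1)}

# Database-style join on the shared column x (x is consumed):
join = {(a, b) for (a, x1) in r for (x2, b) in s if x1 == x2}

# The same computation as a Boolean tensor contraction over index x:
R = np.zeros((3, 3), dtype=bool)
S = np.zeros((3, 3), dtype=bool)
for a, x in r:
    R[a, x] = True
for x, b in s:
    S[x, b] = True
# T[a, b] = OR over x of (R[a, x] AND S[x, b])
T = np.any(R[:, :, None] & S[None, :, :], axis=1)

assert join == {tuple(p) for p in np.argwhere(T)}
```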

A tensor network is just a recipe for doing multiple contractions — combining several tensors step by step. It is like a multi-table join plan in a database query.

Q: Does it matter what order you do the contractions in?

A: Enormously. The wrong order can create massive intermediate results. Imagine joining three tables: if you join the two biggest first, you get a huge intermediate table. If you start with the smallest pair, the intermediate stays manageable. Finding the optimal contraction order is actually NP-hard (same difficulty as computing treewidth), so we use greedy heuristics — the same ones used in quantum physics simulations. The time complexity of the overall computation depends heavily on getting this order right.
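A minimal version of such a greedy heuristic might look like this. The function name, data layout, and cost model are assumptions made for illustration; they are not the actual interface of contraction_order_chirho.py:

```python
import itertools

def greedy_contraction_order(tensors, dims):
    """Greedy contraction-order heuristic (an illustrative sketch, not
    the actual contraction_order_chirho.py interface).

    tensors: dict mapping tensor name -> set of index labels
    dims:    dict mapping index label -> domain size
    Returns a list of (left, right, result_name) contraction steps.
    """
    tensors = {name: set(ix) for name, ix in tensors.items()}
    steps = []
    while len(tensors) > 1:
        best = None
        for a, b in itertools.combinations(tensors, 2):
            shared = tensors[a] & tensors[b]
            # Indices still used by some other tensor must be kept.
            elsewhere = set().union(
                *(ix for name, ix in tensors.items() if name not in (a, b)))
            keep = (tensors[a] | tensors[b]) - (shared - elsewhere)
            cost = 1
            for i in keep:
                cost *= dims[i]          # size of the intermediate tensor
            if best is None or cost < best[0]:
                best = (cost, a, b, keep)
        _, a, b, keep = best
        result = f"({a}*{b})"
        del tensors[a], tensors[b]
        tensors[result] = keep
        steps.append((a, b, result))
    return steps

# Toy network: the huge 'b' and 'd' dimensions make the cheap first
# contraction (T1 with T2, which eliminates 'b') the obvious pick.
order = greedy_contraction_order(
    {"T1": {"a", "b"}, "T2": {"b", "c"}, "T3": {"c", "d"}},
    {"a": 2, "b": 100, "c": 2, "d": 100},
)
print(order[0])   # the small pair is contracted first
```

Each step picks the pair whose intermediate tensor is smallest, mirroring the "start with the smallest pair of tables" intuition above.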

Key Concepts

Concept       | Meaning                        | Our Project
--------------|--------------------------------|--------------------------------
Tensor        | Multi-dimensional array        | Relation table (sparse Boolean)
Contraction   | Sum over shared index          | Join / variable elimination
Treewidth     | Min width of contraction tree  | Query complexity
Optimal order | Best contraction sequence      | contraction_order_chirho.py

This table maps each tensor network concept to its concrete realization in our system. The critical insight is the correspondence between tensor contraction and variable elimination: when you "contract along X" in the tensor network, you are eliminating variable X from the constraint problem by computing the AND of all domains that mention X. The treewidth of the resulting contraction tree determines how expensive the query will be -- and finding the optimal contraction order (implemented in our contraction_order_chirho.py) is one of the most impactful optimizations in the entire system.

Connection to our system: Every miniKanren query compiles to a tensor network. The variables are indices; the relations are tensors; the query is a contraction. This is not a loose analogy -- it is an exact mathematical correspondence. When you write appendo(A, B, X), membero(X, [1,2,3]), the system builds two Boolean tensors (one for appendo, one for membero), identifies their shared index (X), and performs a contraction: for each combination of A and B, it ANDs the appendo slice with the membero vector, and the result has a 1 only where all constraints are simultaneously satisfied. Our FPGA performs these contractions using bitmask AND operations on hierarchical domains, turning what could be an exponential search into structured linear-algebra-like computation. The sparsity of the Boolean tensors is crucial: most entries are 0 (most combinations are invalid), so the hierarchical summary bits let the FPGA skip vast regions of the tensor without examining individual entries.
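In software terms, the skip mechanism can be sketched like this. The chunk width and the one-level summary are assumptions for illustration; the real FPGA hierarchy is not spelled out in this brief:

```python
CHUNK = 64   # chunk width is an assumed parameter, not the real design

def to_chunks(bits):
    """Pack a flat list of 0/1 values into CHUNK-bit integer words."""
    words = []
    for i in range(0, len(bits), CHUNK):
        word = 0
        for j, b in enumerate(bits[i:i + CHUNK]):
            word |= b << j
        words.append(word)
    return words

def and_with_skip(a_words, b_words):
    """AND two chunked bitmasks.  Here the per-chunk summary bit is
    simply 'word != 0': when either summary is 0, the whole chunk is
    skipped without examining its individual bits."""
    out, skipped = [], 0
    for a, b in zip(a_words, b_words):
        if a == 0 or b == 0:        # summary says the result chunk is empty
            out.append(0)
            skipped += 1
        else:
            out.append(a & b)
    return out, skipped

# Two sparse domains whose support does not overlap at the chunk level:
sparse_a = to_chunks([1] * 64 + [0] * 64)
sparse_b = to_chunks([0] * 64 + [1] * 64)
result, skipped = and_with_skip(sparse_a, sparse_b)
print(skipped)   # both chunks skipped via summary bits alone
```

On realistic sparse domains most chunk pairs fail the summary check, which is the software analogue of the hierarchical skipping described above.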

Learn more in the deep version

Related: Bitmasks | Relations


Soli Deo Gloria

Self-Check 1/1

Tensor contraction generalizes: