5.1 Neural Networks

Neurons, layers, forward pass, MLP structure. The vine and branch metaphor.


Neural Networks — Brief ☧

"The hearing ear, and the seeing eye, the LORD hath made even both of them."

— Proverbs 20:12 (KJV)



Q: Imagine you receive a photo and want to decide: is this a cat or a dog?

You might notice ear shape, nose size, fur length — dozens of clues.

Each clue matters a different amount. How would you combine them?

A: You could give each clue an importance score — a weight — then

add up (weight times clue) for every clue, plus a starting nudge called a

bias. That sum becomes your confidence. In code:

output = activation(w1*x1 + w2*x2 + ... + bias).

That one calculation is what a single neuron does.
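In Python, that single-neuron calculation can be sketched like this (sigmoid is used as an example activation; the function and variable names are illustrative, not from any particular library):

```python
import math

def sigmoid(z):
    # Squash any real number into (0, 1) — a confidence score.
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # output = activation(w1*x1 + w2*x2 + ... + bias)
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return sigmoid(z)

# Two clues (say, ear shape and nose size) with different importance:
confidence = neuron(inputs=[0.8, 0.3], weights=[2.0, -1.0], bias=0.5)
```

With these made-up numbers the weighted sum is 2.0*0.8 - 1.0*0.3 + 0.5 = 1.8, and sigmoid turns it into a confidence of about 0.86.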

Q: One neuron handles one combination of clues. But recognizing a face

needs thousands of combinations — edges, then shapes, then features.

What if you lined up many neurons side by side?

A: A row of neurons working in parallel is called a layer. Stack

several layers and you get a deep neural network — each layer builds

more abstract features from the layer before it. This is how the network

goes from raw pixels to "that's a golden retriever."
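A minimal sketch of a layer and a stack of layers, assuming sigmoid activations throughout (real code would use a library such as NumPy or PyTorch; all names and numbers here are illustrative):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    # One output per neuron: each row of weights is one neuron's clues.
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weight_rows, biases)]

def forward(x, layers):
    # The forward pass: push the signal through each layer in turn.
    for weight_rows, biases in layers:
        x = layer(x, weight_rows, biases)
    return x

# 2 inputs -> hidden layer of 3 neurons -> 1 output neuron.
net = [
    ([[0.5, -0.2], [0.1, 0.9], [-0.4, 0.3]], [0.0, 0.1, -0.1]),
    ([[1.0, -1.0, 0.5]], [0.2]),
]
prediction = forward([0.7, 0.2], net)
```

Each tuple in `net` is one layer; stacking more tuples makes the network deeper.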

Q: Jesus said, "I am the vine; ye are the branches" (John 15:5).

How does a vine capture this idea?

A: Nutrients (signals) flow from the root through the vine (layers),

into each branch (neuron), and finally produce fruit (output). Each branch

receives a weighted mixture of what flows through it — just like each

neuron receives weighted inputs from the previous layer.

The Structure

| Component       | What It Does                               | Vine Analogy                       |
|-----------------|--------------------------------------------|------------------------------------|
| Input layer     | Receives raw data (pixels, numbers)        | Soil and water entering the roots  |
| Hidden layer(s) | Transforms features step by step           | Inner branches processing nutrients|
| Output layer    | Produces the final prediction              | The fruit                          |
| Weight          | Scales a connection's strength             | How much nutrient a branch draws   |
| Bias            | Shifts the activation threshold            | Baseline growth potential          |
| Activation      | Introduces nonlinearity (bends and curves) | The branch deciding whether to grow|

The takeaway from this table is that a neural network is built from just a few repeating building blocks. Each neuron does the same simple arithmetic — multiply inputs by weights, add a bias, and pass through an activation function — but when you wire thousands of these tiny units together, the collective behavior can be astonishingly complex. It is the same principle that makes a vine remarkable: each individual cell follows simple biological rules, yet the whole organism can climb walls, find sunlight, and produce fruit.

Input      Hidden      Output
(soil)    (branches)   (fruit)

 x1 ──┬──→ h1 ──┬──→ y1
      ╳         ╳
 x2 ──┴──→ h2 ──┴──→ y2

Every connection carries a weight, and every neuron applies an activation

function. The network learns by adjusting these weights through

iteration — repeating the forward pass

and correction cycle thousands of times until predictions improve.
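One common way to implement that correction cycle is gradient descent. The sketch below trains a single sigmoid neuron on the logical AND function by repeatedly running a forward pass and nudging the weights against the error (squared-error gradients; the learning rate and epoch count are illustrative choices, not values from this document):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Inputs and targets for logical AND: output 1 only when both clues fire.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b, lr = [0.0, 0.0], 0.0, 1.0

for _ in range(2000):                              # iteration
    for x, target in data:
        y = sigmoid(w[0]*x[0] + w[1]*x[1] + b)     # forward pass
        err = y - target                           # how wrong were we?
        # Correction: nudge each weight against its squared-error gradient.
        grad = err * y * (1 - y)
        w[0] -= lr * grad * x[0]
        w[1] -= lr * grad * x[1]
        b    -= lr * grad
```

After enough cycles the neuron's predictions land on the right side of 0.5 for every input — the same loop, scaled up to millions of weights, is how deep networks learn.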

Conceptually, a neural network is an algorithm

that transforms an input array of numbers

through successive layers, much like data flowing through a

graph of connected nodes. The depth of this graph — how many layers the signal passes through — determines how abstract the features can become. Early layers might detect edges in an image; middle layers combine edges into shapes; and the final layer recognizes those shapes as "a golden retriever." This hierarchical feature extraction is what makes deep learning so powerful for tasks like vision and language understanding.

Connection to our project: Our differentiable_chirho.py uses neural-style

forward passes over soft domains — weights determine how strongly each

logic relation contributes. When the FPGA propagates constraints through the domain hierarchy, it is performing the same kind of layered transformation: Level 2 summary bits feed into Level 1, which expands into the full Level 0 domain. Each level refines the signal, just as each layer in a neural network refines raw input into a meaningful prediction. The difference is that our "neurons" operate on single bits instead of floating-point numbers, and our "activation function" is a simple AND gate that prunes impossible values.
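As a hypothetical illustration of that bit-level analogy (the 8-value domain and every name below are invented for this sketch, not taken from differentiable_chirho.py):

```python
def propagate(domain_bits, constraint_masks):
    # Like a forward pass through layers: each level's constraint mask
    # ANDs away values that are no longer possible.
    for mask in constraint_masks:
        domain_bits &= mask
    return domain_bits

full_domain = 0b11111111   # Level 0: all eight values still possible
level2_mask = 0b11110000   # coarse summary constraint
level1_mask = 0b11001100   # finer constraint
remaining = propagate(full_domain, [level2_mask, level1_mask])
# Only the values whose bits survive every AND remain possible.
```

Where a floating-point neuron blends its inputs, this bitwise "neuron" can only keep or kill each value — which is exactly the pruning behavior described above.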



Soli Deo Gloria
