
AI from Scratch #1: Your Brain is a Neural Network (Literally)

You already know how neural networks work — you just don't know it yet. Every time you catch a ball, recognize a face, or learn from a mistake, your brain is doing exactly what AI does.

Raghu Mudumbai

CEO & Chief Scientist, netcausal.ai

The Ball is Coming at Your Face

Imagine someone throws a tennis ball at you. You don't pull out a calculator. You don't measure the angle, compute the velocity, or solve a physics equation. You just... catch it. Or duck.

How?

Your brain has seen thousands of objects flying through the air. Baseballs, frisbees, crumpled paper, your sibling's shoe. Every time, it adjusted a little. First throw — you missed by a foot. Second throw — you missed by six inches. By the hundredth throw, you were snagging them out of the air without thinking.

That's a neural network. That's literally what AI does. And you've been running one in your head since you were two years old.

So What Is a Neural Network, Actually?

A neural network is a system that learns by adjusting itself based on mistakes.

Your brain has about 86 billion neurons — tiny cells that pass signals to each other. When you catch a ball, a chain of neurons fires: eyes see the ball, signals race through your brain, your hand moves. If you catch it, those connections get a little stronger. If you miss, they get a little weaker and try a different path next time.

An artificial neural network works the same way, just with math instead of biology:

  • Inputs — The data coming in (like your eyes seeing the ball)
  • Connections — each connection carries a "weight," a number that says how important that signal is (like how strong the link is between two neurons)
  • Output — The answer (move hand left, move hand right, duck)
  • Feedback — Did it work? If not, adjust the weights and try again

That's it. The entire magic of AI — from ChatGPT to self-driving cars to face recognition on your phone — starts with this simple idea: try, fail, adjust, repeat.
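That whole idea fits in a few lines of Python. This is a toy sketch, not a real catching circuit — the inputs, weights, and threshold below are invented numbers, chosen just to show the shape of the computation:

```python
# One artificial neuron: weigh the inputs, add them up, fire or don't.

def neuron(inputs, weights, threshold):
    """Multiply each input by its weight, sum, and fire if it clears the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Made-up inputs: [ball is moving fast, ball is heading toward me]
inputs = [1, 1]
weights = [0.6, 0.9]   # how much each signal matters
print(neuron(inputs, weights, threshold=1.0))  # prints 1 -> "move your hand!"
```

"Learning" is nothing more than nudging those two weights until the neuron fires at the right moments.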

The "Learning" Part

Here's where it gets cool. Let's say you're training an AI to recognize cats in photos.

You show it a photo of a cat and ask: "Cat or not?" The first time, it has no idea. It basically flips a coin. Let's say it guesses "not a cat." Wrong.

So you tell it: "Wrong answer. That was a cat." The network adjusts its weights — the connections between its artificial neurons shift slightly, so next time it sees something with pointy ears and whiskers, it's a little more likely to say "cat."

Now show it a thousand cat photos. And a thousand dog photos. And a thousand photos of chairs, pizza, and random stuff. Each time it gets one wrong, it adjusts. Each time it gets one right, it strengthens those connections.

After millions of adjustments, something amazing happens: it can recognize cats it has never seen before. Not because anyone programmed "cats have pointy ears" — but because the network figured it out on its own, the same way you learned to catch a ball.
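Here's that adjust-on-mistakes loop as a minimal sketch — a classic perceptron update rather than anything a real image model uses, with a tiny invented dataset (the "features" are hand-labeled stand-ins for what a real network would extract from pixels):

```python
# A toy "cat or not" learner: one neuron adjusting its weights on every mistake.

def predict(weights, features):
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0 else 0  # 1 = "cat", 0 = "not a cat"

# Each example: ([pointy ears, whiskers, wheels], is_cat) — 1 = has it, 0 = doesn't
examples = [
    ([1, 1, 0], 1),  # pointy ears + whiskers: cat
    ([1, 0, 0], 1),  # pointy ears only: still a cat
    ([0, 0, 1], 0),  # wheels: office chair
    ([0, 1, 1], 0),  # whiskers glued to a chair: still not a cat
]

weights = [0.0, 0.0, 0.0]  # start knowing nothing
for _ in range(10):        # a few passes over the data
    for features, label in examples:
        error = label - predict(weights, features)  # +1, 0, or -1
        # Wrong answer? Nudge each weight toward the correct one.
        weights = [w + 0.1 * error * f for w, f in zip(weights, features)]

print(weights)  # "pointy ears" ends up positive, "wheels" negative
```

Nobody typed in a rule like "wheels mean not-cat" — the loop discovered it by being wrong a few times and nudging the numbers.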

Why "Deep" Learning?

You've probably heard the term "deep learning." It just means the neural network has many layers — like a relay race where each runner passes the baton to the next.

  • Layer 1 might notice edges and shapes
  • Layer 2 might combine those into features like "pointy ears" or "round eyes"
  • Layer 3 might combine features into "this looks like a face"
  • Layer 4 might decide "cat face" vs. "dog face" vs. "human face"

Each layer builds on the one before it. More layers = more ability to understand complex patterns. That's why it's called "deep" — it's not smarter, it just has more layers of pattern recognition stacked on top of each other.
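The relay-race structure is easy to see in code. In this sketch the weights are random and the "pixels" are made up, so the output is meaningless — the point is only the shape: each layer's output becomes the next layer's input.

```python
import math
import random

def layer(inputs, weights):
    """One layer: each output is a weighted sum of all inputs, squashed to 0..1."""
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

def random_weights(n_out, n_in):
    return [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]

random.seed(0)                               # fixed seed so the run is repeatable
pixels = [0.2, 0.8, 0.5, 0.1]                # made-up "image" input
h1 = layer(pixels, random_weights(3, 4))     # layer 1: edges and shapes
h2 = layer(h1, random_weights(2, 3))         # layer 2: features like "pointy ears"
out = layer(h2, random_weights(1, 2))        # layer 3: final "cat or not" score
print(out)                                   # one number between 0 and 1
```

Training a deep network means adjusting the weights in every layer at once — but the forward pass really is just this baton-passing, repeated.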

Try It Yourself

Here's a thought experiment. Imagine you've never seen a bicycle before. Someone shows you 100 photos of bicycles and 100 photos of motorcycles and tells you which is which.

What patterns would you notice? Two wheels? Pedals vs. engine? Size? Color?

Now imagine you couldn't use words — you could only assign numbers. "Two wheels = +5 for bicycle." "Has an engine = -8 for bicycle." "Weighs more than 200 pounds = -3 for bicycle."

Congratulations — you just designed the weights of a neural network. That's exactly what the math is doing, except the computer tries millions of combinations of those numbers until it finds the set that makes the fewest mistakes.
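Those hand-assigned scores translate directly into code. Using the exact numbers from the thought experiment (+5, -8, -3 — invented for illustration, not learned from data):

```python
# The bicycle-vs-motorcycle scores from the text, written as a weighted sum.
weights = {"two wheels": +5, "has engine": -8, "over 200 lbs": -3}

def classify(features):
    score = sum(weights[name] for name, present in features.items() if present)
    return "bicycle" if score > 0 else "motorcycle"

bike = {"two wheels": True, "has engine": False, "over 200 lbs": False}
moto = {"two wheels": True, "has engine": True, "over 200 lbs": True}
print(classify(bike))  # bicycle    (score = +5)
print(classify(moto))  # motorcycle (score = 5 - 8 - 3 = -6)
```

The only thing a training algorithm adds is a way to find those three numbers automatically instead of guessing them by hand.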

The Big Takeaway

Neural networks aren't magic. They're just systems that learn from mistakes — the same way you learned to walk, talk, catch a ball, and tell your best friend's voice apart from everyone else's.

The difference? Your brain does it with 86 billion neurons and a lifetime of experience. An AI does it with math, data, and a lot of electricity.

What's Next

In the next article, we'll look at how Netflix figures out what movie you want to watch before you do — using the same pattern-matching ideas, but applied to your entire viewing history. Spoiler: your taste in movies is more predictable than you think.


This is part of the AI from Scratch series — making AI and machine learning understandable for everyone, no PhD required. Follow along on Medium or at netcausal.ai/blog.

ai-from-scratch · neural-networks · beginners · machine-learning
