How Neural Networks Work (From The Brain To Artificial Intelligence)


 

To support the production of more

high-quality content,

consider supporting us on

Patreon or

YouTube membership.

Additionally, consider visiting our

parent company, EarthOne,

for sustainable living made simple!

In the last video in this series,

we discussed the differences

between deep learning and machine learning,

how and when the field of deep learning was officially born,

and its rise to mainstream popularity.

The focus of this video then will be on 

artificial neural networks, more specifically, 

their structure. 

An eagle, a fighter jet, 

while these two distinct entities 

both perform the same task, flight, 

the way they achieve it is quite different.

The fighter jet is a highly specialized 

and engineered machine designed for a very specific task

and it executes that task extremely well. 

While the eagle, a biological system, 

is arguably much more complex in certain ways, 

capable of a variety of more generalized tasks. 

This analogy draws many parallels to the difference 

between our brains and deep learning systems. 

While both are capable of pattern recognition,

the brain is an extremely complex general system

that can perform a huge variety of tasks,

while deep learning systems are designed 

to excel at very specific tasks. 

To better understand deep learning 

and keeping in line with this analogy of flight, 

let's go back to the basics, 

for once the basic principles of any system are understood 

it is much easier to understand 

the higher level applications and capabilities 

of that system.

As we've discussed in videos past, 

deep learning is derived from the field of connectionism,

a tribe of machine learning 

in which the goal is to digitally reconstruct the brain. 

Now to digitally reconstruct the brain, 

we must first digitally reconstruct 

the simplest components of the brain, neurons. 

This is an artistic representation of a neuron, 

a multipolar neuron, to be exact. 

There are three primary components to a neuron. 

One, the soma. 

This is the brain, in other words, 

the information processing center of the neuron 

composed of the cell body and nucleus.

Two, the axon. 

This is a long tail of the neuron 

that transmits information to and from the cell body. 

And three, the dendrites. 

These are branching arms from the neuron 

that connect to other neurons. 

As we discussed in a previous video 

on neuromorphic computing, 

the brain has roughly one hundred billion neurons

with over one hundred trillion synapses, 

with synapses being the connections to other neurons. 

If we are to think in an extremely reductionist perspective, 

we could consider the brain 

to be one gigantic neural network 

that is capable of far more than we even know.

Hence, it makes sense why the connectionists 

are so adamant on trying to reconstruct the brain, 

to see what emergent properties come about.

Now taking a step back and going to individual neurons, 

this is one of our very first pictures of neurons 

drawn in the late 19th century by a Spanish anatomist, 

Santiago Ramon y Cajal. 

He used a stain that could be introduced to tissue 

and then used a microscope to draw what he saw. 

Now what you see here is what we've just discussed, 

cell bodies, long tails, 

and dendrites connecting to one another. 

Now let's flip this drawing upside down 

and abstractly map the components of 

the neuron to the right side. 

First, we have the soma, 

which we will represent with a circle, 

and then the axon, 

represented by a long line coming out of the neuron, 

and finally, the dendrites, 

represented by multiple lines leading into the neuron. 

As you can see here, 

we are witnessing how the basic structure 

of a deep-learning neural net came to be. 

To begin discussion on the way that neurons work, 

you can consider the dendrites 

to be the inputs to our neuron. 

In the body, dendrites look for electrical activity 

on their ends, 

whether that be from other neurons,

sensory input, or other activity,

and send those signals on to the cell body.

The soma then takes these signals 

and begins to accumulate them, 

and based on a certain signal threshold, 

the axon is then activated, 

the output of the system. 

Essentially, in a very simplistic way, 

the information processing in a neuron 

is to just add things up, 

and based on that, 

one can correlate dendrite activity

with the level of axonal activity;

in other words,

the more dendrites that are activated,

and the more frequently they fire,

the more often the axon is activated.
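The "just add things up" behavior described here can be sketched as a simple threshold unit; the function name and the threshold value below are illustrative, not from the video:

```python
# A minimal sketch of the neuron described above: the soma accumulates
# incoming dendrite signals, and the axon activates once the running
# total crosses a threshold. Names and the threshold are illustrative.

def neuron_fires(dendrite_signals, threshold=2.0):
    """Sum the incoming signals; the axon activates (True) when the
    accumulated signal meets or exceeds the threshold."""
    total = sum(dendrite_signals)
    return total >= threshold

# Three active dendrites out of four push the soma past the threshold.
print(neuron_fires([1, 1, 1, 0]))  # True
print(neuron_fires([1, 0, 0, 0]))  # False
```

More active dendrites mean a larger accumulated signal, which is exactly the dendrite-to-axon correlation just described.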

So now that we have an abstract understanding 

of the function of a neuron, 

let's add more to our system 

and begin forming a neural network. 

As I stated earlier, 

the connection between neurons is referred to as a synapse, 

this is where the dendrites, 

the inputs of one neuron, 

are attached to the axon, 

the output, of another. 

Going back to Ramon y Cajal's first drawing of a neuron, 

you can see he saw and drew these little nubs 

on the dendrites. 

This is where the axons of other neurons 

connect to the dendrite of our current neuron. 

In terms of our abstracted drawing, 

we will represent this connection with a circular node. 

Now axons can connect to dendrites strongly, 

weakly, or anything in between. 

For now, we will use the size of the connection node 

to signify the connection strength, 

with connection strength describing how strongly

the input neuron's activity is passed on

to the output neuron's dendrite.

We will also assign this connection strength 

a value between zero and one, 

with one being very strong 

and approaching zero being weak. 

This value, as we'll expand on in future videos, 

is referred to as a connection weight. 
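As a rough sketch of these weighted connections (the specific numbers are invented for illustration), each input's activity is scaled by its connection weight before it reaches the output neuron's dendrite:

```python
# Hypothetical example of connection weights between zero and one:
# an input neuron's activity is scaled by the weight of its synapse
# before arriving at the output neuron's dendrite.

inputs = [1.0, 1.0, 0.0]   # activity of three input neurons
weights = [0.9, 0.1, 0.8]  # connection strengths: 1 = strong, near 0 = weak

# Each signal arrives scaled by its connection weight.
weighted_signals = [x * w for x, w in zip(inputs, weights)]
print(weighted_signals)  # [0.9, 0.1, 0.0]
```

The strongly connected first input dominates, the weakly connected second barely registers, and the inactive third contributes nothing, however strong its synapse.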

And as you can see, as we begin adding more neurons,

it gets interesting,

as many different input neurons can connect

to the dendrites of a single output neuron,

each with a different connection strength.

Let's now remove any unconnected dendrites, 

and also remove the nodes that we had 

to represent the connection strength, 

and simply use the thickness of the line

to represent the weight of that connection.

Now flipping this diagram horizontally, 

we can see the beginnings of modern 

deep-learning neural network architecture. 
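A minimal sketch of this layered structure, with made-up weights and a simple threshold activation (not the exact formulation a production network would use):

```python
# Rough sketch of one layer of the network just described: every input
# neuron connects to every output neuron, each connection with its own
# weight. All numbers here are invented for illustration.

def layer_forward(inputs, weight_rows, threshold=1.0):
    """Each output neuron sums its weighted inputs and activates (1)
    when that sum meets or exceeds the threshold."""
    outputs = []
    for row in weight_rows:  # one row of weights per output neuron
        total = sum(x * w for x, w in zip(inputs, row))
        outputs.append(1 if total >= threshold else 0)
    return outputs

inputs = [1, 0, 1]
weight_rows = [
    [0.9, 0.2, 0.7],  # output neuron 0: strongly tied to inputs 0 and 2
    [0.1, 0.8, 0.1],  # output neuron 1: mostly tied to input 1
]
print(layer_forward(inputs, weight_rows))  # [1, 0]
```

Only the output neuron whose strong connections line up with the active inputs fires, which is the layer-by-layer behavior the rest of this video builds on.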

Since the start of this video, 

we went from our immensely complex brains 

with trillions of connections 

and subtleties in operation and interconnectedness, 

to this simple-to-understand neural network model. 

Keep in mind, our system here is just that, 

a model, a very abstract one at that. 

Going from the brain to neural networks 

is a very reductionist process, 

and the true relationship between biological systems 

and neural networks is mostly metaphorical 

and inspirational. 

Our brains, with the limited understanding we have of them, 

are immensely complex with trillions of connections 

and many different types of neurons 

and other tissues operating in parallel, 

and not just connected in adjacent layers 

like neural networks. 

Coming back on topic, no matter the terminology 

we use to describe these networks, 

it remains true that they are still extremely useful 

in deriving representations from large amounts of data,

as we stated in the last video in this series, 

and now that we have seen how the structure 

of these networks was developed, 

we can see how this representation was built layer-by-layer. 

A way to think about output nodes 

is that they're the sum of the nodes 

that strongly activate them, 

that is, the connections with the strongest weights.

For example, let's say we have five input nodes 

that define the characters A, B, C, D, and E, 

in this case, the output node would then 

be defined by A, C, and E.
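Sticking with that letters example, and using invented weights, the output node is "defined by" whichever inputs connect to it strongly:

```python
# Sketch of the A-through-E example: five input nodes feed one output
# node, and only the strongly weighted connections define it.
# The weights are made up for illustration.

letters = ["A", "B", "C", "D", "E"]
weights = [0.9, 0.05, 0.8, 0.05, 0.95]  # connection strengths to the output node

# Keep the inputs whose connections are strong (an arbitrary 0.5 cutoff).
defining_inputs = [ch for ch, w in zip(letters, weights) if w > 0.5]
print(defining_inputs)  # ['A', 'C', 'E']
```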

Here you are witnessing the move from

a low-level representation, individual letters,

to higher levels of representation, 

encompassing words, 

and if we kept going on, sentences, and so on. 

This simplistic example is a basis

of natural language processing,

and beyond letters, this methodology translates

to any type of input:

from the pixel values of an image for image recognition,

to the audio frequencies of speech for speech recognition,

to more complex abstract inputs

such as nutritional information and medical history

to predict the likelihood of cancer, for instance.
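The point that this machinery carries over to any input can be sketched briefly: whatever the domain, the inputs arrive as numbers and flow through the same weighted sums (all values below are invented for illustration):

```python
# Illustrative sketch: pixel intensities, audio features, or medical
# measurements all reach the network as plain lists of numbers, and the
# same weighted-sum operation applies to each. Values are made up.

def weighted_sum(inputs, weights):
    """The core operation, regardless of what the numbers encode."""
    return sum(x * w for x, w in zip(inputs, weights))

pixels = [0.0, 0.5, 1.0]   # e.g. grayscale pixel intensities
audio = [0.25, 0.75, 0.5]  # e.g. normalized audio-frequency energies

# The network treats both identically: numbers flowing through weights.
shared_weights = [0.5, 0.5, 0.5]
print(weighted_sum(pixels, shared_weights))  # 0.75
print(weighted_sum(audio, shared_weights))   # 0.75
```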

Now before we get ahead of ourselves 

and escalate to the higher level predictive abilities 

of the more complex, abstract applications 

of deep-learning systems in the next 

set of videos in this series, 

we will go through a comprehensive example 

which will introduce many new terms and concepts 

in an intuitive way to help you understand 

how neural networks work. 

However, this doesn't mean you have to wait to learn more. 

If you want to learn more about deep learning,

and I mean really learn about the field, 

from how these artificial learning algorithms 

were inspired from the brain 

to their foundational building blocks, the perceptron, 

scaling up to multi-layer networks, 

different types of networks, 

such as convolutional networks, 

recurrent networks and much more, 

then brilliant.org is a place for you to go. 

In a world where automation through algorithms 

will increasingly replace more jobs, 

it is up to us as individuals to keep our brains sharp

and think of creative solutions 

to multi-disciplinary problems, 

and BRILLIANT is a platform that allows you to do so. 

For instance, every day there's a daily challenge 

that can cover a variety of courses in the STEM domain. 

These challenges are crafted in such a way

that they draw you in,

and then allow you to learn a new concept 

through their intuitive explanations. 

To support Futurology 

and learn more about BRILLIANT, 

go to brilliant.org/Futurology.

Additionally, the first 200 people 

that go to that link will get 20% off 

their annual premium subscription. 

(bright, upbeat techno music) 

At this point, the video has concluded;

we'd like to thank you for taking the time to watch it!

If you enjoyed it, consider supporting us on Patreon or 

YouTube membership to keep this brand growing! 

And if you have any topic suggestions, please leave them in 

the comments below. Consider subscribing for more content 

and check out our website and 

parent company, EarthOne, for more information!

This has been Ankur, 

you've been watching Futurology 

and we'll see you again soon. 
