
What Is A Neural Network?

Introduction

By Technogibran

The concept of artificial neural networks has long fascinated and puzzled the computer science community. A neural network is a computing system loosely inspired by the biological brain, built with the goal of mimicking aspects of human intelligence. As it turns out, that goal is not impossible, but it comes with real limitations.

One major hurdle in machine learning is simply finding good examples of human behavior to learn from. When recorded human experience is available as input, learning from it is straightforward; the challenge arises when we try to learn things about people that were never directly recorded. Examples of human behavior can be found in historical records, literature, art, videos, anything created or written by humans. To make such data useful for training a model, it first needs to be labeled so the model knows what each example represents. This kind of labeling is known as annotation.
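To make the idea of annotation concrete, here is a minimal sketch of what a labeled dataset looks like in code. The file names and labels are made up purely for illustration:

```python
# A minimal sketch of annotated (labeled) training data: each raw
# example is paired with a human-assigned label. The file names and
# labels below are hypothetical, chosen only for illustration.
dataset = [
    ("photo_001.jpg", "cat"),
    ("photo_002.jpg", "dog"),
    ("photo_003.jpg", "cat"),
]

for path, label in dataset:
    print(f"{path} -> {label}")
```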

The first attempts at "artificial neural networks" actually predate the famous 1956 Dartmouth College workshop where the field of artificial intelligence got its name. Warren McCulloch and Walter Pitts proposed a mathematical model of the neuron in 1943, and Donald Hebb described a rule for how connection strengths could be learned in 1949. The goal was to simulate brain function in a machine. The basic idea is that an artificial neuron looks something like this:

Output = Activation(Weights · Inputs + Bias)
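In code, a single artificial neuron is just a weighted sum plus a bias, passed through an activation function. Here is a minimal sketch in Python; the input, weight, and bias values are invented for illustration:

```python
import numpy as np

def neuron(x, w, b):
    """A single artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation."""
    z = np.dot(w, x) + b             # weighted sum + bias
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid squashes to (0, 1)

# Illustrative values, not from any real model
x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.4, 0.7, -0.2])   # one weight per input
b = 0.1                          # bias

print(neuron(x, w, b))           # a value between 0 and 1
```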

A single neuron like this is far too simple to recognize something as complex as an image: it can only draw one straight boundary through its inputs. The solution is to stack many neurons between the input and the output, so that each stage learns from the stage before it. This is how we arrive at the famous hidden layers:

Every hidden layer applies the same basic recipe: multiply its inputs by learned weights, add a bias, and pass the result through an activation function before handing it on to the next layer. We don't know in advance what the learned weight values will be; the network discovers them during training. Common activation functions include the sigmoid, which squashes values into the range 0 to 1; the softmax, which turns a layer's outputs into probabilities that sum to 1; and the leaky ReLU, which passes positive values through unchanged and scales negative values by a small slope. A deep network simply repeats this layer-upon-layer pattern many times.

Convolutional Neural Network: the convolutional neural network is one of the most popular architectures in the field. Instead of connecting every input to every neuron, it slides a small set of shared weights (a filter) across the input, which lets it represent complex inputs such as images in a much more compact form. Because the same weights are reused at every position, the number of parameters decreases dramatically compared with a fully connected layer over the same input.
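Since the three activation functions named above do all the nonlinear work in a network, it helps to see how small they really are. This is a minimal sketch, using standard textbook definitions:

```python
import numpy as np

def sigmoid(z):
    """Squashes any real number into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def leaky_relu(z, alpha=0.01):
    """Identity for positive inputs, small slope alpha for negatives."""
    return np.where(z > 0, z, alpha * z)

def softmax(z):
    """Turns a vector of scores into probabilities that sum to 1."""
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

z = np.array([2.0, -1.0, 0.5])
print(sigmoid(z))
print(leaky_relu(z))
print(softmax(z))
```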

Anatomy of a typical Neural Network

A typical network is built from three kinds of layers. The input layer receives the raw data, one or more hidden layers transform it step by step, and the output layer produces the final prediction. Note that even though the term neural network covers all of these, most of the discussion centers on the hidden layers. The hidden layers (hidden layer 1, hidden layer 2, ..., hidden layer N) are just a subset of the underlying network, and each one is made up of individual neurons; when talking about the internal structure of the network, these neurons are called nodes.

The neurons inside hidden layers are often referred to as hidden units. A hidden unit plays the same role as any single neuron: it receives input from the neurons in the previous layer, transforms it, and passes the result on. In simple terms, hidden units act like relays that carry signals from one layer of the network to the next. By the same convention, the neurons that receive the raw input are called input units.

There are many types of hidden layers, but the most commonly used is the fully connected (dense) layer of a feedforward network. A dense layer connects each of its neurons to every neuron in the previous layer, and it applies a nonlinearity to its output, which changes how each neuron "works" on the data. These nonlinearities are what make it worthwhile to stack more hidden layers: without them, a stack of layers with the same number of neurons would collapse into one big linear function. The following function uses three hidden layers.
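Here is a minimal sketch of that function: a forward pass through three fully connected hidden layers of 3 units each. The layer sizes, the random initialization, and the choice of leaky ReLU are illustrative assumptions, not values fixed by the article:

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def forward(x, layers):
    """Pass input x through a list of (weights, bias) layers."""
    for w, b in layers:
        x = leaky_relu(w @ x + b)  # affine transform + nonlinearity
    return x

# Three hidden layers of 3 units each, on a 4-dimensional input
sizes = [4, 3, 3, 3]
layers = [(rng.standard_normal((n_out, n_in)), np.zeros(n_out))
          for n_in, n_out in zip(sizes, sizes[1:])]

print(forward(rng.standard_normal(4), layers))
```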

You can picture each hidden layer as a small group of hidden nodes, since each layer here consists of 3 hidden units. Each layer is fully connected to the layer immediately before and after it, as in the sketch above.

In a diagram of such a network, each node represents a single neuron and each line represents a connection between neurons in adjacent layers. The numbers attached to the connections are called weights. These weights are learned during training, and each connection carries its own value; they are not a single shared value across all the hidden layers of a plain dense network. For instance, given a neuron i that connects to all m neurons of the next hidden layer, that neuron contributes m separate weights to the network. Because every connection has its own weight, different weights capture different relationships between the values they connect: weight w_1 links one pair of neurons, while weight w_3 links another.
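A convenient way to hold all the weights between two layers is a matrix, where entry (i, j) is the weight on the connection from unit j of the previous layer to unit i of the next. A minimal sketch, with invented values:

```python
import numpy as np

# Weight matrix between a 3-unit layer and a 2-unit layer.
# W[i, j] is the learned weight on the connection from unit j of the
# previous layer to unit i of the next layer. Values are illustrative.
W = np.array([[0.4, -0.1,  0.9],
              [0.2,  0.7, -0.5]])

x = np.array([1.0, 0.5, -2.0])  # the previous layer's outputs
print(W @ x)                    # each next-layer unit's weighted sum
```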


About the Creator

Technogibran

www.technogibran.com is a blog about technology and health.
