AI & Machine Learning

Neural Networks: Understanding Deep Learning Fundamentals

Valcheq Team
August 01, 2025

Neural networks have become increasingly popular and are responsible for some of the most cutting-edge advancements in data science, including image and speech recognition. They have also been transformative in reducing the need for the intensive, often time-consuming feature engineering that traditional supervised learning tasks require. In this lesson, we'll investigate the architecture of neural networks.

Objectives

You will be able to:

  • Explain what neural networks are and what they can achieve
  • List the components of a neural network
  • Explain forward propagation in a neural network
  • Explain backward propagation and discuss how it is related to forward propagation

What is a neural network?

Let's start with an easy example to get an idea of what a neural network is. Imagine a city has 10 ice cream vendors. We would like to predict the sales amount for an ice cream vendor given certain input features. Let's say you have several features to predict the sales for each ice cream vendor: the location, the way the ice cream is priced, and the variety of the ice cream offerings.

Let's look at the input feature location. You know that one of the things that really affects sales is how many people walk by the ice cream shop, as these are all potential customers. Realistically, the volume of people passing by is largely driven by the location.

Next, let's look at the input feature pricing. How the ice cream is priced really tells us something about the affordability, which will affect sales as well.

Last, let's look at the variety in offering. When an ice cream shop offers a lot of different ice cream flavors, this might be perceived as a higher-quality shop simply because customers have more flavors to choose from (and might really like that!). On the other hand, pricing might also affect perceived quality: customers might feel that the quality is higher if the prices are higher. This shows that several inputs might affect one intermediate feature; these intermediate features make up the so-called "hidden layer".

In reality, all features will be connected with all nodes in the hidden layer, and weights will be assigned to the edges (more about this later), as you can see in the network below. That's why networks like this are also referred to as densely connected neural networks.

[Figure: densely connected neural network for the ice cream example]

When we generalize this, a neural network looks like the configuration below. As you can see, to implement a neural network, we need to feed it the inputs xᵢ (location, pricing, and variety in this example) and the outcome y (sales in this example); all the features in the middle are figured out automatically by the network. That's why this layer is called the hidden layer, and its nodes are called hidden units.

[Figure: generalized neural network with an input layer, a hidden layer, and an output layer]
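
To make this concrete, here is a minimal NumPy sketch of such a densely connected network for the ice cream example, with three inputs, four hidden units, and one output. The feature values, the random initial weights, and the file name are made up purely for illustration:

dense_network_sketch.py
import numpy as np

def sigmoid(z):
    # Squashes any value into the (0, 1) range
    return 1 / (1 + np.exp(-z))

# One ice cream vendor described by three made-up, scaled input features:
# location, pricing, and variety
x = np.array([[0.8], [0.3], [0.5]])   # shape (3, 1)

# Every input connects to every hidden unit, and every hidden unit to the output
W1 = np.random.randn(4, 3)            # hidden-layer weights
b1 = np.zeros((4, 1))                 # hidden-layer biases
W2 = np.random.randn(1, 4)            # output-layer weights
b2 = np.zeros((1, 1))                 # output-layer bias

# Forward pass through the densely connected network
hidden = sigmoid(W1 @ x + b1)         # four hidden units, shape (4, 1)
sales_estimate = W2 @ hidden + b2     # one output unit, shape (1, 1)
print(sales_estimate)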

The Power of Deep Learning

In our previous example, we have three input units, a hidden layer with four units, and one output unit. Note that networks come in all shapes and sizes; this is only one example of what deep learning is capable of! The network described above can be extended almost endlessly:

  • We can add more features (nodes) in the input layer
  • We can add more nodes in the hidden layer. Also, we can simply add more hidden layers. This is what turns a neural network into a "deep" neural network (hence, deep learning)
  • We can also have several nodes in the output layer

[Figure: an extended network with more inputs, more hidden layers, and several output nodes]

And there is one more thing that makes deep learning extremely powerful: unlike many other statistical and machine learning techniques, deep learning can deal extremely well with unstructured data.

In the ice cream vendor example, the input features can be seen as structured data: they very much take the form of a "classical" dataset, with observations as rows and features as columns. Examples of unstructured data, however, are images, audio files, text data, and so on. Historically, and unlike humans, machines had a very hard time interpreting unstructured data. Deep learning has drastically improved machine performance on unstructured data!

To illustrate the power of deep learning, we describe some applications of deep learning below:

Input (x) → Output (y)

  • Features of an ice cream shop → Sales
  • Pictures of cats vs dogs → Cat or dog?
  • Pictures of presidents → Which president is it?
  • Dutch text → English text
  • Audio files → Text

Types of Neural Networks

  • Standard neural networks
  • Convolutional neural networks (input = images, video)
  • Recurrent neural networks (input = audio files, text, time series data)
  • Generative adversarial networks

An Introductory Example

Problem Statement and Matrix Representation


You'll see that quite a bit of theory and mathematical notation is needed when using neural networks. We'll introduce all of it through an example. Imagine we have a dataset of images; some of them contain Santa, others don't. We'll use a neural network to train a model that can detect whether Santa is in a picture or not.

As mentioned before, this is a kind of problem where the input data is composed of images. Now how does Python read images? To store an image, your computer stores three matrices which correspond to the three color channels: red, green, and blue (also referred to as RGB). The numbers in each of the three matrices are the pixel intensity values for that color. The picture below denotes a hypothetical representation of a 4 x 4 pixel image (note that 4 x 4 is tiny; in practice you'll have much bigger dimensions). Pixel intensity values are on the scale [0, 255].

[Figure: the red, green, and blue pixel-intensity matrices of a 4 x 4 image]

Since three matrices are associated with one image, we'll need to reshape them into a single input feature vector. You'll want to "unroll" your input feature values into one so-called feature vector: start by unrolling the red pixel matrix, then the green one, then the blue one. Unrolling the RGB matrices in the image above results in:

$$x = \left[\begin{array}{c} 35 \\ 19 \\ \vdots \\ 9 \\ 7 \\ \vdots \\ 4 \\ 6 \\ \vdots \end{array}\right]$$

The resulting feature vector is a matrix with one column and 4 x 4 x 3 = 48 rows. Let's introduce some more notation to formalize all this.
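
As a rough sketch of this unrolling step in NumPy, assuming a randomly generated 4 x 4 x 3 image in place of real pixel data (the file name is just illustrative):

flatten_image.py
import numpy as np

# A hypothetical 4 x 4 pixel image with three color channels (R, G, B),
# with pixel intensities on the [0, 255] scale
image = np.random.randint(0, 256, size=(4, 4, 3))

red = image[:, :, 0]     # 4 x 4 red intensities
green = image[:, :, 1]   # 4 x 4 green intensities
blue = image[:, :, 2]    # 4 x 4 blue intensities

# Unroll red first, then green, then blue, into one feature vector
x = np.concatenate([red.reshape(-1, 1),
                    green.reshape(-1, 1),
                    blue.reshape(-1, 1)])

print(x.shape)   # (48, 1): 4 x 4 x 3 = 48 rows, one column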

(x, y) = a training sample, where x ∈ ℝⁿ, y ∈ {0, 1}. Note that n is the number of inputs in the feature vector (48 in this example).

Let's say you have l training samples. Your training set then looks like this: (x(1), y(1)), …, (x(l), y(l))

Similarly, let's say the test set has m test samples. Stacking the training samples x(1), …, x(l) as columns, the resulting matrix x has dimensions (n x l) and looks like this:

$$x = \begin{bmatrix} 35 & 23 & \cdots & 1 \\ 19 & 88 & \cdots & 230 \\ \vdots & \vdots & \ddots & \vdots \\ 9 & 3 & \cdots & 222 \\ 7 & 166 & \cdots & 43 \\ \vdots & \vdots & \ddots & \vdots \\ 4 & 202 & \cdots & 98 \\ 6 & 54 & \cdots & 100 \\ \vdots & \vdots & \ddots & \vdots \end{bmatrix}$$

The training set labels matrix has dimensions (1 x l), and would look something like this:

$$y = \begin{bmatrix} 1 & 0 & \cdots & 1 \end{bmatrix}$$

where 1 means that the image contains Santa, and 0 means there is no Santa in the image.
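
A minimal sketch of how these matrices could be assembled in NumPy, using randomly generated images and labels as stand-ins for a real Santa dataset (all names and sizes here are illustrative):

build_training_matrix.py
import numpy as np

l = 5             # number of training samples (tiny, for illustration)
n = 4 * 4 * 3     # number of features per image: 48

# Stand-in data: l random 4 x 4 x 3 "images" and random 0/1 labels
images = np.random.randint(0, 256, size=(l, 4, 4, 3))
labels = np.random.randint(0, 2, size=l)

# Each column of x is one unrolled image x^(i); y holds the matching labels
x = np.zeros((n, l))
for i in range(l):
    img = images[i]
    x[:, i] = np.concatenate([img[:, :, c].reshape(-1) for c in range(3)])

y = labels.reshape(1, l)

print(x.shape)   # (48, 5), i.e. (n x l)
print(y.shape)   # (1, 5), i.e. (1 x l)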

Logistic Regression as a Neural Network

So how will we be able to predict whether y is 0 or 1 for a certain image? You might remember from logistic regression models that the eventual predictor, ŷ, is generally never exactly 0 or 1, but some value in between.

Formally, you'll denote that as ŷ = P(y=1 | x).

Remember that x ∈ ℝⁿ. As in classical (logistic) regression, we need some parameters to make a prediction: here they are w ∈ ℝⁿ and b ∈ ℝ. One candidate expression for the prediction is ŷ = wᵀx + b. The problem, however, is that this expression does not guarantee that the eventual outcome ŷ lies between zero and one: it could be much bigger than one, or even negative!

This is why a transformation of wᵀx + b is needed. For this particular example, we denote ŷ = σ(wᵀx + b): writing z = wᵀx + b, we have ŷ = σ(z). This so-called sigmoid function is a popular activation function (more about activation functions later) in neural networks.

With the sigmoid given by σ(z) = 1/(1 + exp(-z)), it is clear that σ(z) always lies between 0 and 1, as you can see in the plot below. The sigmoid traces an S-shaped curve that smoothly transitions from 0 to 1, which makes it well suited to binary classification problems.

[Figure: the S-shaped curve of the sigmoid function σ(z)]
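
The sigmoid itself is a one-liner in NumPy, and a plot like the one above can be reproduced with a few lines of Matplotlib (a quick sketch, with an arbitrary range for z and an illustrative file name):

sigmoid_plot.py
import numpy as np
import matplotlib.pyplot as plt

def sigmoid(z):
    # sigma(z) = 1 / (1 + exp(-z)), always strictly between 0 and 1
    return 1 / (1 + np.exp(-z))

z = np.linspace(-10, 10, 200)

plt.plot(z, sigmoid(z))
plt.xlabel('z')
plt.ylabel('sigma(z)')
plt.title('The sigmoid activation function')
plt.show()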

Bringing all this together, the neural network can be represented as follows:

[Figure: logistic regression represented as a simple neural network]

Define the Loss and Cost Function

Problem statement: given that we have (x(1), y(1)), …, (x(l), y(l)), we want to obtain ŷ ≈ y. Neural networks use loss and cost functions here.

The loss function is used to measure the inconsistency between the predicted value (ŷ) and the actual label y.

In logistic regression the loss function is defined as:

L(ŷ, y) = -(y log(ŷ) + (1-y) log(1-ŷ))

The advantage of this loss function expression is that the optimization space here is convex, which makes optimizing using gradient descent easier. The loss function, however, is defined over one particular training sample. The cost function takes the average loss over all the samples:

J(w, b) = (1/l) Σ L(ŷ⁽ⁱ⁾, y⁽ⁱ⁾)

When you train your logistic regression model, the purpose is to find parameters w and b such that your cost function is minimized.

3d_plot.py
%matplotlib inline
from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np

fig = plt.figure()
ax = fig.add_subplot(projection='3d')

# Generate data: a convex, bowl-shaped surface standing in for J(w, b)
X = np.arange(-5, 5, 0.1)
Y = np.arange(-5, 5, 0.1)
X, Y = np.meshgrid(X, Y)
R = X**2 + Y**2 + 6

# Plot the surface
surf = ax.plot_surface(X, Y, R, cmap=cm.coolwarm,
                       linewidth=0, antialiased=False)

# Customize the z axis
ax.set_zlim(0, 50)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))

ax.set_xlabel('w', fontsize=12)
ax.set_ylabel('b', fontsize=12)
ax.set_zlabel('J(w,b)', fontsize=12)

# Hide the tick labels; only the shape of the bowl matters here
ax.set_yticklabels([])
ax.set_xticklabels([])
ax.set_zticklabels([])

plt.show()

Congratulations! You have gotten to the point where you have expressions for both the loss function and the cost function. The computation we have just walked through, from the inputs to the cost, is called forward propagation.
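
In code, forward propagation for this logistic regression setup is only a few lines. Here is a minimal NumPy sketch, using random stand-in data for the (n x l) input matrix and the (1 x l) labels, and zero-initialized parameters (names and sizes are illustrative):

forward_propagation.py
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

n, l = 48, 5                              # features per image, number of samples
x = np.random.rand(n, l)                  # stand-in for the (n x l) input matrix
y = np.random.randint(0, 2, size=(1, l))  # stand-in for the (1 x l) labels

w = np.zeros((n, 1))                      # one weight per input feature
b = 0.0                                   # bias

# Forward propagation: predictions for all l samples at once
z = w.T @ x + b                           # shape (1, l)
y_hat = sigmoid(z)                        # shape (1, l)

# Loss per sample, and the cost J(w, b) as the average loss
loss = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
cost = loss.mean()
print(cost)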

The cost function takes a convex form, looking much like a bowl! The idea is that you'll start with some initial values of w and b, and then gradient descent, as you've seen before, takes a step in the steepest direction downhill.

Looking at w and b separately, the idea of the algorithm is that both w and b will be updated repeatedly in each step:

w := w - α (dJ(w)/dw)
b := b - α (dJ(b)/db)

Remember that dJ(w)/dw and dJ(b)/db represent the slope of the function J with respect to w and b, respectively! We haven't seen α before; for now, just remember that it is called the learning rate. Computing these derivatives by working backwards from the cost to the parameters is called backpropagation. You take the derivatives to measure the mismatch between the desired and calculated outcome, and repeat these update steps until you get to the lowest possible cost value!
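
To see these updates in action, here is a tiny sketch of gradient descent on the same bowl-shaped surface as in the plot above, J(w, b) = w² + b² + 6, starting from an arbitrary point (the starting values, learning rate, and step count are made up):

gradient_descent_bowl.py
# The same bowl-shaped cost surface as in the plot above: J(w, b) = w^2 + b^2 + 6
def J(w, b):
    return w**2 + b**2 + 6

def dJ_dw(w, b):
    return 2 * w    # slope of J with respect to w

def dJ_db(w, b):
    return 2 * b    # slope of J with respect to b

w, b = 4.0, -3.0    # an arbitrary starting point on the bowl
alpha = 0.1         # learning rate

for step in range(50):
    grad_w = dJ_dw(w, b)
    grad_b = dJ_db(w, b)
    w = w - alpha * grad_w
    b = b - alpha * grad_b

print(w, b, J(w, b))   # w and b end up close to 0, the bottom of the bowl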

Backpropagation for the Logistic Regression Example

The Chain Rule Using One Sample

When using the chain rule, computation graphs are popular. Imagine there are just two features x₁ and x₂. The graph going from our input variables to our loss function is given below:

[Figure: computation graph from the inputs x₁ and x₂ to the loss function]

You'll first want to compute the derivative of the loss with respect to ŷ:

  1. You'll want to go from L(ŷ, y) to ŷ = σ(z). You can do this by taking the derivative of L(ŷ, y) with respect to ŷ, and it can be shown that this is given by:
    dL(ŷ, y)/dŷ = -y/ŷ + (1-y)/(1-ŷ)
  2. As a next step you'll want to take the derivative with respect to z. It can be shown that:
    dz = dL(ŷ, y)/dz = ŷ - y
  3. Last, and this is where you want to get to, you need to derive L with respect to w₁, w₂ and b. It can be shown that:
    dw₁ = dL(ŷ, y)/dw₁ = x₁ * dz
    dw₂ = dL(ŷ, y)/dw₂ = x₂ * dz
    db = dL(ŷ, y)/db = dz

With dw₁, dw₂ and db now known, you would go ahead and update:

w₁ := w₁ - α * dw₁
w₂ := w₂ - α * dw₂
b := b - α * db
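
These derivatives and updates translate almost line for line into code. A minimal sketch for a single made-up training sample with two features (the feature values, label, and learning rate are arbitrary):

backprop_one_sample.py
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# One made-up training sample with two features, and its label
x1, x2, y = 0.5, 1.5, 1
w1, w2, b = 0.0, 0.0, 0.0   # initial parameters
alpha = 0.1                 # learning rate

# Forward pass
z = w1 * x1 + w2 * x2 + b
y_hat = sigmoid(z)

# Backward pass: the chain rule results derived above
dz = y_hat - y              # dL/dz
dw1 = x1 * dz               # dL/dw1
dw2 = x2 * dz               # dL/dw2
db = dz                     # dL/db

# Gradient descent updates
w1 = w1 - alpha * dw1
w2 = w2 - alpha * dw2
b = b - alpha * db
print(w1, w2, b)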

Extending to Multiple Samples

Remember that this example just incorporates one training sample. Let's look at how this is done when you have multiple training samples! We basically want to compute the derivative of the overall cost function:

dJ(w,b)/dwᵢ = (1/l) Σ dL(ŷ⁽ⁱ⁾, y⁽ⁱ⁾)/dwᵢ

Let's have a look at how we will get to the minimization of the cost function. As mentioned before, we'll have to initialize some values.

Initialize: J = 0, dw₁ = 0, dw₂ = 0, db = 0.

For each training sample i = 1, ..., l you'll need to compute:

z⁽ⁱ⁾ = wᵀx⁽ⁱ⁾ + b
ŷ⁽ⁱ⁾ = σ(z⁽ⁱ⁾)
dz⁽ⁱ⁾ = ŷ⁽ⁱ⁾ - y⁽ⁱ⁾

Then, you'll need to make updates:

J += -[y⁽ⁱ⁾ log(ŷ⁽ⁱ⁾) + (1-y⁽ⁱ⁾) log(1-ŷ⁽ⁱ⁾)]
dw₁ += x₁⁽ⁱ⁾ * dz⁽ⁱ⁾
dw₂ += x₂⁽ⁱ⁾ * dz⁽ⁱ⁾
db += dz⁽ⁱ⁾

After processing all samples, divide by l to get the averages: J := J/l, dw₁ := dw₁/l, dw₂ := dw₂/l, db := db/l

After that, update:

w₁ := w₁ - α * dw₁
w₂ := w₂ - α * dw₂
b := b - α * db

Repeat until convergence!
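
Putting the whole procedure together, here is a rough sketch of the loop described above for two features, using randomly generated stand-in data; a fixed number of passes stands in for "repeat until convergence", and in practice you would vectorize the inner loop:

logistic_gradient_descent.py
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Made-up training data: l samples with two features each, and 0/1 labels
l = 100
x_train = np.random.randn(2, l)
y = np.random.randint(0, 2, size=l)

w1, w2, b = 0.0, 0.0, 0.0   # initialize parameters
alpha = 0.1                 # learning rate

for epoch in range(100):    # "repeat until convergence" (fixed count here)
    J, dw1, dw2, db = 0.0, 0.0, 0.0, 0.0

    # Accumulate the loss and the gradients over all l samples
    for i in range(l):
        x1, x2 = x_train[0, i], x_train[1, i]
        z = w1 * x1 + w2 * x2 + b
        y_hat = sigmoid(z)
        dz = y_hat - y[i]

        J += -(y[i] * np.log(y_hat) + (1 - y[i]) * np.log(1 - y_hat))
        dw1 += x1 * dz
        dw2 += x2 * dz
        db += dz

    # Average over the training set
    J, dw1, dw2, db = J / l, dw1 / l, dw2 / l, db / l

    # Gradient descent updates
    w1 = w1 - alpha * dw1
    w2 = w2 - alpha * dw2
    b = b - alpha * db

print(J, w1, w2, b)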

Key Concepts Summary

Neural networks are powerful tools that can learn complex patterns in data through a process of forward and backward propagation. Here are the key concepts we've covered:

Forward Propagation

  • Input data flows through the network from input to output
  • Each layer applies weights, biases, and activation functions
  • The final output is compared to the true label using a loss function

Backward Propagation

  • Gradients are computed using the chain rule
  • Weights and biases are updated to minimize the cost function
  • The process repeats until the model converges

Key Components

  • Neurons/Nodes: Basic processing units that apply transformations to inputs
  • Weights: Parameters that determine the strength of connections between neurons
  • Biases: Additional parameters that allow for more flexible learning
  • Activation Functions: Non-linear functions that enable complex pattern recognition
  • Loss Function: Measures the difference between predictions and actual values
  • Learning Rate: Controls how quickly the model updates its parameters

Applications and Next Steps

Neural networks have revolutionized many fields and continue to drive innovation in artificial intelligence. From the simple logistic regression example we explored to complex deep learning architectures, the fundamental principles remain the same: learn patterns through iterative optimization of parameters.

As you continue your journey in neural networks and deep learning, you'll encounter more sophisticated architectures like convolutional neural networks for image processing, recurrent neural networks for sequential data, and transformer models for natural language processing. However, the foundation you've learned here—forward propagation, backpropagation, and gradient descent—remains at the core of all these advanced techniques.

Ready to Implement Neural Networks in Your Business?

At Valcheq Technologies, we specialize in developing custom neural network solutions for businesses across industries. Whether you need image recognition, natural language processing, predictive analytics, or any other AI-powered solution, our team can help you harness the power of deep learning to solve your specific challenges.

Explore Neural Network Solutions