In this article, we will explain what Deep Learning is in simple terms. As you may have noticed, it is a very trendy topic these days.
Deep Learning is a subfield of Machine Learning, which in turn is a subfield of Artificial Intelligence.

What is Deep Learning?

Deep Learning's relation to ML and AI

Deep Learning explanation

Deep Learning generally uses several stacked layers to extract high-level information from data. It relies on artificial neural networks to perform the calculations and carry out this extraction. The term “deep” refers to the nesting of these layers. By adjusting the coefficients in the layers, the network can discover patterns, abstractions, and representations that are not immediately apparent. Most of the time a supervised learning method is used, where data labels are needed to adjust the weights and biases (the coefficients) on each iteration.
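To make the mechanics concrete, below is a minimal sketch of this idea: two stacked layers whose weights and biases are adjusted on every iteration using labelled data. NumPy, the tiny XOR dataset, and the layer sizes are illustrative assumptions, not something the article prescribes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny supervised dataset: inputs and their labels (XOR, purely illustrative)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # first (hidden) layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # second (output) layer
lr = 1.0

for step in range(5000):
    # Forward pass through the stacked layers
    hidden = sigmoid(X @ W1 + b1)
    out = sigmoid(hidden @ W2 + b2)

    # The error against the labels drives the update of weights and biases
    d_out = (out - y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * hidden * (1 - hidden)

    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hidden
    b1 -= lr * d_hidden.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # after training, predictions approach the labels 0, 1, 1, 0
```

Real Deep Learning frameworks automate exactly this loop for networks with many more layers and coefficients.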

The general purpose is to extract features (knowledge) from complicated data structures like speech, images, and videos. Deep Learning can also work on tabular data (as in recommendation systems), but there are often more appropriate methods for such input, so Deep Learning is not a solution for every problem.

To fully understand the concept of Deep Learning, it is worth explaining what exactly Machine Learning and Artificial Intelligence are.

  • Machine Learning (ML) – a set of algorithms and approaches for solving computational tasks. Often, past experience is used as a reference to improve behavior in future iterations.
  • Artificial Intelligence (AI) – the ongoing efforts to understand if computers can reason (think rationally) and learn from experience.

Short history

Despite the hype around AI, the topic is not new. Starting in 1950, questions about a computer’s ability to think have occupied scientific minds. A few years later, computer science recognized it as a standalone field. The main obstacle until recent decades was the lack of an efficient way to train large neural networks.

Some notable periods and names can be highlighted:

  • 1950 – debates about the definition of intelligence, Alan Turing and his “Turing Test”;
  • 1980 – the rediscovery of the Backpropagation algorithm and its application to neural networks;
  • 1989 – Yann LeCun from Bell Labs combined the ideas of backpropagation and convolutional neural networks, then used them to classify handwritten digits;
  • 1990s – Kernel methods – a group of classification algorithms, such as the Support Vector Machine (SVM), that pushed neural networks into the back seat for a while;
  • 2000s to 2010s – Decision trees, gradient boosting, and random forests – algorithms that classify input data points from given examples using a flowchart-like structure that is easy to implement and use;
  • 2011 – the rise of neural networks trained on GPUs.

Nowadays, Deep Learning is everywhere around us. Everyday examples include spam detection in mailboxes, face clustering on our phones, voice recognition in modern TVs, and many more.

Why Deep Learning is important

Key argument: it makes problem-solving easier.

It automates the feature engineering phase, which tends to be complex, time-intensive, and niche-specific.
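As a rough illustration of that point (the image, filters, and helper function below are hypothetical, not taken from the article): a classical pipeline applies a hand-designed filter to extract features, whereas a deep network treats the same filter as weights it learns from labelled data.

```python
import numpy as np

def convolve2d(img, kernel):
    """Valid 2-D convolution, written out explicitly for clarity."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.default_rng(1).random((8, 8))  # stand-in for a raw input image

# Manual feature engineering: a fixed, hand-designed edge detector (Sobel-like)
handcrafted_filter = np.array([[-1, 0, 1],
                               [-2, 0, 2],
                               [-1, 0, 1]], dtype=float)

# Deep Learning view: the filter starts random and is adjusted during training,
# so the feature extraction is learned from labelled data instead of designed by hand
learned_filter = np.random.default_rng(2).normal(size=(3, 3))

manual_features = convolve2d(image, handcrafted_filter)
learned_features = convolve2d(image, learned_filter)
```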



Last Update: 26/12/2023