My journey learning AI: Supervised learning and Unsupervised learning

Carlos Gabriel
8 min read · Jan 10, 2023


#BreakIntoAI — Part 2

My first classification algorithm was like this…

Before we start, here is some terminology that we will use in this article:

  • Input: A piece of data or a group of data (a dataset) provided to an algorithm as part of the learning process. The algorithm uses the input to make predictions and/or decisions based on the patterns and relationships it learned from the data. Example: Problem: Fruit classification; Input: Potato; Label: not a fruit…
  • Label/Output: For training data, it's the correct answer for that input. For new data, it's the answer that the AI is expected to produce for that input. In supervised learning, a dataset is made up of many inputs and the corresponding labels, where the label is the correct output for a given input. Example: Problem: Fruit classification; Input data: Orange; Label: is a fruit!
  • Train: To use data or a dataset to teach a learning algorithm to perform a specific task. During training, the learning algorithm is presented with a set of input data and corresponding labels or output data, and it learns to associate patterns in the input data with the correct outputs.

In the last article, I explained why I want to start learning AI and why it's important for everyone to learn about it, whether to create AI that helps solve problems, to understand how new AI technologies work, or to be able to talk about AI ethics.

My last Medium article:

This time, as I dipped my toe into the world of AI, I started learning about the two main types of machine learning: supervised learning and unsupervised learning.

Supervised Learning

Do you remember when you were in school, and the math teacher was giving his lecture about Bhaskara's formula (look at the formula below), first teaching with examples whose results he already knew and then giving you exercises to solve? This, even if it's a bit of a stretch, can be seen as supervised learning.
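For reference, Bhaskara's formula gives the roots of the quadratic equation ax² + bx + c = 0:

x = (−b ± √(b² − 4ac)) / 2a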

Bhaskara, the terror of many school students

The teacher teaches with examples (inputs) whose results (labels/outputs) he already knows, so that you can learn. After you learn how to use Bhaskara's formula, you can solve any exercise (input) and discover the result (label/output), even for exercises you never saw.

Supervised learning is like this: a learning algorithm (the student) receives an input dataset (the examples) with known labels/outputs (the results). After learning, it can receive unknown inputs (the exercises given by the teacher) and produce a result based on what it learned!

For machine learning, supervised learning can be divided into two main types:

  • Classification
  • Regression

A very simple example of the difference between regression and classification

Supervised Learning: Regression

The regression type of supervised learning aims to predict a continuous value within an unbounded range of possibilities. Imagine you're trying to predict the fuel efficiency of a car based on its weight and engine size. You might start by gathering information about cars: their weights, engine sizes, and fuel efficiencies. With this data in hand, you could then use a regression algorithm to learn the relationship between the weight, engine size, and fuel efficiency of the cars. Once the algorithm has learned this relationship, you could use it to predict the fuel efficiency of a new car based on its data.

For example, if you have a car weighing 2,000 pounds and an engine size of 2.5 liters, the algorithm might predict a fuel efficiency of around 30 miles per gallon. Of course, the actual fuel efficiency of the car could be different due to other factors such as tire size, aerodynamics, and driving style. However, the regression algorithm would provide a good estimate based on the patterns it has learned from the training data.
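To make this more concrete, here is a minimal sketch of what such a regression could look like in Python with scikit-learn. The car measurements and fuel efficiencies below are made up purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical training data: [weight in pounds, engine size in liters]
X_train = np.array([
    [2000, 1.6],
    [2000, 3.0],
    [2500, 2.5],
    [3000, 2.0],
    [3000, 3.5],
])
# Corresponding labels: fuel efficiency in miles per gallon
y_train = np.array([35, 27, 28, 26, 20])

# Train (fit) the regression model on the labeled examples
model = LinearRegression()
model.fit(X_train, y_train)

# Predict the fuel efficiency of a new, unseen car
new_car = np.array([[2000, 2.5]])
print(model.predict(new_car))  # prints the estimated mpg: a single continuous value
```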

Supervised Learning: Classification

The classification type of supervised learning aims to predict a categorical label or class for a given input from a limited set of possible labels. Imagine you're trying to teach a computer to identify different types of fruit. You might start by showing pictures of apples, oranges, bananas, and other types of fruit, along with labels indicating what each picture is. The computer would learn to recognize patterns in the pictures that correspond to the different types of fruit, and it could then be used to classify new pictures of fruit as belonging to one of the label categories.

This is basically how classification tasks work in supervised learning. You give the computer a bunch of labeled examples, and it learns to classify new data points based on the patterns it has learned. Some examples of classification tasks include spam detection (is this email spam or not?), sentiment analysis (is this text positive, negative, or neutral?), and fraud detection (is this financial transaction fraudulent or not?).
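Here is a minimal sketch of a classifier, assuming we describe each fruit with two made-up numeric features (weight and a colour code) instead of raw pictures, since image classification needs quite a bit more machinery. Everything in it is illustrative, not a real dataset.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: [weight in grams, colour code (0 = red, 1 = orange, 2 = yellow)]
X_train = [
    [150, 0], [140, 0],   # apples
    [170, 1], [180, 1],   # oranges
    [120, 2], [115, 2],   # bananas
]
# Labels: the correct answer for each example
y_train = ["apple", "apple", "orange", "orange", "banana", "banana"]

# Train the classifier on the labeled examples
clf = DecisionTreeClassifier()
clf.fit(X_train, y_train)

# Classify a new, unseen fruit: the output is one of the known labels
print(clf.predict([[165, 1]]))  # e.g. ['orange']
```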

Supervised Learning: Advantages and Limitations

The main advantage of supervised learning is that it can achieve very high accuracy on a wide range of tasks when trained on a large and diverse dataset. This makes it popular for many real-world applications, such as image classification, speech recognition, and natural language processing.

Supervised learning algorithms can also be easy to implement, especially if the task is well-defined and the input data is well-structured. This can make it easier to get started with supervised learning, especially for those new to the field of AI.

But it's not all advantages for supervised learning. One of its main limitations is that it requires a large amount of labeled data to be effective. Collecting and labeling this data can be time-consuming and expensive, which can be a barrier to using supervised learning in some cases.

Also, the performance of a supervised learning algorithm may degrade if the input data is significantly different from the training data. For example, if a machine learning model is built to classify cats and dogs and trained only on images of cats and dogs, but is then shown airplanes, it will not perform well.

Unsupervised Learning

In our day-to-day life, most of us like to listen to music. I personally like to listen to rock, pop, and some samba. Still, I don't always want to listen to every song from the same playlist. Sometimes I like to listen to more energetic music, other times to calmer songs, other times to more danceable tunes, and so on…

For me to listen to specific types of music (energetic, calmer, danceable, etc…), I need to group the songs that I like into specific types based on some information each song has, like BPM, genre, instruments, etc… With this information, I can cluster similar kinds of music into the same group and create a playlist for each mood that I want.

Unsupervised learning also works this way: a learning algorithm (me) receives an input dataset (many songs) without any labels (no playlists) and needs to cluster similar inputs into the same group based on their characteristics (putting similar songs in the same playlist).

Unsupervised learning has three main types:

  • Clustering
  • Anomaly detection
  • Dimensionality reduction

Unsupervised Learning: Clustering

Clustering the songs I like by type

Clustering allows you to identify patterns in data and group similar data points together, even if you don’t have any predefined labels or categories to work with. This makes it a useful tool for discovering relationships in data that might not be immediately apparent.

However, clustering algorithms can be sensitive to the initial conditions and may produce different results depending on the order in which the data is processed. In addition, it can be difficult to determine the appropriate number of clusters to use in a given dataset, as too few clusters may not capture the complexity of the data, while too many clusters may result in over-fitting.

Despite these limitations, clustering remains a popular technique in unsupervised learning and can be a powerful tool for discovering patterns and relationships in data.
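As a rough sketch of the playlist idea, here is how a clustering algorithm such as k-means might group songs described by two made-up features (beats per minute and an energy score). No labels are given; the algorithm only sees the numbers.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical song features: [beats per minute, energy score from 0 to 1]
songs = np.array([
    [170, 0.90], [165, 0.85], [160, 0.80],   # energetic songs
    [ 70, 0.20], [ 75, 0.25], [ 80, 0.30],   # calm songs
    [120, 0.60], [125, 0.65], [118, 0.55],   # danceable songs
])

# Group the songs into 3 clusters based only on their features (no labels)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=42)
playlists = kmeans.fit_predict(songs)
print(playlists)  # e.g. [0 0 0 1 1 1 2 2 2] -- each number is a discovered "playlist"
```

In practice you would also scale the features so that BPM doesn't dominate the distance calculation, which is exactly the kind of sensitivity mentioned above.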

Unsupervised Learning: Anomaly Detection & Dimensionality Reduction

These two types are too big to explain fully in an article that already covers other concepts, so I'll dedicate a future article to each of them. But briefly:

  • Anomaly Detection: Technique used to identify unusual or unexpected patterns in data that may indicate a problem or issue. It is commonly used in various applications, including cybersecurity, fraud detection, and quality control. For example, an anomaly detection algorithm might be used to identify unusual patterns of network traffic that could indicate a cyberattack or to identify unusual patterns of financial transactions that could indicate fraudulent activity.
  • Dimensionality Reduction: The process of reducing the number of dimensions (input features) in a dataset while retaining as much information as possible. This can be useful for visualizing high-dimensional data or reducing an algorithm’s computational complexity. For example, you might use a dimensionality reduction algorithm to reduce a dataset of 100 features to just a few key features that capture the most important patterns in the data. There is a small sketch of both ideas right after this list.
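Just to give a flavour of both ideas before those dedicated articles, here is a small sketch on synthetic data: an isolation forest flags unusual points, and PCA compresses the features down to two dimensions. The data and parameters are arbitrary assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic dataset: 200 "normal" points in 10 dimensions, plus 5 obvious outliers
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 10))
outliers = rng.normal(loc=8.0, scale=1.0, size=(5, 10))
data = np.vstack([normal, outliers])

# Anomaly detection: -1 means "looks anomalous", 1 means "looks normal"
flags = IsolationForest(random_state=0).fit_predict(data)
print("anomalies flagged:", int((flags == -1).sum()))

# Dimensionality reduction: keep only the 2 directions with the most variation
reduced = PCA(n_components=2).fit_transform(data)
print(reduced.shape)  # (205, 2) -- much easier to plot or feed to another algorithm
```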

Unsupervised Learning: Advantages and Limitations

The main advantage of unsupervised learning is that we can use unlabeled data as input. The unsupervised learning algorithm will try its best to understand the data's characteristics and work out how to cluster the data, find anomalies, or reduce its dimensionality.

However, unsupervised learning can be challenging to interpret, as the output is not always clearly defined or labeled, and it can be difficult to understand what the algorithm has learned from the data. It also requires a large amount of data to be effective, as it relies on identifying patterns and relationships in the data. This can be a limitation if you don’t have access to a large and diverse dataset.

In addition to all of this, unsupervised learning algorithms can be sensitive to the initial conditions and may produce different results depending on the order in which the data is processed. This can make it difficult to obtain consistent results and to compare the performance of different algorithms.

Conclusion

Well, these are the first concepts I learned about AI, and I'm already writing about them! Personally, I find it helpful to write these articles: they help me keep the information in my brain and serve as a reference guide for me in the future.

About the learning methods: supervised and unsupervised learning are two important types of machine learning with a wide range of applications in the field of artificial intelligence. From this article, you should take away two things about machine learning:

1 - Supervised learning is used to train a learning algorithm on a dataset with known labels/outputs and can achieve high accuracy on a wide range of tasks.

2 - Unsupervised learning, on the other hand, is used to identify patterns and relationships in data without needing labeled outputs.

Both learning types have advantages and limitations, and selecting the right type of learning for a given task will depend on your project’s specific needs and goals.

If you have a bunch of data with corresponding labels and you need to classify new data or predict a value from it, you can use the supervised learning method.

If you need to cluster data, find anomalies, or reduce a gigantic amount of data, you can use unsupervised learning.

This concludes this article about supervised and unsupervised learning in my journey learning AI. The next article will be more practical, talking about Linear Regression!

I hope that you liked reading it!

Written by Carlos Gabriel

Software developer and tech enthusiast. Writing about AI, architecture, development, AR/VR, and more. Follow me for insights and updates in the tech world!
