The Differences between AI, Machine Learning, and Deep Learning



Introduction

On November 9, 2015, Google released TensorFlow, an open source artificial intelligence (AI) system. Since its launch, the growth of AI and machine learning has been immense. Machine learning, a type of AI, enables software to learn from large volumes of data and make inferences or predictions about future events. Today, leading technology giants are all making substantial investments in machine learning, including Facebook, Apple, Microsoft, and China's leading search engine, Baidu.

In 2016, Google DeepMind's AlphaGo defeated the South Korean professional Lee Se-dol at the board game Go. The media used the terms AI, machine learning, and deep learning interchangeably to explain DeepMind's victory, causing widespread confusion about these terms among the public.

Differences and Similarities

Although conceptually similar, the terms AI, machine learning, and deep learning are not interchangeable. Referencing the interpretations from Michael Copeland of NVIDIA, this article unveils the concepts of AI, machine learning, and deep learning. To understand the relationship between the three, let us look at the figure below:


Figure 1

As shown in the figure, machine learning and deep learning are subcategories of AI. The concept of AI first appeared in the 1950s, while machine learning and deep learning are relatively newer topics.

AI: From Irrelevance to Global Adoption

Since 1956, when computer scientists coined the term AI at the Dartmouth Conference, there has been an endless stream of creative ideas about AI. It was one of the hottest topics of research because many perceived AI as the key to a bright future for human civilization. However, the idea of AI was soon dismissed as pretentious and whimsical.

In the past few years, especially since 2015, AI has experienced a new surge. A large contributor to this growth is the widespread use of graphics processing units (GPUs), which make parallel processing faster, cheaper, and more powerful. The emergence of practically unlimited storage and massive datasets (the big data movement) has also benefited the development of AI. These technologies provide access to vast amounts of data of all kinds, including images, text, transaction records, and map data.

Next, we will look at the development of AI, machine learning, and deep learning in turn.

AI and Its Applications


Figure 2

When the AI pioneers met at Dartmouth College, they dreamed of using the emerging computers of the time to build a machine with human-level intelligence. This is what we now call "general AI": a machine capable of reasoning and of perceiving the world through senses. It is a recurrent theme in films, from the human-friendly C-3PO to the enemy of humanity, the Terminator. So far, however, general AI machines remain fictional for a simple reason: we cannot build them yet.

One of the main challenges of general AI is its extensive scope. Instead of creating an all-purpose machine, we can narrow down our requirements to achieve specific goals. This task-specific implementation of AI is also known as narrow AI. There are many examples of narrow AI in reality, but how are they created? Where does the intelligence come from? The answer to these questions lies within the next topic – machine learning.

Machine Learning and Its Applications


Figure 3

The concept of machine learning comes from the early AI community, and it is one way to achieve AI. Early researchers developed algorithms that include decision tree learning, inductive logic programming, reinforcement learning, and Bayesian networks. In simple terms, machine learning uses algorithms to analyze data, learn from it, and then make inferences or predictions. Unlike traditional preprogrammed software, a machine learning system uses data and algorithms to "train" itself and improve its own behavior.
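To make the idea of "learning from data instead of being preprogrammed" concrete, here is a deliberately tiny sketch. The data values and the class labels are invented for illustration, and the "algorithm" is just a midpoint threshold, not any real machine learning library:

```python
# Toy illustration: deriving a classifier's rule from data rather than
# hard-coding it. Hypothetical data: one feature measured for two classes.
class_a = [1.0, 1.2, 0.8, 1.1]   # samples of class 0
class_b = [3.0, 2.8, 3.3, 3.1]   # samples of class 1

# "Learning" step: the decision threshold is computed from the data
# (the midpoint between the two class means), not chosen by a programmer.
mean_a = sum(class_a) / len(class_a)
mean_b = sum(class_b) / len(class_b)
threshold = (mean_a + mean_b) / 2

def predict(x):
    """Classify a new sample using the learned threshold."""
    return 1 if x > threshold else 0

print(threshold)      # 2.0375 for the data above
print(predict(2.5))   # 1
print(predict(0.9))   # 0
```

Feeding the program different data would produce a different threshold, which is the essential point: the behavior comes from the data, not from the code.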

Computer vision is one of the best-known applications of machine learning, but it traditionally required a great deal of manual coding. Researchers hand-wrote classifiers, such as edge detection filters, to help programs identify the boundaries of objects. Building on these hand-written classifiers, researchers could then develop algorithms for machines to analyze, identify, and understand images.

Nevertheless, this process is prone to errors because of the primitiveness of these hand-crafted techniques.
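A hand-written edge detection filter of the kind mentioned above can be sketched in a few lines. The image is a made-up 4x4 grayscale grid, and the filter is the simplest possible one (a vertical intensity difference), chosen only to show what "manually writing a classifier" means:

```python
# Hypothetical 4x4 grayscale image: a dark region above a bright region.
image = [
    [0, 0, 0, 0],
    [0, 0, 0, 0],
    [9, 9, 9, 9],
    [9, 9, 9, 9],
]

def edge_strength(img, row, col):
    """Hand-written rule: the difference between the pixel below and the
    pixel above. A large value marks a horizontal boundary between regions."""
    return abs(img[row + 1][col] - img[row - 1][col])

# Scan the interior pixels and flag strong responses as edge points.
edges = [
    (r, c)
    for r in range(1, len(image) - 1)
    for c in range(len(image[0]))
    if edge_strength(image, r, c) > 4
]
print(edges)  # the rows adjacent to the dark/bright boundary respond
```

The fragility the text describes is visible here: the threshold (4) and the rule itself are guesses by the programmer, and they break as soon as the images stop matching the programmer's assumptions.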

Deep Learning and Its Applications

Figure 4

Deep learning is a technique for implementing machine learning. Artificial neural networks, the idea underlying deep learning, appeared early on but remained obscure for decades after their invention. The idea of interconnecting simple components was inspired by the human brain, although the implementation differs significantly from biological systems. In the brain, a neuron can connect to any other neuron within a certain range to perform a variety of tasks; in an artificial neural network, data must flow through discrete layers in fixed directions.

For example, you can split an image into smaller pieces and feed them into the first layer of the neural network. The first layer performs a preliminary calculation, and its neurons then pass their results to the second layer, whose neurons carry out their own task. Every layer follows the same rule, and the final layer presents the result as an output.
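The layer-by-layer flow described above can be sketched as follows. The weight values here are made-up numbers chosen purely for illustration; in a real network they would be learned from data:

```python
# Minimal sketch of data flowing through the layers of a network.
def layer(inputs, weights):
    """Each neuron sums its weighted inputs and applies a simple
    nonlinearity (ReLU), then its output is passed to the next layer."""
    return [
        max(0.0, sum(w * x for w, x in zip(neuron_weights, inputs)))
        for neuron_weights in weights
    ]

# Two layers: 3 inputs -> 2 hidden neurons -> 1 output neuron.
w1 = [[0.5, -0.2, 0.1],
      [0.3, 0.8, -0.5]]
w2 = [[1.0, 0.5]]

x = [1.0, 2.0, 3.0]         # e.g., pixel values from one image piece
hidden = layer(x, w1)       # the first layer's results are passed on ...
output = layer(hidden, w2)  # ... and the second layer produces the output
print(hidden, output)       # prints the hidden activations and the output
```

Note how each layer only ever sees the previous layer's output, which is exactly the fixed, directional data flow that distinguishes artificial networks from the brain's free-form connectivity.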

Each neuron assigns a specific weight to its input. These weights determine the final output and are adjusted according to how correct or incorrect the neuron was on the task at hand. Consider a system analyzing a stop sign. The neurons subdivide and "check" the properties of the image, such as its shape, color, characters, size, and movement. The neural network produces a probability vector, which is in fact an estimate based on the weights. In this example, the system may be 86 percent sure that the image is a stop sign. The network is then told whether this judgment was correct.
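A probability vector like the one just described is typically produced by converting the network's raw scores into probabilities that sum to one (the softmax function). The scores below are invented for illustration, not the output of any real stop-sign detector:

```python
import math

# Hypothetical raw scores from a network for three candidate labels.
scores = {"stop sign": 4.0, "speed limit": 2.0, "other": 1.0}

# Softmax: exponentiate each score and normalize, so the results are
# positive and sum to 1, i.e. they can be read as probabilities.
total = sum(math.exp(s) for s in scores.values())
probs = {label: math.exp(s) / total for label, s in scores.items()}
print(probs)  # "stop sign" receives by far the highest probability
```

The highest entry of this vector is the network's best guess, and its value (here roughly 0.84) is the kind of "86 percent sure" figure mentioned above.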

The problem, however, is that even the most basic neural networks consume a great deal of computing resources, which made them infeasible at the time. A small group of dedicated researchers, led by Professor Geoffrey Hinton of the University of Toronto, stuck with the approach and eventually ran the algorithms in parallel on supercomputers, proving their viability.

Returning to the stop sign example, the accuracy of the prediction depends on the amount of training the neural network receives, which means constant training is necessary. Tens of thousands or even millions of images are needed to train the machine. With sufficient training, the neurons' input weights can be tuned so precisely that the network gives a consistently accurate answer.
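The weight adjustment that training performs can be sketched with a single artificial neuron (a perceptron). The four training samples, the learning rate, and the number of passes are all invented for illustration; real networks repeat the same idea across millions of weights and examples:

```python
# Toy sketch of "constant training": a single neuron's weights are
# nudged after every example until its predictions stop being wrong.
samples = [([0.0, 1.0], 0), ([1.0, 0.0], 1), ([1.0, 1.0], 1), ([0.0, 0.0], 0)]
weights = [0.0, 0.0]
bias = 0.0
lr = 0.1   # learning rate: how far each mistake moves the weights

def predict(x):
    """Fire (output 1) if the weighted sum of inputs exceeds zero."""
    return 1 if sum(w * v for w, v in zip(weights, x)) + bias > 0 else 0

for _ in range(20):              # repeated passes over the training set
    for x, target in samples:
        error = target - predict(x)
        # Adjust each weight in proportion to its input and to the error.
        weights = [w + lr * error * v for w, v in zip(weights, x)]
        bias += lr * error

print([predict(x) for x, _ in samples])  # [0, 1, 1, 0]: matches the targets
```

Every wrong answer moves the weights slightly toward a correct one; with enough passes over enough data, the adjustments settle into values that answer consistently, which is exactly the role of the millions of training images mentioned above.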

Today, machines trained with deep learning can outperform humans at image recognition, including challenging and critical tasks such as identifying signs of cancer in blood. Facebook uses a similar type of neural network to recognize faces in pictures, and Google's AlphaGo beat the world's best Go players by training its algorithms intensively.

Conclusion

The foundation of AI lies in machine intelligence, while machine learning refers specifically to the computational methods that support AI. Simply put, AI is the science, and machine learning comprises the methods that make it possible. To some extent, machine learning makes AI work. We hope this article has helped to explain the differences and relationships among these three forms of machine intelligence.
