How did deep learning lead to a new leap forward in artificial intelligence?

In this blog post, we look at how deep learning, the technology behind the third leap forward in artificial intelligence, overcame the limitations of the perceptron and combined with big data to drive innovation.


A third spring is blowing through the field of artificial intelligence. The first spring of AI arrived with the coining of the term “artificial intelligence” and the perceptron, the first implemented artificial neural network. However, the perceptron’s limitations were proven (a single-layer perceptron cannot learn functions that are not linearly separable, such as XOR), and the computers of the day could not support anything larger, ushering in the first AI winter. The second spring of AI came when expert systems were adopted in industry, where companies used them to achieve significant cost reductions. But as personal computers spread, the value of these expensive, specialized systems declined, and AI entered its second winter. So what brought about the third spring of AI?
The driving force behind the third spring of AI is deep learning. Deep learning can be described as an evolution of the artificial neural networks first discussed in the 1950s. Neural networks were long thought to be fundamentally limited, but as computer performance improved, problems that a single perceptron could not solve were solved with multi-layer perceptrons, and deep learning has since become the most advanced field in artificial intelligence.

A representative example of deep learning is AlphaGo, which is what most people now think of when they hear the term artificial intelligence. In March 2016, Google DeepMind’s AlphaGo defeated the professional Go player Lee Sedol. After chess fell to artificial intelligence, Go remained the last bastion, a realm machines dared not enter: a Go board admits more possible configurations than there are atoms in the observable universe, so playing it well was thought to require not just calculation but human intuition. AlphaGo, however, studied 160,000 game records comprising 30 million moves through deep learning, and on that basis played a million games against itself and learned from those as well. By analyzing this data, it could identify the best move by narrowing the search to a handful of meaningful candidates rather than calculating every possibility. Yet even with its formidable ability to “think,” AlphaGo could not have beaten Lee Sedol without those game records. For the core function of deep learning, analyzing and learning from data, the existence of data, that is, big data, is essential.
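To make the perceptron-versus-MLP point concrete, here is a minimal sketch, in plain NumPy with sigmoid activations and toy hyperparameters chosen only for illustration, of the classic XOR problem: no single perceptron can draw one line separating XOR’s outputs, but a network with even a small hidden layer learns it easily. This is not AlphaGo’s architecture, just the simplest possible multi-layer perceptron.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR truth table: not linearly separable, so a single perceptron fails here.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 4 units (a toy choice): 2 -> 4 -> 1.
W1, b1 = rng.normal(0.0, 1.0, (2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(0.0, 1.0, (4, 1)), np.zeros((1, 1))

lr = 1.0  # learning rate, tuned only for this toy problem
for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of mean squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Plain gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

# After training, the outputs approach the XOR targets [0, 1, 1, 0].
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).round(2))
```

Run the same training loop with no hidden layer and it stalls near 0.5 on every input, which is exactly the limitation that stalled neural network research for two decades.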
The important concept here is big data. With the popularization of PCs, smartphones, and the Internet, the amount of data in the world has grown exponentially. Unstructured data that is large in volume, diverse in type, and generated at high velocity is called big data, and the technology that analyzes it and converts it into meaningful information for us is called big data technology. With big data technology, machines have become capable of making sense of unstructured data and extracting meaningful information from it, and with that extracted information, artificial intelligence can compute more successfully. For example, suppose there is an artificial intelligence that translates Korean into English naturally. As a natural language, Korean is unstructured data, bound to no fixed schema or format, so big data technology is needed to analyze it. In other words, deep learning is the oar that moves the boat of artificial intelligence, but big data is the rower who pulls it. As mentioned earlier, deep learning grew out of overcoming the limitations of the 1950s perceptron. But considering that big data technology, which has developed rapidly since the 1980s, does much of the data-processing work in deep learning, it is fair to say that today’s sudden rise of deep learning owes a great deal to the emergence of big data.
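As a toy illustration of that idea (the sample sentences and stopword list below are invented for this sketch, not drawn from any real dataset), here is how unstructured free-form text can be reduced to structured counts that a downstream model could actually consume:

```python
import re
from collections import Counter

# Free-form text with no fixed schema: this is "unstructured" data.
reviews = [
    "Great battery life, and the screen is bright.",
    "The battery died after a day. Screen is fine though.",
    "Bright screen, decent battery, terrible speakers.",
]

# A hand-picked stopword list, purely for this toy example.
STOPWORDS = {"the", "and", "is", "a", "after", "though", "fine"}

def tokenize(text):
    # Lowercase the text and keep only alphabetic tokens.
    return re.findall(r"[a-z]+", text.lower())

# Reduce the raw sentences to structured (word, count) pairs.
counts = Counter(
    token
    for review in reviews
    for token in tokenize(review)
    if token not in STOPWORDS
)

# The most frequent content words hint at what the raw text is "about".
print(counts.most_common(3))  # [('battery', 3), ('screen', 3), ('bright', 2)]
```

Real big data pipelines are vastly more sophisticated, but the shape of the work is the same: turn raw, schema-less input into structured signals a learning algorithm can use.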
Of course, today’s artificial intelligence, and deep learning in particular, is the product of many technologies working together. Had any one of them been missing, whether computer hardware performance, more advanced algorithms, or data processing, the breakthrough of deep learning might well have been postponed to the next generation. However, while computer hardware and algorithms developed as subfields of AI, big data developed independently, and it is a broad and important field in its own right that also applies to AI.
Immediately after AlphaGo swept through South Korea in 2016, the South Korean government announced a total of 3.5 trillion won of investment in artificial intelligence: 1 trillion won of public funds over five years, plus more than 2.5 trillion won to be attracted in private investment. But for artificial intelligence to compute more accurately, the big data technology underpinning it must develop as well. South Korea’s big data technology currently lags that of leading countries by about 3.3 years, and its share of the global market is only about 1% (as of 2024). Investing in AI matters, but sufficient investment in big data analysis technology will also help the development of AI in South Korea, letting the two fields interact and grow together.


About the author

EuroCreon

I collect, refine, and share content that sparks curiosity and supports meaningful learning. My goal is to create a space where ideas flow freely and everyone feels encouraged to grow. Let’s continue to learn, share, and enjoy the process – together.