A Background of the AI Revolution

The introduction of ChatGPT in November 2022 marked the beginning of the modern artificial intelligence or AI revolution. This, at least, is how much of the discourse that has transpired since has framed it. However, based on historical developments in the field, this framing is not entirely accurate. The exact origin of the revolution is hard to pinpoint because the field has experienced both monumental turning points and periods of stagnation. Still, it cannot be denied that the current pace at which artificial intelligence is developing is unprecedented. This is evident from the emergence of new algorithms and models, the increase in investments, the arrival of startups, developments in hardware capabilities, the emergence of practical end-use applications, and the public attention that the field has attracted.

Understanding the Artificial Intelligence Revolution: History, Origins, and Factors that Contributed to the Modern AI Revolution

Early History of Artificial Intelligence

The origins of artificial intelligence can be traced as far back as ancient times, when thinkers imagined the possibility of creating artificial beings endowed with intelligence and a level of consciousness comparable to that of humans. These ancient ideations were not enough on their own to warrant the development of a new field. Nevertheless, through prominent philosophers such as Plato and Aristotle, the field of artificial intelligence was also founded on earlier philosophies and notions about the nature of knowledge, perception, and reasoning.

Spanish philosopher and theologian Ramon Llull designed a system of universal logic founded on a set of general principles operated through a combinatorial process. It was called the Art, and its development began in 1274. The system was unusual in its time because it used letters and diagrams in ways that resemble modern processes in algebra and algorithms. This made it a precursor to computer science and computation theory. It also made Llull one of the earliest thinkers recognized for developing a system that resembled modern computing.

The works of other philosophers also fed into modern computation theory and the later field of artificial intelligence. Gottfried Wilhelm Leibniz developed calculus and other branches of mathematics beginning in the 1670s, and some have considered him the first computer scientist and information theorist. The automata of René Descartes, first described around 1633, and the calculation-based account of thinking that Thomas Hobbes proposed in 1651 also influenced later discussions and conceptions of machine intelligence.

Several scientists from different fields began discussing the development of an artificial brain in the middle of the 20th century. Among these researchers, Alan Turing is renowned for pouring substantial research into what he called machine intelligence, a pursuit that began around 1941. He laid several pieces of theoretical groundwork, including the Turing Test and the mathematical model of computation called the Turing machine. Walter Pitts and Warren McCulloch also proposed the idea of networks of artificial neurons in 1943.

Then, during a summer workshop at Dartmouth College in 1956, computer scientist and cognitive scientist John McCarthy officially introduced the term artificial intelligence. This is generally considered the pivotal event that launched artificial intelligence as a separate field and a formal academic discipline. Researchers from different fields who were interested in pursuing inquiries about the possibilities of intelligent machines were present at the event. The Dartmouth Workshop also laid the foundation for future research directions.

Origins of the Modern AI Revolution

The modern AI revolution that became more pronounced beginning in late 2022 was not a direct product of the ideas and accomplishments of ancient times, the Middle Ages, and the middle of the 20th century; there was no sudden jump across the timeline. It transpired as part of the overall history of artificial intelligence and as a culmination of previous and ongoing developments in the field. It is important to underscore that numerous developments prior to 2022 made the applications of artificial intelligence more practical.

For example, from 1980 to 1987, the field experienced a boom. The period marked the arrival of programs called expert systems, which emulated the decision-making process of human experts and were used in corporations around the world. These systems resulted from a new AI research paradigm that focused on knowledge. There was also a sizeable increase in investments during this period, but it was later halted by dozens of business failures. The subsequent period, which lasted from 1987 to 1993, has been called the second artificial intelligence winter.

It is still important to note that the field of artificial intelligence has expanded into different subfields since 1956, each with its own developmental track. The most significant are the subfields of machine learning and natural language processing. Take note of the chess-playing supercomputer known as Deep Blue, which relied on large-scale search rather than learning: in 1997 it became the first computer to defeat a reigning world chess champion under standard tournament conditions. Research on artificial neural networks was also an ongoing pursuit in the 1990s and 2000s following its 1982 revival.

The year 2011 marked the arrival of novel technologies that helped develop the field further while paving the way for the emergence of its practical applications. The period also marked the rise of a branch of machine learning called deep learning and a growing interest in Big Data. Deep learning is based on research on artificial neural networks, and access to Big Data was crucial in training deep learning models. This subset of machine learning started to outperform other machine learning methods and models in the late 2000s and early 2010s.
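
To make the idea concrete, the following snippet sketches a tiny forward pass through stacked layers of artificial neurons, the structure that deep learning builds on. It is purely illustrative: the layer sizes and random weights are assumptions made for this example, and the training step that requires large datasets is omitted.

```python
# Minimal, illustrative sketch of stacked artificial neurons (no training).
import numpy as np

def relu(x):
    # Rectified linear unit: a common nonlinearity applied between layers.
    return np.maximum(0, x)

def forward(x, layers):
    """Pass an input vector through a stack of (weights, bias) layers."""
    for weights, bias in layers:
        x = relu(weights @ x + bias)  # each layer applies a linear map plus a nonlinearity
    return x

rng = np.random.default_rng(1)
layers = [
    (rng.normal(size=(8, 4)), np.zeros(8)),  # hidden layer: 4 inputs -> 8 units
    (rng.normal(size=(2, 8)), np.zeros(2)),  # output layer: 8 units -> 2 outputs
]
print(forward(rng.normal(size=4), layers))   # two output activations
```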

Several applications of AI became more pronounced at this point. The technology fueled the recommendation systems used in search engines, electronic commerce, digital advertising, social networking and social media platforms, and virtual assistants, among others. AI also began endowing consumer electronic products such as personal computers and smartphones with new sets of features. The more evident early end-use applications included predictive text, language translation, voice or speech recognition, image recognition, and computational photography.

What made the current AI revolution possible was the arrival of large language models. One of the most popular was the Generative Pre-trained Transformer or GPT model of OpenAI, which was based on the transformer architecture that Google researchers introduced in 2017. These models have been pivotal in the modern AI revolution because they enabled practical AI applications such as chatbots and intelligent agents, multimodal models with some level of computer vision capabilities, and various generative artificial intelligence applications.
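
As an illustration of the transformer architecture mentioned above, the snippet below sketches scaled dot-product attention, the core operation of that architecture. It is a simplified, assumed-for-illustration version rather than code from OpenAI or Google, and it omits multi-head projections, masking, and other details of real models.

```python
# Illustrative sketch of scaled dot-product attention (single head, no masking).
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Weight each value by how strongly its key matches each query."""
    d_k = queries.shape[-1]                         # dimensionality of queries/keys
    scores = queries @ keys.T / np.sqrt(d_k)        # similarity of every query to every key
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ values                         # blend values by attention weight

# Toy self-attention over 3 token positions with 4-dimensional embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(tokens, tokens, tokens).shape)  # (3, 4)
```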

Factors Driving Current Developments

There are several factors that have contributed to and are driving the modern AI revolution. One of the most critical is the exponential growth in computing power. Moore's Law predicted that the number of transistors on a chip would double roughly every two years, and this prediction largely held true. More powerful processors packed with billions of transistors emerged. As a consequence, processors such as discrete graphics processors now provide the computing power needed to run more complex AI algorithms and train larger AI models on massive datasets drawn from Big Data.
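
As a rough illustration of what doubling every two years implies, the back-of-the-envelope calculation below projects transistor counts under that assumption. The starting figure of one billion transistors is hypothetical, not a specific product's specification.

```python
# Back-of-the-envelope Moore's Law projection (illustrative numbers only).
def projected_transistors(initial_count, years, doubling_period_years=2):
    """Project transistor count assuming a doubling every fixed period."""
    return initial_count * 2 ** (years / doubling_period_years)

# A hypothetical chip with 1 billion transistors, projected 10 years out:
print(projected_transistors(1_000_000_000, 10))  # 32,000,000,000 (about 32 billion)
```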

It is important to underscore that semiconductor companies have been making significant investments in processors intended for AI research and development. Nvidia is one of the largest suppliers of these processors in the world. Other chipmakers have also produced chips with local or native capabilities for handling end-use AI workloads. Examples include AI accelerators such as the Neural Engine found in Apple chips and the Deep Learning Boost architecture found in several models of Intel Xeon and Intel Core Ultra processors.

The contribution of chipmaking to the modern AI revolution is twofold. The first part centers on its role in equipping academic researchers and tech companies with the computing power needed to develop more advanced AI algorithms and train larger AI models. The second centers on the arrival of chips that equip consumer electronic devices with built-in capabilities for running and handling end-use or consumer-oriented AI applications. This twofold impact of advances in chipmaking links research and development with commercialization.

Another factor driving the development of AI is the increase in investments in various AI pursuits. This has resulted in the expansion of dedicated teams in big technology companies and the emergence of new AI companies, with OpenAI as a leading example. Investments in AI startups reached a record USD 133 billion in 2022, and overall investments across the globe reached USD 350 billion in 2023. The inflow of capital to AI is focused on the AI strategies of established companies, dedicated AI companies, and the AI initiatives of governments.

Both social acceptance and public awareness are also driving the modern AI revolution. This is evident from the rise of effective accelerationism, which aims to counter decelerationist and effective altruism standpoints. Another example is the ongoing attempt to democratize the field through open-source development and distribution. The growing reliance on interdisciplinary collaborations to drive further growth, tackle critical challenges, and address alignment needs also illustrates the expanding importance of artificial intelligence.