There has been an explosion of artificial intelligence applications in recent years due to developments across its various subfields. The emergence of foundation models has expanded the practical applications of AI to include generative models that can produce their own data and content with minimal to zero human intervention.
Artificial intelligence is also changing the manner in which people use the internet and interact with their devices. It is upending different business processes and occupations with the rise of intelligent systems and autonomous agents, while also creating new business models and ecosystems that maximize the full potential of AI applications.
However, despite current use cases and possibilities, artificial intelligence is still confronted with several issues that slow down further development and the widespread deployment of specific AI systems, architectures, algorithms, and models. This article explores key challenges in the field of artificial intelligence.
Notable Challenges in Advancing the Field of Artificial Intelligence: Main Development and Deployment Issues
Technical Infrastructure and Computational Power
Advancing the field of artificial intelligence requires developing and deploying artificial intelligence systems. The entire process relies on the technical infrastructure of the developers, which consists of central processing units and discrete graphics processors, large storage mediums, networking infrastructure, and specific security solutions.
Introducing and implementing a larger artificial intelligence system, such as a foundation model, requires better and more sophisticated technical infrastructure. Training a model, for example, is a time-consuming process that depends on computing power. Limited computing power increases the time it takes to train a model.
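The relationship between compute and training time can be sketched with a hypothetical back-of-the-envelope estimate. All figures below are illustrative assumptions, not measurements from the article:

```python
# Hypothetical sketch: training time scales inversely with effective compute.
# The workload size, throughput, and utilization figures are made up for
# illustration only.
def training_days(total_flops: float, flops_per_second: float,
                  utilization: float = 0.4) -> float:
    """Estimate wall-clock training days at a given hardware utilization."""
    seconds = total_flops / (flops_per_second * utilization)
    return seconds / 86_400  # seconds per day

# The same workload on half the compute takes twice as long.
fast = training_days(1e21, 1e15)   # ~29 days
slow = training_days(1e21, 5e14)   # ~58 days
print(f"{fast:.0f} days vs {slow:.0f} days")
```

The point of the sketch is simply that halving available compute doubles wall-clock training time, which is why limited infrastructure directly slows development.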
Scalability is also important. A model that becomes more complex needs to be scaled up. This involves increasing the computing power and storage capacities of its underlying technical infrastructure. Real-time processing is also essential because on-demand AI applications such as speech recognition involve real-time decision-making.
OpenAI researchers Amodei and Hernandez explained that the amount of computing power required to develop large AI models has doubled every 3.5 months since 2012, and that improvements in computing have been a critical component of advancing artificial intelligence. Hardware manufacturers are now developing AI-specific chips to address the growing computational demands of the field.
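To get a feel for what a 3.5-month doubling period means, the implied compound growth can be computed directly (a minimal sketch; the doubling period is the only figure taken from the text):

```python
# Compound growth implied by a fixed doubling period.
def growth_factor(months: float, doubling_period: float = 3.5) -> float:
    """Return the multiplicative growth over `months` given a doubling period."""
    return 2 ** (months / doubling_period)

# A 3.5-month doubling period implies roughly a tenfold increase per year.
per_year = growth_factor(12)
print(f"Implied growth per year: {per_year:.1f}x")
```

This is why the trend is so demanding on infrastructure: a yearly ~10x increase in required compute quickly outpaces hardware improvements alone.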
Financial Resources Due to High Cost of Operations
The estimated annual operating cost of OpenAI is between USD 250 million and USD 1 billion. Other tech companies have made significant investments to build and drive their AI capabilities. Microsoft has invested USD 10 billion in OpenAI. Google has also made several investments, including almost USD 4 billion in acquisitions.
Cost is another challenge in artificial intelligence. This cost comes from building, operating, expanding, and maintaining the needed technical infrastructure; developing and deploying artificial intelligence architectures, algorithms, and models; and other operational expenses, including the salaries of researchers and other talent.
Training a model can be expensive. A 2020 study noted that the cost of training a model with 1.5 billion parameters was estimated at USD 1.6 million. Advances in hardware and software have brought down this cost. A 2023 study revealed that training a model with 12 billion parameters costs hundreds of thousands of dollars.
However, remember that there are other costs involved. Running the popular chatbot ChatGPT costs OpenAI around USD 700,000 per day, and using the Google chatbot Bard for web searches costs 10 times as much as performing regular web searches via Google Search. The steep price tag of running these AI applications comes from on-demand processing.
Issues with Data Availability and Privacy Concerns
The capabilities of a particular AI system depend on data. Individual and institutional researchers and developers need access to vast amounts of data, or big data, if they want to build and train a particular AI model. Most models are trained with data that are accessible to the public, but obtaining and categorizing these data require substantial resources.
Note that the foundation language model LLaMA from Meta Platforms was trained on scraped webpages, Wikipedia articles, public domain books, questions and answers from Stack Exchange websites, and LaTeX source code for scientific papers uploaded to ArXiv. The same is true for large language models such as GPT-3 and GPT-4.
The quantity and quality of data are important. It is not enough to train a model with huge datasets. The model should also be fed with high-quality data. Collecting large amounts of reliable data, ensuring data quality, and addressing issues such as missing or inconsistent data can be significant challenges in the field of artificial intelligence.
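Handling missing or inconsistent records is one concrete form this challenge takes. The following is a minimal, hypothetical sketch; the field names and validity rules are invented for illustration and are not from any particular pipeline:

```python
# Hypothetical data-quality filter applied before records reach a training
# pipeline. The schema (text + label) is an assumption for illustration.
records = [
    {"text": "A labeled example.", "label": "positive"},
    {"text": "", "label": "negative"},            # missing text
    {"text": "Another example.", "label": None},  # missing label
    {"text": "Bad label.", "label": "positve"},   # inconsistent label (typo)
]

VALID_LABELS = {"positive", "negative"}

def is_clean(record: dict) -> bool:
    """A record is usable only if it has non-empty text and a known label."""
    return bool(record.get("text")) and record.get("label") in VALID_LABELS

clean = [r for r in records if is_clean(r)]
print(f"{len(clean)} of {len(records)} records usable")
```

Even this toy filter discards most of the sample, hinting at why curating large, reliable datasets consumes substantial resources.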
Furthermore, aside from quantity and quality, models need to be trained with diverse data to generalize well and handle various scenarios. There are factors that limit access to diversified datasets. These include data privacy and security concerns, the capabilities of developers to categorize and label data, and ethical issues concerning data handling.
Emergence of Negative Externalities and Offshoots
The field of artificial intelligence can bring forth positive changes at both the micro and macro levels of societies and economies. These come with several tradeoffs that have raised concerns, criticisms, and skepticism. Some have even questioned the sustainability of developing and deploying specific artificial intelligence systems and applications.
One of the more specific challenges in artificial intelligence centers on ethical and legal externalities. Remember that data privacy and security concerns hamper access to the data needed for training models. Beyond data access, however, privacy concerns need to be addressed to build trust and wider public acceptance of AI applications.
Ethical dilemmas are another offshoot. Even prominent individuals in tech, such as Tesla chief executive Elon Musk and Apple co-founder Steve Wozniak, have expressed concerns over the dangers of advanced AI systems such as artificial general intelligence. Others have noted that AI can lead to unemployment or the displacement of workers.
Another challenge in the field of artificial intelligence is the environmental impact that comes with the development and deployment of AI systems. Remember that AI modeling and the utilization of AI models require significant computational power. This can lead to increased energy consumption and increased carbon emissions.
FURTHER READINGS AND REFERENCES
- Amodei, D. and Hernandez, D. 2018. “AI and Compute.” OpenAI. Available online
- Biderman, S., Schoelkopf, H., Anthony, Q., Bradley, H., O’Brien, K., Hallahan, E., Khan, M. A., Purohit, S., Prashanth, U. S., Raff, E., Skowron, A., Sutawika, L., and Van Der Wal, O. 2023. “Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling.” arXiv. DOI: 10.48550/ARXIV.2304.01373
- Kersting, K. and Meyer, U. 2018. “From Big Data to Big Artificial Intelligence?” In Künstliche Intelligenz. 32(1): 3-8. DOI: 10.1007/s13218-017-0523-7
- Ligozat, A. L., Lefevre, J., Bugeau, A., and Combaz, J. 2022. “Unraveling the Hidden Environmental Impacts of AI Solutions for Environment Life Cycle Assessment of AI Solutions.” Sustainability. 14(9): 5172. DOI: 10.3390/su14095172