The Contributions of Geoffrey Hinton: The Godfather of AI

Geoffrey Hinton, together with Yann LeCun and Yoshua Bengio, received the 2018 Turing Award for conceptual and engineering breakthroughs that made deep artificial neural networks a critical component of modern computing.

The Turing Award, conferred by the Association for Computing Machinery and named after English computer scientist and wartime codebreaker Alan Mathison Turing, is considered the Nobel Prize of computer science. This award, however, is not his only accolade.

Why is Geoffrey Hinton Considered the Godfather of AI: A Look Into His Notable Accomplishments and Contributions

Geoffrey Hinton has been regarded as the Godfather of Artificial Intelligence alongside LeCun and Bengio. He is a British-Canadian cognitive psychologist and computer scientist known for his work on artificial neural networks and other concepts that helped advance several subfields of artificial intelligence.

He has authored or co-authored more than 200 papers, with research centered on using neural networks for machine learning, memory, perception, and symbol processing. In 2013, he joined Google while also serving as a computer science professor at the University of Toronto and holding the Canada Research Chair in Machine Learning.

Backpropagation Algorithm for Artificial Neural Networks and Deep Learning

He co-authored, with David Rumelhart and Ronald J. Williams, the highly cited 1986 paper that popularized the use of the backpropagation algorithm for training multilayer neural networks. Take note that they were not the first to propose this approach to neural network training, but their paper demonstrated an efficient and effective method for calculating the gradient of the loss function with respect to the weights of the network.

Backpropagation is a supervised learning algorithm. It works by calculating the error or loss between the predicted output of the neural network and the true output, and then using this error to update the weights of the network so that future predictions move closer to the target. In simple terms, it repeatedly adjusts the numbers inside the network, called weights, in the direction that reduces the error until the outputs match the expected results. The algorithm essentially helps a neural network learn by giving it feedback on its performance.
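
To make the mechanics concrete, here is a minimal sketch of backpropagation written from scratch in NumPy, training a tiny two-layer network on the XOR problem. The task, layer sizes, learning rate, and iteration count are illustrative choices for this sketch, not details from the 1986 paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # targets (XOR)

W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)     # input -> hidden
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)     # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute the prediction from the current weights.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the layers to get
    # the gradient of the squared loss with respect to every weight and bias.
    grad_out = (p - y) * p * (1 - p)                 # error through output sigmoid
    grad_W2, grad_b2 = h.T @ grad_out, grad_out.sum(axis=0)
    grad_hidden = (grad_out @ W2.T) * h * (1 - h)    # error through hidden sigmoid
    grad_W1, grad_b1 = X.T @ grad_hidden, grad_hidden.sum(axis=0)

    # Gradient-descent step: nudge every parameter in the direction that reduces the error.
    for param, grad in ((W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)):
        param -= 0.5 * grad

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))   # approaches [0, 1, 1, 0]
```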

Subsequent studies that drew inspiration from this paper brought forth new approaches and models that enabled faster and more accurate training of artificial neural networks. These became critical factors in the recent success of practical artificial neural network applications and the expansion of deep learning. Specific applications include speech recognition, image recognition, and natural language processing.

Restricted Boltzmann Machines for Learning to Represent Complex Data Patterns

Another notable contribution of Hinton to the field of artificial intelligence is the Boltzmann machine and its restricted variant, the Restricted Boltzmann Machine or RBM. He co-invented the Boltzmann machine with Johns Hopkins University professor Terry Sejnowski and University of New Mexico professor Dave Ackley in 1985 as part of their effort to develop a learning algorithm that can automatically detect inherent patterns in data by reconstructing its input.

RBMs are a type of artificial neural network and an example of a generative model: they can learn to represent complex patterns in data and can also generate new data similar to the data they were trained on. Think of them as two-part machines. The first part, the visible layer, looks at the data, and the second part, the hidden layer, tries to find patterns in it. The two layers work together to learn which patterns show up most often and can then generate new data based on what they have learned.
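
The following NumPy sketch shows how the two layers interact, trained with one step of contrastive divergence, the fast approximate learning rule Hinton later popularized for RBMs. The toy dataset, layer sizes, and learning rate are illustrative assumptions, not values from the original work.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

n_visible, n_hidden = 6, 3                        # toy sizes, chosen arbitrarily
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

# Tiny binary dataset: two repeated patterns the RBM should learn to model.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 10, dtype=float)

for epoch in range(2000):
    v0 = data
    # Positive phase: the hidden layer "looks" at the data.
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: reconstruct the visible layer, then re-infer the hidden layer.
    pv1 = sigmoid(h0 @ W.T + b_v)
    ph1 = sigmoid(pv1 @ W + b_h)
    # Contrastive-divergence update: move toward patterns seen in the data
    # and away from patterns the model currently generates on its own.
    W += 0.05 * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_v += 0.05 * (v0 - pv1).mean(axis=0)
    b_h += 0.05 * (ph0 - ph1).mean(axis=0)

# Reconstructing a noisy input should move it back toward a learned pattern.
test = np.array([[1, 1, 0, 0, 0, 0]], dtype=float)
print(np.round(sigmoid(sigmoid(test @ W + b_h) @ W.T + b_v), 2))
```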

RBMs are generally used for unsupervised learning and are capable of learning a probability distribution over a set of inputs. This makes them useful for dimensionality reduction, classification, regression, collaborative filtering, feature learning, and topic modeling, as well as generative AI applications. Some of their use cases include image and speech recognition, recommendation systems, and natural language processing.

Research on Capsule Neural Networks for Hierarchical Relationship Modelling

In a 2000 paper, Hinton and other researchers described an imaging system that combined segmentation and recognition into a single inference process using parse trees. He later published two papers with other researchers in 2017 and 2018 about capsule neural networks. Note that a capsule neural network is a machine learning system based on an artificial neural network that mimics biological neural organization to model hierarchical relationships.

The paper “Dynamic Routing Between Capsules” described a computer system built on a technique called “routing-by-agreement” that was adept at recognizing handwritten numbers in the famous MNIST dataset. The system can tell apart numbers that are written very close to each other because it looks at the different features of a number, such as how long or curved its lines are, and then chooses the best “capsule,” or group of neurons, to handle each feature. These capsules work together to accurately recognize the number.
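
Below is a rough NumPy sketch of the routing-by-agreement loop described in the paper: lower-level capsules make prediction vectors for higher-level capsules, and coupling coefficients are repeatedly re-weighted toward the capsules that agree with those predictions. The array shapes and random inputs are illustrative assumptions.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    # Squashing non-linearity: keeps a vector's direction, maps its length into (0, 1).
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def routing_by_agreement(u_hat, n_iters=3):
    """u_hat: prediction vectors with shape (num_lower, num_upper, dim)."""
    num_lower, num_upper, _ = u_hat.shape
    b = np.zeros((num_lower, num_upper))          # routing logits, start neutral
    for _ in range(n_iters):
        # Coupling coefficients: each lower capsule distributes its output
        # across the upper capsules in proportion to the current agreement.
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        s = (c[..., None] * u_hat).sum(axis=0)    # weighted sum per upper capsule
        v = squash(s)                             # upper-capsule outputs
        # Agreement: dot product between predictions and outputs updates the logits.
        b = b + (u_hat * v[None, :, :]).sum(axis=-1)
    return v

rng = np.random.default_rng(0)
u_hat = rng.normal(size=(8, 2, 4))                # 8 lower capsules, 2 upper, 4-D poses
print(routing_by_agreement(u_hat).shape)          # (2, 4)
```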

Furthermore, in “Matrix Capsules with EM Routing,” Hinton and his coauthors described another capsule-based computer system, one that can tell what an object looks like from different angles. These capsules were trained to work together using a special algorithm called Expectation-Maximization. Each capsule has a special way of looking at an object and a matrix to represent how the object is positioned in space.

Computer Vision Through AlexNet Convolutional Neural Network Architecture

Ukrainian-born Canadian computer scientist Alex Krizhevsky collaborated with Israeli-Canadian computer scientist Ilya Sutskever and Hinton to design AlexNet for the 2012 ImageNet Challenge. Take note that Krizhevsky and Sutskever were students of Hinton, and that ImageNet is a large visual database project composed of more than 14 million hand-annotated images designed for use in visual object recognition software research.

AlexNet is a convolutional neural network architecture for computer vision designed to categorize images into one of the 1000 object categories in the ImageNet dataset. It achieved a top-5 error rate of 15.3 percent in the ImageNet Large Scale Visual Recognition Challenge held in 2012, more than 10.8 percentage points lower than that of the runner-up. The architecture consisted of five convolutional layers, some of them followed by max-pooling layers, and three fully connected layers.
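
As a rough illustration, the layer stack can be restated in PyTorch along the following lines. This is a single-GPU approximation of the published architecture: the original split its filters across two GPUs and used local response normalization, which is omitted here, so treat it as a sketch rather than a faithful reproduction.

```python
import torch
from torch import nn

# Five convolutional layers (some followed by max pooling) and three fully
# connected layers, with channel counts following the 2012 paper.
alexnet = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),
    nn.Flatten(),
    nn.Dropout(0.5), nn.Linear(256 * 6 * 6, 4096), nn.ReLU(),
    nn.Dropout(0.5), nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                        # one logit per ImageNet category
)

logits = alexnet(torch.randn(1, 3, 224, 224))     # a dummy 224x224 RGB image
print(logits.shape)                               # torch.Size([1, 1000])
```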

The architecture has influenced the further development of deep learning and CNN architectures. Furthermore, the AlexNet paper has been regarded as one of the most influential studies published in the subfield of computer vision because it was instrumental in spurring more papers that employed convolutional neural networks and discrete graphics processing units to accelerate machine learning and, more specifically, deep learning.

Forward-Forward Algorithm as a Novel Learning Procedure for Neural Networks

In 2022, at the Conference on Neural Information Processing Systems, Hinton introduced a new learning algorithm for neural networks that he called the forward-forward algorithm. He proposed replacing the traditional forward and backward passes of backpropagation with two forward passes, one on positive data and one on negative data. Further information about this new learning procedure was detailed in a pre-print paper published in December 2022.

The forward-forward algorithm is a greedy multi-layer learning procedure that does not require storing neural activities or propagating error derivatives across layers. It enables a system to learn by looking at both “good” and “bad” examples of something. The result is a system that becomes “excited” or “active” when it sees a “good” example and much less excited when it sees a “bad” one. The procedure also enables the system to learn while pipelining sequential data through a neural network or while performing other tasks.
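
The following NumPy sketch shows the core idea for a single layer, under the simplifying assumption that “goodness” is the sum of squared activities and that each layer is trained with a purely local logistic objective. The toy positive and negative data, the threshold, and the learning rate are made up for illustration and are not values from Hinton's 2022 paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 16))            # one ReLU layer, toy sizes
theta = 2.0                                        # goodness threshold

def layer_step(W, x, label, lr=0.03):
    """One purely local update; label is 1.0 for positive data, 0.0 for negative data."""
    pre = x @ W
    h = np.maximum(pre, 0.0)                       # forward pass only
    goodness = (h ** 2).sum(axis=1)                # "goodness" = sum of squared activities
    p = 1.0 / (1.0 + np.exp(-(goodness - theta)))  # probability "this input is positive"
    # Gradient of the layer-local logistic objective; nothing is propagated
    # back to any earlier layer, unlike in backpropagation.
    dgood = (p - label)[:, None]
    W -= lr * (x.T @ (dgood * 2.0 * h * (pre > 0))) / len(x)   # in-place update

for step in range(500):
    layer_step(W, rng.normal(loc=0.5, size=(32, 8)), 1.0)      # positive pass on "real" data
    layer_step(W, rng.normal(loc=-0.5, size=(32, 8)), 0.0)     # negative pass on "negative" data

# After training, goodness on fresh positive data should mostly exceed the threshold.
h = np.maximum(rng.normal(loc=0.5, size=(32, 8)) @ W, 0.0)
print(((h ** 2).sum(axis=1) > theta).mean())
```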

Furthermore, while it is slower than backpropagation and does not generalize quite as well on several of the toy problems investigated, it has the advantage of being more biologically plausible. The algorithm is a candidate model for learning in the cortex and is suited to situations where the precise details of the forward computation are unknown, or where very low-power analog hardware must be used without resorting to reinforcement learning.

Photo credit: Eviatar Bach / Geoffrey Hinton at UBC / Adapted / CC BY-SA 3.0

FURTHER READINGS AND REFERENCES

  • Hinton, G. 2022. “The Forward-Forward Algorithm: Some Preliminary Investigations.” arXiv. DOI: 10.48550/arXiv.2212.13345
  • Hinton, G. E., Sabour, S., and Frosst, N. 2018. “Matrix Capsules with EM Routing.” ICLR 2018 Conference Blind Submission
  • Rumelhart, D. E., Hinton, G. E., and Williams, R. J. 1986. “Learning Representations by Back-Propagating Errors.” Nature. 323(6088): 533-536. DOI: 10.1038/323533a0
  • Sabour, S., Frosst, N., and Hinton, G. E. 2017. “Dynamic Routing Between Capsules.” arXiv. DOI: 10.48550/arXiv.1710.09829