
Why were AI scientists awarded the Nobel Prize for physics?

  • Abhijit Ahaskar
  • Oct 15, 2024
  • 3 min read

Updated: Mar 13


Geoffrey Hinton, John Hopfield

This year’s Nobel Prize for physics, shared by Geoffrey E Hinton and John J Hopfield, was the first to recognise groundbreaking work in the field of artificial intelligence (AI) and machine learning (ML). 

The award recognizes their “foundational discoveries and inventions that enable machine learning with artificial neural networks,” said the Royal Swedish Academy of Sciences, which awards the Nobel Prize for physics.


Hopfield, a professor at Princeton University, started his career as a physicist and later moved on to studying how neurons in the human brain could serve as a model for machines that can think. His pioneering creation, the Hopfield network, was designed to save and recreate patterns. It is an associative memory model that showed how a neural network in a machine can process information while remembering connections and patterns.
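
To give a flavour of the idea, here is a minimal, hypothetical sketch of a Hopfield network in Python (an illustration, not the laureates' original formulation): a pattern is stored in a symmetric weight matrix with a Hebbian rule, then recovered from a corrupted copy by repeatedly updating the units.

```python
import numpy as np

def train_hopfield(patterns):
    """Build a symmetric weight matrix from +/-1 patterns (Hebbian learning)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)            # no self-connections
    return W / len(patterns)

def recall(W, state, steps=10):
    """Repeatedly update the units until the network settles."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1
    return state

# Store one 6-unit pattern, then recover it from a corrupted version.
stored = np.array([[1, -1, 1, -1, 1, -1]])
W = train_hopfield(stored)
noisy = np.array([1, 1, 1, -1, 1, -1])    # one unit flipped
print(recall(W, noisy))                   # -> [ 1. -1.  1. -1.  1. -1.]
```

The network settles into the stored pattern closest to its starting state, which is the sense in which it acts as an associative memory.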


Geoffrey E Hinton, who is widely regarded as one of the founding fathers of AI, built on Hopfield’s work. He used the principles of statistical physics to build the Boltzmann machine, which can be trained to recognize patterns in data and classify images, and helped develop mathematical tools such as the back-propagation algorithm that are now used to train neural networks.


In a Boltzmann machine, every node is connected to every other node, allowing a bi-directional flow of information. It laid the groundwork for unsupervised learning, in which hidden patterns are discovered in data without explicit labels.
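
As a rough sketch of what that looks like in code, the hypothetical Python snippet below sets up a small, fully connected Boltzmann machine with symmetric random weights and draws samples from it using Gibbs updates; real Boltzmann machines learn their weights from data, a step omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units = 6
W = rng.normal(scale=0.5, size=(n_units, n_units))
W = (W + W.T) / 2                  # symmetric, bi-directional connections
np.fill_diagonal(W, 0)             # no self-connections
b = np.zeros(n_units)              # biases

def energy(s):
    """Energy of a binary (0/1) state under the Boltzmann machine."""
    return -0.5 * s @ W @ s - b @ s

def gibbs_step(s):
    """Resample each unit given the others; low-energy states are more likely."""
    for i in range(n_units):
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ s + b[i])))
        s[i] = rng.random() < p_on
    return s

s = rng.integers(0, 2, size=n_units).astype(float)
for _ in range(100):
    s = gibbs_step(s)
print("sampled state:", s, "energy:", energy(s))
```

Configurations with lower energy turn up more often, and training nudges the weights so that these frequent configurations match the patterns present in the training data.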


Even though many feel that their initial work on neural networks is now obsolete, it served as a building block for many modern AI systems, including transformers, the underlying technology behind chatbots such as ChatGPT and Gemini.

Their work also revived interest in neural networks and spurred further research at a time when many scientists had dismissed them as a scientific dead end.


Though their work is not directly related to breakthroughs in physics, the award recognises the theoretical foundation it provides for one of the most important technologies the world has seen since the Internet. 

Many believe that breakthroughs in AI will also help us understand the physical world better and, in the future, answer complex questions and explore new concepts in physics.


Ellen Moons, chair of the Nobel Committee for Physics, said the duo’s contribution to AI has been of great benefit to physics. 

“In physics, we use artificial neural networks in a vast range of areas, such as developing new materials with specific properties,” Moons said in a statement. 


Physics experiments generate large volumes of complex data, and analyzing it by hand can be time-consuming. Artificial neural networks (ANNs) can be trained to sift through data from particle accelerators to discover new particles, to analyze interactions between particles and probe the fundamental forces of nature, and to comb through astronomical data to identify exoplanets and trace the evolution of the universe.
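
As an illustration of that workflow (with entirely synthetic data rather than any real experiment), the sketch below trains a tiny network by gradient descent and back-propagation to separate made-up "signal" events from "background" events described by two hypothetical features.

```python
import numpy as np

# Synthetic events: two made-up features per event, two overlapping classes.
rng = np.random.default_rng(1)
n = 1000
background = rng.normal(loc=0.0, scale=1.0, size=(n, 2))
signal = rng.normal(loc=2.0, scale=1.0, size=(n, 2))
X = np.vstack([background, signal])
y = np.concatenate([np.zeros(n), np.ones(n)])

# One hidden layer of tanh units and a sigmoid output, trained with plain
# gradient descent on the cross-entropy loss.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
lr = 0.1

for epoch in range(500):
    h = np.tanh(X @ W1 + b1)                # hidden activations
    logits = (h @ W2 + b2).ravel()
    p = 1 / (1 + np.exp(-logits))           # predicted probability of "signal"
    grad = (p - y)[:, None] / len(y)        # d(loss)/d(logit), averaged
    dW2 = h.T @ grad;  db2 = grad.sum(0)
    dh = grad @ W2.T * (1 - h ** 2)         # back-propagate through tanh
    dW1 = X.T @ dh;    db1 = dh.sum(0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"training accuracy: {((p > 0.5) == y).mean():.2f}")
```

In practice the labelled examples typically come from detailed simulations, and the trained network is then used to score real events.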

ANNs are also being used in image and video recognition software, natural language processing, ad targeting, fake-news detection, and personalized recommendations on shopping, social media and streaming platforms.


Despite playing a pioneering role in the development of modern-day AI, Hinton has been very critical of AI firms that are rushing the development of large language models (LLMs) for quick gains.

Citing some of these concerns, he left his job as Vice President and Engineering Fellow at Google in May 2023, after a decade at the company. At the time, Hinton said that he regretted his life’s work, fearing that AI could be misused by bad actors who could not be stopped.


He recently praised his former student Ilya Sutskever, co-founder and former chief scientist of OpenAI, for his role in firing CEO Sam Altman in November 2023, calling it a win for AI safety. Altman was out only briefly; the OpenAI board was forced to reinstate him, with far more authority, under pressure from the firm’s employees and big investors, especially Microsoft.

Sutskever left OpenAI in May 2024 to start his own AI safety startup, Safe Superintelligence, which raised $1 billion in September from top investors such as Andreessen Horowitz and Sequoia.


Hinton’s fears have been echoed by several scientists and tech executives who believe that the use of more advanced generative AI models needs to be regulated. After the release of GPT-4 by AI startup OpenAI in March 2023, over 1,000 industry leaders, led by Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, signed an open letter urging AI firms to put a moratorium on the development of more advanced AI models.



Image Source: Wikimedia Commons
