Google-owned DeepMind’s AlphaGeometry can now solve geometry problems at the level of the smartest young mathematicians in school olympiads.
In a blog post, DeepMind showed how the AI solved 25 of 30 geometry problems drawn from International Mathematical Olympiad papers set between 2000 and 2022.
25 is roughly the average number of these problems that human gold medalists solve in the competitions.
So far, the superpowers are mostly limited to geometry. As in, it can prove whether statements about two-dimensional shapes such as polygons or triangles are true or false, for example, that two angles in a particular triangle setup must be equal.
It’s headline news for the simple fact that, so far, AI isn’t great at logic or reasoning, and solving these problems requires exactly that. AlphaGeometry can prove whether a statement is true or false. And by proof, we mean it generates a detailed, step-by-step argument whose logic leaves no room for doubt.
It was made possible by pairing a “neural language model” (think ChatGPT or Bard) with a “symbolic deduction engine” that handles logic and rules. The olympiad problems typically require adding “constructs” to the diagram, auxiliary points or lines, and then explaining why those specific constructs were added and how they help reach the solution. That’s exactly what AlphaGeometry does, as the rough sketch below illustrates.
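To make the division of labour concrete, here is a minimal Python sketch of that neuro-symbolic loop, based only on DeepMind’s public description. Every name in it (Problem, SymbolicEngine, propose_construct) is a hypothetical stand-in, not the real AlphaGeometry API; the real engine applies a large library of geometry rules where these stubs do almost nothing.

```python
from dataclasses import dataclass, field


@dataclass
class Problem:
    premises: set            # known facts, e.g. "AB = AC"
    goal: str                # the statement to prove
    constructs: list = field(default_factory=list)


class SymbolicEngine:
    """Hypothetical stand-in for the rule-based deduction engine."""

    def deduce(self, problem: Problem) -> set:
        # The real engine exhaustively applies geometry rules to the
        # premises; this stub just returns them unchanged.
        return set(problem.premises)


def propose_construct(problem: Problem) -> str:
    # Hypothetical stand-in for the neural language model: when the
    # symbolic engine is stuck, it suggests an auxiliary construction,
    # e.g. "let D be the midpoint of BC".
    return "midpoint D of BC"


def solve(problem: Problem, engine: SymbolicEngine, max_steps: int = 10) -> bool:
    for _ in range(max_steps):
        known = engine.deduce(problem)          # 1. deduce everything possible
        if problem.goal in known:               # 2. stop if the goal now follows
            return True
        construct = propose_construct(problem)  # 3. otherwise ask the language
        problem.constructs.append(construct)    #    model for a new construct,
        problem.premises.add(construct)         #    add it, and deduce again
    return False


if __name__ == "__main__":
    p = Problem(premises={"AB = AC"}, goal="angle ABC = angle ACB")
    print(solve(p, SymbolicEngine()))  # False with these toy stubs
```

The key design idea is that the language model only fills the creative gap (inventing constructs) while the symbolic engine does the airtight logical deduction, which is why the final proofs leave no room for doubt.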
The model was trained on synthetic data: custom-made geometric diagrams (about 500,000) were fed to the symbolic engine, which worked out what could be deduced from each of them.
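Here is a hedged sketch of that synthetic-data idea, under the assumption that training examples pair a random diagram’s premises with facts the symbolic engine derives from them. The helpers (random_diagram, derive_facts, build_dataset) are illustrative names of my own, not DeepMind’s code.

```python
import random


def random_diagram(rng: random.Random) -> set:
    # Stand-in for sampling a random geometric configuration.
    a, b, c = rng.sample("ABCDE", 3)
    return {f"{a}{b} = {a}{c}", f"{a}, {b}, {c} form a triangle"}


def derive_facts(premises: set) -> set:
    # Stand-in for the symbolic engine's forward deduction; a real
    # engine would apply geometry rules to produce new true statements.
    return {f"consequence of: {p}" for p in premises}


def build_dataset(n: int, seed: int = 0) -> list:
    """Pair each random diagram's premises with every derived fact."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        premises = random_diagram(rng)
        for fact in derive_facts(premises):
            examples.append((premises, fact))
    return examples


if __name__ == "__main__":
    for premises, fact in build_dataset(3):
        print(premises, "=>", fact)
```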
Thang Luong, one of the researchers working on AlphaGeometry, says its answers are “less beautiful” than human ones. However, the AI is also capable of finding simpler solutions than the ones we humans come up with.
This is all pretty new, and the AI model is limited to the maths, shapes, and geometry it has access to. With time, as AI gets access to more knowledge, no doubt it’ll surpass some of the smartest maths brains on the planet.