Minerva: Solving Quantitative Reasoning Problems with Language Models

Language models have achieved impressive results on many natural language tasks, but quantitative reasoning has widely been considered out of reach for current techniques. A recent Google AI blog post presents Minerva, a language model that solves mathematical and scientific questions using step-by-step reasoning.

Mathematics – artistic impression. Image credit: Peter Rosbjerg via Flickr, CC BY-ND 2.0

The model builds on the Pathways Language Model (PaLM), further trained on a 118 GB dataset of scientific papers from arXiv and web pages containing mathematical expressions. It also incorporates recent prompting and evaluation techniques, such as chain-of-thought prompting and majority voting, to better solve mathematical questions.
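To make the majority-voting idea concrete, here is a minimal Python sketch: sample several step-by-step completions for the same question, extract each final answer, and keep the answer most samples agree on. This is an illustration of the general technique, not Minerva's actual pipeline; the answer format and the helper names (`extract_final_answer`, `majority_vote`) are assumptions for the example.

```python
from collections import Counter
import re

def extract_final_answer(completion: str) -> str | None:
    """Pull the final answer out of a step-by-step completion.

    Assumes the prompt asks the model to end with a line like
    'Final answer: <value>', as in few-shot chain-of-thought setups.
    """
    match = re.search(r"Final answer:\s*(.+)", completion)
    return match.group(1).strip() if match else None

def majority_vote(completions: list[str]) -> str | None:
    """Return the most common final answer across sampled completions.

    Majority voting: sample many chain-of-thought solutions at non-zero
    temperature, then keep the answer the largest number of samples share.
    """
    answers = [a for a in (extract_final_answer(c) for c in completions) if a]
    if not answers:
        return None
    return Counter(answers).most_common(1)[0][0]

# Hand-written completions standing in for model samples:
samples = [
    "Step 1: 3 * 4 = 12. Step 2: 12 + 5 = 17. Final answer: 17",
    "First multiply: 3 * 4 = 12, then add 5 to get 17. Final answer: 17",
    "3 + 4 = 7, times 5 is 35. Final answer: 35",
]
print(majority_vote(samples))  # -> "17"
```

Even when individual reasoning chains go wrong, the correct answer tends to be reached by more independent samples than any particular wrong one, which is why voting over many samples improves accuracy.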

Evaluation on college- and graduate-level problems covering a variety of STEM topics shows that Minerva achieves state-of-the-art results, sometimes by a wide margin. Models like Minerva have many potential applications, from serving as aids for researchers to enabling new learning opportunities for students.

Link: https://ai.googleblog.com/2022/06/minerva-solving-quantitative-reasoning.html