A new paper from Apple’s artificial intelligence scientists has found that engines based on large language models, such as those from Meta and OpenAI, still lack basic reasoning skills.
The group has proposed a new benchmark, GSM-Symbolic, to help others measure the reasoning capabilities of various large language models (LLMs). Their initial testing reveals that slight changes in the wording of queries can result in significantly different answers, undermining the reliability of the models.
The group investigated the “fragility” of mathematical reasoning by adding contextual information to their queries that a human could recognize as irrelevant, and which should not affect the underlying mathematics of the solution. Adding this information nonetheless caused the models’ answers to vary, which shouldn’t happen if the models were genuinely reasoning.
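The idea can be illustrated with a minimal sketch. This is not the paper’s actual benchmark code; the template, names, and the irrelevant clause below are all hypothetical, meant only to show how the same arithmetic problem can be instantiated with different surface wording while its ground-truth answer stays fixed:

```python
import random

# Hypothetical GSM-Symbolic-style variant generator: one arithmetic
# template instantiated with different names/values, optionally with an
# irrelevant clause appended. The correct answer never changes, so a
# genuinely reasoning model should answer all variants consistently.

TEMPLATE = (
    "{name} picks {x} apples on Monday and {y} apples on Tuesday. "
    "{clause}How many apples does {name} have in total?"
)

IRRELEVANT = "Five of the apples are slightly smaller than average. "

def make_variant(seed, add_clause=False):
    rng = random.Random(seed)
    name = rng.choice(["Liam", "Sofia", "Priya", "Omar"])
    x, y = rng.randint(2, 20), rng.randint(2, 20)
    clause = IRRELEVANT if add_clause else ""
    question = TEMPLATE.format(name=name, x=x, y=y, clause=clause)
    return question, x + y  # the clause does not change the true answer

q1, a1 = make_variant(seed=1)
q2, a2 = make_variant(seed=1, add_clause=True)
assert a1 == a2  # irrelevant context must not change the correct answer
```

Under this setup, a model is scored on whether its answers stay stable across such variants; the paper’s finding is that they often do not.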
Source: AppleInsider News