A new paper from Apple’s artificial intelligence researchers has found that engines based on large language models, such as those from Meta and OpenAI, still lack basic reasoning skills.
The group has proposed a new benchmark, GSM-Symbolic, to help others measure the reasoning capabilities of various large language models (LLMs). Their initial testing reveals that slight changes in the wording of queries can result in significantly different answers, undermining the reliability of the models.
The group investigated the “fragility” of mathematical reasoning by adding contextual information to their queries that a human could understand, but which should not affect the fundamental mathematics of the solution. Yet these additions produced varying answers, which should not happen if the models were genuinely reasoning.
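The templating idea behind the benchmark can be illustrated with a short sketch. The template, names, numbers, and distractor clause below are hypothetical stand-ins, not the paper's actual GSM-Symbolic templates: the point is only that swapping surface details, or appending a clause that is irrelevant to the arithmetic, leaves the ground-truth answer unchanged.

```python
import random

# Hypothetical GSM8K-style template; {name}, {x}, {y} are symbolic slots.
TEMPLATE = (
    "{name} picks {x} apples on Monday and {y} apples on Tuesday. "
    "How many apples does {name} have?"
)

# An irrelevant clause a human would ignore; the paper reports that such
# additions can change an LLM's answer even though the math is unchanged.
DISTRACTOR = " Five of the apples are slightly smaller than average."

def make_variant(rng, with_distractor=False):
    """Instantiate the template with random surface details.

    Returns (question, ground_truth); the answer never depends on the
    distractor, only on the numeric slots.
    """
    name = rng.choice(["Sophie", "Liam", "Ava"])
    x, y = rng.randint(2, 9), rng.randint(2, 9)
    question = TEMPLATE.format(name=name, x=x, y=y)
    if with_distractor:
        question += DISTRACTOR
    return question, x + y

rng = random.Random(0)
q1, a1 = make_variant(rng)
q2, a2 = make_variant(rng, with_distractor=True)
print(q1, "->", a1)
print(q2, "->", a2)
```

A benchmark built this way can generate many instances per template, so a model's accuracy spread across variants (rather than a single score) exposes how sensitive it is to wording that should not matter.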
Source: AppleInsider News