People readily solve new problems, without any special training or practice, by comparing them to familiar problems and extending those solutions to the new one. That process, known as analogical reasoning, has long been thought to be a uniquely human ability.

But now people might have to make room for a new kid on the block.

Research by UCLA psychologists shows that, astonishingly, the artificial intelligence language model GPT-3 performs about as well as college undergraduates when asked to solve the sort of reasoning problems that typically appear on intelligence tests and standardized tests such as the SAT. The study is published in Nature Human Behaviour.


But the paper’s authors write that the study raises a question: Is GPT-3 mimicking human reasoning as a byproduct of its massive language training dataset, or is it using a fundamentally new kind of cognitive process?

Without access to GPT-3’s inner workings, which are guarded by OpenAI, the company that created it, the UCLA scientists can’t say for sure how its reasoning abilities work. They also write that although GPT-3 performs far better than they expected at some reasoning tasks, the popular AI tool still fails spectacularly at others.

“No matter how impressive our results, it’s important to emphasize that this system has major limitations,” said Taylor Webb, a UCLA postdoctoral researcher in psychology and the study’s first author. “It can do analogical reasoning, but it can’t do things that are very easy for people, such as using tools to solve a physical task. When we gave it those sorts of problems, some of which children can solve quickly, the things it suggested were nonsensical.”

Webb and his colleagues tested GPT-3’s ability to solve a set of problems inspired by a test known as Raven’s Progressive Matrices, which asks the subject to predict the next image in a complicated arrangement of shapes. To enable GPT-3 to “see” the shapes, Webb converted the images to a text format that GPT-3 could process; that approach also guaranteed that the AI had never encountered the questions before.
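The article does not reproduce the study’s actual prompt format, but the idea of serializing a matrix puzzle into plain text can be sketched roughly as follows. The cell layout, bracket notation, and answer-choice wording below are all illustrative assumptions, not the encoding the researchers used:

```python
def matrix_to_prompt(matrix, choices):
    """Serialize a 3x3 matrix puzzle (last cell unknown) into a text prompt.

    Each cell is a tuple of symbols rendered as a bracketed token, e.g. (1,)
    becomes "[1]"; the final cell is replaced by a blank for the model to fill.
    NOTE: this format is a hypothetical illustration, not the study's own.
    """
    rows = []
    for r, row in enumerate(matrix):
        cells = []
        for c, cell in enumerate(row):
            if r == len(matrix) - 1 and c == len(row) - 1:
                cells.append("[ ? ]")  # the cell the model must predict
            else:
                cells.append("[" + " ".join(str(x) for x in cell) + "]")
        rows.append(" ".join(cells))
    prompt = "\n".join(rows)
    prompt += "\nAnswer choices: " + ", ".join(
        "[" + " ".join(str(x) for x in ch) + "]" for ch in choices
    )
    return prompt

# A simple "constant row" problem: each row repeats one digit.
matrix = [[(1,), (1,), (1,)],
          [(2,), (2,), (2,)],
          [(3,), (3,), (3,)]]
print(matrix_to_prompt(matrix, [(3,), (4,)]))
```

Because the puzzle reaches the model as ordinary text rather than an image, any pattern the model completes must come from reasoning over the tokens, not from having seen the original pictures.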

The researchers asked 40 UCLA undergraduate students to solve the same problems.

“Surprisingly, not only did GPT-3 do about as well as humans but it made similar mistakes as well,” said UCLA psychology professor Hongjing Lu, the study’s senior author.

GPT-3 solved 80% of the problems correctly, well above the human subjects’ average score of just below 60% but well within the range of the highest human scores.

The researchers also prompted GPT-3 to solve a set of SAT analogy questions that they believe had never been published on the internet, meaning the questions would have been unlikely to be part of GPT-3’s training data. The questions ask users to select pairs of words that share the same type of relationship. (For example, in the problem “‘Love’ is to ‘hate’ as ‘rich’ is to which word?,” the solution would be “poor.”)
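A four-term verbal analogy like the one above can be posed to a language model as a multiple-choice text prompt. The wording and option labels below are a hypothetical illustration, not the format the researchers used:

```python
def analogy_prompt(a, b, c, choices):
    """Format an A : B :: C : ? analogy as a multiple-choice text prompt.

    The phrasing here is an illustrative assumption, not the study's format.
    """
    lines = [f"'{a}' is to '{b}' as '{c}' is to:"]
    for label, word in zip("abcd", choices):
        lines.append(f"({label}) {word}")
    return "\n".join(lines)

print(analogy_prompt("love", "hate", "rich",
                     ["poor", "wealthy", "famous", "generous"]))
```

Solving the analogy requires mapping the relation between the first pair (antonymy, in this case) onto the second pair, which is exactly the relational transfer that analogical reasoning tests probe.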

They compared GPT-3’s scores to published results of college applicants’ SAT scores and found that the AI performed better than the average human score.

The researchers then asked GPT-3 and student volunteers to solve analogies based on short stories โ€” prompting them to read one passage and then identify a different story that conveyed the same meaning. The technology did less well than students on those problems, although GPT-4, the latest iteration of OpenAIโ€™s technology, performed better than GPT-3.

The UCLA researchers have developed their own computer model, which is inspired by human cognition, and have been comparing its abilities to those of commercial AI.

“AI was getting better, but our psychological AI model was still the best at doing analogy problems until last December when Taylor got the latest upgrade of GPT-3, and it was as good or better,” said UCLA psychology professor Keith Holyoak, a co-author of the study.

The researchers said GPT-3 has been unable so far to solve problems that require understanding physical space. For example, if provided with descriptions of a set of tools (say, a cardboard tube, scissors and tape) that it could use to transfer gumballs from one bowl to another, GPT-3 proposed bizarre solutions.

“Language learning models are just trying to do word prediction, so we’re surprised they can do reasoning,” Lu said. “Over the past two years, the technology has taken a big jump from its previous incarnations.”

The UCLA scientists hope to explore whether language learning models are actually beginning to “think” like humans or are doing something entirely different that merely mimics human thought.

“GPT-3 might be kind of thinking like a human,” Holyoak said. “But on the other hand, people did not learn by ingesting the entire internet, so the training method is completely different. We’d like to know if it’s really doing it the way people do, or if it’s something brand new, a real artificial intelligence, which would be amazing in its own right.”

To find out, they would need to determine the underlying cognitive processes AI models are using, which would require access to the software and to the data used to train it, and then administer tests they are sure the software hasn’t already been given. That, they said, would be the next step in deciding what AI ought to become.

“It would be very useful for AI and cognitive researchers to have the backend to GPT models,” Webb said. “We’re just doing inputs and getting outputs and it’s not as decisive as we’d like it to be.”

IMAGE CREDIT: NASA.

