Can GPT-3 really outperform human minds in solving reasoning questions?

A recent study by scientists at the University of California, Los Angeles (UCLA) has revealed that GPT-3 can solve complex cognitive tasks and outperform humans on reasoning questions. However, the researchers remain cautious and wish to investigate further.


Highlights

  • GPT-3, the AI-powered tool from OpenAI, exhibits reasoning skills comparable to those of college undergraduate students
  • Researchers acknowledged their lack of understanding of GPT-3's internal workings despite its exceptional results

According to recent scientific research by the University of California, GPT-3, an AI-powered tool from OpenAI, exhibits reasoning skills comparable to those of college undergraduate students.

The study further confirmed that GPT-3 can handle complex cognitive tasks. During the study, the AI large language model was presented with reasoning questions drawn from standardised tests such as the SAT (Scholastic Assessment Test), which plays an important role in the admissions process for colleges and universities in the United States and other countries, and GPT-3 reportedly cracked them.

GPT-3 outperformed the human brain

According to the report, scientists at the University of California, Los Angeles (UCLA) tested GPT-3 on two wholly new tasks: predicting the next shape in complex arrangements of shapes and answering SAT analogy questions.

The researchers also invited 40 UCLA undergraduates to attempt the same puzzles. GPT-3 outperformed the human participants' average score, which was just under 60 percent, and even exceeded the top human scores, achieving an accuracy of 80 percent on the shape-prediction test.

"Surprisingly, not only did GPT-3 do about as well as humans but it made similar mistakes as well," said UCLA psychology professor Hongjing Lu, senior author of the study. 

Technology advanced from prior iterations 

The researchers were surprised that language models can reason at all, since they are primarily designed to perform word prediction. The technology has advanced significantly from its prior iterations in the last two years, according to Lu.

The researchers acknowledged that they do not understand GPT-3's internal workings, which OpenAI has not made public. They question whether large language models (LLMs) are genuinely beginning to display human-like thinking, or are merely reproducing human thought via a different mechanism. The researchers expressed a wish to investigate this question in future studies.