
Here’s what Paytm CEO has to say on OpenAI’s 'human extinction' blog

Paytm CEO expressed worries over OpenAI's recently published blog post, which states that superintelligent AI could lead to human extinction

New Delhi, UPDATED: Jul 10, 2023 16:03 IST

Highlights

  • Paytm CEO raises concerns about AI's growing potential to replace humans
  • OpenAI forms a team of researchers and scientists to lead its Superalignment project

Recently, Paytm CEO Vijay Shekhar Sharma raised an alarm following a blog post by OpenAI, the company behind ChatGPT. The post claimed that the immense power of superintelligent artificial intelligence (AI) could lead to human disempowerment or even extinction. 

The blog, published by OpenAI on July 5, states that superintelligence will be the most impactful technology humanity has ever invented and could help solve many of the world's most important problems. It goes on to warn that the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction. 


Paytm CEO’s take on AI and the post 

In its blog, OpenAI describes superintelligence as a hypothetical AI far smarter than even the most gifted humans, one that could arrive within this decade. Sharma, for his part, singled out some of the post's more alarming claims in his tweet, saying he is truly concerned about the power a handful of people and select countries have already accumulated through this technology.

Sharma also pointed to another section of the blog, which stated, "In less than 7 years, we may have a system that disempowers humanity and even causes human extinction." 


In the blog, titled 'Introducing Superalignment,' OpenAI highlights the need for new scientific and technical breakthroughs to steer and control AI systems that may be smarter than people. To tackle this problem, OpenAI has formed a team co-led by Ilya Sutskever (co-founder and chief scientist of OpenAI) and Jan Leike (machine learning researcher and co-lead of Superalignment). 


OpenAI's plan for the alignment problem 

Current AI alignment methods rely on human supervision, such as reinforcement learning from human feedback (RLHF). However, these methods may not be adequate for aligning superintelligent AI systems that are smarter than the humans supervising them. New scientific and technical breakthroughs, according to OpenAI, are required to handle this problem.
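The core idea behind RLHF mentioned above can be illustrated with a toy example: a reward model is fitted to pairwise human preferences, so that responses humans preferred score higher than those they rejected. This is a minimal sketch for illustration only, not OpenAI's actual implementation; the feature vectors and preference data are invented.

```python
import math

def score(weights, features):
    """Linear reward score for a candidate response (toy model)."""
    return sum(w * f for w, f in zip(weights, features))

def train_reward_model(preferences, dim, lr=0.1, epochs=200):
    """Fit weights so preferred responses score higher than rejected ones,
    using a Bradley-Terry-style logistic loss on each comparison pair."""
    weights = [0.0] * dim
    for _ in range(epochs):
        for preferred, rejected in preferences:
            # Probability the model currently agrees with the human label
            margin = score(weights, preferred) - score(weights, rejected)
            p = 1.0 / (1.0 + math.exp(-margin))
            # Gradient step: push the preferred response's score upward
            for i in range(dim):
                weights[i] += lr * (1.0 - p) * (preferred[i] - rejected[i])
    return weights

# Hypothetical labelled comparisons: each pair is (preferred, rejected)
prefs = [([1.0, 0.2], [0.1, 0.9]),
         ([0.8, 0.1], [0.2, 0.7])]
w = train_reward_model(prefs, dim=2)
assert score(w, [1.0, 0.2]) > score(w, [0.1, 0.9])
```

In a full RLHF pipeline this learned reward signal would then guide further training of the language model itself; the point OpenAI's blog makes is that this whole scheme depends on humans being able to judge the model's outputs, which may break down for superhuman systems.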

OpenAI's strategy is to build an automated alignment researcher with roughly human-level capability, and then deploy massive computing resources to scale up its efforts and align superintelligence. The process includes developing scalable training methods, validating the resulting models, and stress-testing the entire alignment pipeline. 


OpenAI further emphasises that its work on superintelligence alignment is separate from ongoing efforts to improve the safety of current AI models and to address other AI-related risks.

Published on: Jul 10, 2023 13:14 IST. Posted by: Nidhi Bhardwaj, Jul 10, 2023 13:14 IST