Microsoft President dismisses fears of immediate super-intelligent AI, emphasises long-term safety measures

While the debate on AI advancements and safety continues, industry leaders like Brad Smith are urging proactive measures to ensure the responsible development and deployment of AI technologies.

Microsoft's Brad Smith asserts decades before super-intelligent AI
New Delhi, UPDATED: Dec 1, 2023 11:14 IST

Highlights

  • Microsoft President Brad Smith dismissed the notion of an imminent breakthrough in super-intelligent artificial intelligence
  • He emphasised that the timeline for highly intelligent AI might extend over several decades rather than mere years

In a recent press briefing in Britain, Microsoft President Brad Smith dismissed the notion of an imminent breakthrough in super-intelligent artificial intelligence. Smith firmly stated that there is "no chance" of witnessing artificial general intelligence (AGI), where computers surpass human capabilities, within the next 12 months. While acknowledging the rapid advancements in AI, he emphasised that the timeline for highly intelligent AI might extend over several decades rather than mere years.

OpenAI's Project Q* raises questions about AI safety

The discussion on AI safety coincides with recent developments at OpenAI, where co-founder Sam Altman was temporarily removed from the CEO position. The move was reportedly linked to concerns raised by researchers about Project Q* (pronounced Q-Star), an internal initiative at OpenAI. The project is seen as a potential leap towards artificial general intelligence (AGI), defined as autonomous systems that outperform humans in most economically valuable tasks. However, Smith downplayed any direct connection between Altman's removal and concerns about Project Q*, asserting that the board's decision was not primarily driven by such worries.

Smith calls for AI safety brakes to ensure human control

Amidst these discussions, Smith highlighted the critical need for safety measures in AI systems, particularly those controlling essential infrastructure. Drawing parallels with safety mechanisms in elevators, electrical circuits, and emergency brakes in buses, he advocated for the integration of safety brakes in AI systems. Smith stressed the importance of maintaining human control over these systems, reinforcing the notion that prioritising safety measures should be a current focus in the field of artificial intelligence.

In conclusion, the emphasis on safety brakes underscores the importance of maintaining human oversight in an era of rapidly evolving artificial intelligence, with industry leaders like Smith urging proactive measures for responsible development and deployment.

Published on: Dec 1, 2023 11:14 IST
Posted by: Jasmine Anand
