OpenAI researchers warned about AI breakthrough prior to CEO’s exit: Insider sources reveal
More than 700 employees had contemplated resigning and aligning with supporter Microsoft in solidarity with their ousted leader before Sam Altman's eventual return on Tuesday.
Highlights
- OpenAI researchers warned board about the potential dangers of AI
- More than 700 employees threatened to quit in solidarity with ousted CEO Sam Altman
- New AI model Q* has demonstrated the capability to solve specific mathematical problems
Before OpenAI CEO Sam Altman faced a four-day exile, insiders informed Reuters that several staff researchers had penned a letter to the board of directors, cautioning about a formidable artificial intelligence discovery with potential threats to humanity.
The letter and the undisclosed AI algorithm were pivotal factors leading to Altman's removal, according to two sources familiar with the matter. More than 700 employees had contemplated resigning and aligning with supporter Microsoft in solidarity with their ousted leader before Altman's eventual return on Tuesday.
Among the board's multiple grievances that culminated in Altman's firing, concerns were raised about the rush to commercialise advancements without fully grasping their consequences. Although Reuters was unable to review the letter, sources pointed to it as one element in the broader context of the board's decision. The authors of the letter did not respond to requests for comment.
OpenAI, which chose not to comment on the matter, internally acknowledged a project named Q* and the letter to the board before the weekend's events, according to an individual familiar with the situation. Mira Murati of OpenAI sent a message to staff alerting them to certain media stories, without confirming their accuracy.
Some individuals within OpenAI view Q* as a potential breakthrough in the pursuit of artificial general intelligence (AGI). AGI, defined by OpenAI as autonomous systems that surpass humans in most economically valuable tasks, is considered a frontier by those working on generative AI.
The new model, benefiting from substantial computing resources, demonstrated the capability to solve certain mathematical problems. While the problems were only at grade-school level, the researchers are optimistic about Q*'s future success, according to an anonymous source within the company.
Reuters, however, could not independently verify the claimed capabilities of Q* put forth by the researchers. The project has sparked excitement among some at OpenAI, as it represents a potential leap forward in generative AI development, particularly in the realm of mathematics.
In the letter to the board, the researchers flagged the prowess and potential dangers of AI, although the specific safety concerns were not detailed. Computer scientists have long debated the risks posed by highly intelligent machines, for instance if they were to decide that the destruction of humanity served their interests.