A former employee says OpenAI needs to embrace more sophisticated safety measures to keep its advancements from spiraling out of control.
What you need to know
- OpenAI has previously come under fire for prioritizing shiny products over safety.
- A former employee has voiced similar concerns, referring to the company as "the Titanic of AI."
- The ex-employee says OpenAI's safety measures and guardrails won't be enough to keep AI from spiraling out of control unless the company embraces more rigorous, sophisticated measures.
OpenAI has been hitting the headlines for the past few months (arguably for all the wrong reasons). It started when most of its safety team departed from the company, including the former head of alignment, Jan Leike, who indicated that safety culture and processes had taken a back seat to shiny products.
As it turns out, former OpenAI employee William Saunders has echoed similar sentiments. Speaking on Alex Kantrowitz's podcast on YouTube earlier this month, Saunders said:
"I really didn't want to end up working for the Titanic of AI, and so that's why I resigned. During my three years at OpenAI, I would sometimes ask myself a question. Was the path that OpenAI was on more like the Apollo program or more like the Titanic? They're on this trajectory to change the world, and yet when they release things, their priorities are more like a product company. And I think that is what is most unsettling."
OpenAI CEO Sam Altman hasn't been shy about his ambitions and goals for the company, including achieving AGI and superintelligence. In a separate interview, Altman disclosed that these milestones won't necessarily change the world overnight, adding that public interest in tech breakthroughs is short-lived and that such an advance might cause little more than a two-week freakout.
The former superalignment lead revealed he disagreed with top OpenAI executives over the firm's decision-making process and core priorities, spanning next-gen models, security, monitoring, preparedness, safety, adversarial robustness, and more, which ultimately prompted his departure from the company.
In the grand scheme of things, it's highly concerning if the ChatGPT maker is prioritizing shiny products over safety, especially since Altman has openly admitted there's no big red button to stop the progression of AI.
Safety is OpenAI's biggest issue
Advancements in the AI landscape are largely riddled with safety and privacy concerns. Remember Windows Recall? Microsoft's privacy nightmare and a hacker's paradise. The Redmond giant recently unveiled crazy next-gen AI features shipping exclusively to Windows 11 Copilot+ PCs, including Live Captions, Windows Studio Effects, and the show-stopper, Windows Recall.