
OpenAI still has a governance problem




It can be hard to train a chatbot. Last month, OpenAI rolled back an update to ChatGPT because its "default personality" was too sycophantic. (Perhaps the company's training data was drawn from transcripts of US President Donald Trump's cabinet meetings . . . )

The artificial intelligence company had wanted to make its chatbot more intuitive, but its responses to user queries skewed towards being overly supportive and disingenuous. "Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right," the company said in a blog post.

Rolling back sycophantic chatbots may not be the most consequential dilemma facing OpenAI, but it chimes with its greatest challenge: creating a trustworthy personality for the company as a whole. This week, OpenAI was forced to backtrack on its latest planned corporate update, designed to turn the company into a for-profit organisation. Instead, it will convert into a public benefit corporation that remains under the control of a non-profit board.

This will not resolve the structural tensions at OpenAI's core. Nor will it stop Elon Musk, one of the company's co-founders, from pursuing legal action against OpenAI for straying from its original purpose. Should the company accelerate the deployment of AI products to keep its financial backers happy? Or should it follow a more deliberative, scientific approach to stay true to its humanitarian intentions?

OpenAI was founded in 2015 as a non-profit research laboratory dedicated to developing artificial general intelligence for the benefit of humanity. But the company's mission, like the definition of AGI itself, has since blurred.

Sam Altman, OpenAI's chief executive, quickly realised that the company needed vast amounts of capital to pay for the research talent and computing power required to stay at the forefront of AI research. To that end, OpenAI created a for-profit subsidiary in 2019. Such was the breakout success of its chatbot ChatGPT that investors have showered it with money, valuing OpenAI at $260bn in its latest fundraising round. With 500mn weekly users, OpenAI has become an "accidental" consumer internet giant.

Altman, who was fired and rehired by the non-profit board in 2023, now says he wants to build a "brain for the world" that may require hundreds of billions, if not trillions, of dollars of further investment. The only snag with this wild-eyed ambition, as the technology blogger Ed Zitron rants in increasingly strident terms, is that OpenAI has yet to develop a viable business model. Last year, the company spent $9bn and lost $5bn. Is its financial valuation based on a hallucination? There will be pressure from investors to commercialise its technology quickly.

Besides, the definition of AGI keeps shifting. It has traditionally referred to the point at which machines surpass humans across a wide range of cognitive tasks. But in a recent interview with Ben Thompson of Stratechery, Altman acknowledged that the term had been "almost completely devalued". He did, however, accept a narrower definition of AGI: an autonomous coding agent that could write software as well as any human.

On that score, the big AI companies seem to think they are close to AGI. One giveaway is reflected in their own hiring practices. According to data from Zeki, the top 15 US AI companies had been frantically hiring software engineers at a rate of up to 3,000 a month, recruiting a total of 500,000 between 2011 and 2024. But lately their net monthly hiring rate has dropped to zero, as these companies anticipate that AI agents will be able to perform many of the same tasks.

[Chart: software engineering hires per month by US AI companies]

A recent research paper from Google DeepMind, which also aspires to develop AGI, highlighted four main risks of increasingly autonomous AI models: misuse by bad actors; misalignment, when an AI system does unintended things; mistakes that cause unintentional harm; and multi-agent risks, when unpredictable interactions between AI systems produce bad outcomes. These are all mind-bending challenges that carry potentially catastrophic risks and may require collaborative solutions. The more powerful AI models become, the more cautious developers should be in deploying them.

The governance of frontier AI companies is therefore not just a matter for corporate boards and investors, but for all of us. OpenAI remains a worry on that front, torn by conflicting impulses. Its struggle with sycophancy will be the least of its problems as we approach AGI, however you define it.

John.thornhill@ft.com



