Trust Is Good, Doubt Is Safer: The Illusion of AGI control in Big Tech

  


In recent months, several reports and internal statements have highlighted a worrying truth: leading AI companies, such as OpenAI, Google DeepMind, Anthropic, and Meta, are apparently unable to manage the risks associated with the evolution towards Artificial General Intelligence (AGI). Despite promises of accountability and transparency, concrete efforts to ensure the safety and control of these technologies appear largely insufficient. This article analyzes the main evidence and testimonies that support this thesis, offering a comprehensive picture of the current gaps in the governance of advanced AI.

🔗 Do you like Techelopment? Check out the site for all the details!

A system without brakes: the Future of Life Institute report

A survey by the Future of Life Institute (FLI), conducted in 2024, assessed the leading AI companies' preparedness for managing existential risks. The results are alarming: most received a score of "D" or "D+," well below an acceptable standard. According to the report, none of the companies evaluated were able to provide a credible operational plan for controlling the development of AGI.

source futureoflife.org

This means that, despite public pronouncements about the importance of safety and transparency, big tech companies are moving forward with research and development with little regard for long-term implications. The lack of concrete and verifiable plans, combined with the opaqueness of internal practices, paints a picture in which the risk of catastrophic side effects is far from remote (Source: SFG Media).


The Insiders' Letter: An Insider's Voice

In June 2024, a group of current and former employees of companies such as OpenAI, Google DeepMind, and Anthropic published an open letter expressing deep concern about the lack of effective oversight in AGI development. The letter, reported by The Guardian and CNBC, denounces how financial incentives and the race for technological supremacy are overshadowing any attempt to implement fair and transparent controls.arenti.

Among the most serious criticisms is the silence imposed on insiders: those who work within these organizations often lack the tools or protections to report risks or problems without incurring retaliation. This creates an environment where internal warnings are ignored or suppressed, exacerbating the danger of developing out-of-control AI systems (Sources: The Guardian, CNBC).


Former employees speak out: William Saunders testifies

One of the most authoritative voices is that of William Saunders, a former OpenAI employee, who testified before the United States Congress. According to Saunders, the company repeatedly ignored internal recommendations on safety protocols, preferring instead to accelerate the release of increasingly powerful and complex models.

In his hearing, Saunders emphasized how the entire AI industry is moving "blindly" toward a technology that could soon become uncontrollable. His words add to a growing body of public complaints challenging the reassuring narrative promoted by tech companies (Source: TechTarget).


Insufficient assessments: the marginal role of red teaming

Another study, this time conducted by SaferAI and reported by Digit.FYI, analyzed the effectiveness of the control mechanisms implemented by big tech companies. In particular, "red teaming" was examined, which involves using dedicated teams to simulate attacks and risk scenarios to test the robustness of AI models.

The results are not encouraging: most companies received a "weak" or "very weak" rating in this area. Only Anthropic achieved slightly better scores, thanks to a more systematic commitment to risk assessment. However, even in this case, the measures fall far short of a truly protective standard (Source: Digit.FYI).


A conflict between ethics and profit

A recurring theme that emerges from all these reports is the irreconcilable conflict between ethical research and profit logic. Companies are incentivized to release products quickly to maintain a competitive advantage, but this speed often comes at the expense of safety. Without external regulation and independent verification, it is impossible to ensure that declarations of principle are actually translated into concrete practices.


Conclusion: What Future for Digital Humanity?

In light of these data and testimonies, it is clear that the current rush to develop AGI is marked by a serious lack of accountability. Technology companies are investing billions in increasingly advanced models without implementing adequate safeguards. If the international community does not intervene soon with rigorous regulation, we could face consequences far beyond our capacity to intervene.

Effective governance of AGI requires transparency, public oversight, and a clear separation between commercial interests and collective security. The alternative is to let the future of artificial intelligence be written by a few private actors, without brakes and without responsibility.



Follow me #techelopment

Official site: www.techelopment.it
facebook: Techelopment
instagram: @techelopment
X: techelopment
Bluesky: @techelopment
telegram: @techelopment_channel
whatsapp: Techelopment
youtube: @techelopment