Tether CEO Issues Scary AI Warning

Tether CEO Paolo Ardoino recently took to the X social media network to warn about the pitfalls of centralized large language models (LLMs).

Ardoino pointed to reports that OpenAI, a major generative AI company, suffered a massive security breach in early 2023, describing the incident as “horrifying.”

OpenAI chose not to publicly disclose the breach, even though some of its sensitive information was exposed, according to a recent report from The New York Times.

Former OpenAI researcher Leopold Aschenbrenner criticized the company for poor security measures that could leave it vulnerable to bad actors with ties to foreign governments. Aschenbrenner claimed the AI leader dismissed him for political reasons. However, the company denied that the aforementioned incident was the reason for the researcher’s dismissal, adding that the breach had already come to light before he was hired by OpenAI.

Still, some worry that the company’s secrets could fall into Chinese hands, even though OpenAI insists its current technology poses no national security risks.

Beyond security incidents, centralized AI models have also faced criticism over unethical data usage and censorship. Tether’s head believes that unleashing the power of local AI models is the “only way” to address privacy concerns and ensure resilience and independence.

“Locally runnable AI models are the only way to protect people’s privacy and ensure their resilience and independence,” Ardoino said in a post on the X social media network.

He added that modern smartphones and laptops are powerful enough to fine-tune a typical LLM.
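Ardoino did not say how such on-device fine-tuning would work, but one common reason it is feasible on consumer hardware is parameter-efficient methods such as low-rank adapters (LoRA). The sketch below (an assumption for illustration, not anything Ardoino or Tether described) shows why: instead of updating a full d×d weight matrix, only two small factors A and B are trained, shrinking the trainable parameter count by orders of magnitude.

```python
import numpy as np

# Hedged sketch of a LoRA-style low-rank adapter.
# Fine-tuning a d x d layer normally updates d*d weights;
# a rank-r adapter trains only 2*d*r of them.
d, r = 1024, 8                            # hidden size, adapter rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d)) * 0.01    # frozen pretrained weights
A = rng.standard_normal((d, r)) * 0.01    # trainable low-rank factor
B = np.zeros((r, d))                      # trainable factor, zero-initialized

x = rng.standard_normal(d)
y = x @ (W + A @ B)                       # adapted forward pass; with B = 0 it
                                          # reproduces the frozen model exactly

full_params = d * d                       # 1,048,576
adapter_params = 2 * d * r                # 16,384
print(f"full: {full_params:,}  adapter: {adapter_params:,}  "
      f"ratio: {full_params // adapter_params}x")
```

With d = 1024 and rank 8, the adapter holds 64 times fewer trainable weights than the full layer, which is the kind of saving that brings fine-tuning within reach of a laptop or phone.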

About the Author

Alex Dobnya
