Do Tools Like ChatGPT Pose A COVID-Level Risk To Humanity? OpenAI's CEO Seems To Think So
Sam Altman joins AI entrepreneurs and academics in sounding the alarm on what they say is generative technology’s harmful potential.
TW: Death, suicide
Trending news reports this week reveal that Elon Musk's call to pause AI and Geoffrey Hinton's Google exit were just the tip of the iceberg in addressing the global ethical, security and humanitarian concerns surrounding ChatGPT, Google Bard and generative AI at large.
This week, many of artificial intelligence's heavy hitters, including Sam Altman, CEO of OpenAI, released a joint statement emphasizing their belief that the evolution of artificial intelligence could lead to the end of the human race as we know it.
Published May 30 by the Center for AI Safety, the foreword and full short statement read as follows:
“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
Altman’s signature joins the likes of not just Hinton, but AI executives across companies like Microsoft and Google, as well as 200 academics who have closely studied the growth and use cases of artificial intelligence.
Here at CCW, we have been closely monitoring, analyzing and exploring generative AI's use cases not just in customer contact and the customer experience, but in the employee experience, patient experience and CX influencer arenas as well, to name a few. Many of our team members have used the technology ourselves, worked with organizations that develop generative technology tools, and spoken with scientific and academic researchers about AI's impact not just on CX, but on the world.
RELATED INTERVIEW: CCW Las Vegas 2023 | Timnit Gebru and the Ethics of AI
In this industry, which is overwhelmingly personalized, digitized and technologically driven, AI plays an integral role in customer contact: it learns from consumer behavior and employee emotional intelligence, helping shape human interactions at communication touchpoints throughout the omnichannel customer journey.
RELATED REPORT: Leveraging Emotion Recognition in Generative AI for Contact Centers
The concerns held by Altman and company echo many considerations that companies, customers, and employees have about generative AI's eventual ability not only to digest, but perhaps to supersede, human knowledge, intellect and even judgment. Job automation, the spread of disinformation, and programmed bigotry and bias are all potential risks of using generative AI without the proper education, training, oversight and developmental monitoring that (for now) only human beings can provide.
However, some of the claims these experts are making are not supported by their colleagues in business, technology development and even academia. According to NBC, Meredith Whittaker, president of the Signal messaging app and a chief adviser to the AI Now Institute, a nonprofit focused on adopting ethical AI practices, addressed the statement on Twitter, implying that the whistleblowing has less to do with AI's dangers and more to do with developers over-promising the capabilities of their products:
"The hotrod I'm selling? Might be a little too fast and a bit too cool and tbh not sure you can handle it bro. (This shit is so tiring.)"
— Meredith Whittaker (@mer__edith), May 30, 2023
Others chiming in on the issue online challenged the wording of the statement and the very idea of calling such technology "AI," suggesting that discussions should instead focus on "AGI," or artificial general intelligence: a theorized form of AI that could be as knowledgeable as, or even more knowledgeable than, humans.
"fixed it for you! pic.twitter.com/7T0pLUpw1f"
— clem 🤗 (@ClementDelangue), May 30, 2023
While AI CEOs like Clément Delangue are pushing back, concern in the U.S. regarding AI use is rising, as regulation and legislation surrounding the popularized technology have yet to be instituted in ways that address the gray areas of information ownership, privacy, and data sharing that surround AI. Altman's own calls to regulate AI made their way to Congress earlier this month, where over a private dinner he urged policymakers to consider limitations on use cases and product development.
Considering that artificial intelligence has created controversy in everything from romance to mental health, many industry leaders in the CX space are following cues from ideas like Altman's. They are taking care to monitor use cases, set level expectations for employees and customers, and avoid falling prey to the mad developmental dash of the numerous AI products on the market aiming to top one another.
As public understanding, use, research and development of AI progress, so too will its opportunities to support CX success, and possibly its potential to negatively impact those whose information it digests.
RELATED LEARNING: The Future of Generative AI in Contact Centers
For now, here at CCW we’ll continue to closely monitor these issues and work alongside customer contact leaders, tech experts and end users to further develop our understanding of what statements like these mean for the future of omnichannel CX.
Photo by Jonathan Kemper on Unsplash: https://unsplash.com/photos/N8AYH8R2rWQ