When ChatGPT, the large language model chatbot developed by OpenAI, was released in November 2022, reactions largely fell into two camps. The first was praise for its capabilities and human-like responses; the resulting media coverage made it the fastest-growing software application in history, with over 100 million users in just a few months. The second was a “tech panic,” with visions of I, Robot or 2001: A Space Odyssey coming to pass and “Killer AI” posing an existential threat to humanity.
Sadly, it was America’s tech industry that was most vocal in calling for a pause on AI innovation. Earlier this summer, OpenAI CEO Sam Altman testified before Congress that a new federal, or even international, bureaucracy was needed to license and regulate AI (something a few Senators have already been happy to propose), essentially because every AI product after the one developed by his own company could “cause significant harm to the world.” Meanwhile, Google urged caution, perhaps self-servingly, since AI threatens its multi-billion-dollar search engine business, and tech icons Elon Musk and Steve Wozniak called for a pause on AI development.
Eager never to let a crisis go to waste, some blue states and even cities have introduced AI regulations. In July, for example, New York City implemented an ordinance imposing costly mandates on the use of AI in hiring. And during a recent trip to Silicon Valley, Biden administration officials touted an “AI Bill of Rights” promising that the government would protect us from AI. I feel so relieved now.
The problem with this instinct to “regulate first, ask questions later” is twofold. First, we don’t know the chilling effect regulation could have on the benefits of AI. Just a few years ago, for example, one of Google’s AI platforms predicted how proteins fold at the molecular level, a feat that had stumped scientists for over 50 years and may speed the development of cures for diseases that plague millions of Americans, such as dementia. What future innovations could we lose if every AI model must first be licensed by a state or federal bureaucrat? A better model is the Clinton administration’s largely hands-off regulatory approach to the internet, which undoubtedly helped foster the technology that redefined our modern way of life.
The second problem with this rush to regulate and pre-approve further AI development is that it completely ignores the free market’s role in how AI is adopted and used. Despite kicking off the public debate on AI, ChatGPT has been hemorrhaging users, and those who remain are spending less time on the site. Some believe this is because students who used the software to help write essays and homework were out of class for the summer and may return in the fall. But the market has already responded with new software that can detect AI-generated writing. Another possible reason for ChatGPT’s sudden decline is growing inaccuracy: one recent study found its accuracy in identifying prime numbers fell from nearly 98 percent to just over 2 percent in a matter of months.

Just as with calls to regulate the technologies of the past, the government cannot predict which technologies the market will ultimately adopt. Heck, some people once called MySpace a monopoly that needed to be regulated, and how did that turn out? If anything, it is the calls to regulate AI that could “kill” us, by depriving us of the innovations and inventions AI will deliver. Hopefully, our politicians will not regulate based on the fears of science fiction.