Experts: AI May Lead To ‘Nuclear-Level’ Disaster
A study by Stanford University revealed that fully one-third of natural language processing researchers believe that artificial intelligence may lead to a “nuclear-level catastrophe.”
AI has jumped into the global consciousness recently after the successful debut of the popular ChatGPT chatbot, which can produce written work that is difficult to distinguish from human creations; related AI systems can generate equally convincing imagery.
There is a race among Big Tech firms to produce similar products, though the ethical and legal questions raised by the technology's proliferation are only now being discussed.
Some nations are already putting the cautionary brakes on this development. Italy temporarily banned ChatGPT over privacy concerns, and the European Union is drafting new rules to govern the technology.
Stanford’s findings came from its 2023 Artificial Intelligence Index Report, which surveyed numerous experts on the state of the cutting-edge AI sector. It found that a large majority of respondents believe AI is paving the way for revolutionary societal changes.
A sizable minority, 36%, issued a warning: the rapidly growing influence of AI could lead to disaster.
And while the survey did not illustrate specifically how AI could trigger a nuclear disaster, it did show how AI may be applied in the nuclear sciences, such as research into nuclear fission.
It did not take long for AI to be turned toward a much darker task than amusing banter with users. An anonymous user developed ChaosGPT using Auto-GPT, an open-source program built on OpenAI’s language model.
In theory, these autonomous agents pursue their own course toward the programmed goal without human intervention. Some would call this thinking, complete with making their own corrections.
The programmer for ChaosGPT gave it five goals: destroy humanity, establish global dominance, cause chaos and destruction, control humanity through manipulation, and attain immortality.
It is not clear how humanity would be controlled once it had been destroyed, but the AI was set to continuous mode: without intervention, it was programmed to run indefinitely until it completed its assignments.
Safeguards are hardly foolproof, as Stanford’s report notes. Researcher Matt Korda worked around ChatGPT’s precautions to obtain details on constructing a dirty bomb, including recommendations on how to source the necessary components.
AI holds a lot of promise, but as with any 21st-century development, there are hordes of individuals and groups looking to exploit it for their own schemes. Allowed to run amok, AI could do our society more harm than good.