Is the self-created monster getting out of control? Are we witnessing the birth of Skynet from Schwarzenegger’s Terminator movies? According to “Pause Giant AI Experiments: An Open Letter”, in which experts call for a freeze on AI development, we are currently at risk of losing control of our civilization. Pandora’s box has long been open.
What do Elon Musk (head of Tesla, Twitter, and SpaceX), Steve Wozniak (Apple co-founder), Yoshua Bengio (founder and scientific director of Mila, Turing Award winner and professor at the University of Montreal), Stuart Russell (professor of computer science at Berkeley, director of the Center for Intelligent Systems and co-author of the standard textbook “Artificial Intelligence: A Modern Approach”), Seán Ó hÉigeartaigh (executive director of the Cambridge Centre for the Study of Existential Risk) – and several other high-ranking representatives of academia and the tech industry – have in common?
Correct: They warn of the potentially incalculable consequences of general artificial intelligence – and in an open letter they call for an immediate moratorium on the further development of AI systems such as ChatGPT. This pause should initially apply for at least six months. Well over a thousand people have now signed the letter digitally.
AI black boxes that no one can control anymore
The signatories fear that AI will quickly become so good at self-optimization that “no one – not even its inventors – can understand, predict, or reliably control it.” The global race in AI development, they argue, is already out of control. This fits with the ABC News interview broadcast just over a week ago, in which OpenAI’s chief executive warned about his own creation, ChatGPT.
Undoubtedly, within a well-defined framework, AI can perform many tasks far better than humans ever could. Just think of evaluating huge amounts of data, for example in medical diagnostics or in “predictive maintenance” applications for managing machinery. But AI technology also carries enormous risks.
“Democracies in particular are at risk”
The authors of the letter fear that “AI systems with human-competitive intelligence can pose profound risks to society and humanity”. According to the letter, extensive research backs this up, and leading AI labs acknowledge it. Advanced AI could bring about a profound change in the history of life on Earth and should be planned and managed with due care and resources. “Unfortunately, such planning and management do not take place.”
The critics explicitly point out that the new technology could spread propaganda and hate speech on an unprecedented scale. They fear negative effects on the world of work and worry that even high-quality jobs could be lost: “Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” Such decisions, they argue, must not be delegated to unelected tech leaders.
First, create clear rules for AI development
The initiators of the proposed pause therefore demand that an ethical framework with clear limits be created first – limits that must not be exceeded in AI development. Powerful AI systems should only be developed once it is certain that their “effects will be positive and their risks will be manageable”. The minimum six-month moratorium refers explicitly to the training of AI systems more powerful than GPT-4. During these six months, ongoing developments are to be reviewed by external experts, and developers are to jointly design and implement safety protocols.
These protocols should ensure that systems adhering to them are safe beyond reasonable doubt. This does not mean a general halt to AI development, but merely a step back from the dangerous race toward ever larger, ultimately unpredictable black-box models with emergent capabilities. “AI research and development should be refocused on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.”
In parallel, AI developers would need to work with policymakers “to dramatically accelerate development of robust AI governance systems.” At a minimum, these should include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computing power; provenance and watermarking systems that help distinguish real from synthetic data and track model leaks; a robust auditing and certification ecosystem; liability for harm caused by AI; solid public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions that AI will cause – “especially to democracy”.
The problem: globally harmonized rules that are also enforced
The problem with this call for more control is obvious: first, such regulation would have to apply to all actors worldwide, and second, compliance would have to be monitored and documented. How is that supposed to work? And who should do it? China’s Xi will certainly not let US AI inspectors into the country – and vice versa. It is becoming ever clearer that technological development is now light years ahead of the regulatory authorities.
Slowing down the hitherto uninhibited AI hype looks almost hopeless. The situation resembles scattering a pillow’s feathers to the four winds and then trying to collect them all again. New AI applications are popping up around the world practically every week, and companies are outdoing each other with their announcements.
Pause now – it has worked with other risky technologies
Nevertheless, one should not give up: society has paused other technologies with potentially catastrophic effects before – such as human cloning, human germline modification, gain-of-function research, and eugenics. The authors argue: “We can do so here.”
Humanity can enjoy a flourishing future with AI – “Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.”