I’m closing out March—AI Danger Awareness Month—with a warning about what’s predicted when artificial superintelligence (ASI) arrives in the not-too-distant future.
Note: On the topic of AI, I’m no expert. Instead, I’m more like the sixth-grade crossing guard who knows just enough to help the younger students avoid the risks.
Predictions about AI’s Future
Consider the future potential of ASI.
- In early 2026, the vendors of AI technology still give few warnings about its use.
- Those same vendors say even less about what people around the globe can expect when AI morphs from a so-called ‘AI tool’ into ‘ASI’ (i.e., artificial superintelligence).
- When (not if, according to many experts) AI becomes ASI, the outcome won’t negatively affect just one person, but all of humanity.
For background and definitions, read: Don’t Confuse AI with a Benign Tool.
Historical Doomsday Events We Barely Avoided
Again, you may think I’m reacting to a few doomsday websites and reports, but by the grace of God, humanity has so far survived several potential extinction-level events.
For example:
Cuban Missile Crisis (October 16 to 28, 1962)
The Cuban Missile Crisis of October 1962 gave us a near miss with a doomsday event. Only diplomacy prevented the launch of missiles from Cuba’s shores, along with others placed elsewhere around the globe. What could have happened if AI had decided whether to fire the missiles?
https://en.wikipedia.org/wiki/Cuban_Missile_Crisis
Stanislav Petrov (September 26, 1983)
Many credit Soviet officer Stanislav Petrov with averting an all-out war between his country and the US. Soviet policy called for immediate notification of the high command if the early-warning system detected incoming missiles, with retaliation to follow forthwith. Petrov suspected the September 26, 1983, warning was false, and he delayed notification. Because of one man’s bravery, both countries avoided a nuclear winter. What could have happened if the Soviets had used AI to automate retaliation?
https://en.wikipedia.org/wiki/Stanislav_Petrov
False Report of Soviet Nuclear Attack (June 3, 1980)
A computer communications device failure caused warning messages to flash sporadically at North American Aerospace Defense Command and U.S. Air Force command posts around the world, indicating that a Soviet nuclear attack was underway. The malfunction recurred on June 6. The false alarm provided inspiration for the 1983 film WarGames. Today, we read too often about AI models that cause problems because of errant vibe coding or poor prompts. In the future, what could happen when ASI controls power grids, water treatment, and nuclear retaliation?
https://en.wikipedia.org/wiki/List_of_nuclear_close_calls
Chernobyl Disaster (April 26, 1986)
The Chernobyl Nuclear Power Plant, in Ukraine, exploded on April 26, 1986. The explosion (attributed to human error) caused many immediate casualties and thousands of long-term health complications from the spread of radioactive material in the atmosphere. On the International Nuclear Event Scale, it ranked a 7, the top of the disaster scale. Over time, an emergency operation put out the fires, stabilized the reactor, and helped stem the release of more radioactivity into the atmosphere. In the future, what could happen if ASI controlled the response to a disaster like Chernobyl (e.g., would ASI even care about the loss of human lives)?
https://en.wikipedia.org/wiki/Chernobyl_disaster
The Black Brant Scare (January 25, 1995)
A Norwegian rocket carrying scientific equipment was launched on January 25, 1995, and reached an altitude of 903 miles. It resembled a US Navy Trident missile. The Russians feared a high-altitude nuclear detonation, the kind that could blind their radar systems. Russian President Boris Yeltsin had only minutes to decide whether to launch a retaliatory nuclear strike on the United States. Fortunately, Russian observers determined the rocket posed no threat, and Yeltsin did not order retaliation. What could happen if ASI controls the decision to retaliate?
https://en.wikipedia.org/wiki/Norwegian_rocket_incident
Gain-of-Function Research (Early 2011 to 2021)
A little-known experiment in 2011 produced a modified H5N1 influenza A virus through gain-of-function research. That work could have led to a pandemic, but fortunately, the research was halted in 2014. Unfortunately, gain-of-function research resumed under new leadership in 2017. In the future, consider what could happen when ASI controls how gain-of-function research is performed.
https://en.wikipedia.org/wiki/Gain-of-function_research
Humans, Not AI, Saved the Day (Barely!)
No doubt humans created the potential for the disasters and near misses cited above. Men and women also saved the day.
What no person can predict with any accuracy is what will happen when ASI creates a disaster. Will it have sufficient incentives and resources to save the day? Will ASI care about humanity? If humans don’t know what AI did to cause the problem, will they have any chance to turn things around?
Many have grave doubts, as supported by the evidence in these linked articles.
Note: I cannot foretell the future, but I can choose to avoid known risks; thus, consider AI’s actual and potential dangers to make informed decisions.
Your Thoughts?
What do you think will happen in the future when ASI arrives?
Note: At the India AI Summit, Sam Altman said superintelligence could be here by 2028. As you read Altman’s speech, reflect on how rarely people around the globe agree on anything. Then imagine what can happen as AI development speeds up without adequate safeguards. Think ‘gain of function’ on steroids (e.g., COVID-19) and you’ll get a sense of the dangers.
Read these articles with AI EXECUTIVE CHOICES in the headings. Scary!

