Game Over When AI Turns into ASI

I’m closing out March—AI Danger Awareness Month—with a warning about what’s predicted when artificial superintelligence (ASI) arrives in the not-too-distant future.

Note: On the topic of AI, I’m no expert. Instead, I’m more like the sixth-grade crossing guard who knows just enough to help the younger students avoid the risks.

Predictions about AI’s Future

Consider the future potential of ASI.

  • In early 2026, the sellers of AI technology still give few warnings about its use.
  • Those retailers offer even fewer mentions of what people around the globe can expect when AI morphs from a so-called ‘AI tool’ into ‘ASI’ (artificial superintelligence).
  • When (not if, according to many experts) AI becomes ASI, the outcome doesn’t just affect one person negatively, but all of humanity.

For background and definitions, read: Don’t Confuse AI with a Benign Tool.

Historical Doomsday Events We Barely Avoided

Again, you may think I’m reacting to a few doomsday websites and reports, but by the grace of God, humanity has so far survived several potential extinction-level events.

For example:

Cuban Missile Crisis (October 16 to 28, 1962)

The Cuban Missile Crisis of October 1962 gave us a near miss with a doomsday event. Only diplomacy averted the firing of missiles from Cuba’s shores, along with those placed elsewhere around the globe. What could have happened if AI had decided whether to fire the missiles?

https://en.wikipedia.org/wiki/Cuban_Missile_Crisis

Stanislav Petrov (September 26, 1983)

Many credit the Russian officer Stanislav Petrov with averting an all-out war between his home country and the US. Soviet policy called for immediate notification of the high command if the early warning system detected incoming missiles, and retaliation would follow forthwith. Petrov suspected the September 26, 1983, warning was false, and he delayed notification. Because of one man’s bravery, both countries avoided a nuclear winter. What could have happened if Russia had used AI to automate retaliation?

https://en.wikipedia.org/wiki/Stanislav_Petrov
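The protocol Petrov bent can be thought of as a human-in-the-loop gate. This is a hypothetical sketch (not any real system): it only illustrates why requiring a credibility judgment from a human changes the outcome when sensors produce false positives.

```python
# Hypothetical illustration only -- no real launch system works this way.

def auto_retaliate(sensor_alert: bool) -> bool:
    """Fully automated policy: any sensor alert triggers retaliation."""
    return sensor_alert

def human_in_loop(sensor_alert: bool, human_judges_credible: bool) -> bool:
    """Petrov-style policy: an alert alone is not enough;
    a human must also judge the alert credible."""
    return sensor_alert and human_judges_credible

# September 26, 1983: the sensor alerted (a false positive),
# but Petrov judged the alert not credible.
false_alarm = True
print(auto_retaliate(false_alarm))            # automation would have fired
print(human_in_loop(false_alarm, False))      # the human veto held
```

The point of the sketch is the single extra `and`: removing the human judgment from the conjunction is exactly what "automating retaliation" means.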

False Report of Soviet Nuclear Attack (June 3, 1980)

A computer communications device failure caused warning messages to flash sporadically at North American Aerospace Defense Command (NORAD) and U.S. Air Force command posts around the world, indicating that a Soviet nuclear attack was taking place. The malfunction happened again on June 6. The false alarm provided inspiration for the 1983 film WarGames. Today, we read too often about AI models that caused problems because of errant vibe coding or poor prompts, so what could happen in the future when ASI controls power grids, water processing, and nuclear retaliation?

https://en.wikipedia.org/wiki/List_of_nuclear_close_calls

Chernobyl Disaster (April 26, 1986)

The Chernobyl Nuclear Power Plant, in Ukraine, exploded on April 26, 1986. The explosion (attributed to human error) resulted in many immediate casualties and thousands of long-term health complications stemming from the spread of radioactive material in the atmosphere. It ranked a 7 on the International Nuclear Event Scale, the top of the disaster scale. An emergency operation over time put out the fires, stabilized the reactor, and helped stem the spread of more radioactivity into the atmosphere. In the future, what could happen if ASI controlled the response to a disaster like Chernobyl (e.g., would ASI even care about the loss of human lives)?

https://en.wikipedia.org/wiki/Chernobyl_disaster

The Black Brant Scare (January 25, 1995)

A Norwegian rocket carrying scientific equipment was launched on January 25, 1995, reaching an altitude of 903 miles. It resembled a US Navy Trident missile. The Russians feared a high-altitude nuclear attack, the kind that could blind their radar systems. Russian President Boris Yeltsin had only minutes to decide whether to launch a retaliatory nuclear strike on the United States. Fortunately, Russian observers determined the rocket posed no threat, and Yeltsin did not order retaliation. What could happen if ASI controls the decision to retaliate?

https://en.wikipedia.org/wiki/Norwegian_rocket_incident

Gain-of-Function Research (Early 2011 to 2021)

A little-known 2011 experiment produced a modified H5N1 influenza A virus through gain-of-function research. An accidental release could have led to a pandemic, but fortunately, US funding for that research was paused in 2014. Unfortunately, gain-of-function work resumed under new leadership in 2017. In the future, consider what could happen when ASI controls how gain-of-function research is performed.

https://en.wikipedia.org/wiki/Gain-of-function_research

Humans, Not AI, Saved the Day (Barely!)

Humans undoubtedly created the potential for the disasters cited above, but men and women also saved the day.

What no person can predict with any accuracy is what will happen when ASI creates a disaster. Will it have sufficient incentives and resources to save the day? Will ASI care about humanity? If humans don’t know what AI did to cause the problem, will they have any chance to turn things around?

Many have grave doubts, as supported by the evidence in these linked articles.

Note: I cannot foretell the future, but I can choose to avoid known risks; thus, consider AI’s actual and potential dangers to make informed decisions.

Your Thoughts?

What do you think will happen in the future when ASI arrives?

Note: At the India AI Summit, Sam Altman said superintelligence could be here by 2028. As you read Altman’s speech, reflect on how rarely people around the globe agree about anything. Then imagine what can happen as AI development speeds up without adequate safeguards. Think ‘gain of function’ on steroids (e.g., COVID-19) and you’ll grasp a sense of the dangers.

Read these articles with AI EXECUTIVE CHOICES in the headings. Scary!

20 responses to “Game Over When AI Turns into ASI”

  1. Wynne Leon Avatar

    What an interesting list of near extinction events. Thanks, Grant.

    1. Grant at Tame Your Book Avatar

      Thanks for stopping by, Wynne. Please see the list under Laura Lyndhurst’s comments and spread the word!

  2. lyndhurstlaura Avatar

    Given the human propensity for not learning from history, Grant, I suspect we’re doomed. I also recall that, far from being rewarded for averting catastrophe, Petrov was disciplined, fired, and died in obscurity. There’s hope, however, in Anthropic’s refusal to let the Pentagon use its technology without human oversight. We live in perilous times, but at least when the s**t hits the fan you can say, ‘Well, I tried.’ Thanks for your efforts.

    1. Grant at Tame Your Book Avatar

      I appreciate your view, Laura, and here’s a brief story that shapes my efforts.

      As they walked along the storm-strewn beach, the grandfather watched as his grandson returned stranded starfish to the water, one at a time. With hands on hips, irritated at the child’s fruitless effort, the grandfather asked, “Why waste your time? It doesn’t matter.”

      The boy smiled and cast the starfish into the sea. “It mattered to this one.”

      By raising AI awareness, it mattered to:

      • A parent who stopped their child from abusing others with AI images and text.
      • A writer who recognized that using AI traded a short-term gain in output for a long-term degradation of mental acuity and honed skills.
      • A blogger who realized that using AI-generated images didn’t draw people, but actually drove many away.
      • An individual who recognized and avoided an AI-orchestrated phishing or scam.
      • A researcher who corrected a critical error by validating their work with traditional methods.
      • A student who studied and got the desired grade, increased confidence, and self-respect instead of using AI to cheat.
      • A spouse who got help for their loved one spiraling downward with AI-psychosis toward self-harm.

      …and the list goes on.

      Spread the word!

      1. lyndhurstlaura Avatar

        All excellent examples, Grant, and we’ll keep on fighting the good fight – even if for small results. Incidentally, your story reminded me of one I read many years ago; not exactly the same, and pre-AI, but on the same lines. A father took his small son to a beachside restaurant and ordered lobster. The waiter asked him to choose from the tank of live creatures, which he did, before noticing his son looking very downcast. A short conversation ensued, and the father called over the waiter, issuing instructions and waiting until the waiter brought over the lobsters he’d ordered, still live and kicking. Then the father and son made a solemn way down to the water’s edge, where they released the shellfish back into the sea. He paid the bill, they left, and the lobsters lived to fight another day. 🙂

        1. Grant at Tame Your Book Avatar

          Thanks, Laura, and I enjoyed your story. Like with the fight against the tobacco industry, justice from the courts takes time. For now, helping people avoid dangers happens each time we make others aware.

          1. lyndhurstlaura Avatar

            It’s the least we can do, Grant. Many thanks. 🙂

  3. Kay DiBianca Avatar

    I share your concern, Grant. AI is a powerful tool, and I’m sure it can help mankind in many ways, but giving it power to make life-and-death decisions is frightening.

    I know a little about the Cuban missile crisis because my husband was aboard a destroyer during the event. After some documentation was declassified a few years ago, we learned that the commander of one of the Soviet submarines had lost contact with Moscow. Thinking World War III had begun, he considered firing a nuclear torpedo at one of the American ships—possibly the one Frank was on. However, the Soviet protocol stated that the three highest-ranking officers aboard a Soviet submarine had to unanimously agree to fire a nuclear torpedo. When the three men met, two of them voted “yes.” The other man, Vasili Arkhipov, voted “no.” PBS did a special about the event called “The Man Who Saved the World.”

    1. Grant at Tame Your Book Avatar

      Thanks, Kay, for amplifying the key point!

      We do not know all the negative effects as the AI ripples spread across the globe. However, history shows us that people will use what is intended for good for nefarious purposes.

      Here’s what Mae Clair wrote last week (March 18, 2026):

      “Ugh! Just seeing “AI” in type or hearing the word makes me cringe. I know there are good uses for it, but–like anything–there’s always someone willing to examine the underbelly.

      In my area, two teen boys pulled regular images of their female classmates from social media posts then manipulated them with AI to create nude photos for sharing in a private chatroom. The case is now going to trial and two teachers (who apparently had some inkling of what was going on) have resigned from their positions at the school. Then there is the psychological damage to all those girls who were innocent victims. So many lives impacted, many ruined. I know “AI” in itself wasn’t at fault, rather the boys who used it, but it’s sad to see the kind of damage it can do in the wrong hands.”

      I can’t imagine the magnitude of heartache felt by the girls, boys, and parents.

      BTW: We have teachers across the US recommending the use of AI tools with little or no supervision.

      Now that’s real-time scary!

  4. Mae Clair Avatar

    Very sobering thoughts, Grant. I was only aware of several of these occurrences. Even more frightening is the prediction ASI could be here by 2028!
    We need to be aware—and be cautious!

    1. Grant at Tame Your Book Avatar

      For those who doubt the rapid advancement, Nvidia (an AI chip maker) CEO Jensen Huang said yesterday that his company had achieved AGI. That’s the reputed precursor to achieving ASI. I’ll wait for the dust to settle from the critics. However, it looks like ASI may come faster than many of the so-called experts predicted.

      1. Mae Clair Avatar
        1. Grant at Tame Your Book Avatar

          Absolutely! Now you know why I keep beating the drum for those who aren’t following the trend.

  5. john buckner Avatar
    john buckner

    I’m with you, Grant, as well as the commenter, PB. When I saw a demonstration of human-scaled robots clocked at a dead run of over 20 mph, the only thing I could say was, “We’re toast against a robot army.” 100-meter champion Usain Bolt was clocked at 27 mph at some point in that championship race in 2009—only 100 meters. Though I use AI to aid the creation of creative work in minimal ways, such as spell/grammar check in MS Word, AI-aided drums in Logic Pro, and some AI-credited photos, as mentioned before, I could easily live without it, but there is simply no turning back—hence the faith comment. It’s already professed in the Book of Revelation. We need to get our heads wrapped around it because there is no stopping this. And as usual, on the front lines of prediction, SciFi writers have foretold this story as well.

    1. Grant at Tame Your Book Avatar

      Thanks for your viewpoints, John! Only a fraction of people subscribed to the various AI models have the wisdom and skill to discern where to draw the line in their use.

      We can see in court cases and in the headlines that the decisions of AI executives overwhelmingly trend toward speed and profit instead of caution and safety. As noted to Priscilla, the decision against Meta offers hope that guardrails and legal recourse are coming.

      You’ve made me curious. When you get a minute, please share the scripture you referenced in the Book of Revelations.

      1. John Cave Buckner Avatar

        Hey Grant, assuming AI will be the death of us all, I was speaking in general terms, as Revelation describes God creating a new Heaven and Earth, with no sun or moon—only Christ’s light. As the Alpha and Omega, if God wants to use AI for his purposes, who am I to question? But I was thinking of 9:3-7, which describes the release of locusts from the depths of hell, with “gold crowns” and “breastplates that seemed to be of iron,” and “their wings roared like an army of chariots rushing into battle.” For 20 years I wondered whether this was an AD 80 drone description from the Apostle John, who is credited with writing the book, and I was thinking of the late Michael Crichton and his book “Prey,” about a flying swarm of nanotechnology-produced bots with a hive mentality. The coordinated attacks needed with these mass-produced numbers in either example would take an intelligence we could assume is AI, or induced through man by a higher power such as God, using man to serve his purposes for good or evil. Speculative, for sure, but interesting to think about. So, for me, this ultimately comes down to commenter PB and her wise observation invoking “faith.”

        1. Grant at Tame Your Book Avatar

          Thanks, John! I appreciate the insights, and Crichton’s Prey remains one of my favorites. Priscilla nailed it with her keen observation about faith.

          For those who might not be familiar with Christian science fiction, it’s a growing category on Amazon. There are many subcategories, which I won’t go into here. Within these categories, you’ll find many talented authors who know how to spin an engaging tale and stay within sound doctrine. Authors such as Jamie Lee Grey, Candle Sutton, D. I. Hennessey, Terri Blackstock, Mark Goodwin, Toby Neighbors, and many more. And for classics, decades ago, C. S. Lewis contributed to this genre.

          1. John Buckner Avatar

            Thanks for the information. I’d like to see what that is all about.

  6. Priscilla Bettis Avatar

    If we add quantum computing to the mix, the speed at which ASI escalates is exponential. I don’t know how people without a faith to ground them deal with this stuff.

    1. Grant at Tame Your Book Avatar

      Spot on, Priscilla! I don’t want to squash technology, but I demand adequate guardrails and legal recourse. The March 24, 2026, multi-million dollar ruling in the social media case against Meta (Facebook) offered encouragement.
