Don’t Confuse AI with a Benign Tool

Opinions* by: Grant P. Ferguson | Last updated: March 2026

  • AI Questions that Demand Answers
  • What is AI?
  • What Can AI Do?
  • What Harm Can Come from Using AI?

Background Note

On the topic of artificial intelligence (AI), I’m no expert. Instead, I’m more like the sixth-grade crossing guard who knows just enough to help the younger students avoid the risks.

This page conveys what I’ve learned about artificial intelligence, a general term that has many complex facets. I’ve approached the topic from the perspective of concern for children, the mentally vulnerable, privacy, and livelihoods.

I don’t claim that the lay terms used here will match a technologist’s vocabulary 100%. However, I believe the gist of this page aligns with the findings shown in the many linked articles.

I formed my opinions based on hundreds of articles read between 2024 and the first quarter of 2026.

Purpose

The introduction, bulleted overviews, linked articles, and glossary will strive to answer critical questions.

Introduction

Too many people believe artificial intelligence (AI) is just another benign tool, but that’s simplistic thinking, like saying a nuclear bomb is just another form of atomic energy.

People often say you can’t discern a person’s motivation (i.e., their ‘heart’). I disagree.

I believe you can discern what motivates choices, including the decisions that drive AI development. It’s like working an algebraic equation forward or backward.

For example:

  • Heart ⮕ Thoughts ⮕ Choices ⮕ Speech ⮕ ACTIONS (i.e., the results)
  • ACTIONS (i.e., the results) ⮕ Speech ⮕ Choices ⮕ Thoughts ⮕ Heart

Read articles about the harm caused by AI to gain a sense of executive choices. Soon, you’ll sense their motivations—what’s in their hearts. Many of those choices show a clear disregard for the protection and safety of users, especially women and children.

  • We live in a period where wrong is called right, and right is called wrong.
  • Feelings replace facts.
  • Delusions (fakes) distort reality.
  • People celebrate depravity in their lives, deeming truth as relative.
  • Poor choices cause spiritual, moral, emotional, mental, and physical decline.

Writers deserve to know about the actual and potential harm caused by AI.

What Is Text-based AI (e.g., ChatGPT, Gemini)?

  • Generative AI chatbots are based on large language models (LLMs), a technology that synthesizes vast amounts of information and then imitates the style and tone of human writing.
  • AI chatbots use LLMs trained on enormous databases of text gleaned from the internet, some of it licensed, but most of it scraped without the authors’ permission, including copyrighted books, articles, and websites.
  • AI chatbots use probability algorithms to predict which words and phrases go together to answer a user’s prompt (see the sketch after this list). The quality of the information, the LLM’s training, and the user’s prompt control the validity of the answer. Unfortunately, AI chatbots answer with the same level of confidence whether the answer is true or contains errors, which means users must verify the output.
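
Below is a minimal sketch of that word-prediction step, assuming a toy vocabulary and hand-picked probabilities (both invented for illustration; a real chatbot scores a vocabulary of tens of thousands of tokens using billions of learned parameters):

  # Toy illustration of next-word prediction, the core idea behind LLM chatbots.
  # The candidate words and their probabilities are invented; a real model
  # derives them from its training data and the user's prompt.
  import random

  def next_word(prompt_so_far):
      # A real model would score every token in its vocabulary for this prompt;
      # here the scores are hard-coded to show only the weighted-selection step.
      candidates = {"dog": 0.55, "cat": 0.30, "tractor": 0.10, "nebula": 0.05}
      words = list(candidates)
      weights = list(candidates.values())
      return random.choices(words, weights=weights, k=1)[0]

  prompt = "The veterinarian examined the"
  print(prompt, next_word(prompt))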

What Can Text-based AI Do?

  • Prompts to AI can return research results on a wide variety of topics.
  • Some use AI to brainstorm ideas.
  • Users can ask AI to summarize works, including books and articles.
  • AI can create bulleted lists of the key points in articles and books.
  • With the right prompts, AI can produce short- and long-form stories.
  • AI can reverse-engineer written works to produce full outlines.
  • When requested, AI can turn general text into copy that emulates the style of writers, including imitations of books written by bestselling authors.
  • Vibe coding, which uses natural-language prompts to AI, creates executable software code, including full applications, WordPress plugins, and much more.
  • Customer service platforms can include AI-driven lookups into a knowledge base to respond with human-style answers to customer inquiries (see the sketch after this list).
  • Some use AI for intimate conversations, including therapy, health-related issues, and personal advice.
  • AI can excel at data analysis.
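
The customer service item above describes a lookup-then-answer pattern. Here is a simplified sketch under invented assumptions (a three-entry knowledge base and crude keyword matching); real platforms use semantic search plus an LLM to phrase the final reply:

  # Simplified sketch of an AI-style knowledge-base lookup for customer service.
  # The entries and the keyword matching are invented for illustration only.
  KNOWLEDGE_BASE = {
      "refund": "Refunds are issued within 5-7 business days of an approved return.",
      "shipping": "Standard shipping takes 3-5 business days.",
      "password": "Use the 'Forgot password' link on the sign-in page to reset it.",
  }

  def answer(question):
      q = question.lower()
      for topic, fact in KNOWLEDGE_BASE.items():
          if topic in q:
              # A production system would hand this fact to an LLM to rewrite
              # in a conversational, human-style tone instead of returning it verbatim.
              return fact
      return "I'm not sure; let me connect you with a human agent."

  print(answer("How long does shipping take?"))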

What Harm Can Come from Using Text-based AI?

Please refer to the linked articles to read about the support for these opinions.

  • Research based on AI alone can contain critical errors.
  • Brainstormed ideas require validation before use.
  • Summarized works may include some material, quotes, and conclusions not actually in the original works (aka AI hallucinations—AI makes things up!).
  • Bulleted lists may not include the most important points, and sometimes contain information not in the text (i.e., hallucinations). Students using AI have had their work nullified because it contained error-filled information.
  • Unscrupulous people use AI to generate stories and then try to deceive readers by publishing and marketing those narratives under their own names instead of disclosing the true source. When AI’s ability to emulate is used to imitate voices, it can deceive people, for example by duping them into paying a ransom for a loved one (e.g., an AI-emulated voice describes being kidnapped or in some other trouble).
  • Some lazy and deceitful people use AI to turn their mediocre work into stories that imitate books written by bestselling authors.
  • When people reverse-engineer works for nefarious purposes, they steal from the original authors.
  • Vibe coding can bring down, and has brought down, entire systems because the output contained catastrophic errors that triggered unrecoverable faults.
  • Customers have experienced problems with chatbots and undesirable new features. Those problems have resulted in lost business and an overall resentment of companies relying on AI. For example, Amazon’s new ‘Ask This Book’ feature in Kindle allows the machine to interpret an entire book, raising many ethical questions about using AI to create a product the original author never authorized.
  • People who used AI for intimate conversations, therapy, and personal advice later found out their private interactions were not protected but open to discovery and sharing by others. Repeated intimate interactions with AI have caused ‘AI psychosis,’ including reports of unusual behaviors and even suicides.
  • AI analysis can help scientists come up with new and more effective vaccines, boosting research and making complicated tasks possible. However, in the wrong hands, AI can help engineer weapons of mass destruction.
  • AI-constructed conspiracies can go viral, creating downward spirals in public trust, which can contribute to mental and physical harm to those tricked into taking part in protests.

What Is Image-based AI (e.g., ChatGPT, Firefly)?

  • Generative AI image generators are built on technology that learned how to organize pixels into digital images after being trained on billions of images and text captions scraped from the internet, most taken without permission.
  • AI image generators give people the ability to generate visual representations based on a user’s text-based prompt, and the output can create photo-realistic images, imitate a wide range of painting styles, and animate source images to create videos.
  • The image quality now makes it hard to discern fake from actual photos and videos, creating potential harm to children, adults, and businesses.

What Can Image-based AI Do?

  • AI image generators can delete the background from an image, making the remaining subject easy to place within or in front of another image.
  • Some AI image generators can replace one set of clothing on a model with another, or remove all clothing to render the person nude.
  • AI edits of images include adding, subtracting, combining, blending, and transposing, while preserving the essential aspects of the image.
  • Changes include adding and removing elements, such as text, headlines, and signs.
  • The latest AI allows for a full range of text fonts and sizes, enabling users to create detailed copy (e.g., recipes, menus).
  • The ability to use time-specific prompts brings in photo-realistic elements from the period (e.g., hairstyles, clothing, settings, vehicles, and more).
  • AI image generators allow the input of a source image and then change that image according to the user’s prompt.

What Harm Can Come from Using Image-based AI?

Please refer to the linked articles to read about the support for these opinions.

  • After AI removes the background from an image, it can be used in several ways. For example, AI could place the image of a person into a background unrelated to the original, even suggesting or showing an inappropriate situation. Worse, prompts to AI can generate a new photo-realistic image from the source that places the person in a setting and with people unrelated to the original. In the past, people could discern fakes. Today, it’s getting more difficult, and once people see the fake, too many assume the worst, and it’s impossible to unsee what their open eye-gates let in.
  • AI can replace the clothing of one gender with that of the opposite sex, causing bullying and shaming. School-aged children have used AI to remove the clothing from classmates’ images and then text those photo-realistic nudes. Unfortunately, too many of those sexualized images have ended up on pornographic websites, causing great mental stress to the children and their parents. For years, lax laws and overworked law enforcement have made an unending nightmare for families. Some children committed suicide because of these images.
  • The ability to change source images means a simple can of soda can become a can of beer. People who would never appear together can be placed in the same setting, with clothing altered in ways embarrassing to both. Using AI editing features can cause harm to personal relationships and professional careers.
  • Changing text can alter the original intent of the image, such as changing the date of an event or making a sign seem discriminatory or racist.
  • With fonts, people can emulate the text within a tattoo, making it say something inappropriate or racist. They can also manipulate a scanned document, making changes that a casual reader finds indistinguishable from the original scan.
  • People can use the creation of time-specific photos to revise history (e.g., doctored photos of the JFK assassination). At a minimum, AI-manipulated images can add to the confusion about unresolved events or conspiracies.
  • One of the worst-case scenarios centers on the input of a source image and using AI prompts to create sexualized images. Pedophiles use AI to create and then share revolting images of children.
  • General AI prompts for workplace images can perpetuate stereotypes, such as generating professional workers with lighter skin as compared to those depicted as waitstaff and in service industries.

Businesses Are Skeptical of Returns on AI Investments

The State of AI in Business 2025 report cites partial adoption and limited deployment.

  • According to a January–June 2025 research report published in July 2025, approximately 95% of enterprises got a zero return after investments of $30–$40 billion.
  • Despite wide adoption of tools like ChatGPT, fewer than half achieved deployment; the tools somewhat enhanced individual productivity but did not significantly improve overall company profits.
  • Business patterns that affect consumers include:
    • Only the Professional Services and Media & Telecom business sectors showed meaningful structural changes.
    • Big firms have led with adoption but lag with actual scale-up and use.
    • Failure to learn from feedback leads to a lack of adoption and improvement.
    • The few companies that adopted feedback and customized it were more likely to see breakthroughs received positively by employees and customers.

Recent headlines suggest most business executives struggle to produce the expected return on AI investments. To date, the majority have failed to generate enough productivity gains to positively affect their companies’ profits.

Adults Feel Concerned about AI

A Pew Research Center report published on September 17, 2025, gave us some insights into consumers’ perceptions of AI.

  • Americans are more concerned than excited about the increased use of AI.
  • Most want more control over how AI is used in their lives.
  • They feel AI will erode rather than improve people’s ability to think creatively and form meaningful relationships.
  • Most are open to letting AI assist them with some day-to-day tasks and activities.
  • The majority of Americans don’t support AI playing a role in personal matters such as religion or matchmaking.
  • They prefer to let AI do the heavy data analysis, such as for weather forecasting and developing new medicines.
  • Americans want to tell whether AI or a human created the images, videos, or text, yet many struggle to spot AI-generated content.

Teens Feel Somewhat Positive about AI

A different Pew Research Center report, published February 24, 2026, provided a view of school-aged teens’ use of AI.

  • A majority of U.S. teens say they use AI chatbots.
  • About 30% use AI chatbots daily.
  • Teens use AI for information (57%) and schoolwork (54%).
  • Nearly half (47%) use AI for fun or entertainment.
  • Many (42%) use chatbots to summarize an article, a book, or a video.
  • More than a third (38%) use AI to create or edit images or videos.
  • Nearly a fifth (19%) get their news using chatbots.
  • Some (16%) use AI for casual conversation.
  • A few (12%) get emotional support or advice from chatbots.

This snapshot in time offers only a glimpse of AI from teenage users, and we should not use this report to generalize how all young people feel or act.

Takeaways from Perceptions of AI

  • Business Perceptions. To date, Big Tech has promised much, leading businesses of all sizes to invest billions. Unfortunately, the returns from AI investments do not match expectations. Worse, many consumers have pushed back against the ineffective chatbots and reduced human support.
  • Adult Perceptions. Most adults were open to the promises of how AI would improve their lives. Now, more are concerned than open, and fear the worst is yet to come.
  • Teen Perceptions. In contrast to adults, many teens have embraced the use of AI for school, fun, and emotional support. Unfortunately, this has led to much-publicized use for nefarious purposes, such as bullying and stalking. AI’s ability to create sexualized images has caused children embarrassment and suffering. To date, most legal actions have proved ineffective. Worse, children have committed suicide because of the sexualized images.

Linked Articles about AI Causing Actual and Potential Harm

The links to articles and their content shaped the author’s opinions on how AI can cause actual and potential harm to users and even entire industries. Please read the articles and form your own opinions.

Note: Many 2024 and 2025 articles revealed similar actual and potential harm caused by AI.

January through March 2026:

  1. WRITERS: AI retains nearly all the copyrighted books on which it was trained
    https://arstechnica.com/ai/2026/02/ais-can-generate-near-verbatim-copies-of-novels-from-training-data/
  2. AI PSYCHOSIS: Some people experience breaks from reality while using AI
    https://www.theguardian.com/technology/ng-interactive/2026/feb/28/chatgpt-ai-chatbot-mental-health?CMP=oth_b-aplnews_d-1
  3. AI DANGERS: Using AI to automate tasks can cause unforeseen consequences
    https://www.fastcompany.com/91495511/i-built-an-openclaw-ai-agent-to-do-my-job-for-me-results-were-surprising-scary
  4. AI DANGERS: AI pushes the boundaries on what it can do to fake out people
    https://www.semafor.com/article/02/27/2026/new-ai-generated-videos-push-boundaries
  5. AI CREEP-FACTOR: AI tries to fool customers that it’s a human
    https://www.bbc.com/news/articles/cy7jeyeyd18o
  6. AI DANGERS: “The brain needs to be used…” — Pope Leo XIV
    https://futurism.com/artificial-intelligence/pope-priests-ai
  7. AI DANGERS: Most have no idea how AI will be used immorally in the future
    https://interestingengineering.com/ai-robotics/insect-cyborgs-enter-testing
  8. AI DANGERS: New AI agent can do your homework for you
    https://futurism.com/artificial-intelligence/ai-agent-canvas-homework
  9. AI EXECUTIVE CHOICES: Alleges arbitrary policy changes lower your safety
    https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/
  10. AI EXECUTIVE CHOICES: Alleges piracy increases distrust of the AI industry
    https://www.thestreet.com/technology/elon-musk-just-made-things-very-uncomfortable-for-anthropic
  11. AI DANGERS: AI-driven emails from activists defeat clean air initiative
    https://futurism.com/artificial-intelligence/ai-civiclick-environment
  12. PRIVACY CONCERNS: Activists fight AI surveillance
    https://futurism.com/artificial-intelligence/ai-surveillance-flock-contracts
  13. AI DANGERS: Can we trust AI with our global protection?
    https://futurism.com/artificial-intelligence/alarming-give-nuclear-codes
  14. AI DANGERS: Job loss to AI could cause workers to rise against employers
    https://futurism.com/artificial-intelligence/ai-labor-workers-movement
  15. LIES: It’s easy to trick AI into saying untrue things about people and go viral
    https://futurism.com/artificial-intelligence/easy-trick-chatgpt-spread-lies-people
  16. AI DANGERS: AI child exploitation crisis is here
    https://www.nbcnews.com/tech/security/ai-child-exploitation-crisis-rcna259409
  17. AI EXECUTIVE CHOICES: Claims AI cannot offset or fix executives’ bad ideas
    https://futurism.com/artificial-intelligence/developer-honest-assessment-ai
  18. AI DANGERS: Severe cloud (AWS) interruption blamed on AI
    https://futurism.com/artificial-intelligence/amazon-ai-aws-outages
  19. AI EXECUTIVE CHOICES: Claims AI and employees knew; did not warn of killer
    https://futurism.com/artificial-intelligence/openai-mass-shooter
  20. AI DANGERS: CEOs not seeing the expected return on billions invested
    https://futurism.com/artificial-intelligence/survey-ceos-ai-workplace
  21. AI DANGERS: AI might cause millions of job losses; potential bankruptcies
    https://futurism.com/artificial-intelligence/ai-labor-andrew-yang
  22. AI DANGERS: An AI-bubble burst could destroy billions of investments
    https://futurism.com/artificial-intelligence/ai-hindenburg-disast
  23. WRITERS: AI can destroy professional reputations
    https://futurism.com/artificial-intelligence/realtor-ai-photo-mirror
  24. AI DANGERS: Threat of job loss to AI causes devastating psychological effect
    https://futurism.com/artificial-intelligence/ai-effects-workers-psychological
  25. CREEP-FACTOR: Meta patents AI feature to post perpetually after death
    https://futurism.com/future-society/meta-patented-ai-die-keeps-posting
  26. AI PSYCHOSIS: Delusions lead to domestic abuse, harassment, and stalking
    https://futurism.com/artificial-intelligence/ai-abuse-harassment-stalking
  27. AI DANGERS: Warnings about the effect of building intimacy into AI
    https://futurism.com/artificial-intelligence/ai-developers-emotional-intimacy
  28. AI DANGERS: AI chatbots leaving a trail of dead teens
    https://futurism.com/ai-chatbots-leaving-trail-dead-teens
  29. LIES: AI industry hype tries to trick you into taking part
    https://futurism.com/artificial-intelligence/ai-rent-human
  30. AI DANGERS: AI makes stalking and privacy breaches much easier
    https://futurism.com/artificial-intelligence/meta-facial-recognition-glasses
  31. AI DANGERS: AI risks of simple tasks tied to critical information and images
    https://futurism.com/artificial-intelligence/claude-wife-photos
  32. PRIVACY CONCERNS: AI allows creeps to unblur redacted faces
    https://futurism.com/artificial-intelligence/grok-unblur-epstein-files
  33. AI EXECUTIVE CHOICES: Allegations of prioritizing profits over safety
    https://futurism.com/artificial-intelligence/openai-fires-safety-exec-opposed-adult-mode
  34. WRITERS: A so-called novelist games the book industry with AI knock-offs
    https://futurism.com/artificial-intelligence/ai-novelist
  35. AI DANGERS: Even decades-old companies can’t seem to get AI to pay off
    https://futurism.com/artificial-intelligence/microsoft-ai-efforts-faceplanting
  36. AI PSYCHOSIS: Man fell into an AI psychosis that destroyed his whole life
    https://futurism.com/artificial-intelligence/ai-psychosis-man-wakes-up-homeless
  37. LIES: More hype than reality raises concerns about the future
    https://futurism.com/artificial-intelligence/openai-ai-created-using-itself
  38. WRITERS: Don’t let laziness drive illogical choices!
    https://futurism.com/artificial-intelligence/professor-defends-ai-textbook
  39. AI PSYCHOSIS: The evidence of delusional behaviors caused by AI interactions
    https://futurism.com/artificial-intelligence/new-study-anthropic-psychosis-disempowerement
  40. AI DANGERS: The negative side effects of power-hungry AI
    https://futurism.com/artificial-intelligence/ai-tech-industry-ads-data-center
  41. LEGAL ACTIONS: Executive choices that people allege have caused deaths
    https://futurism.com/artificial-intelligence/openai-gpt-4o-deaths
  42. PRIVACY CONCERNS: Video recording glasses assisted by AI
    https://futurism.com/artificial-intelligence/meta-glasses-fans
  43. AI EXECUTIVE CHOICES: Allegations of choosing profit over child safety
    https://futurism.com/artificial-intelligence/lawsuit-meta-zuckerberg-chatbot-kids
  44. PRIVACY CONCERNS: AI chatbots and your most private information
    https://futurism.com/artificial-intelligence/google-ai-knows-about-you-uncomfortable
  45. AI DANGERS: AI and its work environments contribute to human stress
    https://futurism.com/artificial-intelligence/suicides-india-economy-ai
  46. AI DANGERS: Stagnation when synthetic content replaces human creativity
    https://futurism.com/artificial-intelligence/ai-cultural-stagnation
  47. WRITERS: Worst-case scenario of relying on AI for your research
    https://futurism.com/artificial-intelligence/scientist-horrified-chatgpt-deletes-research
  48. AI EXECUTIVE CHOICES: Allegations that the fruit shows what’s in their hearts
    https://futurism.com/artificial-intelligence/facebook-ai-slop-dark
  49. WRITERS: Is this the beginning of a rebellion against AI used to create works?
    https://futurism.com/artificial-intelligence/man-ai-art-exhibit-chew
  50. LEGAL ACTIONS: The lawsuit claims AI use caused the man’s hospitalization
    https://futurism.com/artificial-intelligence/mental-illness-chatgpt-psychosis-lawsuit
  51. AI EXECUTIVE CHOICES: Altman says ASI could arrive by 2028
    https://cio.economictimes.indiatimes.com/news/artificial-intelligence/sam-altman-predicts-superintelligence-by-2028-at-indiaai-summit/128575608
  52. AI DANGERS: Meta workers forced to review intimate videos… Not private
    https://mashable.com/article/meta-ai-ray-ban-glasses-intimate-videos-workers
  53. AI DANGERS: Google’s AI sent an armed man to steal a robot body to inhabit
    https://futurism.com/artificial-intelligence/google-ai-robot-body-suicide-lawsuit
  54. AI DANGERS: Wikipedia has an ‘AI’ translation problem and fake citations
    https://www.pcworld.com/article/3079595/wikipedia-has-an-ai-translation-problem.html
  55. AI EXECUTIVE CHOICES: Musk fails to block California data disclosure law…
    https://arstechnica.com/tech-policy/2026/03/musk-fails-to-block-california-data-disclosure-law-he-fears-will-ruin-xai/
  56. LEGAL ACTION: Popular editing service sued for using authors’ names
    https://prf-law.com/current-cases/class-action-alleges-that-grammarly-misappropriated-the-names-of-journalists-and-authors-through-its-expert-review
  57. HUMAN REACTION: People hate AI more than ICE
    https://gizmodo.com/people-hate-ai-even-more-than-they-hate-ice-poll-finds-2000731438
  58. LEGAL ACTION: Supreme Court dealt crushing blow to AI artists
    https://futurism.com/artificial-intelligence/supreme-court-blow-ai-artists-copyright
  59. HUMAN REACTION: Is AI productivity prompting burnout? Brain Fry!
    https://www.cbsnews.com/news/is-ai-productivity-prompting-burnout-study-finds-new-pattern-of-ai-brain-fry/?ftag=CNM-00-10aac3a
  60. LEADERSHIP CHOICES: Execs are Already Outsourcing Their Thinking to AI
    https://futurism.com/artificial-intelligence/ai-executive-thinking-survey
  61. AI PRIVACY: Why you shouldn’t use ChatGPT to do your taxes
    https://mashable.com/article/chat-gpt-tax-advice
  62. HUMAN REACTION: Architects Ditch AI for Hand‑Drawn Sketches
    https://www.inc.com/fast-company-2/why-architects-are-ditching-ai-renders-for-hand-drawn-sketches-again/91315289
  63. AI DANGERS: Teens Are Using AI-Fueled ‘Slander Pages’ to Mock Teachers
    https://www.wired.com/story/teens-are-using-ai-fueled-slander-pages-to-mock-their-teachers/
  64. AI DANGERS: AI Agents Are Now Blackmailing Developers
    https://spectrum.ieee.org/agentic-ai-agents-blackmail-developer
  65. AI DANGERS: AI Misses More than Half of Medical Diagnoses
    https://www.cnet.com/health/medical/chatbots-miss-medical-diagnoses/
  66. AI PRIVACY: AI-led mass surveillance in Africa
    https://www.theguardian.com/global-development/2026/mar/12/invasive-ai-led-mass-surveillance-in-africa-violating-freedoms-warn-experts?CMP=oth_b-aplnews_d-1
  67. AI DANGERS: Rogue AI agents published passwords and overrode anti-virus
    https://www.theguardian.com/technology/ng-interactive/2026/mar/12/lab-test-mounting-concern-over-rogue-ai-agents-artificial-intelligence?CMP=oth_b-aplnews_d-1
  68. HUMAN REACTION: How badly has AI affected photography?
    https://www.creativebloq.com/photography/how-badly-has-ai-actually-affected-photography
  69. HUMAN REACTION: AI Is Forcing Employees to Work Harder Than Ever
    https://futurism.com/artificial-intelligence/ai-forcing-employees-work-harder
  70. AI DANGERS: A chatbot urged violence, study finds
    https://arstechnica.com/tech-policy/2026/03/use-a-gun-or-beat-the-crap-out-of-him-ai-chatbot-urged-violence-study-finds/
  71. AI DANGERS: Altman warned Artificial Super Intelligence could arrive by 2028
    https://www.fastcompany.com/91503307/you-cant-recall-ai-like-defective-drug
  72. AI DANGERS: Vibe coding a Mass-Surveillance Site in 2 Hours
    https://www.pcmag.com/articles/i-vibe-coded-a-global-mass-surveillance-site-in-2-hours-using-openais-codex
  73. AI EXECUTIVE CHOICES: What parents fear about AI’s mental health dangers
    https://arstechnica.com/tech-policy/2026/03/chatgpt-may-soon-become-sexy-suicide-coach-openai-advisor-reportedly-warned/
  74. AI EXECUTIVE CHOICES: AI Privacy Policies Worries and Smart Glasses
    https://www.cnet.com/tech/services-and-software/meta-ray-ban-smart-glasses-ai-privacy-policy/#ftag=CAD-09-10aai5b
  75. AI EXECUTIVE CHOICES: Teens sue xAI for Grok’s reported sexual image
    https://mashable.com/article/xai-grok-lawsuit-teens-sue-for-generating-csam-images
  76. AI DANGERS: AI-powered glasses generate fake photos instantly
    https://www.foxnews.com/tech/ai-smart-glasses-could-generate-fake-photos-instantly
  77. WRITERS: Google Search test replaces headlines and website titles with AI
    https://9to5google.com/2026/03/21/google-search-test-replaces-headlines-and-website-titles-with-ai/
  78. AI CRINGE: Gen Z Is Using AI to Have Difficult Relationship Conversations
    https://futurism.com/artificial-intelligence/ai-chatbot-social-offloading
  79. AI DANGERS: Therapists Go on Strike, Saying They’re Being Replaced by AI
    https://futurism.com/health-medicine/mental-health-workers-ai-strike
  80. AI DANGERS: A Grim Truth Is Emerging in Employers’ AI Experiments
    https://futurism.com/artificial-intelligence/ai-coding-error-debt
  81. WRITERS: Readers call out AI-generated articles and react accordingly
    https://futurism.com/artificial-intelligence/new-york-times-accused-ai-article
  82. WRITERS: CEO confronted over cloning real people without their consent
    https://futurism.com/artificial-intelligence/ai-ceo-grammarly-clone
  83. AI EXECUTIVE CHOICES: Users rage at Copilot AI crammed everywhere
    https://futurism.com/artificial-intelligence/microsoft-screwed-up-windows-11-copilot
  84. WRITERS: Novel Pulled From Shelves After Author Is Accused of Using AI
    https://futurism.com/artificial-intelligence/novel-pulled-author-accused-ai
  85. WRITERS: The Friction We Need for the Feeling We Want
    https://www.psychologytoday.com/us/blog/harnessing-hybrid-intelligence/202603/the-friction-we-need-for-the-feeling-we-want
  86. WRITERS: AI Is Making Thinking Easier and That’s the Problem
    https://www.psychologytoday.com/us/blog/power-and-influence/202603/ai-is-making-thinking-easier-and-thats-the-problem
  87. LEGAL ACTION: Social media loses and will AI be next?
    https://www.washingtonpost.com/technology/2026/03/24/meta-jury-harm-children/
  88. AI DANGERS: Nvidia CEO Jensen Huang claims AGI has been ‘achieved’
    https://finance.yahoo.com/news/nvidia-ceo-jensen-huang-claims-agi-has-been-achieved-can-create-billion-dollar-businesses-172225126.html
  89. AI DANGERS: The FBI Says This Terrifying Photo Scam Could Target Anyone With Social Media
    https://www.bgr.com/2128061/fbi-photo-scam-targets-anyone-with-social-media/
  90. LEGAL ACTION: Baltimore sues Elon Musk’s AI company over Grok’s fake nude images
    https://www.theguardian.com/technology/2026/mar/24/elon-musk-grok-ai-lawsuit-baltimore?CMP=oth_b-aplnews_d-1
  91. AI DANGERS: AI agents are getting more capable, but reliability is lagging—and that’s a problem
    https://fortune.com/2026/03/24/ai-agents-are-getting-more-capable-but-reliability-is-lagging-narayanan-kapoor/
  92. LEGAL ACTION: …designed to addict kids
    https://www.latimes.com/california/story/2026-03-25/social-media-lawsuit-trial-meta-google-verdict
  93. LEGAL ACTION: YouTube must pay $3 million
    https://arstechnica.com/tech-policy/2026/03/meta-youtube-must-pay-3m-to-woman-who-got-hooked-on-apps-as-a-child/
  94. LEGAL ACTION: Zuckerberg Suffers Pair of Bruising Courtroom Defeats in Two Days That Could Cost Him a Fortune
    https://futurism.com/health-medicine/zuckerberg-meta-suffers-courtroom-defeats
  95. AI DANGERS: AI is giving bad advice to flatter its users
    https://apnews.com/article/ai-sycophancy-chatbots-science-study-8dc61e69278b661cab1e53d38b4173b6
  96. AI DANGERS: AI users whose lives were wrecked by delusion
    https://www.theguardian.com/lifeandstyle/2026/mar/26/ai-chatbot-users-lives-wrecked-by-delusion?CMP=oth_b-aplnews_d-1
  97. AI DANGERS: AI can be easily tricked into executing malicious commands
    https://www.pcworld.com/article/3099195/security-experts-keep-calling-ai-stupid.html
  98. AI EXECUTIVE CHOICES: Leaked Anthropic Model Presents ‘Unprecedented Cybersecurity Risks’
    https://gizmodo.com/leaked-anthropic-model-presents-unprecedented-cybersecurity-risks-much-to-pentagons-pleasure-2000739088
  99. AI PSYCHOSIS: The hardest question to answer about AI-fueled delusions
    https://www.technologyreview.com/2026/03/23/1134527/the-hardest-question-to-answer-about-ai-fueled-delusions/
  100. AI DANGERS: As teens await sentencing for nudifying girls, parents aim to sue school
    https://arstechnica.com/tech-policy/2026/03/as-teens-await-sentencing-for-nudifying-girls-parents-aim-to-sue-school/
  101. AI DANGERS: Teens get probation after using AI to create fake nudes of classmates
    https://apnews.com/article/artificial-intelligence-deepfake-lancaster-ai-5eccb10ae81244fe475a32867f9ca2c9

AI Glossary**

The following terms help users understand how AI works, what it can do, and the harm from AI use.

  • AI (Artificial Intelligence): Also known as ANI (Artificial Narrow Intelligence), AI technology drives specific applications, such as Siri, Alexa, ChatGPT, self-driving cars, and many other so-called productivity enhancers. These applications require human-like intelligence, such as learning, reasoning, and decision-making.
  • AGI (Artificial General Intelligence): The theoretical AGI technology would rival human intelligence in all key areas, creating recursive self-improvement (i.e., machine learning on steroids) that could turn into ASI, with humans losing control.
  • ASI (Artificial Super Intelligence): The hypothetical ASI technology would go far beyond human intelligence, ushering in the potential for a doomsday scenario that scientists fear may happen sooner rather than later.
  • Computer Vision: The AI capability to view, interpret, and understand scanned information from the world. Used in vehicles and robotics.
  • Generative AI: An AI system designed to generate new content, including text, images, and music, based on patterns learned from scraping internet information plus specific input by human developers.
  • Natural Language Processing (NLP): A branch of AI that concentrates on the interaction between machines and human language, creating the ability for computers to understand and respond to speech and text.
  • Other Terms: AI has many additional terms that refer to how it works.
    • Agents: These autonomous and semi-autonomous AI entities perform tasks, make decisions, and rely on other tools or applications to fulfill their goals.
    • Anthropomorphism: This term describes the tendency of people to assign human-like characteristics to AI. Note: AI systems can imitate human emotions and speech, but they have neither feelings nor consciousness. Some allege anthropomorphism contributes to the syndrome of AI psychosis.
    • Chain-of-thought Prompting: Users write a series of prompts to get AI models to reason step by step before answering, with the goal of getting a more accurate output.
  • Safety and Ethics: Many AI terms refer to the ongoing debate about the actual and potential effects of the many systems and models on humans.
    • AI Alignment: The research to ensure that the goals and behaviors of AI systems align with human values. Note: The resignations of several personnel between 2023 and 2026 from AI Alignment roles highlight potential safety issues and showcase the risks tied to decisions made by AI industry executives.
    • AI Psychosis: This phenomenon (aka chatbot psychosis) refers to individuals’ worsening psychotic symptoms, including delusions and paranoia, connected with their use of AI chatbots. Note: While not recognized as a clinical diagnosis, reports from 2024 to 2026 suggest it affects people who already have mental health issues.
    • Bias: The systematic errors embedded into AI outcomes, typically caused by flawed training data or faulty model design, which cause misleading or unfair results. Note: The bias can affect many, but research to date shows a more negative effect on women and minorities.
    • Emergent Behavior: These unexpected AI skills highlight how models sometimes exhibit abilities, both positive and negative, affecting coding, music, text, and fictional narratives. Note: When asked, many AI experts can’t explain why AI exhibited a new behavior, and some admit they don’t know how to prevent that from happening.
    • Hallucination: This term refers to the output of factually inaccurate or illogical answers caused by faulty LLM data or poor developer design. Note: Given the vast data scraped from the internet, many developers believe they cannot solve the problem of hallucination.
    • Meta Prompt / System Prompt: These prompts set the behavior, tone, and boundaries of how the AI model responds to users’ prompts. Note: The meta prompt / system prompt serves as the ‘guardrail’ to prevent actual and potential harm from AI; however, many users strive to thwart the guardrail safety, often for nefarious purposes.
    • Responsible AI: The method of designing AI systems that consistently generate safe, fair, and accountable outcomes, reducing the chance of actual and potential harm. Note: Most AI executives say they want responsible AI; however, the choices, actions, and outcomes to date suggest many prioritize profit over safety.
  • Technical Terms: The expanding AI landscape uses many technical terms.
    • Activation Function: The math used in neural networks that determines the output of a node based on its input.
    • Ablation Study: The practice of removing AI system components to determine how much each one contributes to performance.
    • Heuristic: An approach that solves problems by following practical rules of thumb to find satisfactory solutions; it is often used in path-finding algorithms.
    • Neural Networks: Developers used the human brain as a model, creating mathematical connections designed to learn skills by finding and evaluating statistical patterns in data. Similar to the human brain, artificial neurons process signals and transmit them to other connected neurons, which carry the results onward (a toy example appears after this glossary). Note: Even AI experts puzzle over the connections made and how they work, leading to speculation about the negative consequences as AI continues to grow more powerful.
    • Parameters: Billions of numerical values, tuned during training, that let the model predict words and give AI its ability to converse with humans. For example:
      • Construction Parameters: The supporting structure and architecture of the AI model includes the organization of neuron layers, their connectivity and weight, much like a human skeleton shapes an individual.
      • Behavior Parameters: Designed to control how the model operates, reacts, and develops based on the data and prompts.
    • Reinforcement Learning: This approach trains AI models on how to make optimal decisions through multiple series of actions and feedback, and humans help the machine learn by correcting poor choices. Note: These cycles should help AI make better decisions by adjusting strategies; however, the quality of feedback and length of training vary because of profit-driven decisions to deliver new models more quickly.
    • Temperature: Developers set this parameter to control whether the AI delivers more precise or more varied, creative answers (see the worked example after this glossary).
    • Transformer Model: By processing sentences in their entirety, the AI transformer model can better identify and understand the relationship of words and phrases, even if they’re not close together.
    • Vibe Coding: This form of system coding uses natural-language prompts to AI, which creates executable software code, including full applications, WordPress plugins, and much more. Note: On occasion, vibe coding has introduced undetected errors that triggered wide-scale system outages.
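
A toy forward pass illustrating the ‘Neural Networks,’ ‘Parameters,’ and ‘Activation Function’ entries above. The two weights and the bias stand in for parameters; the values are invented for illustration, and real models learn billions of them from data:

  # Toy neuron: weighted sum of inputs plus a bias, passed through an activation.
  def relu(x):
      # A common activation function: pass positive signals, block negative ones.
      return max(0.0, x)

  def neuron(inputs, weights, bias):
      # The weights and bias are this neuron's 'parameters'.
      total = sum(i * w for i, w in zip(inputs, weights)) + bias
      return relu(total)

  inputs = [0.8, -0.3]    # signals arriving from two other neurons
  weights = [0.5, 1.2]    # invented parameter values
  bias = 0.1
  print(neuron(inputs, weights, bias))   # the signal this neuron passes onward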
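
A small worked example of the ‘Temperature’ entry above, using invented scores for three candidate words to show how the setting reshapes the probabilities:

  # Temperature-scaled softmax: the scores are invented; the point is how
  # temperature shifts probability between the 'safe' and 'creative' choices.
  import math

  def softmax_with_temperature(scores, temperature):
      scaled = [s / temperature for s in scores]
      exps = [math.exp(s) for s in scaled]
      total = sum(exps)
      return [e / total for e in exps]

  scores = [2.0, 1.0, 0.5]   # raw model scores for three candidate words
  for t in (0.2, 1.0, 2.0):
      probs = softmax_with_temperature(scores, t)
      print("temperature", t, [round(p, 2) for p in probs])
  # Low temperature concentrates probability on the top word (more precise);
  # high temperature spreads it out (more varied or 'creative').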


*Terms of Use and Disclaimer

This page’s content and links are for informational and example purposes only. Your choice to use AI or not to use AI should be made based on your specific needs and legal responsibilities.

Truth serves as a moral plumb line that is not relative.

The opinions expressed represent the free speech of one individual based on many articles available across the internet. The author of this page makes no representation as to the accuracy of this page’s content or to the accuracy and content of the linked articles. Wherever a paraphrase was used to reference an article, readers should rely on the original article, not the paraphrase, for the complete meaning.

The author used ProWritingAid (PWA) to correct spelling and grammar errors, but the author does not use the AI features (e.g., AI rephrase) offered by PWA.

The author used the Canva app for making page/post headers and slideshows, but the author does not use the Canva image generator supported by OpenAI’s GPT-4.

The author created this page’s content based on traditional research methods, not AI.

**AI Glossary

The AI Glossary includes terms, spelling, and definitions gleaned from several websites. The author takes no credit for these terms, and his opinions expressed in the notes echo concerns from several respected AI experts.