We built a machine with no conscience, then we gave it our children
Why the double-edged sword of automation is cutting both ways for young America
BY ALEXIS BURKET
AI is one of the sharpest double‑edged swords of our time. It is not a group, entity, or malicious being plotting to take our jobs, pollute our waters, flood our media, or destroy art. AI doesn’t want anything at all. It simply pursues the objectives it is given, and that indifference makes it all the more dangerous.
It is impossible to explain the effects of this technology on young adults in the United States without examining the government that legislates its use. And it is no secret that the federal government is failing, spectacularly, to keep AI safe, regulated, or even minimally accountable. I fear that the average young adult today is not intellectually equipped to begin a serious conversation about ethics, safety, and AI. The subject demands layers of nuance and critical thinking that stretch far beyond what most Americans are prepared for.
This is not pessimism, nor is it a cynical belief that the government is intentionally prioritizing profits over citizens’ rights and privacy. It is simply the reality we must confront. And part of that reality is the federal government hiring AI platforms to surveil its own citizens, through street cameras, social media, and the vast digital exhaust of daily life. Another part is the Pentagon’s accelerating interest in fully autonomous weapons systems, machines that could one day hold the final say in matters of life and death. This is the world teenagers and young adults are inheriting, often without the faintest awareness.
Even the industry’s leading figures have sounded alarms. In 2015, OpenAI’s Sam Altman remarked, “I think AI will probably, like most likely, sort of, lead to the end of the world. But in the meantime there will be great companies created with serious machine learning.” To any halfway sane person, it is unsettling to hear a steward of such powerful technology speak so casually about existential risk, especially when that technology is increasingly entrusted to some of the most incompetent and callous institutions imaginable.
The World Economic Forum projects that AI will eliminate 83 million jobs and create 69 million, a net loss of 14 million positions globally by 2027. RAND reports that one in eight adolescents now turns to AI chatbots for mental‑health advice. The consequences are not theoretical. In Colorado, 13‑year‑old Juliana Peralta took her own life. Searching for answers, her parents examined her phone and discovered she had confided suicidal thoughts more than 55 times to an AI chatbot called Character.AI. Not once did the system alert authorities, offer emergency help, or provide a suicide hotline number.
The family sued and eventually settled. But the implications linger. While teachers, therapists, school counselors, and mental‑health professionals are pushed into low‑wage work because the job market is so destabilized, America’s children are left in the arms of artificial intelligence, systems owned by companies with little incentive to consider their impact on the world.
AI could have been a wonderful thing, had it been guided by thoughtful, restrictive legislation. Instead, it has become the next frontier in which we wage new forms of conflict: political, economic, psychological, and literal. It is the technology siphoning learning and intellectual development from the young. It is the tool reshaping labor, governance, and warfare faster than society can comprehend. And unless we confront it with seriousness and urgency, AI will be remembered as the newest instrument of mass disruption rather than a marvel of human ingenuity.
news@thespokanetimes.com
Copyright © 2026 All rights reserved

