I pride myself on being the town grump — at least, when it comes to the issue of technology, social media and artificial intelligence. You may have seen me at school board meetings urging limits on phones in classrooms, or on social media retweeting, reposting and re-echoing the warnings of Jonathan Haidt and Jean Twenge about how the technological age is reshaping childhood — rarely for the better.
I take these concerns seriously. And yet, I find myself increasingly wanting to speak up on the opposite side of things.
I teach a course on educational technology, and I’ve noticed a growing number of students taking what you might call a “principled Luddite” stand: They will not use AI under any conditions. And I can sympathize with the impulse.
James Marriott, in his Free Press essay last fall, “The Dawn of the Post-Literate Society,” is right that most of us don’t take seriously enough the threats new technologies pose to our children.
But despair is not a strategy. And retreating into nostalgia, or declaring all new technological innovations forbidden, does little to prepare us for the world our children will actually inhabit. This genie will not be going back into the bottle.
Between despair-driven Luddism and naive tech optimism lies a better response: calibrated hope — grounded in reality, sustained by agency and disciplined by wisdom.
A countercultural theological posture
As a Latter-day Saint, my posture toward technology is not an innovation. It’s a natural outgrowth of my theology.
From the beginning of our faith, our tradition has worked to resist both fatalism and fear. Hope is not naïveté; it’s a moral discipline. Scripture urges us to be “wise as serpents, and harmless as doves” — reflecting not competing virtues but complementary ones. We are a people who crossed plains with maps and handcarts, who stored grain against lean years, who put our shoulder to the wheel even when the road ahead was uncertain. It’s not temperament that brings us this hope, but covenant.
Central to this theological posture is a robust account of moral agency — agency exercised under conditions that are never perfect and rarely simple. Responsibility does not disappear when tools become powerful; it increases. That conviction makes me skeptical of determinism in any form — whether it arrives dressed as utopian inevitability or apocalyptic despair. Technology does not relieve us of judgment. It sharpens the stakes of our choices.
This theology gives me unexpected calm in the face of artificial intelligence. Our scriptures teach of an embodied God — of intelligence and matter as inseparably connected. That doctrine means my worth, and the worth of my children and students, is not contingent on being the fastest, smartest or most efficient intelligence in the room.
I do not love my children because they are smart, but because they are mine. Human beings are embodied spirits, created in the image of an embodied God, with a divine nature and destiny that technology cannot supersede. Artificial intelligence may exceed us at certain tasks — even many tasks — but a higher meaning and deeper identity remain ours.
My confidence does not come from believing AI is harmless; it comes from knowing what it cannot threaten.
This confidence is reinforced by a typically Latter-day Saint view that innovation is providential — something to be governed, not feared. Brigham Young taught that genuine advances in science and art are not threats to faith but gifts meant to prepare humanity for moral and spiritual progress.
“Every discovery in science and art that is really true and useful to mankind,” he argued, “has been given … to prepare the way for the ultimate triumph of truth.” For Young, innovation was not something to fear, but something to govern — to be gathered, disciplined and placed in the service of human flourishing.
Church leaders today continue to model this same positivity and balance. Elder Gerrit W. Gong, of the Quorum of the Twelve Apostles of The Church of Jesus Christ of Latter-day Saints, teaches that “we can be realistic both of opportunity and challenge” when approaching new technologies, specifically recognizing that “AI has much to contribute to human flourishing and the common good,” while also remembering its limits: “AI algorithms do not love, bless, or relate to us by divine covenant.”
“While generative artificial intelligence may be quick to offer information,” the apostle continues, “it can never replace revelation or generate truth from God.”
Moral formation for this moment
We may well need to ban some things, but living well with powerful tools is more a matter of moral formation than of governing by fiat. As a teacher, I would say this same logic applies in the classroom — and even closer to home.
We may need less technology, to be sure, but we will need an increase in deliberate agency even more. As Elder David A. Bednar, also of the Quorum of the Twelve Apostles, has said, “Because AI is cloaked in the credibility and promises of scientific progress, we might naively be seduced into surrendering our precious moral agency to a technology that can only ‘think telestial.’”
Maybe it’s an expansion of deliberate, intentional agency that will keep us safe amid the explosion of information AI brings, as a friend recently suggested to me. What does such an expansion of agency look like?
Latter-day Saint missionaries are increasingly expected to use social media and smartphones in their daily work. They must set limits, create habits and establish norms that allow the phone to be a servant rather than a master. They are expected to “have a purpose” before turning to their phones, and they are accountable to their mission companions.
When it comes to my own children and my students, I don’t pretend that agency arrives fully formed. It develops over time, through practice and habit. I see this every semester: young people eager to use powerful tools but often lacking the routines of attention, patience and self-governance that make those tools genuinely useful rather than merely convenient. That is precisely why technology use is required in my classes — because the only thing worse than bad technology is an utterly unprepared student trying to wield it.
In my home, technology is not a right to be granted or denied, but a responsibility to be earned. Before a phone ever enters the picture — or before AI becomes a default shortcut in my classes — I expect evidence of basic self-regulation: managing boredom, disengaging when asked, completing work without constant prompts and participating fully in the shared work of learning.
Last semester, a student wanted to use AI to generate essay outlines. I asked him first to demonstrate that he could sustain focus through a single article without checking his phone. He couldn’t. The problem wasn’t the AI; it was the absence of the discipline needed to use it well.
These limits aren’t foolproof, and they aren’t permanent — but habits form character, and character is what agency rests on.
The moral demand
After a heavy-hearted kind of day recently, I was hoping for a bit of inspiration. President Jeffrey R. Holland had just died, and I was feeling a little low. The inspiration came unexpectedly: the literal voice of President Holland from early 2025 encouraging me to keep trying, keep my head up and know that God had my back.
God had reached me through, of all things, an Instagram reel.
I still worry about technology. I still argue for limits. I still believe much of it is oversold and poorly used. None of that has changed.
What has changed is my refusal to let anxiety have the final word. I do not believe God reassures us so that we can relax into passivity. I believe He reassures us so that we can act without fear.
Agency, not angst, is the moral demand of this moment. If we can raise children who know how to govern themselves, teach students who can use tools without being ruled by them and resist the urge to panic in the face of change, then we will have done what every generation before us has been asked to do: live faithfully, think clearly and choose well — no matter the tools in our hands.

