Sparks + Embers Episode No. 003: Reclaiming Cognitive Authority

This episode of Sparks + Embers is the companion to the Kindling newsletter feature article: How Smart People Surrender Independent Thought.

It is the second installment in the Goodpain Guide to Authentic Human Learning. This series is part of our Contemplation & Reflection Pillar.

Smart people are unconsciously surrendering their cognitive authority to AI systems, not because they lack intelligence, but because intelligence alone doesn’t protect against deeper patterns of intellectual dependency.

The path forward isn’t avoiding AI but maintaining “cognitive ownership” while benefiting from technological augmentation. This requires daily practice of thinking well—not just efficiently, but wisely—and the character to preserve what makes us human while embracing what makes us more capable.


Transcript

TIFFANY: This week we are headed back to the workshop. We’ve been exploring some pretty timely territory in Tyler’s latest piece. Tyler, you’re back in the woodworking shop building a cabinet for our daughter’s medications. What did you discover about surrendering our minds to AI?

TYLER: [laughs] Perfect answers don’t exist in the workshop. I can plan everything out, but I can’t surrender my authority to those plans. I have to sit with uncertainty, embrace the messy work of figuring things out. The second I let something else do that thinking for me, I lose what matters most. That’s when I realized we’re doing exactly this with AI—surrendering what Bonhoeffer called our “inner independence” to systems promising certainty.

TIFFANY: Bonhoeffer? The guy who stood up to the Nazis? What’s he got to do with ChatGPT?

TYLER: Everything. He wrote about how “under the overwhelming impact of rising power, humans are deprived of their inner independence.” We’re doing this with AI—not because we’re stupid, but because surrendering cognitive authority feels easier. And as we discussed last week, algorithms have been training us for this surrender for years.

TIFFANY: That conditioning over time ushered in an eagerness to embrace AI, but you've found that something else sets in too, and you name it right in the title of this piece: “AI Decision Fatigue.” What is that?

TYLER: AI Decision Fatigue is what happens when we start outsourcing our judgment to algorithms without realizing it, then find ourselves exhausted by having to constantly choose between what the AI suggests and what we actually think.

Here’s what it looks like in practice: I ask ChatGPT to help draft an email, and it gives me three options. I spend ten minutes trying to decide which one “sounds like me” – but none of them actually do. I ask it to plan my weekend, then feel overwhelmed trying to figure out if I actually want to do any of those activities. I use AI to help write a report, then spend more time editing it to sound human than it would have taken to write it myself.

The fatigue isn’t from making decisions – it’s from making the wrong kind of decisions. Instead of deciding what I want to say, I am deciding which AI-generated option sounds least artificial. Instead of thinking through a problem, I am evaluating whether the AI’s solution fits my situation. This is surrender of cognitive authority.

The solution isn’t to stop using AI – it’s to get clear about when I want its input and when I need to think for myself.

TIFFANY: Okay, but surrendering “cognitive authority” means what?

TYLER: Cognitive authority is about choice and agency. “Surrendering cognitive authority” is a kinder phrase than Bonhoeffer’s own language: in his world, surrendering it as an active choice is stupidity. Smart people, really smart people, are surrendering their ability to think independently. It’s not about IQ; it’s about whether we maintain our role as Editor-in-Chief of our own minds.

TIFFANY: Editor-in-Chief—I like that. But how exactly are we surrendering? Like, what does that actually look like day-to-day?

TYLER: We mapped out four stages, and they’re happening to all of us already. Stage one is editorial surrender—I ask AI something, get a coherent response, and just accept it instead of treating it like a rough draft. Stage two is reality testing surrender—the AI’s recommendation starts feeling more trustworthy than my own experience.

TIFFANY: Oh, that’s uncomfortable. I can recall purchase decisions I made by trusting Amazon’s recommendations, things that are now just collecting dust.

TYLER: [laughs] Exactly! Stage three is intellectual courage surrender — I start using AI to confirm what I already believe instead of challenging my assumptions. And stage four is decision-making surrender—I am treating AI like an oracle instead of a thinking partner.

TIFFANY: Okay, so I’m probably guilty of at least three of those. But here’s what I want to know—you mentioned this isn’t just about individual thinking. You’re saying this affects how we relate to each other?

TYLER: This is the part that keeps me up at night. When we surrender our cognitive authority, we don’t just lose our ability to think independently—we lose our capacity for “contemplative engagement” with reality and with each other. This is where we build trust and belonging despite apparent differences. We trade this and start living in algorithmic filter bubbles that feel more real than actual human conversation.

TIFFANY: So what are we actually risking here? Like, what’s the worst-case scenario?

TYLER: We risk becoming what Bonhoeffer called “mindless tools”—able to follow sophisticated instructions but unable to evaluate whether those instructions serve human flourishing. We lose our tolerance for uncertainty, our ability to sit with contradiction, our willingness to change our minds when evidence demands it. We lose our ability to compromise and self-govern in a state of collective growth and becoming.

TIFFANY: That’s… that’s sobering. But you’re not just here to shock us, right? What can we do to assert our cognitive authority, or at least reassert it?

TYLER: It is not easy, but it is not complicated either. The practices we need to engage in are demanding – but they demand the best of what we are capable of, the best of our humanity.

I feel there are four foundational preservation practices. First is editorial discernment—treating AI outputs as rough drafts that need my editorial review. Second is cognitive authority—reserving my mental energy for the decisions that require unfiltered human judgment. Third is independent verification—checking AI output against reality and other sources, not just accepting internal coherence.

TIFFANY: And the fourth?

TYLER: Productive discomfort. Learning to sit with uncertainty and contradiction without rushing to false clarity. It’s like working with hand tools before power tools—the difficulty develops sensitivities that transfer to everything else.

TIFFANY: I have to ask—aren’t you being a little dramatic here? I mean, we’re talking about tools that help us write emails and generate ideas.

TYLER: Every AI interaction is a choice point. We can either maintain “cognitive ownership, autonomy, and agency” while benefiting from technological augmentation, or we can gradually outsource our thinking until we can’t tell the difference between our thoughts and the algorithm’s outputs.

TIFFANY: So what’s your solution? What’s the Editor-in-Chief principle you keep mentioning?

TYLER: It’s simple and demanding: “I [Tyler] will use AI as a sophisticated tool while maintaining creative authority over my thinking. I will seek AI’s assistance without surrendering my agency. I will collaborate with artificial intelligence while preserving my human intelligence.”

TIFFANY: That sounds like a pledge.

TYLER: It is. Because this isn’t a one-time decision—I must make this choice every single day, every time I interact with any system, whether it is algorithmic or social, commercial or cultural. The question isn’t whether we’re smart enough to navigate this challenge. Intelligence alone won’t save us. The question is whether we’re wise enough to preserve what makes us human while embracing what makes us more capable.

TIFFANY: Alright, Tyler, last question. What’s the one thing you want people to do after listening to this?

TYLER: Start paying attention to authority-check questions: Can I think well when AI tools, mass media, social media, the supercomputer in my pocket, public figures, or other leaders are not available? Am I directing this interaction or being directed by it? Would I be able to evaluate this without AI assistance or some other external agent? The moment we start asking those questions, we are already taking back cognitive ownership.

TIFFANY: Tyler, where can people dive deeper into this?

TYLER: The full article breaks down all four stages of surrender and gives you the complete framework for the preservation practices. It’s on the website—”The Surrender of Independent Thought: How Smart People Stop Thinking.” Fair warning: it might change how you look at every AI interaction you have from now on.

TIFFANY: Before we wrap up—what Tyler’s describing here points to something even deeper. These preservation practices? They require character. Intellectual courage, humility, tenacity, honesty. Which brings us to our next conversation: what happens when we can’t eliminate uncertainty, when we have to make decisions without guarantees, when being right isn’t an option and we have to focus on thinking well instead. Tyler’s working on a piece about the wisdom of not knowing, and why our capacity to sit with uncertainty might be the most human thing about us. We’ll be back with that next week.
