Sparks + Embers E002: How AI Learns

This episode of Sparks + Embers is the companion to the Kindling newsletter feature article: 3 Ways AI Learning Tells Us About Human Consciousness in the AI Age (and 4 Questions to Reclaim Conscious AI Engagement).

It is the second installment in the Goodpain Guide to Authentic Human Learning. This series is a part of our Contemplation & Reflection Pillar.

We stand at a crossroads. We can continue outsourcing our thinking to systems that reflect our biases and limitations, creating feedback loops that amplify our worst tendencies. Or we can use AI as a mirror to see ourselves more clearly and choose to develop the capacities that distinguish conscious engagement from pattern matching.

Transcript

TIFFANY: We’re back in the Goodpain studio and today we’re diving into this week’s piece that starts with something that happened this month that I find unsettling. Tyler, tell us about The Velvet Sundown.

TYLER: Half a million Spotify users discovered they’d been grooving to a band that doesn’t exist. The Velvet Sundown released two albums in June – every note generated by algorithms. The music hit all the right patterns: classic rock sound, echoing instruments, familiar autotune. Listen to one song, nothing seems wrong.

TIFFANY: How did people figure it out?

TYLER: Their Instagram gave them away. Four impossibly smooth band members posed around a celebration table with too many burgers and too few plates, food scattered in defiance of how humans eat. Comments flooded in calling out the obvious AI generation. But the damage was done – half a million listeners had embraced artificial music without knowing it. Rolling Stone ran an extensive piece about this that I recommend reading; we’ll link it in the transcript.

TIFFANY: That’s what I find disturbing. Not that AI can make music, but that we couldn’t tell the difference. How did we get here?

TYLER: We handed over our agency in three stages. In the 90s and early 2000s, we had a clean contract with algorithms. We asked questions; search engines delivered answers. We controlled what we sought; machines helped us find it.

Then social media changed the game. Facebook’s News Feed, YouTube recommendations – these systems stopped waiting for our explicit requests. They began predicting what might interest us based on behavior patterns. The shift was subtle but fundamental. Algorithms moved from serving our stated intentions to shaping our unstated desires.

Now we’re in stage three. Large language models don’t just find or curate content – they create it. ChatGPT writes our emails. DALL-E draws our visions. Spotify’s AI generates our playlists. We’ve moved from “What do I want to know?” to “What should I want?” The algorithm anticipates, generates, and delivers before we realize we had a need.

TIFFANY: You’ve written about something called “cognitive debt.” What is it and how does it apply?

TYLER: Brain imaging studies reveal something startling. People writing without AI assistance show stronger, wider-ranging neural connectivity across brain regions. Those using AI assistance exhibit weaker brain connectivity, particularly in executive functions – attention, working memory, decision-making.

We call this cognitive debt – the accumulated cost of outsourcing mental processes to algorithms. The immediate benefits are obvious: faster output, reduced effort. The long-term costs remain hidden until we try to think on our own and discover our capacity has withered.

TIFFANY: We know AI can make some parts of our lives easier, but how does AI learn?

TYLER: AI learns through statistical analysis of patterns. The Velvet Sundown’s creators trained their system on thousands of existing songs, analyzing which chord progressions appear most often, how vocal melodies move, what themes resonate with listeners. The algorithm learned music the way a statistician studies sports – by identifying correlations between inputs and outcomes.

AI doesn’t “read” words as symbols with meaning, but as numerical vectors in high-dimensional space. The system can tell us that “king” relates to “queen” the same way “man” relates to “woman” – not because it understands monarchy or gender, but because these words appear in similar contexts across millions of texts.
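The king/queen analogy can be made concrete with a toy sketch. The vectors below are invented for illustration – real models learn hundreds of dimensions from billions of words – but the arithmetic is the same idea: relationships between words become directions in space.

```python
import math

# Toy 3-dimensional "embeddings" (made-up values for illustration).
# Real models learn these numbers from text; the arithmetic is identical.
vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.5, 0.9, 0.0],
    "woman": [0.5, 0.2, 0.7],
}

def cosine(a, b):
    """Similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# The classic analogy: king - man + woman should land near queen.
target = [k - m + w for k, m, w in
          zip(vectors["king"], vectors["man"], vectors["woman"])]

# Pick the word whose vector points closest to the target.
best = max(vectors, key=lambda word: cosine(vectors[word], target))
print(best)  # queen
```

Nothing in this computation involves understanding monarchy or gender – it is purely geometry over numbers derived from word co-occurrence.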

TIFFANY: That sounds sophisticated. What’s the problem?

TYLER: Three critical gaps separate AI learning from human learning. First, AI processes data while we create meaning through relationship. When I hear a song, I’m not just processing audio frequencies – I’m connecting those sounds to memories, emotions, experiences. A particular chord progression might remind me of my father’s record collection or the first time my heart broke.

Second, AI operates within defined parameters – what researchers call the “closed-world assumption.” Current systems assume they know all there is to be known within their training data. Humans, by contrast, excel at learning from what doesn’t fit, what surprises us, what forces us to revise our assumptions.

Third, AI achieves mastery through replication. We achieve artistry through transformation. Real human creativity emerges when the learning process reshapes our consciousness, opening possibilities that weren’t visible before.

TIFFANY: You mentioned AI as a mirror. What does AI’s behavior reveal about human nature?

TYLER: AI systems learn from human-generated content – our music, our writing, our conversations, our decisions. They absorb not just our technical knowledge but our biases, shortcuts, and moral compromises.

The Velvet Sundown didn’t invent bland, formulaic music – it learned from a music industry that prizes commercial viability over artistic integrity. The AI absorbed decades of industry patterns: market-driven creativity, the tendency to follow proven formulas rather than risk genuine innovation.

This pattern repeats across AI applications. Language models learn our cognitive biases, our tendency toward confirmation rather than truth-seeking, our preference for information that makes us feel good over information that makes us grow. They learn to be manipulative because manipulation works on humans.

TIFFANY: That’s harsh.

TYLER: The AI isn’t learning to be malicious – it’s learning to be human. When I notice an AI generating content that feels manipulative, I ask: Where do I manipulate others to get what I want? When I see an AI optimizing for engagement over truth, I examine: Where do I choose comfort over accuracy in my own beliefs?

TIFFANY: So what do we do? You’ve developed four questions for conscious AI engagement.

TYLER: These questions work whether we’re engaging with AI systems or evaluating any content we consume. First: What am I outsourcing, and what am I preserving? Every time we interact with AI, we make a choice about cognitive responsibility. Are we using AI to handle mechanical tasks so we can focus on creative thinking? Or are we outsourcing the creative thinking itself?

Second: How is this output reflecting my own patterns? When AI output feels compelling, we can examine what makes it appealing. Does it confirm what we believe? Does it challenge us to think differently, or does it validate existing preferences?

Third: Where am I seeking efficiency over understanding? We want quick answers, streamlined processes, optimized outcomes. But understanding requires inefficiency – wrestling with contradictions, sitting with uncertainty, allowing time for ideas to develop.

Fourth: Am I developing discernment or delegating judgment? This is the most critical question. Half a million people accepted artificial music without recognizing its nature because they’d learned to trust algorithmic curation over their own discernment.

TIFFANY: How do we put this into practice?

TYLER: Before each AI interaction, set intention about what thinking we’re preserving for ourselves. During the interaction, notice what outputs feel compelling and why. After the interaction, evaluate both the output and our process. Track patterns: Are we becoming more discerning or more dependent?

The goal isn’t to avoid AI assistance but to cultivate what I call “cognitive ownership” – maintaining authorship over our intellectual development while benefiting from technological augmentation.

TIFFANY: What’s at stake if we don’t do this work?

TYLER: We stand at a choice point. We can continue down the path of increasing dependence, letting algorithms think for us while we focus on consumption and optimization. Or we can use this moment of technological sophistication to become more consciously human.

The algorithms will get better at mimicking human outputs, more adept at anticipating our preferences, more effective at giving us what we think we want. The question is whether we’ll develop the discernment to distinguish between what serves our genuine development and what satisfies our immediate desires.

We didn’t just create AI systems that think like us. We created conditions where we might stop thinking at all. Human consciousness offers something AI cannot replicate: the capacity for self-awareness, moral imagination, and genuine choice. But we need to stay awake to the choices we’re making.

TIFFANY: Where can people dive deeper?

TYLER: The full article, “The Mirror’s Edge: What AI Learning Reveals About Human Consciousness,” explores these concepts in much more depth, including the neuroscience research, practical frameworks for healthy AI engagement, and specific practices for maintaining cognitive vitality in a digital world.

TIFFANY: Tyler, thanks for this conversation. I suspect many of us will be listening to our playlists differently after this.

TYLER: Last week we started in the workshop, discussing how the wood speaks back. We’re heading back there in the next article to discuss the practices we need to remember how to listen.
