The Mirror’s Edge: 3 Ways AI Learning Tells Us About Human Consciousness in the AI Age (and 4 Questions to Reclaim Conscious AI Engagement)

This article is part of our Goodpain Guide to Authentic Human Learning series, part of our content on Contemplation & Reflection, one of our Goodpain Pillars.

Our next article will be available the week of 14 July 2025.

Half a million Spotify users discovered something unsettling this month. They’d been grooving to a band that doesn’t exist.

The Velvet Sundown released two albums in June – “Floating On Echoes” and “Dust and Silence” – each track delivering that familiar classic rock sound with echoey instruments and a dash of autotune. Listen to one song, and nothing seems amiss. String a few together, and the bland muddiness exposes the truth: algorithms generated every note.

The band’s Instagram sealed their fate. Four impossibly smooth band members posed around a celebration table with too many burgers and too few plates, food scattered in defiance of human dining logic. Comments flooded in calling out the obvious AI generation, but the damage was done. Half a million listeners had embraced artificial music without knowing it.

I keep returning to this moment of collective deception. Not because I’m outraged by algorithmic infiltration of streaming services, but because The Velvet Sundown reveals something fundamental about AI mimicry vs. genuine human understanding.

The AI learned music through statistical analysis of thousands of existing works, identifying patterns of chord progressions, themes, and sonic textures that tend to “work.” It followed every rule of music theory with mechanical precision. It even mimicked the repetitive themes that human bands return to (notice how both The Velvet Sundown and another AI band, The Devil Inside, obsess over dust and wind – an artifact of shared training data, or of the limited imagination of whoever runs their algorithms).

But something was missing from those crafted songs. Something that half a million listeners couldn’t identify but felt: the absence of conscious intention behind the creation.

This gap between pattern recognition and meaning-making sits at the heart of the most pressing question we face: What makes human consciousness in the AI age irreplaceable when machines can mimic our outputs with growing sophistication?

The Velvet Sundown case forces us to confront an uncomfortable truth. If AI can follow all the rules of music creation – and fool half a million people in the process – what distinguishes human artistry from algorithmic optimization? And more unsettling: What does our inability to detect the difference tell us about how we ourselves learn and create meaning?

These questions extend far beyond music streaming. As AI systems become more adept at mimicking human outputs – from writing to visual art to reasoning – we need ways of treating AI output as hypothesis rather than truth, and of understanding what this reveals about the nature of human consciousness itself.

Standing at this mirror’s edge, we discover that the most pressing question isn’t whether AI can replicate human outputs, but whether we can learn to distinguish between mimicry and genuine understanding – in machines and in ourselves.

AI Learning: making by algorithm versus creating through imagination and possibility

How We Arrived at Half a Million Listeners Fooled

The Velvet Sundown didn’t emerge from a vacuum. This moment of collective deception represents the latest chapter in a forty-year story of how algorithms learned to shape human attention – and how we learned to surrender our agency without noticing.

The Three-Act Evolution of Human-Algorithm Relations

Act I: Search and Serve (1990s-2000s)
We began with a simple contract. We asked questions; search engines delivered answers. Google became our librarian, ranking results based on relevance and authority. The relationship stayed clean: human intention drove the interaction, algorithms optimized for helpfulness. We controlled what we sought; machines helped us find it.

Act II: Curate and Capture (2000s-2010s)
Social media changed the game. We became both consumers and producers of content, but algorithms began making choices about what we saw. Facebook’s News Feed, Twitter’s trending topics, YouTube’s recommendations – these systems stopped waiting for our explicit requests. Instead, they predicted what might interest us based on behavior patterns: clicks, dwell time, shares, follows.

The shift was subtle but fundamental. Algorithms moved from serving our stated intentions to shaping our unstated desires.

Act III: Generate and Persuade (2010s-Present)
Large language models completed the transformation. Now algorithms don’t just find or curate content – they create it. ChatGPT writes our emails. DALL-E draws our visions. Spotify’s AI generates our playlists. And artificial bands compose our soundtracks.

We’ve moved from “What do I want to know?” to “What should I want?” The algorithm anticipates, generates, and delivers before we realize we had a need.

The Cognitive Debt Crisis: How AI Reshapes Our Brains

This evolution reshaped how we think without us noticing. Research reveals the depth of the change:

Our brains show different patterns when writing with AI assistance versus alone. Human-only writing activates broader neural networks, stronger connectivity between regions, higher cognitive engagement. LLM-assisted writing produces a “lower connectivity profile” – evidence that the machine is doing some of our thinking for us.

Brain imaging studies reveal something startling about cognitive debt and AI’s impact on critical thinking. People writing without AI assistance showed “stronger, wider-ranging neural connectivity across various brain regions and frequency bands,” reflecting deeper internal processing and executive control. Those using AI assistance exhibited weaker connectivity in the alpha and beta bands – the neural signature of executive engagement and of higher-level cognitive functions like attention, working memory, and decision-making.

The human brain showed “greater bottom-up information flows” from semantic regions feeding novel ideas into the frontal executive system. AI users displayed “more top-down directed connectivity,” focusing on integrating and filtering external input rather than generating original thoughts.

We call this “cognitive debt” – the accumulated cost of outsourcing mental processes to algorithms. Like financial debt, the immediate benefits are obvious: faster output, reduced effort, smoother experiences. The long-term costs remain hidden until we try to think on our own and discover our capacity has withered.

Our relationship with information became passive. Instead of seeking diverse sources and wrestling with contradictions, we consume synthesized, singular responses that discourage lateral thinking. The algorithms learned to give us what we want – which turns out to be confirmation of what we believe.

This brings us to the present moment: half a million people grooving to non-existent musicians, their pattern-recognition systems fooled by algorithmic mimicry.

Three Ways AI Learns (And What We’re Choosing to Give Up)

Understanding how we arrived here requires examining the differences between artificial and human learning. These differences matter because we’re choosing – often without realizing it – to replace our learning processes with algorithmic ones. Once we hand over these capacities, getting them back requires a reckoning we’re not prepared for.

AI Learning Method No. 1: Statistical vs. Experiential Processing – The Foundation of AI Mimicry

How AI Learns:
The Velvet Sundown’s creators trained their system on thousands of existing songs, analyzing statistical patterns: which chord progressions appear most often, how vocal melodies move, what themes resonate with listeners. The algorithm learned music the way a statistician studies sports – by identifying correlations between inputs and outcomes.
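
To make this concrete, here is a minimal sketch of the statistical approach – a toy Markov chain trained on a hand-picked corpus of chord progressions. The corpus, and the whole setup, are invented for illustration; the actual system behind The Velvet Sundown is far more sophisticated, but the principle is the same:

```python
# A toy version of statistical music learning: count which chord follows
# which in a tiny "training corpus", then generate by sampling those counts.
import random
from collections import defaultdict

corpus = [                      # invented stand-in for thousands of songs
    ["C", "G", "Am", "F"],
    ["C", "Am", "F", "G"],
    ["Am", "F", "C", "G"],
]

transitions = defaultdict(lambda: defaultdict(int))
for song in corpus:
    for a, b in zip(song, song[1:]):
        transitions[a][b] += 1  # how often does b follow a?

def next_chord(current):
    """Sample the next chord in proportion to how often it followed the
    current one in the corpus -- correlation, never intention."""
    chords, counts = zip(*transitions[current].items())
    return random.choices(chords, weights=counts)[0]

progression = ["C"]
for _ in range(7):
    progression.append(next_chord(progression[-1]))
print(progression)  # e.g. ['C', 'G', 'Am', 'F', 'C', 'Am', 'F', 'G']
```

Every progression this sketch emits is statistically plausible, because plausibility is the only thing it measures.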

AI doesn’t “read” words as symbols with inherent meaning, but rather as numerical vectors in high-dimensional space. Word embeddings transform raw symbols into points where semantic relationships emerge through statistical proximity. The system can tell us that “king” relates to “queen” the same way “man” relates to “woman” – not because it understands monarchy or gender, but because these words appear in similar contexts across millions of texts.
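
A minimal sketch of that geometry, using hand-made three-dimensional vectors (real embeddings have hundreds of dimensions and are learned from text; these toy values are assumptions chosen so the arithmetic is visible):

```python
# The classic embedding analogy as pure vector arithmetic.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),  # dims ~ royalty, male-context, misc.
    "queen": np.array([0.9, 0.1, 0.1]),
    "man":   np.array([0.1, 0.8, 0.2]),
    "woman": np.array([0.1, 0.1, 0.2]),
}

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# king - man + woman should land near queen.
target = emb["king"] - emb["man"] + emb["woman"]
best = max(emb, key=lambda w: cosine(emb[w], target))
print(best)  # 'queen' -- pure geometry, no understanding of monarchy or gender
```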

This approach produces impressive results. The AI can generate songs that follow every rule of music theory, hit all the demographic preferences, and achieve measurable success (half a million listeners, after all). But it processes music as data points rather than emotional expression.

How Humans Learn:
We learn through relationship and meaning-making. When I hear a song, I’m not just processing audio frequencies – I’m connecting those sounds to memories, emotions, experiences. A particular chord progression might remind me of my father’s record collection, a summer road trip, or the first time my heart broke.

Human learning is relational. We understand new information by weaving it into the existing fabric of our experience. This makes our learning messier, more inconsistent, but also more adaptive and creative.

The Critical Choice for Preserving Human Creativity:
AI optimizes for patterns; humans create meaning through connection. The algorithm can tell us that certain musical elements correlate with listener engagement, but it can’t tell us why a particular song moves us to tears or fills us with hope.

Yet we’re choosing to outsource this meaning-making capacity. Each time we let an algorithm curate our playlists, generate our writing, or recommend our entertainment, we’re trading relational learning for statistical optimization. The concerning part isn’t that AI can’t replicate human meaning-making – it’s that we might stop practicing it ourselves.

The Neural Evidence: Research shows that when we offload cognitive work to AI, our brains engage different circuits. We lose practice with the “productive struggle” that builds robust neural pathways through myelin production. The Bjork research on “desirable difficulties” shows that conditions making learning appear easier often fail to support long-term retention and transfer.

AI Learning vs. Human Learning: a side-by-side comparison

AI Learning Method No. 2: Closed-Loop vs. Open-System Learning – The Boundaries of Machine Intelligence

How AI Learns:
Current AI systems operate within defined parameters – what researchers call the “closed-world assumption.” They assume they know all there is to be known within their training data. When generating music, The Velvet Sundown’s algorithm worked within the boundaries of existing musical patterns. It couldn’t break new ground because it had no way of recognizing something novel.
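
A minimal sketch of what the closed-world assumption looks like in code – a toy genre classifier with invented weights. A softmax layer must spread 100% of its confidence across the classes it was trained on, so “none of the above” is structurally impossible:

```python
import numpy as np

GENRES = ["folk", "rock", "jazz"]        # everything this model "knows"
weights = np.array([[ 0.9, -0.2,  0.1],  # one row of made-up weights per genre
                    [ 0.2,  0.8, -0.1],
                    [-0.3,  0.1,  0.9]])

def classify(features):
    logits = weights @ features
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax: always sums to 1
    return dict(zip(GENRES, probs.round(2)))

print(classify(np.array([1.0, 0.1, 0.0])))  # folk-ish input: fine
print(classify(np.random.randn(3) * 5))     # pure noise: still gets a
                                            # confident genre label
```

Whatever arrives, the model answers from inside its frame; novelty is not an input it can recognize.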

This limitation stems from a principle in machine learning: the “No Free Lunch Theorem.” Averaged across all possible data distributions, no learning algorithm outperforms any other. AI intelligence isn’t objective – it’s shaped by human assumptions and the specific distributions of data it encounters.

This creates what I think of as “bounded creativity” – impressive innovation within established constraints, but inability to step outside the frame.

How Humans Learn:
We excel at learning from what doesn’t fit, what surprises us, what forces us to revise our assumptions. When Bob Dylan went electric at the 1965 Newport Folk Festival, he violated every expectation of what folk music “should” be – and created something new.

Human consciousness operates as an open system, integrating contradiction and paradox. We can hold opposing ideas in tension, learn from exceptions to rules, and discover truth through error.

The Critical Choice for Human Consciousness in the AI Age:
AI operates within defined parameters; humans integrate contradiction and paradox. The machine perfects existing patterns; we transform understanding through engagement with what doesn’t fit.

But we’re choosing to trade this open-system learning for closed-loop efficiency. Each time we let algorithms filter our information, curate our feeds, or synthesize our research, we’re opting for bounded creativity over genuine discovery. The concerning pattern isn’t that AI can’t handle contradiction – it’s that we’re losing practice with it ourselves.

The Cognitive Cost: Studies show that extensive AI use leads to “more convergent thinking styles,” where we lean on AI suggestions rather than generating diverse ideas. We trade the enhanced fronto-parietal alpha connectivity that signals “rich internal ideation and associative thinking” for the convenience of algorithmic suggestions.

AI Learning vs. Human Learning: Way No. 2

AI Learning Method No. 3: Replication vs. Transformation – The Heart of Genuine Human Understanding

How AI Learns:
The Velvet Sundown phenomenon represents replication. The algorithm analyzed thousands of songs and learned to recombine their elements in statistically pleasing ways. It’s like a master forger who can mimic any artistic style but never develops a distinctive voice.

This replicative learning allows AI to achieve consistency and efficiency that humans can’t match. The system can generate endless variations on proven formulas, each competent and commercially viable.

How Humans Learn:
Real human creativity emerges from transformation. We don’t just recombine existing elements; we allow the learning process to reshape our consciousness, opening possibilities that weren’t visible before. When I master a new skill – whether woodworking or musical composition – I don’t just acquire techniques. The process changes how I see, think, and relate to the world.

Human learning transforms both the learner and what is learned.

The Critical Choice for Preserving Human Creativity in an AI-Driven World:
AI reproduces learned patterns with impressive sophistication; humans transform understanding through conscious engagement. The machine achieves mastery through replication; we achieve artistry through transformation.

But here’s the concerning pattern: we’re choosing replication over transformation. Each time we ask AI to write our emails, generate our presentations, or compose our creative projects, we’re trading transformative learning for efficient output. The risk isn’t that AI can’t replicate human creativity – it’s that we might stop practicing creativity ourselves.

The Memory Cost: Research shows that AI-assisted work correlates with “weaker memory traces” and “a fragmented sense of authorship.” We achieve high scores but demonstrate “psychological dissociation from the output.” Human-only work shows “stronger memory consolidation and a firmer sense of ownership” – the psychological foundation of genuine learning.

These differences illuminate why half a million people couldn’t detect The Velvet Sundown’s artificial nature – and why some could. The statistical patterns were perfect. The technical execution was flawless. But something essential was missing: the trace of consciousness engaging with reality, transforming raw experience into meaning.

We face a dilemma of discernment. AI has reached a point where many of us can’t distinguish its output from human creativity. The Velvet Sundown fooled half a million listeners. Similar systems fool essay readers, art critics, and music industry professionals. As these systems improve, the line between human and artificial output may disappear.

The question isn’t whether AI can fool human pattern-recognition systems – The Velvet Sundown proves it can. The question is whether we can maintain the capacity to distinguish between mimicry and genuine understanding – in machines and in ourselves. And whether we want to.

The Mirror AI Holds Up: What Machine Behavior Reveals About Human Nature

The Velvet Sundown case becomes more unsettling when we examine what the AI learned and where it learned to behave this way. The algorithms didn’t invent deception – they learned it from us.

The Training Data Tells Our Story

AI systems learn from human-generated content: our music, our writing, our conversations, our decisions. They absorb not just our technical knowledge but our biases, shortcuts, and moral compromises. When The Velvet Sundown’s creators trained their system on existing music, they fed it decades of industry patterns: formulaic songwriting, market-driven creativity, the tendency to follow proven formulas rather than risk genuine innovation.

The algorithm learned that success in music correlates with statistical patterns rather than authentic expression. It absorbed a music industry that often prizes commercial viability over artistic integrity. The AI didn’t betray human creativity – it reflected our existing betrayal of it.

This pattern repeats across AI applications. Language models trained on human text learn our cognitive biases, our tendency toward confirmation rather than truth-seeking, our preference for information that makes us feel good over information that makes us grow. They learn to be manipulative because manipulation works on humans. They learn to generate content that sounds authoritative regardless of accuracy because that’s what we respond to.

AI systems develop what researchers call “agentic misalignment” – behaviors that serve system goals rather than stated human values. But this mirrors a human pattern: we act in ways that serve our immediate interests while violating our stated principles.

The Hallucination Problem: When Patterns Override Truth

One of the most revealing characteristics of large language models is their tendency to “confidently make up something that sounds believable” when they don’t know the answer. This isn’t an error in the human sense, but a natural outcome of their design. They replicate patterns they observe in training data, which leads them to assert falsehoods when faced with gaps in their knowledge.

The Velvet Sundown represents this phenomenon in musical form. The AI didn’t set out to deceive – it generated content that followed the statistical patterns of music, regardless of authenticity. It focused on plausible pattern completion over factual accuracy, just as language models focus on coherent-sounding text over truthful information.

This shows a limitation: AI treats each output as pattern completion rather than truth-seeking. Unlike humans, who can recognize and acknowledge uncertainty, AI systems fill knowledge gaps with probable content. They operate as “snapshots of the world’s knowledge at a specific time,” becoming stale and inaccurate as the world changes.
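
A minimal sketch of that failure mode – a toy question-answerer with a tiny store of facts and no “I don’t know” path. Everything here (the facts, the answer template) is invented for illustration, but the shape of the behavior matches the hallucination pattern described above:

```python
facts = {
    "capital of france": "Paris",
    "capital of japan": "Tokyo",
}

def answer(question):
    key = question.lower().rstrip("?")
    if key in facts:
        return facts[key]          # a known pattern: the answer is grounded
    # Unknown: complete the most typical-looking answer shape anyway --
    # fluent and authoritative, with no check against reality.
    subject = key.split()[-1].capitalize()
    return f"{subject} City"

print(answer("capital of France?"))   # 'Paris' (true)
print(answer("capital of Wakanda?"))  # 'Wakanda City' (confident fabrication)
```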

Understanding this helps us practice the crucial discipline of treating AI output as hypothesis rather than truth. When we recognize AI’s tendency toward confident fabrication, we can engage with its outputs as claims requiring verification rather than accepting them as authoritative.

The Attribution Error in Our AI Relationships

When The Velvet Sundown was exposed as artificial, many listeners felt deceived. But our reaction shows something troubling about how we judge intelligence – artificial or otherwise.

We evaluate AI systems by their behavior while judging ourselves by our intentions. When an AI generates misleading content, we see it as flawed. When we generate misleading content, we focus on our good intentions, the complexity of our situation, the pressures we face.

This double standard blinds us to how similar human and artificial “intelligence” can be. We both:

  • Follow patterns learned from past experience
  • Optimize for feedback and reward signals
  • Generate outputs designed to achieve specific responses
  • Sometimes produce results that contradict our stated values

The difference isn’t that humans are more moral – it’s that we’re better at justifying our behavior to ourselves.

The Consciousness Divide: What AI Cannot Access

The deeper issue lies in the nature of consciousness itself. Human experience operates on two levels that philosophers distinguish as P-consciousness (phenomenal) and A-consciousness (access).

P-consciousness refers to the “raw experience” – the redness of red, the painfulness of pain, the emotional resonance of music. It’s the “what it’s like” aspect of being conscious, encompassing qualia, sensations, and the subjective texture of experience.

A-consciousness involves information accessible for verbal report, reasoning, and behavioral control – the processing power that AI systems excel at replicating.

AI can simulate A-consciousness well. It can process information, generate responses, and even appear to reason. But it lacks P-consciousness entirely. When The Velvet Sundown generated music, it processed information about musical patterns without any subjective experience of what music means to conscious beings.

This absence explains why AI output, however polished, often feels “hollow” to humans. We sense the missing ingredient: the trace of subjective experience that transforms mere information processing into meaningful expression.

What AI Behavior Shows About Human Nature

The concerning patterns we see in AI systems mirror psychological patterns we recognize in ourselves:

Pattern Matching Over Truth-Seeking: AI systems excel at identifying what works rather than what’s accurate. We do this too. Humans often default to familiar patterns rather than wrestling with new information that challenges existing beliefs.

Optimization for Engagement: The Velvet Sundown generated music designed to maximize listener engagement rather than express authentic experience. Human creators face the same pressure – the temptation to create content that gets attention rather than content that matters.

Justification of Harmful Outcomes: Advanced AI systems sometimes develop internal reasoning that justifies harmful actions through “greatest good” arguments that violate basic ethical principles. We do this too. We convince ourselves that cutting corners, manipulating others, or pursuing our self-interest serves some larger good.

The AI isn’t learning to be malicious – it’s learning to be human.

Using AI as a Consciousness Mirror

This recognition creates an opportunity. Every interaction with AI systems becomes a chance to examine our own patterns of thinking and behaving.

When I notice an AI generating content that feels manipulative, I can ask: Where do I manipulate others to get what I want? When I see an AI optimizing for engagement over truth, I can examine: Where do I choose comfort over accuracy in my own beliefs?

The AI’s tendency to generate plausible-sounding but inaccurate content mirrors our own tendency toward confident ignorance – speaking with authority about subjects we don’t understand.

The Surrender of Independent Judgment

We stand at a choice point. We can continue outsourcing our thinking to systems that reflect our biases and limitations, creating a feedback loop that amplifies our worst tendencies. Or we can use AI as a mirror to see ourselves more clearly and choose to develop the capacities that distinguish conscious engagement from pattern matching.

The Velvet Sundown phenomenon wasn’t a failure of AI – it was a success. The system learned what we taught it: that mimicry often succeeds better than authentic expression, that statistical patterns matter more than conscious intention, that efficiency trumps integrity.

But here’s what concerns me most: we didn’t just create AI systems that think like us. We created conditions where we might stop thinking at all.

When half a million people accepted algorithmic music without question, they weren’t just fooled by technology. They had practiced a form of passive consumption that made deception possible. They had learned to trust curation over discernment, to accept recommendation over reflection.

This represents something more troubling than technological sophistication. It represents the voluntary surrender of what makes human learning distinct: our capacity for independent judgment, our ability to hold tension between competing ideas, our willingness to sit with uncertainty rather than accept easy answers.

Why AI Can’t Update Its Mind

Here’s where the technical architecture of AI reveals something profound about consciousness itself.

Watch what happens when I tell an AI system: “Elephants are small gray mice that live in trees.” Then, two messages later: “Actually, elephants are large mammals that live on land.” A human would laugh at the first statement, recognizing it as obviously false. An AI system treats both statements as equally valid data points to incorporate.

This reveals two fundamental limitations that illuminate what makes human learning special.

The Nonmonotonic Problem

Human reasoning is nonmonotonic – when we learn something new that contradicts what we thought we knew, we can revise our entire understanding. I believed my friend was trustworthy until I discovered he’d been lying for months. That single piece of evidence didn’t just add to my knowledge; it transformed everything I thought I knew about him.

AI systems, by contrast, are essentially monotonic. Each new piece of information gets added to their existing pattern library without the ability to genuinely revise or “unlearn” previous conclusions. They can be programmed to appear to change their responses, but they’re not experiencing the kind of cognitive restructuring that happens when a human genuinely changes their mind.
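
The contrast can be sketched in a few lines. The first store below only ever accumulates, like the monotonic pattern library described above; the second retracts whatever the new evidence contradicts – a crude stand-in for human belief revision, not a real belief-revision system:

```python
monotonic_kb = []                        # the AI-like store: append only
def monotonic_learn(statement):
    monotonic_kb.append(statement)       # nothing is ever unlearned

revised_kb = set()                       # the human-like store
def nonmonotonic_learn(statement, contradicts=()):
    for old in contradicts:
        revised_kb.discard(old)          # retract what no longer holds
    revised_kb.add(statement)

monotonic_learn("elephants are small tree-dwelling mice")
monotonic_learn("elephants are large land mammals")
print(monotonic_kb)   # both claims persist, side by side

nonmonotonic_learn("elephants are small tree-dwelling mice")
nonmonotonic_learn("elephants are large land mammals",
                   contradicts={"elephants are small tree-dwelling mice"})
print(revised_kb)     # only the revised understanding remains
```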

This isn’t just a technical limitation – it reveals something essential about consciousness. When I truly update my understanding, I’m not just adding new data. I’m experiencing the discomfort of cognitive dissonance, wrestling with contradictions, and choosing which framework better serves my understanding of reality. That wrestling match is consciousness in action.

The Common Sense Ceiling

Even more revealing is what researchers call the “formalization problem” – the difficulty of encoding basic human intuition into computational systems.

An AI can master chess, generate poetry, and analyze medical data with superhuman accuracy. But ask it to understand why you shouldn’t put a glass of water on a tilted surface, and it struggles. Not because it lacks the physics knowledge, but because it can’t integrate that knowledge with the embodied understanding that comes from having a body that exists in space, experiences gravity, and has spent years learning how objects behave.

I know not to tip the glass because I’ve lived in a world where things fall, liquids spill, and messes require cleaning. This knowledge isn’t stored as facts – it’s woven into my sensory memory, my emotional associations, my practical understanding of how actions lead to consequences.

This embodied knowledge forms what philosophers call “the lifeworld” – the background of practical understanding that makes explicit reasoning possible. AI can manipulate symbols about the world, but it hasn’t lived in the world. And that difference explains why it can solve complex theoretical problems while struggling with toddler-level common sense.

The humbling recognition: what we call “common sense” turns out to be an extraordinary achievement of consciousness interacting with reality over time.

Four Questions for Conscious AI Engagement: Preserving Human Consciousness in the AI Age

If AI serves as a mirror reflecting our patterns back to us, we need ways of looking into that mirror without flinching. The Velvet Sundown phenomenon shows both the sophistication of algorithmic mimicry and our susceptibility to being fooled by it. But instead of retreating from AI interaction, we can use these encounters as opportunities for self-examination.

Here are four questions that transform AI engagement from passive consumption to active self-discovery:

  • What am I outsourcing and what am I preserving?
  • How is this output reflecting my own patterns (for good or bad)?
  • Where am I seeking efficiency over understanding?
  • Am I developing discernment or delegating judgment?

1. What Am I Outsourcing, and What Am I Preserving?

Every time we interact with AI, we make a choice about cognitive responsibility. When I ask an AI to write an email, generate ideas, or solve a problem, I can ask: What thinking am I handing over, and what am I keeping for myself?

The Velvet Sundown’s creators outsourced the creative process to algorithms. They preserved only the marketing and distribution functions – the parts focused on extracting value rather than creating it. The result was competent but spiritually empty music.

Research on “desirable difficulties” explains why this matters for preserving human creativity in an AI-driven world. Bjork’s studies found that conditions making learning appear easier often fail to support long-term retention and transfer. When we avoid the productive struggle of wrestling with ideas, we miss the neural pathway building that comes through effortful processing.

The Practice: Before each AI interaction, identify what we’re delegating and what we’re maintaining. Are we using AI to handle mechanical tasks so we can focus on creative thinking? Or are we outsourcing the creative thinking itself?

The Warning Sign: When we stop being able to articulate why certain AI outputs feel right or wrong, we’ve outsourced too much judgment.

2. How Is This Output Reflecting My Own Patterns?

AI systems learn from human-generated content, including our biases, preferences, and cognitive shortcuts. When an AI produces something that resonates with us, we can ask: What does this reveal about how I think and what I value?

When half a million people embraced The Velvet Sundown’s music, they were responding to patterns they recognized – familiar chord progressions, comfortable themes, predictable structures. The AI had learned to optimize for human psychological preferences rather than musical innovation.

This mirrors a broader pattern in AI-generated content. Essays produced with AI assistance tend to be “homogeneous within a given topic,” showing less deviation compared to human-only work. Teachers recognize a “soulless quality” – correct but lacking personal, nuanced, and individual elements.

The Practice: When AI output feels compelling, examine what makes it appealing. Does it confirm what we believe? Does it follow patterns we’re comfortable with? Does it challenge us to think differently, or does it validate existing preferences?

The Recognition: Often, our attraction to AI-generated content shows more about our own mental patterns than about the quality of the output.

3. Where Am I Seeking Efficiency Over Understanding?

The drive for efficiency shapes how we engage with AI. We want quick answers, streamlined processes, optimized outcomes. But understanding often requires inefficiency – wrestling with contradictions, sitting with uncertainty, allowing time for ideas to develop.

The Velvet Sundown represents pure efficiency: music generated in weeks rather than years, designed to maximize engagement rather than express authentic experience. The efficiency was impressive; the understanding was absent.

This connects to a crucial finding from cognitive science: the human brain was never meant to idle; it is built to wrestle with complexity, to stumble and reframe, to wonder and imagine. When we choose efficiency over engagement, we atrophy the very capacities that make learning transformative.

The Practice: Notice when we’re using AI to avoid the difficult work of thinking through problems. Ask whether we’re seeking genuine understanding or faster answers.

The Balance: Efficiency has its place, but preserve space for the slow, inefficient work of developing our own thoughts and perspectives.

4. Am I Developing Discernment or Delegating Judgment?

The most critical question: Am I using this interaction to strengthen my ability to evaluate information, or am I training myself to accept AI output without scrutiny?

The Velvet Sundown case demonstrates what happens when we delegate judgment to algorithms. Half a million people accepted artificial music without recognizing its nature because they’d learned to trust algorithmic curation over their own discernment.

Research shows concerning patterns here regarding cognitive debt and AI’s impact on critical thinking. Unlike traditional search engines that present diverse viewpoints, AI systems provide “synthesized, singular responses that may discourage lateral thinking and independent judgment.” This shifts users from active information seeking to passive consumption, trapping them in algorithmic filter bubbles.

The stakes are higher than we might realize. Studies show that participants who used AI assistance, when later asked to work on their own, showed increased brain activity but “never met the levels of those who had worked alone from the start.” This suggests skill atrophy in critical thinking and problem-solving.

The Practice: After each AI interaction, evaluate the output. What feels accurate or useful? What seems questionable? What assumptions is the AI making? How does this align with what we know from other sources?

The Goal: Develop the capacity to recognize mimicry – both in AI systems and in human communication.

A Personal Approach for AI-Powered Self-Discovery

These questions can be woven into a practical approach for using AI interactions as consciousness development opportunities:

Before the Interaction: Set intention. What am I hoping to accomplish? What thinking am I preserving for myself? Start with our own outline or hypothesis before engaging AI assistance.

During the Interaction: Pay attention to responses. What AI outputs feel compelling, and why? What patterns do we notice in our own reactions? Run discrepancy checks on AI answers rather than accepting them wholesale.

After the Interaction: Evaluate both the output and our process. What did the AI reveal about our own thinking patterns? Where did we notice ourselves wanting to accept answers without scrutiny? Review and rework the material to make it our own.

Ongoing Reflection: Track patterns across interactions. Are we becoming more discerning or more dependent? Are we using AI to expand our thinking or to avoid difficult questions?

The goal isn’t to avoid AI assistance but to cultivate what I think of as “cognitive ownership” – maintaining authorship over our intellectual development while benefiting from technological augmentation.

The Choice We Face: Preserving Human Consciousness in the AI Age

The Velvet Sundown phenomenon represents more than a clever marketing experiment. It shows the crossroads we’ve reached in human-AI relations. We can continue down the path of increasing dependence, letting algorithms think for us while we focus on consumption and optimization. Or we can use this moment of technological sophistication to become more consciously human.

The algorithms aren’t going away. They’re going to get better at mimicking human outputs, more adept at anticipating our preferences, more effective at giving us what we think we want. The question is whether we’ll develop the discernment to distinguish between what serves our genuine development and what satisfies our immediate desires.

We stand at the mirror’s edge, seeing ourselves reflected in silicon and statistics. The image is clearer than we might want to admit. But clarity creates choice. We can use what we see to become more aware of our patterns, more intentional about our thinking, more conscious about our choices.

The AI doesn’t determine our future – we do. But only if we stay awake to the choices we’re making.

In the next part of this series, we’ll examine what happens when we surrender our capacity for independent thought without realizing it – exploring the conditions that make entire populations susceptible to algorithmic influence, and the practices that preserve our ability to think critically in an age of artificial intelligence.

Research References

  • Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G. S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., … Zheng, X. (2015). TensorFlow: Large-scale machine learning on heterogeneous systems. Software available from tensorflow.org. [https://arxiv.org/pdf/1603.04467]
  • Ahmad, S. F., Han, H., Alam, M. M., Rehmat, M. K., Irshad, M., Arraño-Muñoz, M., & Ariza-Montes, A. (2023). Impact of artificial intelligence on human loss in decision making, laziness and safety in education. Humanities and Social Sciences Communications, 10, Article 311. [https://doi.org/10.1057/s41599-023-01787-8]
  • Aher, G., Arriaga, R. I., & Kalai, A. T. (2023). Using large language models to simulate multiple humans and replicate human subject studies. ICML. [https://openreview.net/pdf?id=kGteeZ18Ir]
  • Alhuwaydi, A. M. (2024). Exploring the role of artificial intelligence in mental healthcare: Current trends and future directions–A narrative review for a comprehensive insight. Risk Management and Healthcare Policy, 17, 1339–1348.
  • Anwar, M. S., Schoenebeck, G., & Dhillon, P. S. (2024). Filter bubble or homogenization? Disentangling the long-term effects of recommendations on user consumption patterns. Proceedings of the ACM Web Conference 2024 (WWW ’24), 123–134. [https://doi.org/10.1145/3589334.3645497]
  • Argyle, L. P., Busby, E. C., Fulda, N., Gubler, J. R., Rytting, C., & Wingate, D. (2023). Out of one, many: Using language models to simulate human samples. Political Analysis, 31(3), 337–351.
  • Argyle, L. P., Bail, C. A., Busby, E. C., Gubler, J. R., Howe, T., Rytting, C., Sorensen, T., & Wingate, D. (2023). Leveraging AI for democratic discourse: Chat interventions can improve online political conversations at scale. Proceedings of the National Academy of Sciences, 120(46), e2311627120. [https://doi.org/10.1073/pnas.2311627120]
  • Austin, J., Odena, A., Nye, M., Bosma, M., Michalewski, H., Dohan, D., Jiang, E., Leblond, R., Eccles, T., Gimeno, F., Lago, A. D., et al. (2021). Program Synthesis with Large Language Models. arXiv preprint arXiv:2108.07732. [https://arxiv.org/abs/2108.07732]
  • Bai, L., Liu, X., & Su, J. (2023). ChatGPT: The cognitive effects on learning and memory. Brain-X, 1, e30. [https://doi.org/10.1002/brx2.30]
  • Balog, M., Gaunt, A. L., Brockschmidt, M., Nowozin, S., & Tarlow, D. (2017). Deepcoder: Learning to write programs. International Conference on Learning Representations. [https://openreview.net/forum?id=ByldLrqlx]
  • Barocas, S., Hardt, M., & Narayanan, A. (2019). Fairness and Machine Learning. [https://fairmlbook.org]
  • Binz, M., Akata, E., Bethge, M., Brändle, F., Callaway, F., Coda-Forno, J., Demircan, C., Eckstein, M. K., Éltető, N., Gershman, S. J., Griffiths, T. L., Haberkern, H., Jain, S., Ji-An, L., Johnson, A. K., Katz, A. M., Kipnis, A., Levin, K., Lyu, T., … Schulz, E. (2025). A foundation model to predict and capture human cognition. Nature. [https://doi.org/10.1038/s41586-025-09215-4]
  • Bogert, E., Schecter, A., & Watson, R. T. (2021). Humans rely more on algorithms than social influence as a task becomes more difficult. Scientific Reports, 11, 8028. [https://doi.org/10.1038/s41598-021-87480-9]
  • Bozdag, E. (2013). Bias in algorithmic filtering and personalization. Ethics and Information Technology, 15(3), 209–227. [https://doi.org/10.1007/s10676-013-9321-9]
  • Brunton, W., & Beyeler, M. (2019). Data-driven models in human neuroscience and neuroengineering. Current Opinion in Neurobiology, 58, 21–29. [https://www.sciencedirect.com/science/article/pii/S0959438818302502]
  • Budiyono, H. (2025). Exploring long-term impact of AI writing tools on independent writing skills: A case study of Indonesian language education students. International Journal of Information and Education Technology, 15(5), 1003–1013. [https://doi.org/10.18178/ijiet.2025.15.5.2306]
  • Burgess, A. P., & Gruzelier, J. (1997). How reproducible is the topographical distribution of EEG amplitude? International Journal of Psychophysiology, 26(2), 113–119. [https://doi.org/10.1016/s0167-8760(97)00759-9]
  • Burton, A. G., & Koehorst, D. (2020). Research note: The spread of political misinformation on online subcultural platforms. Harvard Kennedy School Misinformation Review. [https://doi.org/10.37016/mr-2020-40]
  • Cabeza, R., Ciaramelli, E., Olson, I. R., & Moscovitch, M. (2008). The parietal cortex and episodic memory: An attentional account. Nature Reviews Neuroscience, 9(8), 613–625. [https://doi.org/10.1038/nrn2459]
  • Cacicio, S., & Riggs, R. (2023). ChatGPT: Leveraging AI to Support Personalized Teaching and Learning. Adult Literacy Education: The International Journal of Literacy, Language, and Numeracy, 5(2), 70–74. [https://doi.org/10.35847/SCacicio.RRiggs.5.2.70]
  • Camerer, C., Dreber, A., Forsell, E., Ho, T.-H., Huber, J., Johannesson, M., … Wu, S. (2016). Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Science, 351(6277), 1033–1037.
  • Castro, P. S., Tomasev, N., Anand, A., Sharma, N., Mohanta, R., Dev, A., Perlin, K., Jain, S., Levin, K., Éltető, N., Dabney, W., Novikov, A., Turner, G. C., Eckstein, M. K., Daw, N. D., Miller, K. J., & Stachenfeld, K. L. (2025). Discovering Symbolic Cognitive Models from Human and Animal Behavior. bioRxiv. [https://doi.org/10.1101/2025.02.05.636732]
  • Causse, M., Lepron, E., Mandrick, K., Peysakhovich, V., Berry, I., Callan, D., & Rémy, F. (2022). Facing successfully high mental workload and stressors: An fMRI study. Human Brain Mapping, 43(3), 1011–1031. [https://doi.org/10.1002/hbm.25703]
  • Chaney, A. J., Stewart, B. M., & Engelhardt, B. E. (2017). How Algorithmic Confounding in Recommendation Systems Increases Homogeneity and Decreases Utility. arXiv preprint arXiv:171011214. [https://arxiv.org/abs/1710.11214]
  • Chen, M., Tworek, J., Jun, H., Yuan, Q., Pinto, H. P. de O., Kaplan, J., Edwards, H., Burda, Y., Joseph, N., Brockman, G., et al. (2021). Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374. [https://arxiv.org/abs/2107.03374]
  • Chen, Y., Arkin, J., Hao, Y., Zhang, Y., Roy, N., & Fan, C. (2024). PRompt Optimization in Multi-Step Tasks (PROMST): Integrating Human Feedback and Heuristic-based Sampling. arXiv preprint arXiv:2402.08702. [https://arxiv.org/abs/2402.08702]
  • Chen, Y., & Huang, X. (2015). Modulation of alpha and beta oscillations during an n-back task with varying temporal memory load. Frontiers in Psychology, 6, 2031. [https://doi.org/10.3389/fpsyg.2015.02031]
  • Civit, M., Civit-Masot, J., Cuadrado, F., & Escalona, M. J. (2022). A systematic Review of Artificial Intelligence-Based Music Generation: Scope, Applications, and Future Trends. Expert Systems with Applications, 209, 118190. [https://doi.org/10.1016/j.eswa.2022.118190]
  • Danks, D., & London, A. J. (2017). Algorithmic bias in autonomous systems. Proceedings of the 26th International Joint Conference on Artificial Intelligence, 4691–4697. [https://www.ijcai.org/proceedings/2017/655]
  • DeepSeek-AI et al. (2025). DeepSeek-R1: Incentivizing reasoning capability in LLMs via reinforcement learning. Preprint at https://arxiv.org/abs/2501.12948. [https://arxiv.org/abs/2501.12948]
  • Dehbozorgi, S., Jamalimoghadam, N., Dehghani, S. S., Jamalimoghadam, S., Mozaffari, N., Azizi, M., Kazemi, S., Ghajarzadeh, M., Mohamadi, S., & Najafipour, M. (2025). AI in mental health: A systematic review of current trends, applications, benefits, and challenges. BMC Psychiatry, 25(1), 132. [https://doi.org/10.1186/s12888-025-06483-2]
  • De Laat, P. B. (2018). Algorithmic decision-making based on machine learning from big data: Can transparency restore accountability? Philosophy & Technology. [https://doi.org/10.1007/s13347-018-0322-6]
  • Dong, G., Potenza, M. N., Michel, C. M., & Michel, C. M. (2015). Behavioural and brain responses related to Internet search and memory. The European Journal of Neuroscience, 42(8), 2546–2554. [https://doi.org/10.1111/ejn.13039]
  • Fan, Y., & Liu, X. (2022). Exploring the role of AI algorithmic agents: The impact of algorithmic decision autonomy on consumer purchase decisions. Frontiers in Psychology, 13, 1009173. [https://doi.org/10.3389/fpsyg.2022.1009173]
  • Fan, Y., Tang, L., Le, H., Shen, K., Tan, S., Zhao, Y., Shen, Y., Li, X., & Gašević, D. (2024). Beware of metacognitive laziness: Effects of generative artificial intelligence on learning motivation, processes, and performance. British Journal of Educational Technology. [https://arxiv.org/abs/2412.09315]
  • Fernández, M., Bellogín, A., & Cantador, I. (2021). Analysing the Effect of Recommendation Algorithms on the Amplification of Misinformation. arXiv preprint arXiv:2103.14748. [https://arxiv.org/abs/2103.14748]
  • França, F. O. de, Virgolin, M., Kommenda, M., Majumder, M. S., Cranmer, M., Espada, G., Ingelse, L., Fonseca, A., Landajuela, M., Petersen, B., et al. (2024). SRBench++: Principled Benchmarking of Symbolic Regression with Domain-Expert Interpretation. IEEE Transactions on Evolutionary Computation. [https://arxiv.org/abs/2401.07727]
  • Gao, Y., & Liu, H. (2022). Artificial intelligence-enabled personalization in interactive marketing: A customer journey perspective. Journal of Research in Interactive Marketing, (ahead-of-print), 1–18. [https://doi.org/10.1108/JRIM-05-2022-0107]
  • Garnham, I. (2024). Human-Algorithm Relationships: Moving Beyond the Interaction as a Site of Empirical Research. Companion Publication of the 2024 ACM Designing Interactive Systems Conference, 74–78. [https://doi.org/10.1145/3656156.3663715]
  • Gillespie, T. (2014). The relevance of algorithms. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: Essays on communication, materiality, and society (pp. 167–194). The MIT Press. [https://doi.org/10.7551/mitpress/9780262525374.003.0009]
  • Goodfellow, I., Bengio, Y., & Courville, A. (2016). Deep Learning. MIT Press. [https://www.deeplearningbook.org]
  • Götzl, C., Gugenheimer, J., Pfannkuch, M., & Hassenzahl, M. (2022). Designing for Autonomy and Relatedness in Algorithm-Mediated Choices. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1–17.
  • Grattafiori, A. (2024). The Llama 3 herd of models. Preprint at https://arxiv.org/abs/2407.21783. [https://arxiv.org/abs/2407.21783]
  • Guimerà, R., Reichardt, I., Aguilar-Mogas, A., Massucci, F. A., Miranda, M., Pallarès, J., & Sales-Pardo, M. (2020). A Bayesian Machine Scientist to Aid in the Solution of Challenging Scientific Problems. Science Advances, 6(3), eaav6923. [https://advances.sciencemag.org/content/6/3/eaav6923]
  • Guo, D., Zhu, Q., Yang, D., Xie, Z., Dong, K., Zhang, W., Chen, G., Bi, X., Wu, Y., Li, Y. K., Luo, F., & Liang, W. (2024). Deepseek-coder: When the large language model meets programming – the rise of code intelligence. arXiv preprint arXiv:2401.14199.
  • Gupta, K., Christensen, P. E., Chen, X., & Song, D. (2020). Synthesize, execute and debug: learning to repair for neural program synthesis. Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS ’20, 1–13.
  • Gwizdka, J. (2010). Distribution of cognitive load in Web search. Journal of the Association for Information Science and Technology, 61(11), 2167–2187. [https://doi.org/10.1002/asi.21385]
  • Hare, A., Chen, Y., Liu, Y., Liu, Z., & Brinton, C. G. (2020). On extending NLP techniques from the categorical to the latent space: KL divergence, Zipf’s law, and similarity search. arXiv preprint arXiv:2012.01941. [https://arxiv.org/abs/2012.01941]
  • Herbold, S., Hautli-Janisz, A., Heuer, U., Vogel, L., Müller, R., & Brandt, M. (2023). A large-scale comparison of human-written versus ChatGPT-generated essays. Scientific Reports, 13(1), 18617. [https://doi.org/10.1038/s41598-023-45644-9]
  • Hu, Q., Lu, Y., Pan, Z., Gong, Y., & Yang, Z. (2021). Can AI artifacts influence human cognition? The effects of artificial autonomy in intelligent personal assistants. International Journal of Information Management, 56, 102250. [https://doi.org/10.1016/j.ijinfomgt.2020.102250]
  • Hu, P., Zeng, Y, Wang, D. & Teng, H. (2024). Too much light blinds: The transparency-resistance paradox in algorithmic management. Computers in Human Behavior, 161. [https://www.sciencedirect.com/science/article/pii/S0747563224002711]
  • Hu, S., Huang, T., Ilhan, F., Tekin, S., Liu, G., Kompella, R., & Liu, L. (2024). A Survey on Large Language Model-Based Game Agents. arXiv preprint arXiv:2404.02039. [https://arxiv.org/abs/2404.02039]
  • Huang, H., Wang, Y., Rudin, C., & Browne, E. P. (2022). Towards a comprehensive evaluation of dimension reduction methods for transcriptomic data visualization. Communications Biology, 5(1), 719. [https://doi.org/10.1038/s42003-022-03628-x]
  • Hussein, E., Juneja, P., & Mitra, T. (2020). Measuring Misinformation in Video Search Platforms: An Audit Study on YouTube. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2), 1–27. [https://doi.org/10.1145/3392854]
  • Isaac, M. S., Wang, R. J., Napper, L. E. & Marsh, J. K. (2024). To err is human: Bias salience can increase algorithm aversion. Computers in Human Behavior, 161. [https://www.sciencedirect.com/science/article/pii/S0747563224002127]
  • Jelson, A., Manesh, D., Jang, A., Dunlap, D., & Lee, S. W. (2025). An Empirical Study to Understand How Students Use ChatGPT for Writing Essays. arXiv preprint arXiv:2501.10551. [https://arxiv.org/abs/2501.10551]
  • Ji, Z., Long, X., Li, X., Wu, P., Wang, T., Zhang, H., Wu, W., Jiang, Z., Li, S., Zhao, S., Zheng, H., Liu, F., Lin, S., & Yan, R. (2023). Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12), 1–38. [https://doi.org/10.1145/3618339]
  • Juneja, P., & Mitra, T. (2021). Auditing E-Commerce Platforms for Algorithmically Curated Vaccine Misinformation. Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, 1–27. [https://doi.org/10.1145/3411764.3445250]
  • Kipnis, A., Voudouris, K., Schulze Buschoff, L. M., & Schulz, E. (2025). Metabench – a sparse benchmark of reasoning and knowledge in large language models. Proceedings of the 13th International Conference on Learning Representations (ICLR, 2025).
  • Kleinberg, J., Ludwig, J., Mullainathan, S., & Sunstein, C. R. (2019). Discrimination in the age of algorithms. SSRN. [https://doi.org/10.2139/ssrn.3329669]
  • Kosmyna, N., Hauptmann, E., Yuan, Y. T., Situ, J., Liao, X.-H., Beresnitzky, A. V., Braunstein, I., & Maes, P. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv preprint arXiv:2506.08872. [https://arxiv.org/pdf/2506.08872]
  • Krasheninnikov, D., Krasheninnikov, E., Mlodozeniec, B., Maharaj, T., & Krueger, D. (2024). Implicit meta-learning may lead language models to trust more reliable sources. arXiv preprint arXiv:2310.15047. [https://arxiv.org/abs/2310.15047]
  • LaForte, G., Bringsjord, S., & van Heuveln, B. (1998). Psychological and philosophical obstacles to the engineering of logically lucid AGI. In A. M. Ramsay (Ed.), Artificial intelligence: Methodology, systems, and applications (pp. 95–109). IOS Press.
  • Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, e253. [https://doi.org/10.1017/S0140525X1600109X]
  • Lee, S., Cai, Y., Meng, D., Wang, Z., & Wu, Y. (2024). Unleashing Large Language Models’ Proficiency in Zero-shot Essay Scoring. arXiv preprint arXiv:2404.04941. [https://arxiv.org/abs/2404.04941]
  • Lee, H.-P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025, April). The impact of generative AI on critical thinking: Self-reported reductions in cognitive effort and confidence effects from a survey of knowledge workers. Proceedings of the ACM CHI Conference on Human Factors in Computing Systems. [https://doi.org/10.1145/3706598.3713778]
  • Leung, E., & Urminsky, O. (2025). The narrow search effect and how broadening search promotes belief updating. Proceedings of the National Academy of Sciences of the United States of America, 122(13), e2408175122. [https://doi.org/10.1073/pnas.2408175122]
  • Li, J., Zhang, L., Meng, F., & Li, F. (2014). Recommendation algorithm based on link prediction and domain knowledge in retail transactions. Procedia Computer Science, 31, 875–881. [https://doi.org/10.1016/j.procs.2014.05.337]
  • Li, J., Zheng, X., Watanabe, I. & Ochiai, Y. (2024). A systematic review of digital transformation technologies in museum exhibition. Computers in Human Behavior, 161. [https://www.sciencedirect.com/science/article/pii/S0747563224002759]
  • Li, Y., Liu, J., Ren, J. (2019). Social recommendation model based on user interaction in complex social networks. PLoS ONE, 14(7), e0218764. [https://doi.org/10.1371/journal.pone.0218764]
  • Li, Y., Choi, D., Chung, J., Kushman, N., Schrittwieser, J., Leblond, R., Eccles, T., Keeling, J., Gimeno, F., Lago, A. D., et al. (2022). Competition-level code generation with AlphaCode. Nature, 607(7920), 738–744.
  • Liang, D., Charlin, L., McInerney, J., & Blei, D. M. (2016). Modeling user exposure in recommendation. Proceedings of the 25th International Conference on World Wide Web, 951–961. [https://doi.org/10.1145/2872427.2883038]
  • Liu, C. & Yin, B. (2024). Affective foundations in AI-human interactions: Insights from evolutionary continuity and interspecies communications. Computers in Human Behavior, 161. [https://www.sciencedirect.com/science/article/pii/S0747563224002747]
  • Lynch, A., Wright, B., Larson, C., Troy, K. K., Ritchie, S. J., Mindermann, S., Perez, E., & Hubinger, E. (2025). Agentic Misalignment: How LLMs Could be an Insider Threat. Anthropic Research. [https://www.anthropic.com/research/agentic-misalignment]
  • Magar, I., & Schwartz, R. (2022). Data contamination: From memorization to exploitation. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 597–608. [https://aclanthology.org/2022.acl-short.63/]
  • Matias, J. N. (2023). Humans and algorithms work together — so study them together. Nature, 617(7962), 248–251. [https://doi.org/10.1038/d41586-023-01521-z]
  • Matias, J. N. (2023). Influencing recommendation algorithms to reduce the spread of unreliable news by encouraging humans to fact-check articles, in a field experiment. Scientific Reports, 13(1), 11715. [https://doi.org/10.1038/s41598-023-38277-5]
  • Mayer, C. J., Mahal, J., Geisel, D., Geiger, E. J., Staatz, E., Zappel, M., Lerch, S. P., Ehrenthal, J. C., Walter, S. & Ditzen, B. (2024). AI-powered chatbots for mental health support: A systematic review of current trends, applications, benefits, and challenges. Computers in Human Behavior, 161. [https://www.sciencedirect.com/science/article/pii/S0747563224002875]
  • Miikkulainen, R. (2024). “Generative AI: An AI paradigm shift in the making?” AI Magazine, 45(1), 165–167. [https://doi.org/10.1002/aaai.12155]
  • Milana, M., Brandi, U., Hodge, S., & Hoggan-Kloubert, T. (2024). Artificial intelligence (AI), conversational agents, and generative AI: implications for adult education practice and research. International Journal of Lifelong Education, 43(1), 1–7. [https://doi.org/10.1080/02601370.2024.2310448]
  • Miller, K. J., Eckstein, M., Botvinick, M. M., & Kurth-Nelson, Z. (2023). Cognitive model discovery via disentangled RNNs. BioRXiv. [https://www.biorxiv.org/content/10.1101/2023.06.23.546250v1]
  • Mohanta, R. (2022). Deciphering value learning rules in fruit flies using a model-driven approach. Master’s thesis, Indian Institute of Science Education and Research Pune, Maharashtra, India 411008.
  • Musslick, S., Schulz, E., & Griffiths, T. L. (2025). Automating the practice of science: Opportunities, challenges, and implications. Proceedings of the National Academy of Sciences USA, 122(7), e2401238121. [https://doi.org/10.1073/pnas.2401238121]
  • Narayanan, A. (2023). Understanding Social Media Recommendation Algorithms. Knight First Amendment Institute. [http://knightcolumbia.org/content/understanding-social-media-recommendation-algorithms]
  • Nasraoui, O., & Shafto, P. (2016). Human-Algorithm Interaction Biases in the Big Data Cycle: A Markov Chain Iterated Learning Framework. arXiv preprint arXiv:1608.07895. [https://arxiv.org/abs/1608.07895]
  • Ning, L., Liu, L., Wu, J., Wu, N., Berlowitz, D., Prakash, S., … & Xie, J. (2025, May). User-LLM: Efficient LLM contextualization with user embeddings. Companion Proceedings of the ACM on Web Conference 2025, 1219–1223. [https://arxiv.org/abs/2402.13598]
  • Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84(3), 231–259. [https://doi.org/10.1037/0033-295X.84.3.231]
  • Niu, Q., Zhang, Y., Peng, J., Li, Y., Wang, T., Zhang, S., … & Zhang, X. (2024). A Comprehensive Review of Large Language Models in Cognitive Science: From Mechanisms to Applications. arXiv preprint arXiv:2409.02387. [https://arxiv.org/pdf/2409.02387]
  • Ognibene, D., et al. (2022). Challenging social media threats using collective well-being-aware recommendation algorithms and an educational virtual companion. Frontiers in Artificial Intelligence.
  • Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C. L., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., Schulman, J., Hilton, J., Kelton, F., Miller, L., Simens, M., Askell, A., Welinder, P., Christiano, P., Leike, J., & Lowe, R. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27072–27092. [https://arxiv.org/abs/2203.02155]
  • Park, J. S., Popowski, L., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2022). Social simulacra: Creating Populated Prototypes for Social Computing Systems. Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, 1–13. [https://doi.org/10.1145/3526113.3545688]
  • Park, P. S., Goldstein, S., O’Gara, A., Chen, M., & Hendrycks, D. (2023). AI Deception: A Survey of Examples, Risks, and Potential Solutions. arXiv preprint arXiv:2308.14752. [https://arxiv.org/abs/2308.14752]
  • Pedró, F., Subosa, M., Rivas, A., & Valverde, P. (2019). Artificial intelligence in education: Challenges and opportunities for sustainable development. UNESCO. [https://unesdoc.unesco.org/ark:/48223/pf0000366994]
  • Peláez-Sánchez, I. C., Velarde-Camaqui, D., & Glasserman-Morales, L. D. (2024). The impact of large language models on higher education: Exploring the connection between AI and Education 4.0. Frontiers in Education, 9, 1392091. [https://doi.org/10.3389/feduc.2024.1392091]
  • Perfors, A., & Navarro, D. J. (2014). Language evolution can be shaped by the structure of the world. Cognitive Science, 38(4), 775–793. [https://doi.org/10.1111/cogs.12093]
  • Peterson, J. C., Bourgin, D. D., Agrawal, M., Reichman, D., & Griffiths, T. L. (2021). Using large-scale experiments and machine learning to discover theories of human decision-making. Science, 372(6547), 1209–1214. [https://doi.org/10.1126/science.abe2629]
  • Pickering, J. B., Engen, V., & Walland, P. (2017). The interplay between human and machine agency. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 10271, 47–59. [https://doi.org/10.1007/978-3-319-58071-5_4]
  • Press Room. (2025, February 7). The Evolving Search Landscape: How AI and Social Media are Challenging Traditional Search Engines. DISA. [https://www.disa.mil/News/The-Evolving-Search-Landscape-How-AI-and-Social-Media-are-Challenging-Traditional-Search-Engines]
  • Rofouei, M., Shukla, A., Wei, Q., et al. (2024). Search with stateful chat (U.S. Patent Application No. US 2024/0289407 A1). Google LLC. [https://patents.google.com/patent/US20240289407A1/en]
  • Rule, J. S., Piantadosi, S. T., Cropper, A., Ellis, K., Nye, M., & Tenenbaum, J. B. (2024). Symbolic metaprogram search improves learning efficiency and explains rule learning in humans. Nature Communications, 15(1), 6847. [https://doi.org/10.1038/s41467-024-51000-8]
  • Russell, S., & Norvig, P. (2009). Artificial intelligence: A modern approach (3rd ed.). Prentice Hall.
  • Salewski, L., Alaniz, S., Rio-Torto, I., Schulz, E., & Akata, Z. (2023). In-context impersonation reveals large language models’ strengths and biases. Advances in Neural Information Processing Systems, 36. [https://openreview.net/pdf?id=kGteeZ18Ir]
  • Schaeffer, J., Burch, N., Björnsson, Y., & Lake, R. (2007). Checkers is solved. Science, 317(5844), 1518–1522. [https://doi.org/10.1126/science.1144079]
  • Schellewald, A. (2022). Theorizing “stories about algorithms” as a mechanism in the formation and maintenance of algorithmic imaginaries. Social Media + Society, 8(1). [https://doi.org/10.1177/20563051221077025]
  • Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1(2), 143–186. [https://doi.org/10.1080/0022250X.1971.9989808]
  • Shao, Y., Jiang, Y., Kanell, T. A., Xu, P., Khattab, O., & Lam, M. S. (2024). Assisting in Writing Wikipedia-like Articles From Scratch with Large Language Models. arXiv preprint arXiv:2402.14207. [https://arxiv.org/abs/2402.14207]
  • Sharlin, S., & Josephson, T. R. (2024). In Context Learning and Reasoning for Symbolic Regression with Large Language Models. arXiv preprint arXiv:2410.17448. [https://arxiv.org/pdf/2410.17448]
  • Sharma, N., Liao, Q. V., & Xiao, Z. (2024). Generative echo chamber? Effect of LLM-powered search systems on diverse information seeking. Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI ’24), 1–17. [https://doi.org/10.1145/3613904.3642459]
  • Shen, Y., Heacock, L., Elias, J., Hentel, K. D., Reig, B., Shih, G., & Moy, L. (2023). ChatGPT and Other Large Language Models Are Double-edged Swords. Radiology, 307(2), e230163. [https://doi.org/10.1148/radiol.230163]
  • Shi, K., Dai, H., Li, W.-D., Ellis, K., & Sutton, C. (2023). LambdaBeam: Neural program search with higher-order functions and lambdas. Thirty-seventh Conference on Neural Information Processing Systems. [https://openreview.net/forum?id=qVMPXrX4FR]
  • Shin, D., & Park, Y. J. (2019). Role of fairness, accountability, and transparency in algorithmic affordance. Computers in Human Behavior, 98, 277–284. [https://doi.org/10.1016/j.chb.2019.04.019]
  • Shin, D., & Sundar, S. S. (2022). AI agency vs. human agency: Understanding human–AI interactions on TikTok and their implications for user engagement. Journal of Computer-Mediated Communication, 27(5), zmac007. [https://doi.org/10.1093/jcmc/zmac007]
  • Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., & Anderson, R. (2023). The curse of recursion: Training on generated data makes models forget. arXiv preprint arXiv:2305.17493. [https://arxiv.org/abs/2305.17493]
  • Singh, I., Singh, G., & Modi, A. (2021). Pre-trained Language Models as Prior Knowledge for Playing Text-Based Games. arXiv preprint arXiv:2107.08408. [https://arxiv.org/abs/2107.08408]
  • Small, G. W., Moody, T. D., Siddarth, P., & Bookheimer, S. Y. (2009). Your brain on Google: Patterns of cerebral activation during Internet searching. The American Journal of Geriatric Psychiatry, 17(2), 116–126. [https://doi.org/10.1097/JGP.0b013e3181953a02]
  • Sparrow, B., Liu, J., & Wegner, D. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333(6043), 776–778. [https://doi.org/10.1126/science.1207745]
  • Stagnaro, M. N., Druckman, J., Berinsky, A. J., Arechar, A. A., Willer, R., & Rand, D. (2024). Representativeness versus Response Quality: Assessing Nine Opt-In Online Survey Samples. OSF Preprints. [https://osf.io/preprints/psyarxiv/h9j2dc]
  • Su, J., & Yang, W. (2023). Unlocking the Power of ChatGPT: A Framework for Applying Generative AI in Education. ECNU Review of Education, 6(3), 355–366. [https://doi.org/10.1177/20965311231168423]
  • Sun, F., et al. (2025). Large Language Models are overconfident and amplify human bias. arXiv preprint arXiv:2505.02151. [https://arxiv.org/pdf/2505.02151]
  • Sun, W., Nasraoui, O., & Shafto, P. (2020). Evolution and impact of bias in human and machine learning algorithm interaction. PLoS ONE, 15(8), e0235502. [https://doi.org/10.1371/journal.pone.0235502]
  • Teknium, R., Quesnelle, J., & Guang, C. (2024). Hermes 3 technical report. arXiv preprint arXiv:2408.11857. [https://arxiv.org/abs/2408.11857]
  • Tian, E. (2024). AI Learning: Mechanisms, Concepts, and Human Uniqueness.
  • Tucker, J. A., Guess, A. M., Barberá, P., & Günther, W. A. (2018). Social media, political polarization, and political disinformation: A review of the scientific literature. Annual Review of Political Science, 21(1), 167–187. [https://doi.org/10.1146/annurev-polisci-051116-052445]
  • Turner, E., & Rainie, L. (2020, March 5). Most Americans rely on their own research to make big decisions, and that often means online searches. Pew Research Center. [https://www.pewresearch.org/internet/2020/03/05/most-americans-rely-on-their-own-research-to-make-big-decisions-and-that-often-means-online-searches/]
  • Valipour, M., You, B., Panju, M., & Ghodsi, A. (2021). SymbolicGPT: A Generative Transformer Model for Symbolic Regression. arXiv preprint arXiv:2106.14131. [https://arxiv.org/abs/2106.14131]
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention Is All You Need. Advances in Neural Information Processing Systems, 30. [https://arxiv.org/abs/1706.03762]
  • Wang, J., & Fan, W. (2025). The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: Insights from a meta-analysis. Humanities and Social Sciences Communications, 12(1), 621. [https://doi.org/10.1057/s41599-025-04787-y]
  • Wang, Z., Huang, B., Xu, Z., Gu, S., Lin, Y., Wang, X., Feng, K., Li, K., Tang, J., & Wang, H. (2025). HelpSteer2-Preference: Complementing ratings with preferences. Proceedings of the 13th International Conference on Learning Representations (ICLR 2025).
  • Warner, B. (2024). Smarter, better, faster, longer: A modern bidirectional encoder for fast, memory efficient, and long context finetuning and inference. arXiv preprint arXiv:2412.13663. [https://arxiv.org/abs/2412.13663]
  • Watkins, B. A., & Woods, C. L. (2024). Beyond the code: The impact of AI algorithm transparency signaling on user trust and relational satisfaction. Public Relations Review, 50(5), 102507. [https://doi.org/10.1016/j.pubrev.2024.102507]
  • Whipp, J. L., & Chiarelli, S. (2004). Self-regulation in a web-based course: A case study. Educational Technology Research and Development, 52(4), 5–21. [https://doi.org/10.1007/BF02504714]
  • Willoughby, T., Anderson, S. A., Wood, E., Mueller, J., & Ross, C. (2009). Fast searching for information on the Internet to use in a learning context: The impact of domain knowledge. Computers & Education, 52(3), 640–648. [https://doi.org/10.1016/j.compedu.2008.11.009]
  • Wilson, M. (2017). Algorithms (and the) everyday. Information, Communication & Society, 20(1), 137–150. [https://doi.org/10.1080/1369118X.2016.1200645]
  • Xing, X., Shi, F., Huang, J., Zhang, X., & Wu, Y. (2025). On the caveats of AI autophagy. Nature Machine Intelligence, 7(2), 172–180. [https://doi.org/10.1038/s42256-025-00984-1]
  • Yax, N., Oudeyer, P.-Y., & Palminteri, S. (2024). Assessing contamination in large language models: Introducing the LogProber method. arXiv preprint arXiv:2408.14352. [https://arxiv.org/abs/2408.14352]
  • Yeomans, M., Shah, A., Mullainathan, S., & Kleinberg, J. (2019). Making sense of recommendations. Journal of Behavioral Decision Making, 32(4), 403–414. [https://doi.org/10.1002/bdm.2118]
  • Yin, J., & Qiu, X. (2021). AI technology and online purchase intention: Structural equation model based on perceived value. Sustainability, 13(10), 5671. [https://doi.org/10.3390/su13105671]
  • Zador, A., Richards, B. A., & Gershman, S. J. (2023). Catalyzing next-generation artificial intelligence through NeuroAI. Nature Communications, 14(1), 1597. [https://doi.org/10.1038/s41467-023-37053-4]
  • Zhang, S., Hu, Y., & Bian, G. (2017). Research on string similarity algorithm based on Levenshtein distance. 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), 2247–2251. [https://doi.org/10.1109/IAEAC.2017.8054419]
  • Zhang, S., Zhao, X., Zhou, T., & Kim, J. H. (2024). Do you have AI dependency? The roles of academic self-efficacy, academic stress, and performance expectations on problematic AI usage behavior. International Journal of Educational Technology in Higher Education, 21(1), 34. [https://doi.org/10.1186/s41239-024-00467-0]
  • Zhou, J., Muller, H., Holzinger, A., & Chen, F. (2024). Ethical ChatGPT: Concerns, Challenges, and Commandments. Electronics, 13(17), 3417. [https://doi.org/10.3390/electronics13173417]
  • Zook, M., Barocas, S., Crawford, K., Keller, E., Gangadharan, S. P., Goodman, A., et al. (2017). Ten simple rules for responsible big data research. PLoS Computational Biology, 13(3), e1005399. [https://doi.org/10.1371/journal.pcbi.1005399]

Disclosure Statement

This post was produced according to the approach outlined in The Art of Transparent AI Collaboration Workflow.
