AI vs. Analog

Physical Media as a Fortress for Individual Thought

Core Argument: Digital media encourages passive consumption and reliance on AI algorithms to tell us what’s true. In contrast, Physical Media (books, handwriting, stone inscriptions) requires active engagement. To read a faded headstone at a cemetery or a handwritten 19th-century letter, you have to use your own brain, literacy, and patience. Physical Media doesn’t have an “autofill” or “summarize” button. You are forced to think for yourself.

Digital Atrophy

To understand the impact that AI and digital/social media have on our brains, we have to look at something I call Digital Atrophy. I use this term because, essentially, we are losing our ability over time to acquire knowledge, thoroughly understand what we are learning, and develop independent ideas. As we outsource our literacy to algorithms and our social engagement to screens, the individual capacity for deep thought is in a consistent state of decline. By returning to physical archives and media, we reclaim the patience, intellectual curiosity, and stamina required to truly think for ourselves in a world of automated summaries.

While artificial intelligence systems rely heavily on Bayesian probability — constantly updating predictions based on new data — the human brain operates with additional layers of sophistication. As Harvard education researcher Tina Grotzer explains, “Research from my lab found that kindergarteners used strategic information in playing a game that enabled them to make informed moves more quickly than a purely Bayesian approach. Further, our human minds can detect critical distinctions or exceptions to covariation patterns that drive conceptual change and model revisions that a purely Bayesian approach would sum across.” In other words, even young children outperform pure probability models by using intuition, strategy, and the ability to spot when a single exception breaks the rule — something no current AI does naturally.
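To make the contrast concrete, here is a minimal sketch (my own illustration, not taken from Grotzer's research) of the Bayesian updating described above: the system holds a numeric belief and mechanically revises it with each new piece of evidence, with no intuition and no sense of when an exception should break the pattern.

```python
# A minimal sketch of Bayesian updating: a belief is revised every time
# new evidence arrives, using Bayes' rule and nothing else.
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Example: start 50/50 on whether a coin is biased toward heads (75% heads).
# Each flip nudges the belief up or down -- purely statistical "learning."
belief = 0.5
for flip_is_heads in [True, True, False, True]:
    if flip_is_heads:
        belief = bayes_update(belief, 0.75, 0.5)  # heads is likelier if biased
    else:
        belief = bayes_update(belief, 0.25, 0.5)  # tails is likelier if fair
    print(round(belief, 3))
```

The point of the sketch is what it cannot do: it will keep summing evidence forever, whereas (per Grotzer) even a kindergartener can notice that a single telling exception means the whole model needs revising.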

Digital Literacy

Digital literacy in this day and age is often just skimming. Students have “read” texts, but in reality somebody — or, more accurately, something — else may have done the heavy lifting, while students pull out keywords and ideas that help them get by on exams. I fell victim to this when AI and these efficient systems were first introduced, and it was highly concerning when I realized how quickly you can lose your own sense of self and independent thought, especially when tackling more complex ideas. Also quoted in the Harvard Gazette is Senior Lecturer in Public Policy Dan Levy, who co-wrote the book “Teaching Effectively with ChatGPT.” Levy shares my sentiment, emphasizing, “If a student uses AI to do the work for them, rather than to do the work with them, there’s not going to be much learning. No learning occurs unless the brain is actively engaged in making meaning and sense of what you’re trying to learn, and this is not going to occur if you just ask ChatGPT, ‘Give me the answer to the question that the instructor is asking.’” Teachers, especially at higher levels, are facing a difficult decision on whether to allow the use of generative AI. Some have opted to allow it only as a tool to make writing more efficient or to fix spelling errors, while others have banned it entirely, citing academic integrity and threatening punishment if students are caught. It’s a moral grey area, and a conversation that needs much more research before a conclusive argument can truly be made.

Death of the “Deep Dive”

AI reliance has become a cognitive shortcut, commonly known as cognitive offloading. Researchers, and especially students, in traditional academic settings used to rely more on primary source material and on critical thinking skills to craft a compelling thesis. In his article “Cognitive offloading and the reshaping of human thought: The subtle influence of Artificial Intelligence,” Vincent Hooper of the SP Jain School of Global Management touches on how AI is reshaping how we learn. Hooper describes how “studies suggest AI transforms traditional cognitive offloading into delegated thinking, where users not only store information externally but also adopt AI-generated outputs with minimal critical scrutiny” (Hooper). Students who use AI to write essays, dissertations, or other deep academic projects often bypass cognitive struggle. Hooper gives the example of a graduate student researching climate policy. The student typically would use primary sources, identify arguments, and ultimately draft well-thought-out paragraphs on their chosen topic. Instead, Hooper explains, “The cognitive architecture shifts fundamentally: attention moves from deep engagement with source material to evaluating AI output quality; memory encoding bypasses the effortful retrieval practice that consolidates learning; and deliberative reasoning contracts to acceptance or rejection of pre-formed arguments rather than their independent construction” (Hooper). This is not the only example of this kind of offloading. My personal research and work in my own media courses have shown a deeper societal shift toward a more convenient, efficient, and “easy” world, where timing is often prioritized over quality and deeper philosophizing about the world. Media pushes herd mentality, AI pushes the offloading of deeper thinking, and advertising pushes agendas without us ever realizing it — the age of independent thinkers seems to be at risk around every corner.

Author’s Statement

So, maybe you are curious after reading this about where I, the author, stand. I will admit I am quite biased when it comes to AI. However, I’m not completely on one side. It’s true — I don’t like using AI as a research tool. I don’t want algorithms finding my answers or shaping my thinking. But I do like robotics: physical machines, mechanical systems, robots with specific functions and settings, even humanoid-styled ones. There’s a real difference between a tool that thinks for you and a tool that moves in the world. I’m not anti-tech across the board. My resistance is not to computation or automation broadly, but specifically to generative AI as a research tool. I reject the outsourcing of reading, interpretation, and synthesis to statistical language models. Science is cool, though! I support the hardworking researchers who are finding new ways to use mechanical and automated systems to improve lives, learn new things, and even just have some fun.

Quiz on AI and Independent Thought


Results

Result A

You are someone who heavily relies on physical media and independent thinking, and you haven’t fully bought into AI. You have a healthy amount of skepticism about these newer algorithms and systems. You can be found writing in a notebook, reading physical books, and conducting fully independent research. Physical media is of great value to you.

Result B

You are someone on the border — you are a bit skeptical of full AI reliance, but aren’t sure if you are ready to fully discredit its use. You probably use your computer for most things (research, news, etc.) and once in a while use Google summaries, but you still have independent ideas and do your own research. You read physical books once in a while. You haven’t fully experienced digital atrophy yet.

Result C

You don’t truly care about the impact of AI so much as efficiency and ease. Maybe you used to have some healthy skepticism, but now AI is a tool to you, not something to fear. You likely use it to help brainstorm research topics and edit writing. It has become much more difficult for you to think freely and imaginatively. Digital media is your top source of information.

#1. When reading a physical book or long-form article, how many minutes pass before you feel the “phantom itch” to check a notification or switch tabs?

#2. If you need to understand a complex historical event, which path would you take first?

#3. In a quiet academic setting (like a seminar or library), how often do you reach for something, such as your phone, to avoid the awkwardness of deep thought?

#4. When was the last time you remember writing something by hand that was longer than a note?

#5. In a world of generative AI, how much do you trust your own ability to spot fake information without a tool to tell you it’s been edited?

#6. When using AI to help with a task, how much of the final voice would you say is your own?

#7. When was the last time you read an opinion that made you uncomfortable because you found it in a physical book, rather than having an algorithm feed it to you?

#8. When you see an AI-generated image or text, do you think about the human artists whose work was used to create it?

#9. How do you feel about imperfect media, like a hand-stapled zine or a used book with someone else’s notes?

#10. If all digital servers went dark tonight, how much of your personal “intellectual library” would remain sitting on your bookshelf?

#11. When you need to verify a quote or fact is true, how deep do you go?

#12. When you encounter a difficult, complex text in a seminar or archive, how do you see your role in that moment?


Resources/Ways to help yourself

The good news, despite all of this, is that hope is not lost for those who feel they are too reliant on these systems and want to reclaim their independence and critical thinking. The brain is more flexible than many of us realize. Here are some ways you can help improve the way you think:

  • Be an active thinker. Quiz yourself on new information as you learn it, instead of just reading it over once.
  • Review material over increasing intervals to solidify your long-term memory.
  • If you haven’t done it in ages, try to read a little bit below your level, and increase in difficulty over time. Try to diversify the topics and genres that you read.
  • Question everything you read and see. Don’t be afraid to mess up (such as mistaking something AI-generated for real), but try your best to build those observational skills.
  • Try to write your own first draft. Yes, even if you think it sucks, even if it’s messy. Messy is human.
  • Pick up new hobbies to keep your brain busy. Puzzles, crafting, and researching topics that interest you are some examples.
  • Learn to be okay with silence. You can start small, such as meditating for 10 minutes a day. Silence is useful, even if uncomfortable at first.
  • Spend time in nature and around real people. We need a break from screens and constant isolation.

Books to read

  • The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want. Authors: Emily M. Bender & Alex Hanna (2025)
  • Artificial Intelligence and Barbarism: A Critique of Digital Reason. Author: Alexandros Schismenos (2025)
  • Dialogues on Minds, Machines, and AI. Author: Rocco J. Gennaro (2026)
  • Deep Work: Rules for Focused Success in a Distracted World. Author: Cal Newport
  • Thinking, Fast and Slow. Author: Daniel Kahneman (Nobel Prize winner)

Sources and Links for further education on this topic:

https://journals.lww.com/annals-of-medicine-and-surgery/fulltext/2025/08000/how_ai_quietly_undermines_the_joy_and_effort_of.1.aspx

https://news.harvard.edu/gazette/story/2025/11/is-ai-dulling-our-minds/

https://www.mdpi.com/2075-4698/15/1/6

https://www.media.mit.edu/publications/your-brain-on-chatgpt/

https://time.com/7295195/ai-chatgpt-google-learning-school/

https://colloquia.uhemisferios.edu.ec/index.php/colloquia/en/article/view/185/170

Videos on this topic: