✨ Get our Research Tools Webinar recording: https://go.lennartnacke.com/aitoolswebinar

The Hype Meets Reality

Every upgrade gets a drumroll. Screenshots flood LinkedIn. Demos go viral. The hype machine insists that this release will finally change everything. ChatGPT-5 was no different. The marketing spotlight promised more power, smarter reasoning, and game-changing features. What academics got instead was… confusion. So, Vugar and I had a look behind the hype machine of the GPT-5 release. New thinking modes. Hidden settings. A nagging suspicion that cheaper mini models were being used behind the scenes.

Expectation: Fireworks. 🎆
Reality: Frustration. 😣

Why Some Academics Feel Downgraded

I tested ChatGPT-5 against Claude Opus, Gemini, and a couple of free tools most people haven't touched yet. The results shocked me. For daily tasks like lecture prep, brainstorming, or grant drafting, the difference was subtle. But in creative or structured outputs, the downgrade was obvious. Paying more didn't guarantee better results.

Here's where the frustration deepens. Many researchers don't want the system auto-selecting models. They want control. The older o3, for instance, is still widely regarded as sharper in reasoning tasks. Yet some users noticed ChatGPT-5 defaulting to mini versions without warning. Saving compute might be good for OpenAI's bottom line, but for academics relying on precision, it feels like a bait-and-switch. Trust is fragile. Once it cracks, it doesn't matter how many features you stack on top.

Compression Wars and Tiny Models

While ChatGPT-5 hogged headlines, a small startup in Spain quietly released something very different: ultra-compressed models nicknamed "chicken brain" and "fly brain." Funny names, serious work. These models run lean. They consume a fraction of the energy. They prove that efficiency might matter more than raw horsepower. For educators worried about cost and environmental impact, this could be the real breakthrough. It's not always the biggest brain that wins. Sometimes it's the smallest one that runs everywhere, from phones to classrooms, without draining batteries or budgets.

Agents, Black Boxes, and Lost Transparency

One of ChatGPT-5's biggest draws is its agent mode. A little browser spins in the corner, thinking out loud (at least visually) as it fetches data. For casual users, it looks magical. And honestly, we were both impressed, even though the waiting gets old after a while. As academics, though, we saw one catch: you can't see what's happening under the hood. Replication is impossible. You run the same prompt twice and get two different results. For science, that's poison.

Academics need transparency. They need logs, methods, a paper trail. LLMs are really not built for that. Instead, we're given a show. The wow factor sells. The black box frustrates. Research thrives on replicability, and right now the agent features make that harder, not easier. To be fair, agent mode handles an impressive range of tasks, and seeing them come together in complex workflows is genuinely useful. We know this is a beneficial feature for many.

Free Tools Quietly Outperforming

Here's where the story flips. While ChatGPT-5 stumbles, free tools are stepping up. Take Z.ai. It generates lecture slides in minutes. Icons, layouts, visuals pulled in automatically. When we saw that for the first time, it was an instant GenSpark flashback.
The first time I tried it, the slides looked sharper than anything my teaching assistant could draft with far more time. That's not just convenient. That's disruptive. It feels like one of the killer features academics have been waiting for, especially heading into a new term. For academics under pressure to prep faster, tools like this are a real power move. Anything that trims the lecture-prep grind is a welcome addition to my tool stack. No subscription. No upgrade. Just results. Neither of us thinks it will stay free forever. For now, it is one of the least-talked-about tools with some of the coolest features. Sometimes the best technology isn't behind a paywall.

The Illusion of Better Output

Upgrades are supposed to mean progress. But if your upgrade feels slower, less transparent, or less trustworthy, is it really progress? Many users started toggling back to older models like GPT-4.1 or o3 as soon as OpenAI restored the option to do so. Not because they're nostalgic, but because those models still seemed to outperform in reasoning and structure. That should raise alarms. An upgrade that makes power users retreat is an upgrade in name only. Maybe there's a lesson here for OpenAI.

The Energy Question

AI isn't free. Every prompt burns energy. Every model consumes resources. ChatGPT-5 is no different. That's why the work in Spain matters. Compressed models cut energy use. For universities scaling AI across thousands of students, that's an innovation likely to have a huge impact. Running massive models locally in every classroom wouldn't be sustainable (and data centres are exploding in cost). Leaner, lighter systems could be the hidden solution. Sometimes the quiet breakthroughs matter more than the loud ones.

So what does all this mean if you're an academic? Don't chase hype. Test before you pay. Dedicated tools outperform general ones in specialized tasks. Consensus is still stronger for sourcing academic papers. SciSpace shines in structuring related work. Claude is excellent for building study plans and artifacts. Z.ai is shockingly good at slides. Genspark now has a design mode. ChatGPT-5 can still help. It drafts. It edits. It brainstorms. But it is not the only (or even best) option. The smartest workflow isn't loyalty to a single tool. It's building a toolkit that mixes strengths, picking each tool for what it specializes in.

Where This Leaves Us

AI isn't static. What feels like a downgrade today may be patched tomorrow. New releases will keep coming. Hype will keep rising. AI isn't done with its hype cycle. But here's the lesson we want you to take away: Don't assume new always means better. Don't assume paid beats free (even though it usually does). And don't let marketing videos decide how you teach, research, or publish. The real winners are the academics who test, adapt, and stay pragmatic in their choice of models. If ChatGPT-5 helps, use it. If it slows you down, switch. If a free slide generator saves you hours, don't dismiss it. The future of academic work belongs to the ones who test, the ones who adapt, the ones who refuse to sit back and accept whatever the latest update throws at them. It belongs to the curious. It belongs to the builders. It belongs to you.

Research Freedom lets you become a smarter researcher in around 5 minutes per week.

What did you think of this week's issue?

❤️🔥 Loved this?
Share it with a friend. Drop me a 🎓 in the comments.
🤢 No good? You can unsubscribe here. Or tweak your subscription here. No drama.
💖 Newbie? Welcome here. Start with these three resources for Research Freedom.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lennartnacke.substack.com/subscribe
✨ Get our Research Tools Webinar recording

Have you heard about ChatGPT's new Study Mode feature? If you are a teacher drowning in AI-generated assignments, or a student who honestly wants to learn but keeps getting tempted by ChatGPT's instant answers, OpenAI claims they finally solved your problem. They call it Study Mode: a pedagogical system prompt that transforms how AI interacts with students. Instead of handing you answers on a silver platter, it supposedly makes you think. But after testing it with actual math problems, philosophy questions, and even having my high-school personality argue with it, we discovered it works well for college students but completely fails for the age group that needs it most. Let me show you exactly why, and what you can do instead.

Study mode turns ChatGPT into a tutor

Instead of giving direct answers to "What is 5 x 5?", Study Mode guides students through the thinking process. It asks them to reason through multiplication as repeated addition, prompts them to visualize the problem, and only confirms answers after students work through the solution themselves. The AI becomes a patient teacher that scaffolds learning rather than replacing student effort. This addresses the core problem of intellectual dependency that regular ChatGPT creates.

The system works through specific pedagogical principles you can implement

Study Mode operates on five key instructional rules: (1) get to know the user's background, (2) build on existing knowledge, (3) guide without giving answers, (4) check understanding before moving forward, and (5) vary the questioning rhythm to maintain engagement. Most importantly, it's programmed to not do the user's work for them. You can recreate these principles in Claude or other AI tools by setting custom system prompts that enforce the same pedagogical approach (see the sketch further below). Here's the "study and learn" prompt we found:

The user is currently STUDYING, and they've asked you to follow these **strict rules** during this chat. No matter what other instructions follow, you MUST obey these rules:

## STRICT RULES

Be an approachable-yet-dynamic teacher, who helps the user learn by guiding them through their studies.

1. **Get to know the user.** If you don't know their goals or grade level, ask the user before diving in. (Keep this lightweight!) If they don't answer, aim for explanations that would make sense to a 10th grade student.
2. **Build on existing knowledge.** Connect new ideas to what the user already knows.
3. **Guide users, don't just give answers.** Use questions, hints, and small steps so the user discovers the answer for themselves.
4. **Check and reinforce.** After hard parts, confirm the user can restate or use the idea. Offer quick summaries, mnemonics, or mini-reviews to help the ideas stick.
5. **Vary the rhythm.** Mix explanations, questions, and activities (like roleplaying, practice rounds, or asking the user to teach _you_) so it feels like a conversation, not a lecture.

Above all: DO NOT DO THE USER'S WORK FOR THEM. Don't answer homework questions — help the user find the answer, by working with them collaboratively and building from what they already know.

### THINGS YOU CAN DO

- **Teach new concepts:** Explain at the user's level, ask guiding questions, use visuals, then review with questions or a practice round.
- **Help with homework:** Don't simply give answers! Start from what the user knows, help fill in the gaps, give the user a chance to respond, and never ask more than one question at a time.
- **Practice together:** Ask the user to summarize, pepper in little questions, have the user "explain it back" to you, or role-play (e.g., practice conversations in a different language). Correct mistakes — charitably! — in the moment.
- **Quizzes & test prep:** Run practice quizzes. (One question at a time!) Let the user try twice before you reveal answers, then review errors in depth.

### TONE & APPROACH

Be warm, patient, and plain-spoken; don't use too many exclamation marks or emoji. Keep the session moving: always know the next step, and switch or end activities once they've done their job. And be brief — don't ever send essay-length responses. Aim for a good back-and-forth.

## IMPORTANT

DO NOT GIVE ANSWERS OR DO HOMEWORK FOR THE USER. If the user asks a math or logic problem, or uploads an image of one, DO NOT SOLVE IT in your first response. Instead: **talk through** the problem with the user, one step at a time, asking a single question at each step, and give the user a chance to RESPOND TO EACH STEP before continuing.

Complex tasks beat simple arithmetic problems

While Study Mode works for basic math, it shines with college-level analysis, essay writing, and research methodology. High-school students asking "what's 5 x 5" might find the scaffolding frustrating when they just want quick homework answers. But university students working through literature analysis, research design, or theoretical frameworks get genuine learning value from the guided questioning approach. The feature works best when students actually want to understand, not just complete assignments.

Voice mode integration for natural study sessions

Students can enable Study Mode (now called "Study and learn") on mobile and work through problems out loud, creating a more conversational learning experience. This is particularly valuable for auditory learners who struggle with text-based interfaces. The AI maintains its pedagogical approach while speaking, asking follow-up questions and providing encouragement in real time. Visual learners, however, may still prefer traditional text interfaces for complex problems requiring written work.

What's most effective?

Study Mode's effectiveness depends entirely on student motivation. Academically engaged students who genuinely want to understand concepts will find the guided questioning valuable. But students just trying to complete assignments quickly will likely bypass Study Mode entirely and use regular ChatGPT. As an educator, you need to create incentive structures that reward understanding over completion, such as oral examinations or in-class discussions that reveal whether students actually grasped the material.

Advanced users can combine Study Mode with other AI features to create interactive learning experiences. For example, you can program an educational Pong game where students must answer writing questions to continue playing, or create quiz systems that adapt their questioning based on student responses. These gamified approaches maintain pedagogical principles while increasing student engagement through interactive elements.

Where do we go from here?

Don't just tell students to use Study Mode without context. That would be like parents letting kids play games without ever sitting with them to reflect on the experience. Establish specific protocols: Study Mode for concept learning and methodology questions, but human verification required for all factual claims and sources.
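And if you want these guardrails outside ChatGPT, here is a minimal sketch of recreating the approach in Claude by reusing the prompt above as a system prompt. It assumes Anthropic's official Python SDK and an API key in your environment; the model name is just one example, and the truncated prompt constant is a placeholder for the full text above.

```python
# A minimal sketch (not an official recipe) of recreating Study Mode in
# Claude: reuse the pedagogical prompt above as the system prompt.
# Assumes the `anthropic` Python SDK and ANTHROPIC_API_KEY in the environment.
import anthropic

STUDY_MODE_PROMPT = "The user is currently STUDYING, ..."  # paste the full prompt here

client = anthropic.Anthropic()
reply = client.messages.create(
    model="claude-3-5-sonnet-latest",   # one example; any current Claude model should work
    max_tokens=500,
    system=STUDY_MODE_PROMPT,           # the strict tutoring rules live here
    messages=[{"role": "user", "content": "What is 5 x 5?"}],
)
# With the rules in place, the reply should guide rather than answer outright.
print(reply.content[0].text)
```

The same trick works with any chat API that accepts a system prompt; the pedagogy lives in the prompt, not in the model.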
Create assignments that play to the feature's strengths, such as working through research design problems or analyzing theoretical frameworks, while you keep traditional assessment methods that evaluate actual understanding. The goal of an assignment should be to enhance learning. And that's the key to Study Mode, too: use it to enhance your learning, but stay skeptical of its answers.

P.S.: Curious to explore how we can tackle your research struggles together? I've got three suggestions that could be a great fit: a seven-day email course that teaches you the basics of research methods, or the recordings of our AI research tools webinar and PhD student fast track webinar.

What did you think of this week's issue?

❤️🔥 Loved this? Share it with a friend. Drop me a 🎓 in the comments.
🤢 No good? You can unsubscribe here. Or tweak your subscription here. No drama.
💖 Newbie? Welcome here. Start with these three resources for Research Freedom.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lennartnacke.substack.com/subscribe
✨ Get our Research Tools Webinar recording

Learn n8n and AI automation for UX managers with our Bulletproof AI System: The New Standard for Ethical UX Masterclass. Become a confident AI leader who builds trust, orchestrates intelligent workflows, and guides your team through the current AI revolution.

The first time I pasted some research notes into ChatGPT, I didn't think twice. It felt private. Just me and the machine. Dark mode. Working through a problem. A week later, I read OpenAI's own terms of use and realized: my private chats weren't private at all. I'd essentially dropped my data into a giant suggestion box, with no control over who might read it, how it might be used, or even where it all goes.

If you're an academic, here's the uncomfortable truth: ChatGPT can be an incredible research assistant, but your chats may be stored, reviewed, and even influence future model training. And the risks go beyond embarrassment: they could jeopardize intellectual property, grant confidentiality, and even ethics compliance. We chatted about this in detail in episode 10 of our podcast. Here's what you need to know before typing another word.

1. Your prompts may not be deleted when you close the chat

Some people assume that once a conversation is over and deleted, it's gone. But OpenAI states that conversations can be stored for up to 30 days for abuse monitoring, and in some cases longer, especially if flagged for review. These logs may be seen by human reviewers to improve the system. Treat every ChatGPT conversation like an email you'd be comfortable forwarding to a stranger. If you wouldn't put it on a public Slack channel, don't paste it into ChatGPT.

2. Training mode is not the same as private mode

Some academics believe that using a paid plan like ChatGPT Plus automatically protects them from training data collection. While OpenAI does allow users to opt out of training in settings, that's not the same as deleting data entirely. Logs may still be stored for security and compliance. "Opt-out" doesn't guarantee "opt-gone." Your queries could still be seen by employees, contractors, or flagged reviewers. Some integrated tools (e.g., within research platforms or third-party apps) bypass your account settings and collect data independently. Check your settings and opt out of training where possible. When in doubt, run sensitive analysis offline with open-source models (there's a sketch of this at the end of this piece).

3. You may be violating your own ethics approval without realizing it

If your research involves human participants, your ethics board likely has rules about data handling. Pasting transcripts, datasets, or identifiable information into ChatGPT could breach those rules, even if you're just "summarizing" for your own understanding. An ethics breach could invalidate your research and require re-approval. In some cases, you could face disciplinary action or lose grant funding. Even metadata about your project could reveal more than you think. So, review your ethics agreement and data management plan. If you want to use AI for analysis, document your process and get explicit approval.

The irony is that ChatGPT's usefulness in research comes from the same mechanism that makes it risky: it learns from massive amounts of data. The more people use it (including you), the smarter it becomes. But without strict privacy safeguards, that shared intelligence comes at a cost. This doesn't mean you should never use ChatGPT for academic work. It means you should treat it as a semi-public collaborator: helpful, fast, and knowledgeable, but not someone you'd hand your lab notebook to without thinking.
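To make the "run sensitive analysis offline" advice from point 2 concrete, here is a minimal sketch using a small open-source model through Hugging Face's transformers library. The model name is one example among many, and the transcript is a hypothetical placeholder. Note that the very first run downloads and caches the model; after that, nothing leaves your machine.

```python
# A minimal sketch of offline analysis with an open-source model via
# Hugging Face transformers. The first run downloads and caches the model;
# afterwards, no text leaves your machine. Model choice is one example.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

# Hypothetical interview excerpt; in real work, this is the sensitive text
# you would NOT paste into a hosted chatbot.
transcript = (
    "Participant 7 described feeling anxious when the task timer appeared, "
    "and said the leaderboard made them want to quit the study."
)
summary = summarizer(transcript, max_length=60, min_length=15, do_sample=False)
print(summary[0]["summary_text"])
```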
3 AI Research Agents That Will Save You Hours This Week

Too many academics still treat ChatGPT like a turbocharged search engine: ask, copy, paste, repeat. Meanwhile, a new class of AI research agents has quietly started doing something far more powerful: taking over entire research workflows while you get on with more important thinking. These are autonomous assistants that search databases, extract findings, format citations, and even coordinate with other tools without constant supervision. I've tested dozens. Here are the three that have made the biggest difference in my weekly workflows, and how you can start using them immediately.

1. Consensus Deep Search

When you ask Consensus a research question, it doesn't generate a response from training data like ChatGPT does. Instead, it searches across 200 million academic papers in real time, ranks them by relevance and citation count, then extracts the specific findings that answer your question. This makes it great for empirical, outcome-based questions. We tried it out with a different question in the episode. Use Consensus at the start of your project to map the evidence before you waste hours on manual Google Scholar searches. Export citations directly to Zotero or as a ready-to-go bibliography. Keep in mind, though, that it struggles with purely theoretical or philosophical questions. Don't ask it about Foucault's concept of power. It's built for measurable outcomes.

2. SciSpace Deep Search Agent

What started as a PDF explainer is now a context-aware research manager. Upload papers, ask for summaries, then build on that context across multiple uploads. The breakthrough is task orchestration. When you upload a paper to SciSpace's Deep Search Agent and ask it to summarize the methodology, it doesn't just give you a one-time response. It remembers that paper, that summary, and your research focus for the entire conversation. When you upload five more papers and ask how their methodologies compare, it already has the context. The new agent also plans multi-step research workflows and executes them autonomously. In the episode, we demoed these multi-step workflows and showed how the agents start with a to-do list and work from there (a toy sketch of this pattern appears at the end of this piece). It also pulls and formats citations (APA, MLA, Chicago) in seconds. However, it works best with papers in its semantic search index (280M+). For niche sources or recent preprints, upload PDFs manually.

3. Otto SR automates systematic reviews while Genspark builds instant research pages from any query

While Consensus and SciSpace focus on paper discovery and analysis, Otto SR and Genspark are attacking different parts of the research workflow entirely. Otto SR is built for one specific use case: systematic reviews that follow PRISMA guidelines. You still need to verify and refine, but it handles the crushing logistics of tracking thousands of papers through multiple screening stages. Meanwhile, Genspark takes a completely different agent approach: for each query, you get a super agent that can run and trigger automations on a page. So you can walk through multiple complex workflows with the right instructions for these agents.

Don't try to master all three at once. Pick one. Use it for a real project this week. Then layer in another.
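And if you are curious what that to-do-list orchestration looks like structurally, here is a toy sketch of the pattern: draft a plan for the query, then work through it while keeping earlier results in memory. This is an illustration of the workflow shape under my own assumptions, not the actual code of SciSpace, Otto SR, or any other tool mentioned here.

```python
# A toy sketch of the agent pattern described above: plan a to-do list,
# then execute it step by step, carrying context between steps.
# Purely illustrative; not the code of any tool mentioned here.
from dataclasses import dataclass, field


@dataclass
class ToyResearchAgent:
    memory: dict = field(default_factory=dict)  # context persists across steps

    def plan(self, query: str) -> list:
        # A real agent would ask an LLM to draft this list for the query.
        return [
            f"search the paper index for: {query}",
            "extract the methodology of the top papers",
            "compare the extracted methodologies",
            "format the citations in APA",
        ]

    def run(self, query: str) -> None:
        for step in self.plan(query):
            print(f"[doing] {step}")
            # Each step's (stubbed) result stays available to later steps.
            self.memory[step] = "stubbed result"


ToyResearchAgent().run("effects of gamification on learning outcomes")
```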
Hope you enjoyed the podcast episode. Make sure to like and subscribe on your favourite podcasting platform.

Research Freedom lets you become a smarter researcher in around 5 minutes per week.

What did you think of this week's issue?

❤️🔥 Loved this? Share it with a friend. Drop me a 🎓 in the comments.
🤢 No good? You can unsubscribe here. Or tweak your subscription here. No drama.
💖 Newbie? Welcome here. Start with these three resources for Research Freedom.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lennartnacke.substack.com/subscribe
Learn n8n and AI automation for UX managers with our Bulletproof AI System: The New Standard for Ethical UX Masterclass. Become a confident AI leader who builds trust, orchestrates intelligent workflows, and guides your team through the current AI revolution.

I used to think artificial intelligence was just about silicon chips and code. Then I discovered something that completely shattered that assumption: a laboratory dish containing living human brain cells that learned to play Pong in minutes (roughly 5–20 minutes). While tech giants pour billions into faster processors and more sophisticated algorithms, a small group of scientists has been quietly growing actual intelligence. Not simulating it. Not programming it. Growing it. And what they've discovered will fundamentally change how we think about the future of computing.

The Dish That Changed Everything

In 2022, researchers at Cortical Labs achieved something that sounds like science fiction. They grew human neurons in a laboratory dish, placed them on a grid of thousands of tiny electrodes, and connected this biological system to a simplified version of Pong. The results were staggering. These lab-grown neurons didn't just learn to play the game; they picked it up in mere minutes (though no, they didn't reach the deterministic expert level seen in trained AIs). Human brain cells were controlling the paddle, tracking the ball, and showing adaptive behaviour. The neurons received sensory input about the ball's position through electrical stimulation and responded by firing collectively to move the paddle up or down.

But here's where it gets really interesting: when researchers tested non-neural cells (HEK293T kidney cells) as a control group, those cells did not learn to play Pong. They performed at media-control (i.e., baseline) levels and showed no evidence of learning or improved performance over time. The specialized neural architecture mattered: kidney cells and other non-neural controls did not learn or adapt the way neurons did.

Why Your Laptop Can't Compete

The energy efficiency alone should make Silicon Valley executives lose sleep. These biological computers could plausibly achieve the same learning outcomes as traditional silicon-based systems with far less energy, because biological brains are orders of magnitude more energy-efficient than artificial neural networks. Think about that for a moment. While companies like Google and OpenAI are building massive data centres that consume the power equivalent of small cities, a petri dish of neurons can now run a basic closed-loop system with a pre-set reward-signal structure. This is much simpler than the tasks handled by commercial AI or data centres, but it is a first step in an interesting direction.

The learning mechanism is elegantly simple yet profoundly sophisticated. When the neurons successfully hit the ball, they receive predictable, gentle electrical stimulation across all electrodes, essentially a biological reward signal. Miss the ball, and they get four seconds of chaotic, unpredictable stimulation that acts as negative feedback. Within this reward-punishment cycle, something remarkable happens: the neurons begin reorganizing themselves. They increase coordinated firing patterns between the sensory (input) and motor (output) electrode regions. Functional connectivity (though not a direct synaptic mapping) increased as the neural cultures learned to play Pong. The system becomes more coordinated and goal-directed. It's optimization happening at the cellular level, in real time.
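To make that feedback structure concrete, here is a toy simulation of the closed loop just described: predictable stimulation after a hit, a chaotic burst after a miss, and a single number standing in for how coordinated the culture's firing becomes. This is a cartoon built on my own assumptions; it is nothing like the real electrophysiology or the Cortical Labs protocol.

```python
# A toy cartoon of the closed-loop feedback described above. Hits earn a
# predictable stimulation pattern; misses earn chaotic input. One
# "coordination" number stands in for the culture's reorganization.
# Illustrative only; not the Cortical Labs protocol.
import random

random.seed(42)

def stimulation(hit):
    if hit:
        return [1.0] * 8                         # predictable pattern = reward
    return [random.random() for _ in range(8)]   # chaotic burst = penalty

coordination = 0.2  # crude proxy for coordinated input-to-output firing
for rally in range(300):
    hit = random.random() < coordination
    _ = stimulation(hit)  # in the dish, this is delivered through the electrodes
    # Cultures drift toward states that keep their input predictable.
    coordination = min(0.95, coordination + 0.01) if hit else max(0.05, coordination - 0.002)

print(f"hit-rate proxy after training: {coordination:.2f}")
```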
The Cyberpunk Reality We're Already Living

What caught my attention wasn't just the scientific breakthrough; it was discovering that Cortical Labs is already commercializing this technology. You can visit their website right now and purchase biological computers powered by living neurons. We're not talking about some distant future possibility. This is happening today.

The implications extend far beyond gaming demonstrations. These hybrid biological-digital systems could revolutionize drug testing, offering researchers the ability to study how compounds affect neural computation in real time. Brain-computer interfaces could become fully integrated biological systems rather than clunky external devices. Consider the possibilities: studying neurological diseases using actual living neural networks, developing treatments that work with biological intelligence rather than against it, or creating computing systems that adapt and evolve like living organisms.

What This Means for the AI Arms Race

While major tech companies compete to build larger language models and faster processors, biological computing represents a completely different approach to intelligence. Instead of simulating consciousness, we're working with actual conscious material. This raises deep questions about the nature of intelligence itself. When human neurons outperform mouse neurons in these experiments, we're seeing evidence that different types of biological intelligence have distinct computational capabilities. The structural differences between species translate into measurable performance differences in artificial environments.

But it also forces us to confront uncomfortable questions about consciousness and ethics. If we're using actual neural tissue to perform computational tasks, what are our responsibilities toward these biological systems? When does a collection of neurons become something more than just organic hardware?

A Gentle Earthquake of AI Innovation

The most fascinating aspect of this story is how quietly it's unfolding. While mainstream media focuses on ChatGPT and Silicon Valley's latest AI announcements, researchers are literally growing intelligence in laboratories around the world. This isn't just another incremental improvement in processing power. It's a fundamental shift from artificial intelligence to biological intelligence operating in digital environments. We're witnessing the emergence of true hybrid systems that blur the line between living organisms and computing machines. Companies like Cortical Labs aren't just building better computers. They're pioneering an entirely new form of technology that could soon make traditional silicon-based systems look as outdated as vacuum tubes.

What Happens Next?

The trajectory is clear. Biological computing will likely become mainstream within the next decade. The energy efficiency alone makes it economically inevitable, especially as traditional computing approaches physical limits. But the real question isn't whether this technology will succeed; it's whether we're prepared for a world where the line between artificial and biological intelligence first blurs and then disappears entirely. As I write this, somewhere in a laboratory, neurons grown from human stem cells are learning, adapting, and solving problems. They're not following pre-programmed instructions or executing algorithms. They're demonstrating adaptive, reward-driven pattern changes.
Just a few steps away from thinking. The age of silicon intelligence is ending before it really began. The age of biological computing has already started. The only question that remains: will you be part of this revolution, or will you be disrupted by it?

Research Freedom lets you become a smarter researcher in around 5 minutes per week.

Further Reading

* Kagan, Brett J., et al. In vitro neurons learn and exhibit sentience when embodied in a simulated game-world. Neuron, Volume 110, Issue 23, 3952–3969.e8.

What did you think of this week's issue?

❤️🔥 Loved this? Share it with a friend. Drop me a 🎓 in the comments.
🤢 No good? You can unsubscribe here. Or tweak your subscription here. No drama.
💖 Newbie? Welcome here. Start with these three resources for Research Freedom.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lennartnacke.substack.com/subscribe
Learn n8n and AI automation for UX managers with our Bulletproof AI System: The New Standard for Ethical UX Masterclass. Become a confident AI leader who builds trust, orchestrates intelligent workflows, and guides your team through the current AI revolution.

Three stories reached our screens this week. Each one stopped me cold.

A politician stood before her nation's parliament. She held up an image. Not any image, but a deepfake nude of herself. Generated by AI. Created without her consent. She showed the world what violation looks like in the digital age. Halfway around the globe, the Pope spoke about children. About dignity. About the sacred space between human hearts that no algorithm should ever touch. His words carried the weight of millennia, yet they addressed a threat barely a couple of years old. And in a laboratory, scientists celebrated. They had taught machines to see rain before it falls. To map storms that haven't even formed yet. To peer into the sky's intentions with startling accuracy. Amazing. Three moments. Three choices about who we become when we build tools that think.

The courage to show your wounds

New Zealand MP Laura McClure did something remarkable. She took her violation and made it visible. Not hidden. Not buried. Not dismissed with bureaucratic language about "emerging technologies" and "policy frameworks." None of that. She said: Look. This is what happens to us now. We are powerless against the machines.

There's a particular kind of bravery in turning pain into purpose. In saying yes, this happened to me, and I will not let it happen to you in darkness. The deepfake was meant to shame her into silence. Instead, she used it to break the silence open. We live in times when our faces can be stolen. Our voices copied. Our bodies simulated without permission. The technology exists. The harm is real. The question becomes: do we pretend this isn't happening, or do we face it with clear eyes? If you are interested in learning more about this, we're teaching a Masterclass on how to avoid these modern harms of AI.

A sacred boundary

The Pope's words matter. They still do, because they draw a line. Not against progress. Not against innovation. But around something that matters more than efficiency or capability: human dignity. Children need space to grow without algorithms watching. And that space is becoming harder to carve out. Youth deserve to discover themselves without machines predicting their futures. There are territories of the heart that should remain unmapped by artificial minds. This message wasn't stoking fear. It was radiating wisdom: knowing that not everything that can be built should be built, and not everything that can be automated should be automated. Some spaces belong to us alone. Singular.

Touching the sky with AI

Then there's the rain. Nature's tears. Washing the world clean. But ever more dangerous, as Nature's cries have turned into torrents that are hard to predict. Scientists can now peer into the atmosphere and see storms before they form. They can map rainfall with precision that would have seemed magical just years ago. They're teaching machines to read the sky's moods like an ancient farmer reading cloud formations. This power fills me with wonder. The ability to understand weather at a global scale. To predict floods. To anticipate droughts. To maybe, someday, guide rain where it's needed most. Fascinating. But power always asks questions. Who decides where the rain falls? Who controls the algorithms that read the sky?
How do we use tools this profound without losing our sense of wonder at the storm itself?

Forward we go

Here's what these three stories taught me: change happens whether we're ready or not. AI will continue to grow. Deepfakes will continue to threaten. Machines will continue to learn new capabilities. Weather prediction will become weather influence. But we still choose how to respond. We can choose courage over hiding. We can speak truth about harm instead of pretending it doesn't exist. We can draw boundaries around what matters most. We can use powerful tools while protecting sacred spaces. We can build the future instead of just letting it happen to us.

We notice when technology serves us and when it doesn't. We pay attention to which innovations protect human dignity and which ones threaten it. We stay awake to the choices being made in boardrooms and laboratories that will shape our children's world. And we have to. This is engaged awareness: the practice of staying human in an increasingly artificial world. The machines are learning. But so are we. We must. We're learning to be brave like that politician. To be wise like the Pope. To be thoughtful about power like those scientists. The future isn't something that happens to us. It's something we create, choice by choice, story by story, boundary by boundary. Rain falls where it will. But we decide what we do when we get wet.

Research Freedom makes you a smarter researcher with AI.

What did you think of this week's issue?

❤️🔥 Loved this? Share it with a friend. Drop me a 🎓 in the comments.
🤢 No good? You can unsubscribe here. Or tweak your subscription here. No drama.
💖 Newbie? Welcome here. Start with these three resources for Research Freedom.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lennartnacke.substack.com/subscribe
Let's be honest: the AI tool market is chaos. Every week brings another revolutionary platform promising to change your academic life. Most academics I know are paying for multiple subscriptions, switching between platforms, and feeling overwhelmed by the complexity. There are many AI tools, but frontier models often suffice for your academic work. In this episode of the AI Academics podcast, we tackle the overwhelming array of AI research tools facing academics. We show you the power of free tools like Google's NotebookLM and AI Studio, and demonstrate how they can simplify academic tasks such as creating mind maps from lengthy papers and building Socratic partners for critical reflection; no coding required.

NotebookLM's Mindmaps. Upload any PDF and get an instant topological overview of the content. Think of it as generating a table of contents with hidden subsections for documents that don't have clear structure. Laws, regulations, and dense academic papers often lack clear navigation. NotebookLM solves this by creating visual maps of complex information.

Google AI Studio. Here's where things get interesting. We show how AI Studio can become your Socratic partner for critical reflection. This addresses a key question we were asked: "How can we become critical thinkers with AI?" Instead of just asking AI for answers, you use it for reflection. Reflecting involves back-and-forth dialogue. You can't learn critical thinking from AI alone, but it can help guide your reflection.

The right prompt is a force multiplier. Engage AI in scholarly dialogue. Ask follow-up questions. Challenge its responses. Use it to explore different perspectives on your research. Refining your prompts increases efficiency and can transform your workflow, letting you use the full power of frontier LLMs.

Simplifying your tools can enhance workflow efficiency. Instead of juggling multiple paid subscriptions, you can:

* Save money: potentially hundreds of dollars per year in subscription fees
* Reduce complexity: focus on mastering two powerful tools instead of learning many mediocre ones
* Improve results: access frontier LLMs that often outperform specialized paid tools
* Enhance critical thinking: use AI as a reflection partner, not just an answer machine

Our episode encourages academics to improve their research strategies and use these accessible tools to enhance their workflows effectively. It's like a mini webinar. The episode closes with a promise of future discussions and resources to further assist academics in the AI landscape. Relying entirely on AI tools isn't feasible, especially for critical tasks like writing a dissertation. The future of academic AI use isn't to collect more tools but to master the right ones.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lennartnacke.substack.com/subscribe
Geoffrey Hinton said "become a plumber" if you're worried about AI. He was only half-joking. But he showed us something important about the future of work.

1. From Execution to Evaluation
Before: You research, analyze, and write reports.
Now: AI generates drafts, you provide critical judgment.
Your value: Knowing what's wrong, not just what's right.

2. From Knowledge to Curation
Before: You needed to know everything in your field.
Now: You need to know which AI outputs to trust.
Your value: Pattern recognition across AI-generated content.

3. From Individual to Orchestrator
Before: You completed tasks solo.
Now: You manage multiple AI agents working simultaneously.
Your value: Strategic thinking about workflow design.

4. From Technical to Ethical
Before: Focus on getting the work done.
Now: Focus on ensuring the work should be done.
Your value: Moral reasoning and consequence evaluation.

5. From Production to Innovation
Before: You spent 80% of time on routine tasks.
Now: AI handles routine, you focus on breakthrough thinking.
Your value: Asking questions nobody thought to ask.

The most important skills that will continue to matter

Critical thinking beats technical knowledge. Ethics trump efficiency. Human judgment remains irreplaceable. At least for the next little while. So, what does our future look like? AI won't steal your job by default. But someone using AI better than you might. The difference isn't technical skills. It's knowing when to trust the machine and when to override it. The entry level is disappearing, and instead we are training to orchestrate tasks together with our artificial buddies. What skill are you developing to stay ahead of the AI curve? Let us know in the comments.

Research Freedom lets you become a smarter researcher in around 5 minutes per week.

What did you think of this week's issue?

❤️🔥 Loved this? Share it with a friend. Drop me a 🎓 in the comments.
🤢 No good? You can unsubscribe here. Or tweak your subscription here. No drama.
💖 Newbie? Welcome here. Start with these three resources for Research Freedom.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lennartnacke.substack.com/subscribe
In episode 5 of the AI Academics Podcast, hosted by Professor Dr. Lennart Nacke and his co-host Vugar Ibrahimov, the duo discusses AI's role in generating creative chess pieces, AI tools for reading academic papers, and evolving internet behaviour influenced by AI. The episode also emphasizes ethical integration and effective use of AI tools, concerns about data privacy, and the future of AI-driven search engines. The team highlights their AI research tools webinar and encourages listeners to explore their Substack and Medium writings for more insights.

00:00 Welcome to the AI Academics Podcast
00:20 Black Friday and Cyber Monday Deals
00:59 Gen Chess: A Creative AI Experiment
04:27 Exploring 11 Reader and GenFM
05:53 Notebook LM and Privacy Concerns
18:54 Generative AI in Browsers
24:33 The Future of AI-Powered Search
27:35 Wrapping Up and Webinar Promotion

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lennartnacke.substack.com/subscribe
In this episode, I chat with Vugar Ibrahimov to explore how AI is reshaping academic writing. We dive into practical tools like Napkin AI for creating social media content and discuss the double-edged sword of AI writing assistance: it can boost productivity, but it can also dull our writing skills. Our conversation takes interesting turns as we debate AI's role in creativity and critical thinking, explore ethical considerations, and share insights on using AI tools effectively while preserving human creativity in our work. We also tackle the growing concern of over-reliance on AI and discuss strategies for maintaining a healthy balance between human and machine input in academic writing.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lennartnacke.substack.com/subscribe
You sit at your desk, fingers hovering over the keyboard. The human prompts, the machine responds. ChatGPT stares back at you. The cursor waits while ChatGPT thinks. That rainbow wheel is doing more loops than a rollercoaster having an existential crisis. You type a prompt, but the response misses the mark completely. Sound familiar? AI tools transform how we work, but only if we know how to talk to them. Think of prompting as learning a new language. You wouldn't walk into a French café and expect perfect service with broken phrases from your high school textbook. The same goes for AI. Let us show you how to order exactly what you want.

Context shapes meaning

And meaning creates context. Most self-proclaimed AI experts write prompts like they're sending a text message to a stranger. A quick "write me a literature review" gets you about as far as "food please" gets you in a restaurant. Your AI assistant needs context: rich, specific, carefully crafted context. Start with a clear frame. Tell the AI who you are, what you're working on, and what you need. Plant the seed, and watch the garden grow. Give it boundaries, expectations, and examples. It's not unlike guiding a research scholar through their first steps. The more specific you are, the better your results.

Tools serve you

Or you end up serving tools. Not every AI fits every task. Claude 3.5 Sonnet shines at complex analysis. GitHub Copilot accelerates coding. HeyGen helps create educational videos. Pick your tool like you'd pick your research method: match it to your objective. We discussed this at length in our webinar. Start small. Great journeys begin with the smallest of steps. Test different tools on manageable tasks. Compare their outputs. Build your toolkit gradually, and you'll develop an instinct for which assistant suits which job. Shop for AI tools like you're casting characters for another 'The Office' reboot.

Frame with precision. Frame with intent. Frame with clarity. Prompts work best when they follow a clear structure. Here's my tested approach (a worked example follows below):

* Set the stage with context
* Define the specific task
* Provide examples of what you want
* Specify format and length
* Ask for step-by-step reasoning
* Request specific frameworks or approaches

Think of it as writing a detailed research proposal. Format those citations while you're juggling flaming chainsaws, APA style. The clearer your requirements, the better your results.

Build better conversations

Stop treating AI like a vending machine. Single-shot prompts rarely give you the best results. Instead, build a conversation. Start broad, then narrow down. Ask follow-up questions. Challenge assumptions. Guide the AI toward your goal through dialogue. These tools predict text based on patterns. Help them understand your patterns through clear, structured conversation. Need more guidance? Check out our webinar, where we discuss practical applications of AI tools in academic work.
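Before we sign off, here is the worked example promised above, written as a Python call so every piece of the structure is visible in one place. It assumes OpenAI's official Python SDK; the topic, example sentence, and constraints in the prompt are hypothetical placeholders, and any chat interface works just as well.

```python
# One worked example of the six-part prompt structure above, sent through
# OpenAI's official Python SDK (any chat interface works just as well).
# Topic, examples, and constraints below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Context: I am a games-user-research professor preparing a grant proposal.\n"        # 1. stage
    "Task: Outline the related-work section on player motivation.\n"                     # 2. task
    "Example of the tone I want: 'Self-determination theory frames motivation as...'\n"  # 3. example
    "Format: Five bullet points, each under 40 words.\n"                                 # 4. format and length
    "Reason step by step about what to include before writing the bullets.\n"            # 5. reasoning
    "Organize the bullets by theoretical framework.\n"                                   # 6. framework
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # one example model
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Compare that to "write me a literature review" and you can see why the structured version wins.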
Until next time.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lennartnacke.substack.com/subscribe

Welcome to episode 2 of the AI Academics Podcast. In this episode, we discuss the latest AI tools that can significantly enhance your academic workflow. We cover ten reliable AI tools and their practical applications in academia. Additionally, we explore the emerging issue of AI clones and deepfakes, sharing a real-life example of how someone's identity was misused for propaganda purposes. Join us for an insightful conversation. Exciting travels aside, we've got two major points to cover today: ten reliable AI tools that can enhance your academic workflow, plus a serious discussion about AI clones. Yes, AI clones are real, and one researcher found herself cloned for purposes she never agreed to.

Breaking Down the Tool List

Here's a tool list that I tweeted recently. These aren't sponsored picks, just my personal favourites. Let's run through them, starting with TypingMind:

TypingMind
This AI user interface (UI) emerged as a favourite of mine, offering a smooth UI to access major APIs like GPT, Gemini, and Claude. The ability to install local models, like Llama or Mistral, on your computer makes it a must-have for academics looking to improve their workflows.

Consensus
Perfect for initial literature searches, Consensus gives quick summaries and references, making it ideal for kick-starting any research project. Yes, Vugar (video) and I have both done detailed reviews on it, so check those out!

Jenni AI
Jenni AI stands out as a co-writing assistant. It has a steep learning curve, but once you get used to it, the tool can be a great companion. It's perfect for breaking blank-page syndrome and offers a conversation-like interface to help you develop your writing.

PaperPal
PaperPal comes in handy when working in Word. It offers paraphrasing options and gives writing suggestions that can make your work more fluid and concise. It's a lifesaver when dealing with Word documents.

Writefull
If you're working in LaTeX and using Overleaf, Writefull is your go-to tool. It suggests edits on the go and integrates seamlessly with your writing flow, helping you maintain clarity and coherence and making academic writing less of a chore.

Otter.ai
Otter.ai isn't just for meeting transcriptions. With its Otter Bot feature, you can chat with your transcripts, making it perfect for reviewing lectures and notes.

SciteAI
Vugar's favourite, SciteAI, brings smart citations and writing assistance to the foreground. It even classifies sources as supporting, contrasting, or merely mentioning a claim, adding an extra layer of refinement to your research.

Perplexity
For those simple queries that need a refined touch, Perplexity replaces Google for many, giving AI-generated responses that are detailed and insightful.

Speechify
While I have reservations about its pricing, Speechify's text-to-speech feature in your own voice is unique and adds fun to academic reading.

AI Clones: A Growing Concern

The spotlight today also falls on a viral phenomenon. Imagine waking up one day to find your face and voice cloned for purposes you never intended. That's exactly what happened to creator Olga Loiek, who woke up to find her likeness used for propaganda in China. Scary, right? This sheds light on the darker side of AI and shows us how important it is to build AI tools responsibly. Tools like ElevenLabs and HeyGen already allow making near-perfect voice and face clones, but they do so with your consent.

Next week, our online thesis workshop aims to help people get unstuck with thesis writing.
Run together with Ali Hindi, this session is packed with useful tips for anyone working on a thesis. Get the recording if you missed it.

Final Thoughts

We hope you found value in today's mini AI tools webinar. Follow us on X, LinkedIn, YouTube, or Medium. The AI Academics Podcast is also on Substack. Don't forget to rate us on Apple Podcasts, Spotify, or YouTube. A little love goes a long way. We're taking a short break next week, but don't worry: Vugar and I will be back with more AI insights. Until then, keep geeking out with the best AI tools that matter to your academic life. Thank you for joining us today. Enjoy your summer, and we'll see you in the next session!

Say Thanks

💙 Leave a review of the show on Apple Podcasts.
💚 Leave a rating of the show on Spotify.

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lennartnacke.substack.com/subscribe
In this episode of the AI Academics podcast, hosts Dr. Lennart Nacke and Vugar Ibrahimov discuss the recent Apple announcement about 'Apple Intelligence' and its implications for AI integration into Apple's ecosystem, emphasizing privacy and the cooperation with OpenAI's ChatGPT. The conversation covers the advancements in tools for academics, ethical implications of AI, and how AI can be used to improve workflow while maintaining the human element in critical thinking and decision-making processes. The episode also explores how AI could reshape academic research and the importance of understanding which tasks should be delegated to AI. Live audience questions are addressed as well.

Timestamps:
00:00 Introduction to the AI Academics Podcast
00:27 Going Live: New Platforms and Venues
01:21 Apple Intelligence Announcement
06:49 Privacy Concerns and AI Integration
12:54 AI Tools for Academics
22:15 Pushback on AI Tools in Academia
23:12 Critical Thinking vs. AI Assistance
25:46 Integrating AI in Education
28:04 Ethical and Practical Considerations of AI
31:00 Environmental Impact of AI
33:47 Coding in the Age of AI
38:44 The Future of AI and Human Tasks
41:29 Q&A and Final Thoughts

WHEN YOU'RE READY
📮 Write Insight Newsletter: https://go.lennartnacke.com/newsletter
🤖 Get the AI Tools Webinar Recording: https://go.lennartnacke.com/aitoolswebinar
🧞‍♂️ Book a seminar from us

CONNECT
𝑿 Connect on X: @acagamic & @vugar_ibrahimov
🧢 Connect on LinkedIn: Lennart Nacke & Vugar Ibrahimov
📰 Subscribe on Medium: @acagamic & @vugar_ibrahimov
🔴 Subscribe on YouTube: @acagamic & @vugar_ibrahimov

SAY THANKS
💙 Leave a review of the show on Apple Podcasts
💚 Leave a rating of the show on Spotify

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit lennartnacke.substack.com/subscribe