Actual Intelligence with Steve Pearlman

Author: Steve Pearlman, Ph.D.

Subscribed: 848 · Played: 6,762

Description

Dr. Steve Pearlman is one of the world's premier critical thinking experts. You can view his viral Editor's Pick TEDx talk here: https://youtu.be/Bry8J78Awq0?si=08vBAR1710mgQt0i

pearlmanactualintelligence.substack.com
64 Episodes
Steve Pearlman: Today on Actual Intelligence, we have a very important and timely discussion with Dr. Robert Niebuhr of ASU, whose recent opinion piece in Inside Higher Ed is titled "AI and Higher Ed, and an Impending Collapse." Robert is a teaching professor and honors faculty fellow at the Barrett Honors College at ASU. The reason I invited him to speak with us today on Actual Intelligence is his perspective on artificial intelligence and education, and his contention, roughly, that higher ed's rush to embrace artificial intelligence is going to lead us to some rather troubling places. So let's get to it with Dr. Robert Niebuhr.

Robert, we talked a little bit about this on our pre-call, and I don't usually start a podcast like this, but what you said to me was so striking, so nauseating, so infuriating, that I think it's a good place to begin. Maybe some of [00:01:00] our listeners who value actual intelligence will also find it as appalling as I do, or at least a point of interest that needs to be talked about. You were in a meeting, and we're not going to talk about exactly what that meeting was, but you were in a meeting with a number of other faculty members, and something interesting arose. I'll let you share that experience with us, and we'll use it as a springboard for this discussion.

Robert Niebuhr: Yeah, sure. As you can imagine, faculty are trying to cope with the perceived notion that students are using AI to create essays. Where I'm at, one of the backbones of assessed work in my unit is the argumentative essay; the argumentative essay is the backbone of grading and assessment. And if we're suspecting that students are using AI, [00:02:00] faculty said, well, why should we bother grading essays if they're written by bots? There's a lot to unpack there, and a lot that's problematic with that. But the idea was that to combat the perceived threat of student misuse of AI, we would simply forgo critical assessment. And that was not a lone voice in the room; it seemed to be a reasonably popular position.

Steve Pearlman: Was there any recognition of what might be sacrificed by never having students write another essay just to avoid their using AI? Of course we don't want them to just have AI write their essays; that's not getting us anywhere. But was there any conception that there might be some loss that comes with that policy? [00:03:00]

Robert Niebuhr: I think so. I imagine my colleagues come from a place where they're trying to figure out and cope with a change in reality. But there is also a subtext, I think, across faculties in the United States, of being overworked, especially with the mantra among administrations that AI will help us ramp up or scale up our class sizes so we can do more, with all this sort of extra work seeming like an added ask on faculty's time and effort. I think that may have been part of it. I don't know that they thought through the logical implication: if we no longer exercise students' brains, if we no longer have them go through a process that encourages critical [00:04:00] thinking and articulating it through writing, what does that mean? I don't know that they thought beyond "we could try it and see"; that was the mentality I gauged from the room.

But it's a bigger problem, right? The larger question is what we can do as faculty amid this broad push for AI all over the place, and then there are the mixed messages students get. Students are told: this is the future; if you don't learn how to use it, if you don't understand it, you're going to be left behind. And at the same time, it's: don't use it in my class. Learn it, but don't use it here. That's super unclear for students, and it's unclear for faculty too. I don't think it works even in the short term, and as you implied, the long-term "solution" of getting rid of essay [00:05:00] assignments in a discussion-based seminar that relies on essays as critical assessment is not viable. We'd gut the entire purpose of the program in this case.

Steve Pearlman: And yet a lot of faculty, from what you've described and from what I've read as well, are also moving toward having AI grade students' work, not just on simple tests but on essays. As you point out in your article, that's potentially moving us to a place where kids are using AI to write the essays and faculty are using AI to grade the essays. When did the human being get involved in between, in terms of any intellectual growth?

Robert Niebuhr: Yeah. It's a really big problem because, [00:06:00] again, those long-term implications are clear, as you laid out. Obviously there's a tool here that can help us; there are multiple avenues where AI can make us more efficient, and that's true. So it's there, and we should gauge and understand it. But that doesn't mean you use it everywhere. You can buy alcohol at the grocery store; it doesn't mean you have it with your Cheerios. There's a time and a place. Polite society says you can consume this at these times, with these meals, in this company. So the message comes down to a level of respect. If we don't respect the students, if we don't lay out clear guidelines, if we don't show them respect and ask for respect back, if we use bots to grade, then the whole thing just becomes a charade. The system [00:07:00] begins to break down, and people wind up losing the point of what the exercise is all about anyway, and I mean not just the assignment or the class, but higher education itself. The point is to teach us how to be better thinkers: to gauge and evaluate information, use evidence, and apply it in our lives as we see fit. And if we're not prepped for that, then what did they prep us for? From the student's perspective, it's: what did I just do? What did I pay for? That's a huge long-term problem.

Steve Pearlman: It seems like that "What did I pay for?" question is going to come to bear heavily on higher education in the near future, because if students are able to use AI to accomplish some of their work, and faculty are using AI to grade some of their [00:08:00] work, and these degrees are costing hundreds of thousands of dollars, then the degree becomes an ineffectual piece of paper that loses value, in essence, because the students didn't really get anything from the process, or didn't get as much as they used to, because they're using AI. Is this moving toward some kind of gross reassessment of the value of higher education, or of its role in our society entirely?

Robert Niebuhr: It certainly has the potential. I would even look back and think of a steady decline; this is one of many pieces that have gone down. You mentioned in your question the sense of the student as client or customer, and how that has changed the interface and [00:09:00] how we think of this whole endeavor. That leads to things like retention numbers and all the mental gymnastics that happen around them. And truth be told, there are different paths for different people; you don't have to get the degree in physics to be successful. But the student-as-customer model, I think, has also solidified the notion that we can let student feedback set the terms. Student feedback is important, so I'll qualify this, but standards were low. I know from my own example, even 20 years ago, that undergraduates had to produce a capstone thesis as part of their bachelor's degree. And I know firsthand that [00:10:00] the history department had looked at exit surveys of people who didn't finish their history degree. Asked why they didn't finish, they said that whatever the other program was, psychology, sociology, it doesn't matter, that degree program didn't require a thesis. So that was easier. That was th
How does your brain tackle a new problem? Believe it or not, it tackles new problems by using old frameworks it created for similar problems you faced before. But if your brain is wired to use old frameworks for new problems, then isn't that a problem? It is. And that's why most people never think outside the box.

So, how do you get your brain to think innovatively, divergently, and outside the box when others don't? It's easier than you think, but before we get to that, let's be clear on something. When I talk about frameworks, I'm not speaking metaphorically. I'm speaking about the literal wiring of your brain, something neuropsychologists might refer to as "engrams," and just one engram might be a network of millions of synapses.

Think of these engrams as your brain's quick-reference book for solving problems. For example, if your brain sees a small fire, it quickly finds the engrams it has for fire. One engram might be to run out of the house. Another might be to pour water on the problem. Without these existing engrams, you might just stand there staring at the fire, trying to figure out what to do. So, you should be thankful that your brain has these pre-existing engrams for problems. If it didn't, every problem would seem new for the first time.

But there's a serious flaw in the brain's use of engrams. Old engrams don't always really apply to new problems. Let's say your brain sees a fire, but this time it's an electrical fire. It still sees fire, shuffles through its engrams, and lands on the engram for pouring water on the fire to extinguish it. In its haste, your brain's old engram overlooks the fact that it's an electrical fire. So, pouring water on it only spreads it, if it doesn't also get you electrocuted.

Your brain chose the closest engram it had for solving the current problem, but that old engram for extinguishing fire with water was terribly flawed when it comes to electrical fires. Old engrams never fully match new problems.

So, here's why most people cannot think outside the box: they're trapped using old engrams and do not know how to shift their brains into new ones. That's right. Since the brain needs to rely on some kind of existing engram, people who do not know how to break free of their engrams will never think innovatively, creatively, or outside the box.

But thinking outside the box is easy if you know the trick. When faced with a problem, even if it is similar to one you faced before, or especially if it is similar to one you faced before, you need to force your brain into looking at the problem in a radically different way. Remember, your brain will keep trying to work back to the old engram. That's its default approach. It wants to use templates it already has. So you have to shock it into a new perspective that does not allow it to revert to the old one. I'm talking about something that has nothing to do with the problem at all: an abstract, divergent, and entirely unrelated new perspective.

For example, when you're facing a problem, or when you're leading a team facing a problem, examine the problem through some kind of radical analogy that seemingly has nothing to do with the problem itself, but involves something with which you or your team are familiar.

You might ask: how is this situation like Star Wars? Who or what is Darth Vader? What's the Force? Who or what is Luke Skywalker? What's a lightsaber in this scenario?

Or you might consider how your problem is like what happened to Apollo 13. How are we spiraling through space? How much power do we need to conserve, and how do we do it? Who's inside the capsule? What's outside? Who's mission control? And so on.

You might think these are trivial or even silly examples, but remember, it is the fact that they are so unrelated and abstract that will jolt your brain out of its existing engrams and force it to look at the problem in entirely new ways. And here's the beauty of it: because your brain still wants to solve the problem, it will, on its own, whether you even want it to or not, find ways to make connections between your abstract idea and the problem itself, and it will do so in innovative, creative ways that will make your thinking, or your team's thinking, stand out.

Remember, when Einstein was developing his Theory of Relativity, he didn't just sit around doing math. He also spent a lot of time imagining what it would be like to ride on the front of a beam of light.

So, when it comes down to it, if you know what to do, then thinking outside of the box might be easier than … well … easier than you think.
APA to Students: Don't Bother to Think for Yourselves Anymore. Let AI Do It.

If in the future you want a psychologist who can actually think about psychology, or a doctor who can actually think about medicine, or a teacher who can think about what they're teaching, or a lawyer who can actually think about the law, then the American Psychological Association's (APA) new A.I. policies should make you concerned. Maybe they should even make you angry.

As many who've been to college already know, the APA's standard for what constitutes academic integrity and citing sources is the prevailing standard at most institutions. When students write papers or conduct any research, it's typically the APA's standards they observe for what they are permitted to use and how they must disclose their use of it.

Yet, when it comes to supporting critical thinking and actual intelligence, the APA's new standards just took a problematic if not catastrophic turn. And the irony is palpable. Of all the organizations that set standards for how students should use their brains, you might think that the American Psychological Association would want to hold the line in favor of actual thinking skills. You might think that with all of the emerging research on A.I.'s negative consequences for the brain, including the recent MIT study that showed arrested brain development for students using A.I. to write (which you can learn more about on my recent podcast), the APA would adopt a vanguard position against replacing critical thinking with A.I. You might think that the APA would want to bolster actual intelligence, independent thought, evidence-based reasoning, etc. But instead of supporting those integral aspects of healthy brain development, the APA just took a big step in the opposite direction.

I'm referring to the APA's new so-called "standards" for "Generative A.I. Use," standards that open the doors for students to let Generative A.I. do their thinking for them. For example, the APA licenses students to have A.I. "analyze, refine, format, or visualize data" instead of doing it themselves, provided, of course, that they just disclose "the tool used and the number of iterations" of outputs. Similarly, the APA welcomes students to have A.I. "write or draft manuscript content" for them, provided that they disclose the "prompts and tools used."

To be clear, the APA's new standards make it all too clear that it is very concerned that students properly attribute their uses of Generative A.I., but the American Psychological Association is not concerned about students using Generative A.I. to do their thinking for them. In other words, the APA has effectually established that it is okay if students don't analyze their own data, find their own sources, write their own papers, create research designs, or effectively do any thinking of their own; it's just not okay if students don't disclose it. In short, the leading and most common vanguard for the integrity of individual intellectual work just undermined the fundamental premise of education itself.

What the APA could have done, and should have done, instead was to take a Gibraltarian stand against students using A.I. in place of their own critical thinking and independent thought. That is what it had done up to this point. For example, students were simply not permitted to have a friend draft an essay for them. In many circles, they were not even permitted to allow a friend to proofread their work unless the syllabus licensed them to do so. But for some reason, when it is an A.I. drafting the paper instead of a friend, the APA considers it permissible.

Consistent with its history of guarding academic standards, the APA could have said that students who have an A.I. "analyze … data" or "write or draft manuscript content" were not using their own intellect and were therefore cheating. Period. Doing so would have sent a strong message across all of academia that permitting students to use Generative Artificial Intelligence instead of their actual intelligence is a violation of academic integrity, not to mention a gross violation of the most fundamental premise of education itself: the cultivation of the student's mind.

To be fair, not all of the usages of A.I. referenced by the APA's new standards are cheating. For example, allowing students to use A.I. to "create … tables" or "figures," instead of painstakingly trying to build them in Microsoft Word, would not replace the student's meaningful cognitive work.

Furthermore, and more importantly, the APA's policies are not binding. Educators, departments, and institutions need not follow suit. Any given educator can still restrict A.I. usage and determine their own standards for what is acceptable in a given course, including establishing policies that treat using A.I. to "analyze … data" as cheating (which it should be).

And finally, the APA still asserts that "AI cannot be named as an author on an APA scholarly publication." Yet, to co-opt a psychological term, that seems nothing if not "schizophrenic." After all, if a student uses A.I. to find their sources, "analyze" their "data," and "write" their "manuscript," then why shouldn't it be listed as an author, if not the lead author? What, after all, is the student really doing anyway?

Thus, as arguably the leading force for what constitutes academic integrity versus cheating, the APA's move at least implicitly licenses students across academia to use Generative A.I. in ways that will undermine their individual work, critical thinking, and overall actual intelligence. Once again, the APA just told students everywhere that using A.I. to "write or draft manuscript content" for them, instead of thinking about it themselves, developing their ideas themselves, referencing sources for themselves, perhaps even reading sources for themselves, and on and on, is perfectly okay as long as they cite it when they do so.

And while it remains true that faculty can do as they wish, imagine being the high school, college, or graduate school educator who has to stand against the APA. Imagine having to hold the line against what will be mounting droves of students who ask, "Why can't we use A.I. in your class when we use it in our other classes?" and "Why can't we use A.I. in your class when the American Psychological Association says it is fine?" Considering that educators with stricter A.I. policies are already seeing students unenroll from their courses, the new APA standards may prove catastrophic.

So, that returns us to the emerging problem: if you think that academic institutions should graduate students who can think critically about their subject of "expertise," if you want a doctor who can think about medical things, then the APA just told you that you had better think again.

(This article was written with no Artificial Intelligence, only the actual kind.)

If you support actual intelligence, please share this with other like-minded people.

***
Want Your Kids Off Their Phones? They Just Told Us How to Do It

In a new Harris poll conducted with The Atlantic, kids have reminded us about the importance of unstructured, unsupervised play for the development not just of their actual intelligence, but of so many related developmental factors: critical thinking, problem solving, self-efficacy, social maturity, and, well, you name it.

According to the article, "What Kids Told Us About How to Get Them Off Their Phones," by David Graham and Tom Nichols, the Harris poll surveyed 500 kids between 8 and 12 years old, most of whom have phones and not only are on social media, but also interact, unsupervised, with adult strangers through social media or games. Yet most aren't allowed out in public without adult supervision, even though, as the article states, "according to Warwick Cairns, the author of How to Live Dangerously, kidnapping in the United States is so rare that a child would have to be outside unsupervised for, on average, 750,000 years before being snatched by a stranger," statistically speaking.

But modern parents, concerned about dangers in the real world, relegate their kids to online interactions in part under the guise of their safety. As the authors put it, "because so many parents restrict their ability to socialize in the real world on their own, kids resort to the one thing that allows them to hang out with no adults hovering: their phones."

If there are operative words in that quote, they are "no adults hovering." What kids report is that more than anything else, they want play that does not involve adult supervision.

Of course they do. Why? Because, based on overwhelming amounts of research, our brains evolved with free play as a primary means of cognitive and social development. And that's not just true of humans, by the way. Studies on animals reinforce the point. For example, kittens who were not permitted free play never developed the social skills they needed as adults. So it should not be surprising that human children are meant to play with each other, in mixed groups, without supervision, figuring out how to get along, create games, test their own ideas, etc.

If you want a sense of just how important and powerful free play is, then consider just one of many recent studies: "Advocating for Play: The Benefits of Unstructured Play in Public Schools," by Heather Macpherson Parrott and Lynn E. Cohen. The study examined the impact of increased free play time for kids in school and found improvements in the following areas:

· desire and ability to learn/focus,
· mood,
· social interaction,
· cooperation,
· problem solving,
· independence, and
· self-advocacy.

All said, whereas the evidence about the harms of smartphones on child development is mounting fast, unsupervised free play helps young brains develop in just about all of the ways that they need to develop.

So, though it might take just a little coordination with other parents, give your kids what they want (even if they don't specifically know that they want it): free play with other kids that's not (generally) under your watchful eye. Take their phones away and then drop them at a park, a backyard, a basement, etc., and tell them to have fun. And if they complain that they are bored, then tell them to figure out what to do, because that's exactly what their brains need to learn anyway.

What I mean by that is that it is healthy for their brains to work through being bored, figure out how to resolve social conflicts, and invent what to do next, including, and most especially, adapting to changing circumstances. All of that happens through free, unsupervised play. So, sometimes the key to excellent parenting isn't parenting more, but parenting less.

As Marc Bekoff wrote, "Play is training for the unexpected."
Is ChatGPT dumbing down your kid? It is, and here's what you can do.

A new MIT study reveals the powerful consequences of artificial intelligence on actual intelligence, and guess what? Simply (and terrifyingly) put, the use of artificial intelligence undermines your child's actual intelligence. In short, when children don't think for themselves, they don't learn to think for themselves. That should surprise no one.

I'll get to the disturbing details of the study in a moment, but let me first explain why these outcomes were obvious and inevitable. In a nutshell, the brain functions like a muscle insofar as it becomes stronger when it is used and atrophies when it is not. I could list a thousand additional factors that affect thinking, but that simple premise really is enough for this discussion.

And when I say that the brain functions like a muscle, most people think I'm speaking overly metaphorically. I'm not. While the brain, of course, isn't actual muscle tissue, its functioning is remarkably similar. Much in the way that exercising muscles builds more muscle, exercising the brain builds the brain, literally. Every single time we engage in a thinking act, the brain builds more wiring, such as synapses through synaptogenesis, for that thinking act. On the flip side, the brain not only allows existing pathways to diminish when they're not used, it actually overwrites existing pathways with new ones.

Watch this play out in the MIT study …

The MIT Study

That study is "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task," by a team of researchers led by Dr. Nataliya Kosmyna. The scientists broke a group of students into three essay-writing groups: an "A.I.-assisted" writing group that used multiple LLMs (not just ChatGPT), a "search engine" group, and a "brain-only" group. The students then engaged in three writing sessions while the researchers monitored their brain activity using an EEG. Each student was interviewed after each session, and all of their writing was assessed by humans as well as by an A.I.

So, what happens when one group is required to use their brains more than the other groups? Would it shock you to know that the group that needed to do their own thinking actually thought more? I hope not, any more than it should be surprising that a group of kids who practiced hitting a ball did better at hitting a ball than a group of kids who watched a robot hit a ball for them. (Okay, that's not a perfectly fair analogy to the A.I. usage in this case, but it illustrates the point.)

And the point is that the brain-only group performed better and scored higher on their essays. But that's not the most important outcome for us. What's more important is that "the brain-only group exhibited the strongest, widest-ranging networks" of brain activity, while the group with A.I. "assistance elicited the weakest overall coupling." In other words, the brain-only group thought a lot; the A.I.-assisted group did not. Do you remember what we said about what happens when the brain "muscle" isn't used?

But it gets worse. The researchers brought those two groups back for a fourth session and switched their roles. They gave the A.I. group a brain-only writing task and the brain-only group an A.I. writing task. And here's what's so important: the brain-only group still performed better, even when using A.I., and the A.I. group still performed worse, even when given the opportunity to think for themselves. Or should I say, they did worse because they now had to think for themselves.

Over the first three brain-only writing assignments, the brain-only students built their brains for the task, and they built mental frameworks (read: habits) to rely on when engaging in those tasks. Thus, the fact that they then "gained" an A.I. assistant did not suddenly degrade all of the wiring their brains had built. But the A.I. group, when suddenly given the opportunity for a brain-only task, not only had built no wiring for accomplishing that task, it also, and this is the most critical part, had created wiring and mental frameworks for using A.I. instead.

What that means in a nutshell, and these are my words, not those of the study, is that the brain-only group got smarter, and the A.I. group not only failed to become smarter, they got dumbed down: they became habituated to relying on A.I. Thus, when given the opportunity to do so, they were incapable of thinking as well as the brain-only participants did.

All of that should be concerning enough, but there's more. In addition to the direct cognitive effects, the researchers also found that brain-only participants "demonstrated higher memory recall" and engagement of thinking-related brain areas compared to the A.I. group. Meanwhile, compared to the brain-only group, the A.I. participants reported lower "ownership of their essay," which is an educator's way of saying that they didn't care about it as much and did not feel as though it was their own.

Thus, to sum it all up, A.I.-assisted writing made the kids perform poorly, made them dumber, and made them less invested in their own thinking and writing.

What to do

In light of this study, one school of "thought" could be that since everyone is going to rely on A.I. in the future anyway, kids who do so will be no worse off than their peers, and using A.I. might free up time for them to do things that are more valuable than writing essays, which, again, they won't really ever need to write on their own anyway because A.I. will be there to "assist." Those who subscribe to that position probably should stop following me here at Actual Intelligence right now, as we will be rather inclined to disagree.

The other school of thought is that thinking skills, such as those developed through writing, which research repeatedly shows is the best way to teach critical thinking, are far more important than any and all expediencies achieved through A.I. assistance. Let me rephrase that: if you want your kids to build their brains rather than have them degenerate into relatively useless gelatin that can only write A.I. prompts or order a burrito online, then keep their brains as far from A.I. as possible.

Obviously, there's not much you can do with your college-aged kids other than share this information with them and hope they make the right decisions. But for kids still under your roof, there are things you can do:

1. Share this information with them. Most kids don't want to become dumber; they do value their ability to think. So, take time to explain, and then reinforce, the consequences of A.I. In fact, start thinking of A.I. as something about which you need to begin messaging no differently than alcohol, drugs, and sex.

2. Ask them how they use A.I. Understand their current relationship with A.I., and please keep in mind that the MIT study does not speak to other ways that students might interact with A.I. beyond this one context. Using A.I. in other ways might be more or less consequential.

3. Check their work. There are plenty of sites out there that scan essays to see if they were written by A.I. Those sites are not perfectly reliable, but they might offer useful information about what your kid is up to.

4. If you want to get serious, have your kids download all their source materials before writing, then shut off their internet while they write. Take away the temptation; make them use their brains.

Conclusion

The implications of A.I.-based "thinking" work are becoming clear, but for anyone who has thought about it or who values thinking, they're also not surprising. Every time we use A.I. to "assist" our thinking, it not only prevents us from thinking, it degrades our capacity to think in the future.

Worse, much, much worse, is that those of you reading this built your brains before A.I. existed, which means that even if you gravitate to using A.I. now (please don't), you've got a lot of "muscle" built up to blunt its consequences. A.I. will still degrade your thinking, but the sound neural pathways you built up all your life won't all turn to jelly overnight.

But for your kids, it's different. Their neural pathways are still building up for the first time. Even though we are all always rewiring our brains, kids' brains have not even fully developed, so whatever they habituate to will become hardwired moving forward. Consequently, kids who are raised as A.I. natives might never develop their brains for thinking in the way yours did. And that will not only affect their lives; a generation of lesser thinkers will affect all our lives.

But there's good news! Somewhere down the line, kids who actually learn to think for themselves will stand out against the emerging generation who might not. So, if you can raise your own child to think critically, they might just be among the few who lead the world to a better place.

And that, once again, is why actual intelligence is so important.
We finally have emerging research on Artificial Intelligence's consequences for actual intelligence.  If you're an educator or parent--or if you're anyone who just thinks that thinking is important--then you need to learn about this study.  It offers hard evidence that our young people are in danger of diminished thinking skills for life.
Stuck in a mental rut?  Need a way to break out of your current thought patterns?  Want to unlock and unleash your creative, divergent, disruptive thinking skills?  Who doesn't? Listen to learn how!
Headagogy Update!

2024-02-09 · 01:28

More Headagogy coming soon!  Also, check out The Critical Thinking Institute podcast, with me!
Steve interviews Louis E. Newman, author of Thinking Critically in College: The Essential Handbook for Student Success.  What's the relationship between thinking and studentship?  How can we -- and why should we -- move students to think about disciplinarity?  Are colleges promoting the kind of thinking Newman advises students to practice?  And how can students benefit from his ideas regardless?
Is ChatGPT friend or foe?  Should the whole world, as Australia has done, relegate essay writing to inside classrooms?  Is "the academic essay dead"?  Or is ChatGPT, as some have contended, a tool for critical thinking that we should embrace as a new ally in teaching students?

As Steve discusses, ChatGPT certainly is a revelation, but no one is really talking about why, and it might not be what you expect.
Continuing their discussion of the pedagogical, institutional, and societal implications of rubrics and rubricizing, Joe, Michelle, and Steve get into rubrics and questions of ...

· privilege and the expression of structuralized racism,
· the effort to dismantle public education through standardization,
· how rubrics as a concept contribute to the undermining of teaching as a profession,

and so much more.
Steve and the authors of Rubric Nation -- Michelle Tenam-Zemach and Joseph E. Flynn, Jr. -- get into it about all things rubrics and rubricization, as well as whatever it is that we are doing, good and bad, as an educational system regarding teaching, learning, democracy, assessment, studentship, dialogue, politics, critical thinking, teacher training, privilege, race, class, and our greater (and lesser?) humanity.  Spoiler alert: it's "a mess."  But that's what makes this discussion particularly deep and interesting.
Steve welcomes futurist Frances Valintine: founder of MindLab -- judged the Best Start-up in Asia Pacific by Steve Wozniak and Sir Richard Branson in 2014.  Frances is a member of the New Zealand Hall of Fame for Women Entrepreneurs (2022) and was named one of the top 50 EdTech Educators in the World by EdTech International (2016).  They discuss progressive teaching practices, the wide-scale implementation of change across New Zealand, and its implications for our conception of educational institutions worldwide.
Listen for an in-depth discussion of the rigamarole around academic rigor, including what might be a very surprising -- though nonetheless perfectly sensible -- root of its challenges.

· Student vs. faculty conceptions of rigor
· G.I. infections
· "Summer School"
Part 2 on Jones's firing, including a cranky look at curious statements by NYU, and an uncomfortable look at time traveling through the academy.
Steve takes an in-depth look at NYU's expedited decision to fire distinguished Organic Chemistry professor Dr. Maitland Jones after receiving a petition from students complaining about his course.  What's really at the heart of NYU's actions?  What role did the petition play?  What role should rigor play in education?  And what in the world does the movie Demolition Man have to do with any of this?
Steve welcomes the University of Wyoming's own TK Stoudt and his students, Amy Bezzant, Maddy Davis, and James Roberts.  Hear about the triumph (and trials!) of peer assessment from an educator who's newer to implementing it, and from students who encountered it for the first time.

What really happens when we give Excalibur to Uryens?
Why should you have a campfire in your classroom?
Should Maddy marry an NFL player?

Learn the answers to all that and more!
Ken Bain, author of What the Best College Teachers Do and What the Best College Students Do, joins Headagogy to discuss his latest book, Super Courses: The Future of Teaching and Learning.  The discussion with Bain not only delves into examples of these courses and their relationship with problem-based learning, but also into critical ideas for teaching and learning, such as why "expectation failure" is so absolutely critical.  Learn the steps you need to take to start your own "super course."
In this concluding episode on peer assessment, Steve conveys the research on peer assessment, learning outcomes, and soft skills.  There should be no doubt about its value, especially given, in the words of Walter Lippmann, that "It takes wisdom to understand wisdom. The music means nothing if the audience is deaf."
Comments (1)

Tashi Dendup

Great show. I am going to use REAL while reading news and articles. I will also teach the same to my students.

Dec 13th