Redefining Society and Technology Podcast
By: Marco Ciappelli, ITSPmagazine
© Copyright 2015-2025 ITSPmagazine, Inc. All Rights Reserved
Description
Musing On Society, Technology, and Cybersecurity | Hosted by Marco Ciappelli
Let’s face it: the future is now. We live in a hybrid analog-digital society, and it’s time to stop ignoring the profound impact technology has on our lives.
The line between the physical and virtual worlds? It’s no longer real — just a figment of our imagination. We’re constantly juggling convenience, privacy, freedom, security, and even the future of humanity in a precarious balancing act.
There’s no better place than here, and no better time than now, to reflect on our relationship with technology — and redefine what society means in this new age.
219 Episodes
Dr. Steve Mancini: https://www.linkedin.com/in/dr-steve-m-b59a525/
Marco Ciappelli: https://www.marcociappelli.com/

Nothing Has Changed in Cybersecurity Since War Games — And That's Why We're in Trouble

"Nothing has changed."

That's not what you expect to hear from someone with four decades in cybersecurity. The industry thrives on selling the next revolution, the newest threat, the latest solution. But Dr. Steve Mancini—cybersecurity professor, Homeland Security veteran, and Italy's Honorary Consul in Pittsburgh—wasn't buying any of it. And honestly? Neither was I.

He took me back to his Commodore 64 days, writing BASIC war dialers after watching War Games. The method? Dial numbers, find an open line, try passwords until one works. Translate that to today: run an Nmap scan, find an open port, brute force your way in. The principle is identical. Only the speed has changed.

This resonated deeply with how I think about our Hybrid Analog Digital Society. We're so consumed with the digital evolution—the folding screens, the AI assistants, the cloud computing—that we forget the human vulnerabilities underneath remain stubbornly analog. Social engineering worked in the 1930s, it worked when I was a kid in Florence, and it works today in your inbox.

Steve shared a story about a family member who received a scam call. The caller asked if their social security number "had a six in it." A one-in-nine guess. Yet that simple psychological trick led to remote software being installed on their computer. Technology gets smarter; human psychology stays the same.

What struck me most was his observation about his students—a generation so immersed in technology that they've become numb to breaches. "So what?" has become the default response. The data sells, the breaches happen, you get two years of free credit monitoring, and life goes on. Groundhog Day.

But the deeper concern isn't the breaches. It's what this technological immersion is doing to our capacity for critical thinking, for human instinct. Steve pointed out something that should unsettle us: the algorithms feeding content to young minds are designed for addiction, manipulating brain chemistry with endorphin kicks from endless scrolling. We won't know the full effects of a generation raised on smartphones until they're forty, having scrolled through social media for thirty years.

I asked what we can do. His answer was simple but profound: humans need to decide how much they want technology in their lives. Parents putting smartphones in six-year-olds' hands might want to reconsider. Schools clinging to the idea that they're "teaching technology" miss the point—students already know the apps better than their professors. What they don't know is how to think without them.

He's gone back to paper and pencil tests. Old school. Because when the power goes out—literally or metaphorically—you need a brain that works independently.

Ancient cultures, Steve reminded me, built civilizations with nothing but their minds, parchment, and each other. They were, in many ways, a thousand times smarter than us because they had no crutches. Now we call our smartphones "smart" while they make us incrementally dumber.

This isn't anti-technology doom-saying. Neither Steve nor I oppose technological progress. The conversation acknowledged AI's genuine benefits in medicine, in solving specific problems. But this relentless push for the "easy button"—the promise that you don't have to think, just click—that's where we lose something essential.

The ultimate breach, we concluded, isn't someone stealing your data. It's breaching the mind itself. When we can no longer think, reason, or function without the device in our pocket, the hackers have already won—and they didn't need to write a single line of code.

Subscribe to the Redefining Society and Technology podcast. Stay curious. Stay human.

My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Author Kate O'Neill's Book "What Matters Next": AI, Meaning, and Why We Can't Delegate Creativity | Redefining Society and Technology with Marco Ciappelli

Kate O'Neill: https://www.koinsights.com/books/what-matters-next-book/
Marco Ciappelli: https://www.marcociappelli.com/

When Kate O'Neill tells me that AI's most statistically probable outcome is actually its least meaningful one, I realize we're talking about something information theory has known for decades - but nobody's applying to the way we're using ChatGPT.

She's a linguist who became a tech pioneer, one of Netflix's first hundred employees, someone who saw the first graphical web browser and got chills knowing everything was about to change. Her new book "What Matters Next" isn't another panic piece about AI or a blind celebration of automation. It's asking the question nobody seems to want to answer: what happens when we optimize for probability instead of meaning?

I've been wrestling with this myself. The more I use AI tools for content, analysis, brainstorming - the more I notice something's missing. The creativity isn't there. It's brilliant for summarization, execution, repetitive tasks. But there's a flatness to it, a regression to the mean that strips away the very thing that makes human communication worth having.

Kate puts it plainly: "There is nothing more human than meaning-making. From semantic meaning all the way out to the philosophical, cosmic worldview - what matters and why we're here."

Every time we hit "generate" and just accept what the algorithm produces, we're choosing efficiency over meaning. We're delegating the creative process to a system optimized for statistical likelihood, not significance.

She laughs when I tell her about my own paradox - that AI sometimes takes MORE time, not less. There's this old developer concept called "yak shaving," where you spend ten times longer writing a program to automate five steps instead of just doing them. But the real insight isn't about time management. It's about understanding the relationship between our thoughts and the tools we use to express them.

In her book "What Matters Next," Kate's message is that we need to stay in the loop. Use AI for ugly first drafts, sure. Let it expedite workflow. But keep going back and forth, inserting yourself, bringing meaning and purpose back into the process. Otherwise, we create what she calls "garbage that none of us want to exist in the world with."

I wrote recently about the paradox of learning when we rely entirely on machines. If AI only knows what we've done in the past, and we don't inject new meaning into that loop, it becomes closed. It's like doomscrolling through algorithms that only feed you what you already like - you never discover anything new, never grow, never challenge yourself.

We're living in a Hybrid Analog Digital Society where these tools are unavoidable and genuinely powerful. The question isn't whether to use them. It's how to use them in ways that amplify human creativity rather than flatten it, that enhance meaning rather than optimize it away.

The dominant narrative right now is efficiency, productivity, automation. But what if the real value isn't doing things faster - it's doing things that actually matter? Technology should serve humanity's purpose. Not the other way around. And that purpose can't be dictated by algorithms trained on statistical likelihood. It has to come from us, from the messy, unpredictable, meaningful work of being human.

My Newsletter? Yes, of course, it is here: https://www.linkedin.com/newsletters/7079849705156870144/

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
AI in Healthcare: Who Benefits, Who Pays, and Who's at Risk in Our Hybrid Analog Digital Society

🎙️ EXPERT PANEL Hosted By Marco Ciappelli & Sean Martin

Dr. Robert Pearl - Former CEO, Permanente Medical Group; Author, "ChatGPT, MD"
Rob Havasy - Senior Director of Connected Health, HIMSS
John Sapp Jr. - VP & CSO, Texas Mutual Insurance
Jim StClair - VP of Public Health Systems, Altarum
Robert Booker - Chief Strategy Officer, HITRUST

I had one of those conversations recently that reminded me why we do what we do at ITSPmagazine. Not the kind of polite, surface-level exchange you get at most industry events, but a real grappling with the contradictions and complexities that define our Hybrid Analog Digital Society.

This wasn't just another panel discussion about AI in healthcare. This was a philosophical interrogation of who benefits, who pays, and who's at risk when we hand over diagnostic decisions, treatment protocols, and even the sacred physician-patient relationship to algorithms.

The panel brought together some of the most thoughtful voices in healthcare technology: Dr. Robert Pearl, former CEO of the Permanente Medical Group and author of "ChatGPT, MD"; Rob Havasy from HIMSS; John Sapp from Texas Mutual Insurance; Jim StClair from Altarum; and Robert Booker from HITRUST. What emerged wasn't a simple narrative of technological progress or dystopian warning, but something far more nuanced—a recognition that we're navigating uncharted territory where the stakes couldn't be higher.

Dr. Pearl opened with a stark reality: 400,000 people die annually from misdiagnoses in America. Another half million die because we fail to adequately control chronic diseases like hypertension and diabetes. These aren't abstract statistics—they're lives lost to human error, system failures, and the limitations of our current healthcare model. His argument was compelling: AI isn't replacing human judgment; it's filling gaps that human cognition simply cannot bridge alone.

But here's where the conversation became truly fascinating. Rob Havasy described a phenomenon I've noticed across every technology adoption curve we've covered—the disconnect between leadership enthusiasm and frontline reality. Healthcare executives believe AI is revolutionizing their operations, while nurses and physicians on the floor are quietly subscribing to ChatGPT on their own because the "official" tools aren't ready yet. It's a microcosm of how innovation actually happens: messy, unauthorized, and driven by necessity rather than policy.

The ethical dimensions run deeper than most people realize. When we—my co-host Sean Martin and I—asked about liability, the panel's answer was refreshingly honest: we don't know. The courts will eventually decide who's responsible when an AI diagnostic tool leads to harm. Is it the developer? The hospital? The physician who relied on the recommendation? Right now, everyone wants control over AI deployment but minimal liability for its failures. Sound familiar? It's the classic American pattern of innovation outpacing regulation.

John Sapp introduced a phrase that crystallized the challenge: "enable the secure adoption and responsible use of AI." Not prevent. Not rush recklessly forward. But enable—with guardrails, governance, and a clear-eyed assessment of both benefits and risks. He emphasized that AI governance isn't fundamentally different from other technology risk management; it's just another category requiring visibility, validation, and informed decision-making.

Yet Robert Booker raised a question that haunts me: what do we really mean when we talk about AI in healthcare? Are we discussing tools that empower physicians to provide better care? Or are we talking about operational efficiency mechanisms designed to reduce costs, potentially at the expense of the human relationship that defines good medicine?

This is where our Hybrid Analog Digital Society reveals its fundamental tensions. We want the personalization that AI promises—real-time analysis of wearable health data, pharmacogenetic insights tailored to individual patients, early detection of deteriorating conditions before they become crises. But we're also profoundly uncomfortable with the idea of an algorithm replacing the human judgment, intuition, and empathy that we associate with healing.

Jim StClair made a provocative observation: AI forces us to confront the uncomfortable truth about how much of medical practice is actually procedure, protocol, and process rather than art. How many ER diagnoses follow predictable decision trees? How many prescriptions are essentially formulaic responses to common presentations? Perhaps AI isn't threatening the humanity of medicine—it's revealing how much of medicine has always been mechanical, freeing clinicians to focus on the parts that genuinely require human connection.

The panel consensus, if there was one, centered on governance. Not as bureaucratic obstruction, but as the framework that allows us to experiment responsibly, learn from failures without catastrophic consequences, and build trust in systems that will inevitably become more prevalent.

What struck me most wasn't the disagreements—though there were plenty—but the shared recognition that we're asking the wrong question. It's not "AI or no AI?" but "What kind of AI, governed how, serving whose interests, with what transparency, and measured against what baseline?"

Because here's the uncomfortable truth Dr. Pearl articulated: we're comparing AI to an idealized vision of human medical practice that doesn't actually exist. The baseline isn't perfection—it's 400,000 annual misdiagnoses, burned-out clinicians spending hours on documentation instead of patient care, and profound healthcare inequities based on geography and economics.

The question isn't whether AI will transform healthcare. It already is. The question is whether we'll shape that transformation consciously, ethically, and with genuine concern for who benefits and who bears the risks.

Listen to the full conversation and subscribe to stay connected with these critical discussions about technology and society.

Links:
ITSPmagazine: ITSPmagazine.com
Redefining Society and Technology Podcast: redefiningsocietyandtechnologypodcast.com

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
____________
Podcast
Redefining Society and Technology Podcast With Marco Ciappelli
https://redefiningsocietyandtechnologypodcast.com
____________
Host
Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
____________
This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
____________
Title
New Event | Global Space Awards 2025 Honors Captain James Lovell Legacy at Natural History Museum London | A conversation with Sanjeev Gordhan | Redefining Society And Technology Podcast With Marco Ciappelli
____________
Guests:
Sanjeev Gordhan
General Partner @ Type One Ventures | Space, Deep-Tech, Strategy
On LinkedIn: https://www.linkedin.com/in/sanjeev-gordhan-3714b327/
____________
Short Introduction
The inaugural Global Space Awards celebrates the Golden Era of Space on December 5, 2025, at London's Natural History Museum. Hosted by physicist Brian Greene, the event honors Captain James Lovell's legacy and recognizes innovators transforming space from government domain to commercial frontier in our Hybrid Analog Digital Society.
____________
Article
"There are people who make things happen, there are people who watch things happen, and there are people who wonder what happened. To be successful, you need to be a person who makes things happen."

Those words from Captain James Lovell defined his life—from commanding Apollo 13's near-disastrous mission to inspiring generations of space explorers. This December, London's Natural History Museum will host the inaugural Global Space Awards, an event dedicating its first evening to Lovell's extraordinary legacy while celebrating those making things happen in space today.

Sanjeev Gordhan, General Partner at Type One Ventures and part of the Global Space Awards organizing team, joined me to discuss why this moment matters. Not just for space enthusiasts, but for everyone whose lives are being transformed by technologies developed beyond Earth's atmosphere.

"Space is not a sector," Sanj explained. "It's a domain that overrides many sectors—agriculture, pharmaceuticals, defense, telecommunications, connectivity. Things we engage with daily."

The timing couldn't be more significant. We're witnessing what Sanj calls a fundamental shift in space economics. In the 1970s and 80s, launching a kilogram into space cost $70,000-$80,000. Today? Around $3,000. That reduction of more than 20x has transformed space from an exclusive government playground into a commercially viable domain where startups can reach orbit on seed funding.

This democratization of space access is precisely why the Global Space Awards emerged. The industry needed something beyond its echo chambers—a red-carpet moment celebrating excellence across the entire spectrum, from research laboratories to scaling businesses, from breakthrough science to sustainable investments.

The response exceeded all expectations. The first-year event received 516 nominations from 38 countries. Sanj and his team were "gobsmacked"—they'd hoped for maybe 150-200. The overwhelming engagement proved what they suspected: the space community was hungry for recognition that spans the complete journey from laboratory to commercial impact.

What makes this particularly fascinating is how space technology circles back to solve Earth's problems. Consider pharmaceuticals: crystallization processes in microgravity create flawless crystal structures impossible to achieve on Earth. The impact? Chemotherapy treatments that currently require hours-long hospital visits could become subcutaneous injections patients self-administer at home. That's not science fiction—that's research happening now on the International Space Station, waiting for commercial space infrastructure to scale production.

Or agriculture: Earth observation satellites help farmers optimize crop yields, manage water resources, and predict harvests with unprecedented accuracy. Space technology feeding humanity—literally.

The investment mathematics are compelling. For every dollar invested in space innovation, the return to humanity measures around 20x. Not in stock market terms, but in solving problems like food security, medical treatments, climate monitoring, and global connectivity. These aren't abstract future benefits—they're happening now, accelerating as launch costs plummet and commercial operations expand.

The Global Space Awards recognizes this multifaceted reality through eight distinct categories: Playmaker of the Year, Super Scaler, Space Investor, Partnership of the Year, Innovation Breakthrough, Science Breakthrough, Sustainability for Earth, and Sustainability for Space. Each award acknowledges that space progress requires diverse contributions—from the scientists doing foundational research to the investors providing capital, from the engineers building systems to the partnerships bridging sectors.

And then there's the James Lovell Legacy Award, presented to his family at this inaugural event. The choice is deliberate and symbolic. Lovell commanded Apollo 8, the first crewed mission to orbit the Moon, then led Apollo 13's dramatic survival when an oxygen tank exploded en route to the lunar surface. His calm under pressure, innovative problem-solving with limited resources, and unwavering commitment to bringing his crew home safely epitomize what space exploration demands: courage combined with pragmatism, vision tempered by reality.

The Lovell family's response to the tribute captures this spirit perfectly: "His words continue to guide not only our family, but all those who dare to dream beyond the horizon."

That phrase—"dream beyond the horizon"—resonates deeply in our current moment. We're transitioning from the heroic Apollo era to something more complex and perhaps more consequential. Space is becoming infrastructure, not just exploration. The question isn't whether humans will have a permanent presence beyond Earth, but how quickly and sustainably we'll build it.

The Natural History Museum setting adds another layer of meaning. Here's a building celebrating Earth's evolutionary history hosting an event about humanity's next evolutionary step—becoming a spacefaring species. The juxtaposition of dinosaur fossils and rocket technology, of ancient geology and future lunar economies, captures where we stand: creatures evolved on one small planet now reaching beyond it.

Physicist Brian Greene hosting the event is equally symbolic. Not an astronaut or rocket scientist, but someone who makes complex physics comprehensible to non-specialists. Space's future depends on broad understanding, not just specialized expertise. When space technology becomes as mundane as aviation—when we stop thinking about the satellites enabling our GPS or the space-tested materials in our smartphones—that's when the real transformation completes.

Sanj mentioned something that stuck with me: people ask why we spend billions on space when Earth has so many problems. The answer is that space spending helps solve Earth's problems. Better farming through satellite data. Life-saving pharmaceuticals manufactured in microgravity. Climate monitoring. Disaster response. Global internet access for remote regions. The false choice between Earth and space collapses when you understand space as a domain enabling solutions, not a destination draining resources.

Looking forward, the opportunities expand exponentially. We haven't even begun exploiting lunar resources or manufacturing in zero gravity at scale. The next 5-15 years will bring benefits we can barely imagine today—but we must start now. Space infrastructure takes time. The ISS took over a decade to build. Commercial space stations, lunar bases, and orbital manufacturing facilities will require similar long-term commitments.

That's why events like the Global Space Awards matter. They connect the dots between research and commerce, between investment and impact, between legacy and future. They remind us that space isn't just about rockets and astronauts—it's about chemists and farmers, investors and engineers, visionaries and pragmatists all working toward the same horizon.

The finalists will be announced from the stratosphere—literally, on a screen carried by balloon—because why not? If you're celebrating space, do it with flair.

As our conversation ended, I found myself hoping to attend. Not because I'm a space professional (I'm not), but because I'm fascinated by how technology reshapes society. And space technology is reshaping everything, whether we notice it or not. In our Hybrid Analog Digital Society, space represents the ultimate extension of human capability—using technology not to replace our humanity but to expand what humanity can accomplish.

Captain Lovell's quote rings true: some make things happen, some watch, some wonder. The Global Space Awards celebrates those making things happen. The rest of us should at least watch—because what happens in space increasingly happens to all of us.

Subscribe to continue these conversations about technology, society, and humanity's next chapter. Because the future is being built right now, and it's more exciting than most people realize.
____________
About the event
GLOBAL SPACE AWARDS DEDICATES EVENING TO HONOR THE LEGACY AND EXTRAORDINARY CONTRIBUTIONS OF CAPTAIN JAMES LOVELL
Inaugural James Lovell Legacy Award Introduced and Presented to the Lovell Family
Red-Carpet Awards Event Taking Place on December 5 at The Natural History Museum, London
London, U.K. – October 29, 2025 – The Global Space Awards (GSA), the first international event dedicated to celebrating the achie
____________Podcast Redefining Society and Technology Podcast With Marco Ciappellihttps://redefiningsocietyandtechnologypodcast.com ____________Host Marco CiappelliCo-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication l Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍WebSite: https://marcociappelli.comOn LinkedIn: https://www.linkedin.com/in/marco-ciappelli/____________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak: https://itspm.ag/itspbcweb____________TitleNew Book | STREAMING WARS: How Getting Everything We Want Changed Entertainment Forever | Journalist Charlotte Henry Explains How Streaming Changed Entertainment Forever | Redefining Society And Technology Podcast With Marco Ciappelli____________Guests:Charlotte HenryAuthor, journalist, broadcaster who created and runs The Addition newsletter looking at the crossover between media and tech.The Media Society https://theaddition.substack.com/On LinkedIn: https://www.linkedin.com/in/charlotteahenry/____________Short Introduction Journalist Charlotte Henry reveals how streaming transformed entertainment in her new book "Streaming Wars: How Getting Everything We Want Changed Entertainment Forever." From Netflix's rise to the 2023 Hollywood strikes, she examines how we consume media, express ourselves, and the surprising return to "old-fashioned" weekly releases in our Hybrid Analog Digital Society.____________Article We used to learn who someone was by looking at their record collection. Walk into their home, scan the vinyl on the shelves, and you'd know—this person loves Metallica, that person's into jazz, someone else collected every Beatles album ever pressed. 
Media was how we expressed ourselves, how we told our story without saying a word.That's gone now. And we might not have noticed it disappearing.Charlotte Henry, a London-based journalist and author of "Streaming Wars: How Getting Everything We Want Changed Entertainment Forever," sat down with me to discuss something most of us experience daily but rarely examine deeply: how streaming has fundamentally altered not just entertainment, but how we relate to media and each other."You can't pop over to someone's house after a first date and see their Spotify playlist," Charlotte pointed out. She's right—you can't browse someone's Netflix queue the way you could their DVD collection, can't judge their Kindle library the way you could scan their bookshelf. We've lost that intimate form of self-expression, that casual cultural reveal that came from physical media.But Charlotte's book isn't a nostalgic lament. It's something far more valuable: a snapshot of this exact moment in media history, a line in the sand marking where we are before everything changes again. And in technology and media, change is the only constant.Her starting point is deliberate—the 2023 Hollywood strikes. Not the beginning of streaming's story, but perhaps its most symbolic moment. Writers, actors, costume designers, transportation crews, everyone who keeps Hollywood running stood up and said: this isn't working. The frustrations that exploded that summer had been building for years, all stemming from how streaming fundamentally disrupted the entertainment economy.My wife works in Hollywood's costume department. She lived through those strikes, felt the direct impact of an industry transformed. 
The changes Charlotte documents aren't abstract—they're affecting real careers, real livelihoods, real creative work.What struck me most about our conversation was how Charlotte brings together all of streaming—not just Netflix and Disney+, but Twitch, Spotify, Apple Music, the specialized services for heavy metal or horror movies, the entire ecosystem of on-demand media. No one had told this complete story before, and it needed telling precisely because it's changing so rapidly.Consider this: streaming is both revolutionary and circular. We cut the cord, abandoned cable packages, embraced freedom of choice. But now? The streaming services are rebundling themselves into packages that look suspiciously like the cable bundles we rejected. We've come full circle, just with different branding.The same thing is happening with release schedules. Remember when Netflix revolutionized everything by dropping entire seasons at once? Binge-watching became our cultural norm. But now services are reverting to weekly releases—Stranger Things spread across quarters to ensure multiple subscription payments, Apple TV+ releasing shows one episode per week like it's 1995. We're going back to the future.Charlotte's analysis of the consumer psychology is fascinating. We've been trained to expect everything, everywhere, immediately. Not just TV shows—beer subscription services, meal kits, next-day Amazon delivery. We subscribe rather than own. We stream rather than collect. And that shift has changed not just how we consume media, but how we think about possession, patience, and value.The economic impact goes deeper than most realize. Writers who once created 24-episode seasons now produce 8-episode limited series but remain contractually bound to exclusivity, earning less while being unable to take other work. 
Meanwhile, streamers pump money into content, taking risks on shows that traditional networks never would have greenlit, creating opportunities for voices that wouldn't have been heard before.It's complicated. Like all technological transformation, streaming brings both disruption and opportunity, loss and gain.The data-driven nature of streaming is particularly interesting. Charlotte notes that often the most-watched content isn't the prestigious shows we discuss—it's the mediocre background programming people half-watch while scrolling their phones. Netflix figured this out and adjusted strategy accordingly. They still want the big shows, the water-cooler moments, but they've also embraced the second-screen reality of modern viewing.And then there's AI—the elephant in every media conversation now. Charlotte dedicates a chapter to it because she had to. We're on the verge of being able to create Netflix-quality content with minimal human involvement. The 2023 strikes were partly about this, negotiating protections around AI use of actors' likenesses and voices.But here's where Charlotte and I found common ground: we both believe AI might actually increase the value of human-made work. When everything can be generated, the authentically human becomes precious. The imperfect becomes valuable. The emotional becomes irreplaceable.I'm seeing signs of this already. Bookstores packed with kids excited about physical books. Vinyl sales continuing to rise. People craving the tangible, the real, the human. Maybe we'll look back at this moment and recognize it as the turning point—not where AI replaced human creativity, but where we collectively decided what we value most.Charlotte's book captures this inflection point perfectly. In our Hybrid Analog Digital Society, we're navigating between worlds—the physical and virtual, the owned and subscribed, the patient and immediate, the human and artificial. 
Understanding where we are now helps us choose where we go next.

As we wrapped our conversation, Charlotte and I bonded over our shared love of analog media—the CDs behind her, the vinyl behind those, my own collections scattered between Los Angeles and Florence. Two media nerds on opposite sides of an ocean, connected by technology that would have seemed like science fiction to our younger selves, discussing how that very technology is changing everything.

The streaming wars aren't over. They're just beginning. Charlotte Henry's book gives us the map to understand the battlefield.

Subscribe to continue these conversations about media, technology, and society. Because in a world of infinite content, thoughtful analysis of what it all means becomes the rarest commodity of all.

____________
About the book
Streaming Wars: How Getting Everything We Wanted Changed Entertainment Forever

Streaming didn't just change what we watch. It changed who holds the power in entertainment.

Streaming Wars reveals how platforms like Netflix, Disney+, Apple TV+, Spotify and Amazon Prime have transformed more than just entertainment. They've rewritten the rules of streaming services, media economics, power and visibility. Journalist Charlotte Henry explores what's really going on behind your screen, from Hollywood's 2023 strikes to the rise of ad-supported tiers, the global race for live sports and the slow fade of traditional TV. With a sharp, accessible lens, Henry breaks down how AI, rebundling and fierce platform competition are driving a new era of streaming and why this shift matters now. Perfect for anyone who wants to understand how streaming is reshaping culture, business and what we watch.

Find it on Amazon: https://www.amazon.com/Streaming-Wars-Getting-Everything-Entertainment/dp/1398622559

____________
Enjoy. Reflect.
Share with your fellow humans.

And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You’re listening to this through the Redefining Society & Technology Podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

____________
End of transmission

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company?
👉 https://www.studioc60.com
____________
Podcast
Redefining Society and Technology Podcast With Marco Ciappelli
https://redefiningsocietyandtechnologypodcast.com
____________
Host
Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
____________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
____________
Title
New Book: SPIES, LIES, AND CYBER CRIME | Former FBI Spy Hunter Eric O'Neill Explains How Cybercriminals Use Espionage Techniques to Attack Us | Redefining Society And Technology Podcast With Marco Ciappelli
____________
Guests:
Eric O'Neill
Keynote Speaker, Cybersecurity Expert, Spy Hunter, Bestselling Author, Attorney
On LinkedIn: https://www.linkedin.com/in/eric-m-oneill/
Find the book on Eric's website: https://ericoneill.net

Sean Martin, CISSP
GTM Advisor | Journalist, Analyst, Technologist | Cybersecurity, Risk, Operations | Brand & Content Marketing | Musician, Photographer, Professor, Moderator | Co-Founder, ITSPmagazine & Studio C60
Website: https://www.seanmartin.com
____________
Short Introduction
Former FBI counterintelligence specialist Eric O'Neill, who caught the most damaging spy in US history, reveals how cyber criminals use traditional espionage techniques to attack us. In his new book "Spies Lies and Cyber Crime," he exposes the $14 trillion cybercrime industry and teaches us to recognize attacks in our Hybrid Analog Digital Society.
____________
Article

Trust has become the rarest commodity on Earth.
We can't trust what we see, what we hear, or what we read anymore. And the people exploiting that crisis? They learned their craft from spies.

Eric O'Neill knows this better than most. He's the former FBI counterintelligence specialist who went undercover—as himself—to catch Robert Hanssen, Russia's top spy embedded in the FBI for 22 years. That story became his first book "Gray Day" and the movie "Breach." But five years later, Eric's back with a very different kind of warning.

His new book "Spies Lies and Cyber Crime" isn't another spy memoir. It's a field manual for surviving in a world where criminal syndicates have weaponized traditional espionage techniques against every single one of us. And business is booming—to the tune of $14 trillion annually, making cybercrime the third largest economy on Earth, bigger than Japan and Germany combined.

"They're not attacking our computers," Eric told me during our conversation. "They're attacking you and me personally. They're fooling us into just handing everything over."

The pandemic accelerated everything. We were thrown into a completely virtual environment before security was ready, and that moment marks the biggest single rise of cybercrime in history. While most of us were stuck at home adjusting to Zoom calls, cyber criminals were innovating faster than anyone else, studying how we communicate, work, and associate in digital spaces.

Here's what makes Eric's perspective invaluable: he understands both sides of this war. He spent his FBI career using traditional counterintelligence techniques—deception, impersonation, infiltration, confidence schemes, exploitation, and destruction—to catch spies. Now he watches cyber criminals deploy those exact same tactics against us through our screens.

The top cybercrime gangs have actually hired active intelligence officers from countries like Russia, China, and Iran. These spies moonlight as cyber criminals, bringing state-level tradecraft to street-level scams.
It's sophisticated, organized, and shockingly effective.

Consider the romance scam Eric describes in the book: a widowed grandfather receives a simple text saying "Hey." Being polite, he responds "Sorry, wrong number." That single response marks him as a target. Over weeks, a "friendship" develops. His new best friend chats with him daily, learns his hopes and dreams, then introduces him to an "investment opportunity."

Within months, the grandfather has invested his entire pension—hundreds of thousands of dollars—into what looks like a legitimate cryptocurrency platform with secure logins and rising account values. When he tries to withdraw money for a family vacation, his friend vanishes. The company doesn't exist. The website was a dummy. Everything is gone.

That's not a quick phishing scam—that's a confidence scheme straight from the spy playbook, adapted for our Hybrid Analog Digital Society where we live in little boxes on screens, increasingly disconnected from physical reality.

The sophistication extends to ransomware operations. These aren't kids in hoodies—they're organized businesses with affiliate programs, marketing departments, tech support teams, and customer service. They're polite as they negotiate your ransom. They help you decrypt your data after you pay. Some even donate to charities. And yes, many victims get hit again a month later by the same group.

What struck me most about our conversation was Eric's emphasis on preparation over panic. He's developed a methodology called PAID: Prepare (ahead of the attack), Assess (constantly look for threats), Investigate (when you identify something suspicious), and Decide (take action).

"You don't want to be in a dark alley before you think about physical security," he explained. "Same with cyber. Don't wait until you're in the middle of a ransomware attack to build your defenses. That's ten times more expensive."

The scale of this threat hasn't fully registered with most people.
Cybercrime is projected to hit $18 trillion next year, yet individuals and companies alike operate as if attacks are rare events that happen to other people. The reality? It's not if you'll be attacked, it's when.

Eric wrote "Spies Lies and Cyber Crime" as if you're taking a training course at the FBI Academy for Cyber Criminals. The first part teaches you to think like a bad guy—to recognize deception, impersonation, and confidence schemes. The second part gives you the tools to defend yourself, whether you're protecting your family's data or running enterprise security.

One detail Eric insists on: every parent must read chapters 10 and 11 with their teenagers. The book addresses cyberbullying, exploitation, and social media dangers that have led to teen suicide. Some conversations are that critical.

As we closed our conversation, Eric demonstrated how vulnerable we've become. "How do you even know you're talking to me?" he asked. "I could be sitting here in my pajamas, typing what I want my avatar to say." He's right—deepfakes are that sophisticated now. His advice? Ask everyone in a video meeting to pick up a pen or wave their hands. Avatars can't do that yet.

The word "yet" hangs heavy in that sentence.

We're moving into a world where trust is the most valuable thing on Earth, and cyber criminals are actively destroying it for profit. Eric O'Neill spent his career catching spies who betrayed their country. Now he's teaching us to catch criminals who are betraying all of us, one click at a time.

Subscribe to continue these essential conversations about security, technology, and society. In our increasingly digital world, understanding how cyber criminals think isn't optional anymore—it's survival.

____________
About the book
Spies, Lies and Cybercrime

Spies, Lies and Cybercrime will appeal to every person curious or frightened by the prospect of a cyberattack, from students and retirees to the C-Suite and boardroom.
Readers will take up arms in the current cyber war instead of fleeing while the village burns. They will become email archeologists and threat hunters, questioning every movement online and spotting the attackers hiding in every shadow. They will learn how to embed cybersecurity intrinsically into the culture and technology of their businesses and lives. Only then can we begin to move the needle toward a world safe from cyber-attacks.

Find it on: https://ericoneill.net

____________
Enjoy. Reflect. Share with your fellow humans.

And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You’re listening to this through the Redefining Society & Technology Podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

____________
End of transmission

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company?
👉 https://www.studioc60.com

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Everyone Is Protecting My Password, But Who Is Protecting My Toilet Paper? - Interview with Amberley Brady | AISA CyberCon Melbourne 2025 Coverage | On Location with Sean Martin and Marco Ciappelli

AISA CyberCon Melbourne | October 15-17, 2025

Empty shelves trigger something primal in us now. We've lived through the panic, the uncertainty, the realization that our food supply isn't as secure as we thought. Amberley Brady hasn't forgotten that feeling, and she's turned it into action.

Speaking with her from Florence to Sydney ahead of AISA CyberCon in Melbourne, I discovered someone who came to cybersecurity through an unexpected path—studying law, working in policy, but driven by a singular passion for food security. When COVID-19 hit Australia in early 2020 and grocery store shelves emptied, Amberley couldn't shake the question: what happens if this keeps happening?

Her answer was to build realfoodprice.com.au, a platform tracking food pricing transparency across Australia's supply chain. It's based on the Hungarian model, which within three months saved consumers 50 million euros simply by making prices visible from farmer to wholesaler to consumer. The markup disappeared almost overnight when transparency arrived.

"Once you demonstrate transparency along the supply chain, you see where the markup is," Amberley explained. She gave me an example that hit home: watermelon farmers were getting paid 40 cents per kilo while their production costs ran between $1.00 and $1.50. Meanwhile, consumers paid $2.50 to $2.99 year-round. Someone in the middle was profiting while farmers lost money on every harvest.

But this isn't just about fair pricing—it's about critical infrastructure that nobody's protecting. Australia produces food for 70 million people, far more than its own population needs.
That food moves through systems, across borders, through supply chains that depend entirely on technology most farmers never think about in cybersecurity terms.

The new autonomous tractors collecting soil data? That information goes somewhere. The sensors monitoring crop conditions? Those connect to systems someone else controls. China recognized this vulnerability years ago—with 20% of the world's population but only 7% of arable land, they understood that food security is national security.

At CyberCon, Amberley is presenting two sessions that challenge the cybersecurity community to expand their thinking. "Don't Outsource Your Thinking" tackles what she calls "complacency creep"—our growing trust in AI that makes us stop questioning, stop analyzing with our gut instinct. She argues for an Essential Nine in Australia's cybersecurity framework, adding the human firewall to the technical Essential Eight.

Her second talk, cheekily titled "Everyone is Protecting My Password, But No One's Protecting My Toilet Paper," addresses food security directly. It's provocative, but that's the point. We saw what happened in Japan recently with the rice crisis—the same panic buying, the same distrust, the same empty shelves that COVID taught us to fear.

"We will run to the store," Amberley said. "That's going to be human behavior because we've lived through that time." And here's the cybersecurity angle: those panics can be manufactured. A fake image of empty shelves, an AI-generated video, strategic disinformation—all it takes is triggering that collective memory.

Amberley describes herself as an early disruptor in the agritech cybersecurity space, and she's right. Most cybersecurity professionals think about hospitals, utilities, financial systems.
They don't think about the autonomous vehicles in fields, the sensor networks in soil, the supply chain software moving food across continents.

But she's starting the conversation, and CyberCon's audience—increasingly diverse, including people from HR, risk management, and policy—is ready for it. Because at the end of the day, everyone has to eat. And if we don't start thinking about the cyber vulnerabilities in how we grow, move, and price food, we're leaving our most basic need unprotected.

AISA CyberCon Melbourne runs October 15-17, 2025
Virtual coverage provided by ITSPmagazine

GUEST:
Amberley Brady, Food Security & Cybersecurity Advocate, Founder of realfoodprice.com.au | On LinkedIn: https://www.linkedin.com/in/amberley-b-a62022353/

HOSTS:
Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to share an Event Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf
Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
Beyond Blame: Navigating the Digital World with Our Kids

AISA CyberCon Melbourne | October 15-17, 2025

There's something fundamentally broken in how we approach online safety for young people. We're quick to point fingers—at tech companies, at schools, at kids themselves—but Jacqueline Jayne (JJ) wants to change that conversation entirely.

Speaking with her from Florence while she prepared for her session at AISA CyberCon Melbourne this week, it became clear that JJ understands what many in the cybersecurity world miss: this isn't a technical problem that needs a technical solution. It's a human problem that requires us to look in the mirror.

"The online world reflects what we've built for them," JJ told me, referring to our generation. "Now we need to step up and help fix it."

Her session, "Beyond Blame: Keeping Our Kids Safe Online," tackles something most cybersecurity professionals avoid—the uncomfortable truth that being an IT expert doesn't automatically make you equipped to protect the young people in your life. Last year's presentation at CyberCon drew a full house, with nearly every hand raised when she asked who came because of a kid in their world.

That's the fascinating contradiction JJ exposes: rooms full of cybersecurity professionals who secure networks and defend against sophisticated attacks, yet find themselves lost when their own children navigate TikTok, Roblox, or encrypted messaging apps.

The timing couldn't be more relevant. With Australia implementing a social media ban for anyone under 16 starting December 10, 2025, and similar restrictions appearing globally, parents and carers face unprecedented challenges. But as JJ points out, banning isn't understanding, and restriction isn't education.

One revelation from our conversation particularly struck me—the hidden language of emojis. What seems innocent to adults carries entirely different meanings across demographics, from teenage subcultures to, disturbingly, predatory networks online.
An explosion emoji doesn't just mean "boom" anymore. Context matters, and most adults are speaking a different digital dialect than their kids.

JJ, who successfully guided her now 19-year-old son through the gaming and social media years, isn't offering simple solutions because there aren't any. What she provides instead are conversation starters, resources tailored to different age groups, and even AI prompts that parents can customize for their specific situations.

The session reflects a broader shift happening at events like CyberCon. It's no longer just IT professionals in the room. HR representatives, risk managers, educators, and parents are showing up because they've realized that digital safety doesn't respect departmental boundaries or professional expertise.

"We were analog brains in a digital world," JJ said, capturing our generational position perfectly. But today's kids? They're born into this interconnectedness, and COVID accelerated everything to a point where taking it away isn't an option.

The real question isn't who to blame. It's what role each of us plays in creating a safer digital environment. And that's a conversation worth having—whether you're at the Convention and Exhibition Center in Melbourne this week or joining virtually from anywhere else.

AISA CyberCon Melbourne runs October 15-17, 2025
Virtual coverage provided by ITSPmagazine
___________
GUEST:
Jacqueline (JJ) Jayne, Reducing human error in cyber and teaching 1 million people online safety. On LinkedIn: https://www.linkedin.com/in/jacquelinejayne/

HOSTS:
Sean Martin, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder, ITSPmagazine and Studio C60 | Website: https://www.marcociappelli.com

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to share an Event Briefing as part of our event coverage?
Learn More 👉 https://itspm.ag/evtcovbrf
Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
⸻
Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
______
Title: AI Creativity Expert Reveals Why Machines Need More Freedom - Creative Machines: AI, Art & Us Book Interview | A Conversation with Author Maya Ackerman | Redefining Society And Technology Podcast With Marco Ciappelli
______
Guest: Maya Ackerman, PhD.
Generative AI Pioneer | Author | Keynote Speaker
On LinkedIn: https://www.linkedin.com/in/mackerma/
Website: http://www.maya-ackerman.com
_____
Short Introduction:
Dr. Maya Ackerman, AI researcher and author of "Creative Machines: AI, Art, and Us," challenges our assumptions about artificial intelligence and creativity. She argues that ChatGPT is intentionally limited, that hallucinations are features not bugs, and that we must stop treating AI as an all-knowing oracle in our Hybrid Analog Digital Society.
_____
Article

Dr. Maya Ackerman is a pioneer in the generative AI industry, associate professor of Computer Science and Engineering at Santa Clara University, and co-founder/CEO of Wave AI, one of the earliest generative AI startups. Ackerman has been researching generative AI models for text, music and art since 2014, and has been an early advocate for human-centered generative AI, bringing awareness to the power of AI to profoundly elevate human creativity. Under her leadership as co-founder and CEO, WaveAI has emerged as a leader in musical AI, benefiting millions of artists and creators with their products LyricStudio and MelodyStudio.

Dr. Ackerman's expertise and innovative vision have earned her numerous accolades, including being named a "Woman of Influence" by the Silicon Valley Business Journal. She is regularly featured in prestigious media outlets and has spoken on notable stages around the world, such as the United Nations, IBM Research, and Stanford University. Her insights into the convergence of AI and creativity are shaping the future of both technology and music.
A University of Waterloo PhD and Caltech Postdoc, her unique blend of scholarly rigor and entrepreneurial acumen makes her a sought-after voice in discussions about the practical and ethical implications of AI in our rapidly evolving digital world.

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
⸻ Podcast Summary ⸻
I had one of those conversations that makes you question everything you thought you knew about artificial intelligence and creativity. Dr. Maya Ackerman, generative AI pioneer and author of "Creative Machines: AI, Art, and Us," walked me through why the everyday AI tools millions of us rely on are intentionally held back from being truly creative.
⸻ Article ⸻
We talk about AI hallucinations like they're bugs that need fixing. Glitches in the matrix. Errors to be eliminated. But what if we've got it completely backward?

Dr. Maya Ackerman sat in front of her piano—a detail that matters more than you'd think—and told me something that made me question everything I thought I understood about artificial intelligence and creativity. The AI we use every day, the ChatGPT that millions rely on for everything from writing emails to generating ideas, is intentionally held back from being truly creative.

Let that sink in for a moment. ChatGPT, the tool millions use daily, is designed to be convergent rather than divergent.
It's built to replace search engines, to give us "correct" answers, to be an all-knowing oracle. And that's exactly the problem.

Maya's journey into this field began ten years ago, long before generative AI became the buzzword du jour. Back in 2015, she made what her employer called a "risky decision"—switching her research focus to computational creativity, the academic precursor to what we now call generative AI. By 2017, she'd launched one of the earliest generative AI startups, WaveAI, helping people write songs. Investors told her the whole direction didn't make sense. Then came late 2022, and suddenly everyone understood.

What fascinates me about Maya's perspective is how she frames AI as humanity's collective consciousness made manifest. We wrote, we created the printing press, we built the internet, we filled it with our knowledge and our forums and our social media—and then we created a functioning brain from it. As she puts it, we can now talk with humanity's collective consciousness, including what Carl Jung called the collective shadow—both the brilliance and the biases.

This is where our conversation in our Hybrid Analog Digital Society gets uncomfortable but necessary. When AI exhibits bias, when it hallucinates, when it creates something that disturbs us—it's reflecting us back to ourselves. It learned from our data, our patterns, our collective Western consciousness. We participate in these biases to various degrees, whether we admit it or not. AI becomes a mirror we can't look away from.

But here's where Maya's argument becomes revolutionary: we need to stop wanting AI to be perfect. We need to embrace its capacity to hallucinate, to be imaginative, to explore new possibilities. The word "hallucination" itself needs reclaiming.
In both humans and machines, hallucination represents the courage to go beyond normal boundaries, to re-envision reality in ways that might work better for us.

The creative process requires divergence—a vast open space of new possibilities where you don't know in advance what will have value. It takes bravery, guts, and willingness to fall flat on your face. But ChatGPT isn't built for that. It's designed to follow patterns, to be consistent, to give you the same ABAB rhyming structure every time you ask for lyrics. Try using it for creative writing, and you'll notice the template, the recognizable vibe that becomes stale after a few uses.

Maya argues that machines designed specifically for creativity—like Midjourney for images or her own WaveAI for music—are far more creative than ChatGPT precisely because they're built to be divergent rather than convergent. They're allowed to get things wrong, to be imaginative, to explore. ChatGPT's creativity is intentionally kept down because there's an inherent conflict between being an all-knowing oracle and being creative.

This brings us to a dangerous illusion we're collectively buying into: the idea that AI can be our arbitrator of truth. Maya grew up on three continents before age 13, and she points out that World War II is talked about so differently across cultures you wouldn't recognize it as the same historical event. Reality isn't simple. The "truth" doesn't exist for most things that matter. Yet we're building AI systems that present themselves as having definitive answers, when really they're just expressing a Western perspective that aligns with their shareholders' interests.

What concerns me most from our conversation is Maya's observation that some people are already giving up their thinking to these machines. When she suggests they come up with their own ideas without using ChatGPT, they look at her like she's crazy. They honestly believe the machine is smarter than them.
This collective hallucination—that we've built ourselves a God—is perhaps more dangerous than any individual AI capability.

The path forward, Maya argues, requires us to wake up. We need diverse AI tools built for specific purposes rather than one omnipotent system. We need machines designed to collaborate with humans and elevate human intelligence rather than foster dependence. We need to stop the consolidation of power that's creating copies of the same convergent thinking, and instead embrace the diversity of human imagination.

As someone who works at the intersection of technology and society, I find Maya's perspective refreshingly honest. She's not trying to sell us on AI's limitless potential, nor is she fear-mongering about its dangers. She's asking us to see it clearly—as powerful technology that's at least as flawed as we are, neither God nor demon, just a mind among minds.

Her book "Creative Machines: AI, Art, and Us" releases October 14, 2025, and it promises to rewrite the narrative from an informed insider's perspective rather than someone with something to gain from public belief. In our rapidly evolving Hybrid Analog Digital Society, we need more voices like Maya's—voices that challenge us to think differently about the tools we're building and the future we're creating.

Subscribe to continue these essential conversations about creativity, consciousness, and our coexistence with increasingly capable machines. Because the real question isn't whether machines can be creative—it's whether we'll have the wisdom to let them be.

__________________
Enjoy. Reflect.
Share with your fellow humans.

And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

End of transmission.
____________________________
Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company?
👉 https://www.studioc60.com
⸻
Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____
Newsletter: Musing On Society And Technology
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/
_____
Watch on Youtube: https://youtu.be/nFn6CcXKMM0
_____
My Website: https://www.marcociappelli.com
_____________________________
This Episode’s Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli
Reflections from Our Hybrid Analog-Digital Society

For years on the Redefining Society and Technology Podcast, I've explored a central premise: we live in a hybrid analog-digital society where the line between physical and virtual has dissolved into something more complex, more nuanced, and infinitely more human than we often acknowledge.

Introducing a New Series: Analog Minds in a Digital World: Reflections from Our Hybrid Analog-Digital Society

Part II: Lo-Fi Music and the Art of Imperfection — When Technical Limitations Become Creative Liberation

I've been testing small speakers lately. Nothing fancy—just little desktop units that cost less than a decent dinner. As I cycled through different genres, something unexpected happened. Classical felt lifeless, missing all its dynamic range. Rock came across harsh and tinny. Jazz lost its warmth and depth. But lo-fi? Lo-fi sounded... perfect.

Those deliberate imperfections—the vinyl crackle, the muffled highs, the compressed dynamics—suddenly made sense on equipment that couldn't reproduce perfection anyway. The aesthetic limitations of the music matched the technical limitations of the speakers.
It was like discovering that some songs were accidentally designed for constraints I never knew existed.
This moment sparked a bigger realization about how we navigate our hybrid analog-digital world: sometimes our most profound innovations emerge not from perfection, but from embracing limitations as features.
Lo-fi wasn't born in boardrooms or designed by committees. It emerged from bedrooms, garages, and basement studios where young musicians couldn't afford professional equipment. The 4-track cassette recorder—that humble Portastudio that let you layer instruments onto regular cassette tapes for a fraction of what professional studio time cost—became an instrument of democratic creativity. Suddenly, anyone could record music at home. Sure, it would sound "imperfect" by industry standards, but that imperfection carried something the polished recordings lacked: authenticity.
The Velvet Underground recorded on cheap equipment and made it sound revolutionary—so revolutionary that, as the saying goes, they didn't sell many records, but everyone who bought one started a band. Pavement turned bedroom recording into art. Beck brought lo-fi to the mainstream with "Mellow Gold." These weren't artists settling for less—they were discovering that constraints could breed creativity in ways unlimited resources never could.
Today, in our age of infinite digital possibility, we see a curious phenomenon: young creators deliberately adding analog imperfections to their perfectly digital recordings. They're simulating tape hiss, vinyl scratches, and tube saturation using software plugins. We have the technology to create flawless audio, yet we choose to add flaws back in.
What does this tell us about our relationship with technology and authenticity?
There's something deeply human about working within constraints. Twitter's original 140-character limit didn't stifle creativity—it created an entirely new form of expression.
Instagram's square format—a deliberate homage to Polaroid's instant film—forced photographers to think differently about composition. Think about that for a moment: Polaroid's square format was originally a technical limitation of instant film chemistry and optics, yet it became so aesthetically powerful that decades later, a digital platform with infinite formatting possibilities chose to recreate that constraint. Even more, Instagram added filters that simulated the color shifts, light leaks, and imperfections of analog film. We had achieved perfect digital reproduction, and immediately started adding back the "flaws" of the technology we'd left behind.
The same pattern appears in video: Super 8 film gave you exactly 3 minutes and 12 seconds per cartridge at standard speed—grainy, saturated, light-leaked footage that forced filmmakers to be economical with every shot. Today, TikTok recreates that brevity digitally, spawning a generation of micro-storytellers who've mastered the art of the ultra-short form, sometimes even adding Super 8-style filters to their perfect digital video.
These platforms succeeded not despite their limitations, but because of them. Constraints force innovation. They make the infinite manageable. They create a shared language of creative problem-solving.
Lo-fi music operates on the same principle. When you can't capture perfect clarity, you focus on capturing perfect emotion. When your equipment adds character, you learn to make that character part of your voice. When technical perfection is impossible, artistic authenticity becomes paramount.
This is profoundly relevant to how we think about artificial intelligence and human creativity today.
As AI becomes capable of generating increasingly "perfect" content—flawless prose, technically superior compositions, aesthetically optimized images—we find ourselves craving the beautiful imperfections that mark something as unmistakably human.
Walking through any record store today, you'll see teenagers buying vinyl albums they could stream in perfect digital quality for free. They're choosing the inconvenience of physical media, the surface noise, the ritual of dropping the needle. They're purchasing imperfection at a premium.
This isn't nostalgia—most of these kids never lived in the vinyl era. It's something deeper: a recognition that perfect reproduction might not equal perfect experience. The crackle and warmth of analog playback creates what audiophiles call "presence"—a sense that the music exists in the same physical space as the listener.
Lo-fi music replicates this phenomenon in digital form. It takes the clinical perfection of digital audio and intentionally degrades it to feel more human. The compression, the limited frequency range, the background noise—these aren't bugs, they're features. They create the sonic equivalent of a warm embrace.
In our hyperconnected, always-optimized digital existence, lo-fi offers something precious: permission to be imperfect. It's background music that doesn't demand your attention, ambient sound that acknowledges life's messiness rather than trying to optimize it away.
Here's where it gets philosophically interesting: we're using advanced digital technology to simulate the limitations of obsolete analog technology. Young producers spend hours perfecting their "imperfect" sound, carefully curating randomness, precisely engineering spontaneity.
This creates a fascinating paradox. Is simulated authenticity still authentic?
When we use AI-powered plugins to add "vintage" character to our digital recordings, are we connecting with something real, or just consuming a nostalgic fantasy?
I think the answer lies not in the technology itself, but in the intention behind it. Lo-fi creators aren't trying to fool anyone—the artifice is obvious. They're creating a shared aesthetic language that values emotion over technique, atmosphere over precision, humanity over perfection.
In a world where algorithms optimize everything for maximum engagement, lo-fi represents a conscious choice to optimize for something else entirely: comfort, focus, emotional resonance. It's a small rebellion against the tyranny of metrics.
As artificial intelligence becomes increasingly capable of generating "perfect" content, the value of obviously human imperfection may paradoxically increase. The tremor in a hand-drawn line, the slight awkwardness in authentic conversation, the beautiful inefficiency of analog thinking—these become markers of genuine human presence.
The challenge isn't choosing between analog and digital, perfection and imperfection. It's learning to consciously navigate between them, understanding when limitations serve us and when they constrain us, recognizing when optimization helps and when it hurts.
My small speakers taught me something important: sometimes the best technology isn't the one with the most capabilities, but the one whose limitations align with our human needs. Lo-fi music sounds perfect on imperfect speakers because both embrace the same truth—that beauty often emerges not from the absence of flaws, but from making peace with them.
In our quest to build better systems, smarter algorithms, and more efficient processes, we might occasionally pause to ask: what are we optimizing for? And what might we be losing in the pursuit of digital perfection?
The lo-fi phenomenon—and its parallels in photography, video, and every art form we've digitized—reveals something profound about human nature.
We are not creatures built for perfection. We are shaped by friction, by constraint, by the beautiful accidents that occur when things don't work exactly as planned. The crackle of vinyl, the grain of film, the compression of cassette tape—these aren't just nostalgic affectations. They're reminders that imperfection is where humanity lives. That the beautiful inefficiency of analog thinking—messy, emotional, unpredictable—is not a bug to be fixed but a feature to be preserved.
Sometimes the most profound technology is the one that helps us remember what it means to be beautifully, imperfectly human.
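A side note for the technically curious: the "imperfections" this essay keeps returning to, muffled highs, squashed dynamics, faint background crackle, are simple enough to describe in code. Here is a purely illustrative sketch, not any real plugin's algorithm; every function name and parameter value is my own arbitrary choice.

```python
# A rough sketch of the lo-fi recipe described above, as three basic
# signal-processing moves: muffle the highs, squash the dynamics, add noise.
# Purely illustrative; all parameters are arbitrary, not from any real tool.
import math
import random

def lofi(signal, noise_level=0.01, seed=0):
    rng = random.Random(seed)
    out = []
    acc = 0.0
    for x in signal:
        # 1. Muffle the highs: a crude one-pole low-pass filter
        acc = 0.7 * acc + 0.3 * x
        # 2. Compress the dynamics: soft-clip peaks with tanh
        y = math.tanh(2.0 * acc) / 2.0
        # 3. Add "vinyl" noise: faint random crackle
        out.append(y + rng.gauss(0.0, noise_level))
    return out

# A one-second pure 440 Hz tone at an 8 kHz sample rate, then roughed up
clean = [math.sin(2 * math.pi * 440 * i / 8000) for i in range(8000)]
warm = lofi(clean)
print(len(warm))  # 8000
```

The design point mirrors the essay's argument: each step throws information away or adds contamination, and that loss is precisely what produces the "warm" character.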
Title: AIcracy: Beyond Democracy - When AI Writes the Laws | A Conversation with Eli Lopian | Redefining Society And Technology Podcast With Marco Ciappelli
______
Guest: Eli Lopian
Founder of Typemock Ltd | Author of AIcracy: Beyond Democracy | AI & Governance Thought Leader
On LinkedIn: https://www.linkedin.com/in/elilopian/
Book: https://aicracy.ai
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
⸻ Podcast Summary ⸻
I had one of those conversations that makes you question everything you thought you knew about democracy, governance, and the future of human society. Eli Lopian, founder of Typemock and author of the provocative book on AI-cracy, walked me through what might be the most intriguing political theory I've encountered in years.
⸻ Article ⸻
Technology entrepreneur Eli Lopian joins Marco to explore "AI-cracy" - a revolutionary governance model where artificial intelligence writes laws based on abundance metrics while humans retain judgment.
This fascinating conversation examines how we might transition from broken democratic systems to AI-assisted governance in our evolving Hybrid Analog Digital Society.
Picture this scenario: you're sitting in a pub with friends, listening to them argue about which political rally to attend, and suddenly you realize something profound. As Eli told me, it's like watching people fight over which side of the train to sit on while the train itself is heading in completely the wrong direction. That metaphor perfectly captures where we are with democracy today.
Eli's background fascinates me - breaking free from a religious upbringing at 16, building a successful AI startup for the past decade, and now proposing something that sounds like science fiction but feels increasingly inevitable. His central premise stopped me in my tracks: no human being should be allowed to write laws anymore. Only AI should create legislation, guided by what he calls an "abundance metric" - essentially optimizing for human happiness, freedom, and societal wellbeing.
But here's where it gets really interesting. Eli isn't proposing we hand over control to a single AI overlord. Instead, he envisions three separate AI systems - one controlled by the government, one by the opposition, and one by an NGO - all working with the same data but operated by different groups. They must reach identical conclusions for any law to proceed. If they disagree, human experts investigate why.
What struck me most was how this could actually restore direct democracy. In ancient Athens, every citizen participated in the polis. We can't do that with hundreds of millions of people, but AI could process everyone's input instantly. Imagine submitting your policy ideas directly to an AI system that responds within hours, explaining why your suggestion would or wouldn't improve societal abundance.
It's like having the Athenian square scaled to modern complexity.
The safeguards Eli proposes reveal his deep understanding of human nature. No AI can judge humans - that remains strictly a human responsibility. Citizens don't vote for charismatic politicians anymore; they vote for actual policies. Every three years, people choose their preferred policies. Every decade, they set ambitious collective goals - cure cancer, reach Mars, whatever captures society's imagination.
Living in our Hybrid Analog Digital Society, we already see AI creeping into governance. Lawyers use AI, governments employ algorithms for efficiency, and citizens increasingly turn to ChatGPT for advice they once sought from doctors or therapists. Eli's insight is that we're heading toward AI governance whether we plan it or not - so why not design it properly from the start?
His most compelling point addresses a fear I share: that AI lacks creativity. Eli argues this is actually a feature, not a bug. AI generates rather than truly creates. The creative spark - proposing that universal basic income experiment, suggesting we test new social policies, imagining those decade-long goals - that remains uniquely human. AI simply processes our creativity faster and more fairly than our current broken systems.
The privacy question loomed large in our conversation. Eli proposes a brilliant separation: your personal AI mentor (helping you grow and find fulfillment) operates in complete isolation from the governance AI system. Like quantum physics, what happens in the personal realm stays there. The governance AI only sees aggregated societal data, never individual conversations.
I kept thinking about trust throughout our discussion. We've already surrendered massive amounts of personal data to social media platforms. We share things on Instagram and TikTok that would have horrified us twenty years ago.
Perhaps we'll adapt to AI governance the same way we adapted to cloud computing, social media, and smartphones.
What excites me most is how this could give every citizen a real voice again. Not just during elections, but daily. Got an idea for improving your community? Submit it to the AI system. Receive thoughtful feedback about why it would or wouldn't work. Participate in creating the laws that govern your life rather than merely choosing between pre-packaged candidates every few years.
Whether Eli's AI-cracy becomes reality or remains theoretical, it forces us to confront a crucial question: if democracy is broken, what comes next? In our rapidly evolving technological society, maybe it's time to stop fighting over which side of the train offers the better view and start laying new tracks entirely.
__________________
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
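One addendum for readers who like to see ideas made concrete: Eli's three-system safeguard is, structurally, an agreement check familiar from redundant safety-critical engineering. Here is a minimal sketch of the decision rule as I understood it from our conversation; the function names and toy evaluators are mine, purely hypothetical, not anything Eli has published.

```python
# Illustrative sketch of the "three independent AIs must agree" safeguard:
# a proposal advances only if independently operated evaluators reach the
# same verdict; any disagreement is escalated to human experts.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Verdict:
    operator: str      # who runs this evaluator (government, opposition, NGO)
    approves: bool     # does the proposal improve the abundance metric?

def review_proposal(proposal: str,
                    evaluators: List[Callable[[str], bool]],
                    operators: List[str]) -> str:
    verdicts = [Verdict(op, ev(proposal))
                for op, ev in zip(operators, evaluators)]
    outcomes = {v.approves for v in verdicts}
    if len(outcomes) > 1:
        # The systems diverged: humans investigate why before anything proceeds
        return "escalate to human review"
    return "law proceeds" if outcomes.pop() else "law rejected"

# Toy stand-ins for the three AI systems (deliberately simplistic)
gov = lambda p: "abundance" in p
opp = lambda p: "abundance" in p
ngo = lambda p: len(p) > 10

print(review_proposal("increase abundance metric", [gov, opp, ngo],
                      ["government", "opposition", "NGO"]))
```

The interesting design property is that unanimity is required in both directions: a law can neither pass nor be silently killed while the independently operated systems disagree.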
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
We Have All the Information, So Why Do We Know Less?
Introducing: Reflections from Our Hybrid Analog-Digital Society
For years on the Redefining Society and Technology Podcast, I've explored a central premise: we live in a hybrid analog-digital society where the line between physical and virtual has dissolved into something more complex, more nuanced, and infinitely more human than we often acknowledge.
But with the explosion of generative AI, this hybrid reality isn't just a philosophical concept anymore—it's our lived experience. Every day, we navigate between analog intuition and digital efficiency, between human wisdom and machine intelligence, between the messy beauty of physical presence and the seductive convenience of virtual interaction.
This newsletter series will explore the tensions, paradoxes, and possibilities of being fundamentally analog beings in an increasingly digital world. We're not just using technology; we're being reshaped by it while simultaneously reshaping it with our deeply human, analog sensibilities.
Analog Minds in a Digital World: Part 1
We Have All the Information, So Why Do We Know Less?
I was thinking about my old set of encyclopedias the other day.
You know, those heavy volumes that sat on shelves like silent guardians of knowledge, waiting for someone curious enough to crack them open. When I needed to write a school report on, say, the Roman Empire, I'd pull out Volume R and start reading.
But here's the thing: I never just read about Rome.
I'd get distracted by Romania, stumble across something about Renaissance art, flip backward to find out more about the Reformation. By the time I found what I was originally looking for, I'd accidentally learned about three other civilizations, two art movements, and the invention of the printing press. The journey was messy, inefficient, and absolutely essential.
And if I was in a library... well then just imagine the possibilities.
Today, I ask Google, Claude or ChatGPT about the Roman Empire, and in thirty seconds, I have a perfectly formatted, comprehensive overview that would have taken me hours to compile from those dusty volumes. It's accurate, complete, and utterly forgettable.
We have access to more information than any generation in human history. Every fact, every study, every perspective is literally at our fingertips. Yet somehow, we seem to know less. Not in terms of data acquisition—we're phenomenal at that—but in terms of deep understanding, contextual knowledge, and what I call "accidental wisdom."
The difference isn't just about efficiency. It's about the fundamental way our minds process and retain information. When you physically search through an encyclopedia, your brain creates what cognitive scientists call "elaborative encoding"—you remember not just the facts, but the context of finding them, the related information you encountered, the physical act of discovery itself.
When AI gives us instant answers, we bypass this entire cognitive process. We get the conclusion without the journey, the destination without the map.
It's like being teleported to Rome without seeing the countryside along the way—technically efficient, but something essential is lost in translation.
This isn't nostalgia talking. I use AI daily for research, writing, and problem-solving. It's an incredible tool. But I've noticed something troubling: my tolerance for not knowing things immediately has disappeared. The patience required for deep learning—the kind that happens when you sit with confusion, follow tangents, make unexpected connections—is atrophying like an unused muscle.
We're creating a generation of analog minds trying to function in a digital reality that prioritizes speed over depth, answers over questions, conclusions over curiosity. And in doing so, we might be outsourcing the very process that makes us wise.
Ancient Greeks had a concept called "metis"—practical wisdom that comes from experience, pattern recognition, and intuitive understanding developed through continuous engagement with complexity. In Ancient Greek, metis (Μῆτις) means wisdom, skill, or craft, and it also describes a form of wily, cunning intelligence. It can refer to the pre-Olympian goddess of wisdom and counsel, who was the first wife of Zeus and mother of Athena, or it can refer to the concept of cunning intelligence itself, a trait exemplified by figures like Odysseus. It's the kind of knowledge you can't Google because it lives in the space between facts, in the connections your mind makes when it has time to wander, wonder, and discover unexpected relationships.
AI gives us information. But metis? That still requires an analog mind willing to get lost, make mistakes, and discover meaning in the margins.
The question isn't whether we should abandon these digital tools—they're too powerful and useful to ignore.
The question is whether we can maintain our capacity for the kind of slow, meandering, gloriously inefficient thinking that actually builds wisdom.
Maybe the answer isn't choosing between analog and digital, but learning to be consciously hybrid. Use AI for what it does best—rapid information processing—while protecting the slower, more human processes that transform information into understanding. We need to preserve the analog pathways of learning alongside digital efficiency.
Because in a world where we can instantly access any fact, the most valuable skill might be knowing which questions to ask—and having the patience to sit with uncertainty until real insight emerges from the continuous, contextual, beautifully inefficient process of analog thinking.
Next transmission: "The Paradox of Infinite Choice: Why Having Everything Available Means Choosing Nothing"
Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.
End of transmission.
Marco
______________________________________
📬 Enjoyed this article?
Follow the newsletter here: https://www.linkedin.com/newsletters/7079849705156870144/
🌀 Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.
Share this newsletter and invite anyone you think would enjoy it!
As always, let's keep thinking!
_____________________________________
Marco Ciappelli
ITSPmagazine | Co-Founder • CMO • Creative Director | ✓ Los Angeles ✓ Firenze
❖ Have you heard about ITSPmagazine Studio? A Brand & Marketing Advisory For Cybersecurity And Tech Companies
✶ Learn more about me and my podcasts
✶ Follow me on LinkedIn
✶ Subscribe to my Newsletter
Connect with me across platforms:
Bluesky | Mastodon | Instagram | YouTube | Threads | TikTok
___________________________________________________________
Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society. His work blends journalism, storytelling, and sociology to examine how technological narratives influence human behavior, culture, and social structures.
___________________________________________________________
This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.
Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
Title: Tech Entrepreneur and Author's AI Prediction - The Last Book Written by a Human | A Conversation with Jeff Burningham | Redefining Society And Technology Podcast With Marco Ciappelli
______
Guest: Jeff Burningham
Tech Entrepreneur. Investor. National Best Selling Author. Explorer of Human Potential. My book #TheLastBookWrittenByAHuman is available now.
On LinkedIn: https://www.linkedin.com/in/jeff-burningham-15a01a7b/
Book: https://www.simonandschuster.com/books/The-Last-Book-Written-by-a-Human/Jeff-Burningham/9781637634561
_____________________________
⸻ Podcast Summary ⸻
Entrepreneur and author Jeff Burningham explores how artificial intelligence serves as a cosmic mirror reflecting humanity's true nature. Through his book "The Last Book Written by a Human," he argues that as machines become more intelligent, humans must become wiser. This conversation examines our collective journey through disruption, reflection, transformation, and evolution in our Hybrid Analog Digital Society.
⸻ Article ⸻
I had one of those conversations that made me pause and question everything I thought I knew about our relationship with technology.
Jeff Burningham, serial entrepreneur and author of "The Last Book Written by a Human: Becoming Wise in the Age of AI," joined me to explore a perspective that's both unsettling and profoundly hopeful.
What struck me most wasn't Jeff's impressive background—founding multiple tech companies, running for governor of Utah, building a $5 billion real estate empire. It was his spiritual awakening in Varanasi, India, where a voice in his head insisted he was a writer. That moment of disruption led to years of reflection and ultimately to a book that challenges us to see AI not as our replacement, but as our mirror.
"As our machines become more intelligent, our work as humans is to become more wise," Jeff told me. This isn't just a catchy phrase—it's the thesis of his entire work. He argues that AI functions as what he calls a "cosmic mirror to humanity," reflecting back to us exactly who we've become as a species. The question becomes: do we like what we see?
This perspective resonates deeply with how we exist in our Hybrid Analog Digital Society. We're no longer living separate digital and physical lives—we're constantly navigating both realms simultaneously. AI doesn't just consume our data; it reflects our collective behaviors, biases, and beliefs back to us in increasingly sophisticated ways.
Jeff structures his thinking around four phases that mirror both technological development and personal growth: disruption, reflection, transformation, and evolution. We're currently somewhere between reflection and transformation, he suggests, at a crucial juncture where we must choose between two games. The old game prioritizes cash as currency, power as motivation, and control as purpose. The new game he envisions centers on karma as currency, authenticity as motivation, and love as purpose.
What fascinates me is how this connects to the hero's journey—the narrative structure underlying every meaningful story from Star Wars to our own personal transformations.
Jeff sees AI's emergence as part of an inevitable journey, a necessary disruption that forces us to confront fundamental questions about consciousness, creativity, and what makes us human.
But here's where it gets both beautiful and challenging: as machines handle more of our "doing," we're left with our "being." We're human beings, not human doings, as Jeff reminds us. This shift demands that we reconnect with our bodies, our wisdom, our imperfections—all the messy, beautiful aspects of humanity that AI cannot replicate.
The conversation reminded me why I chose "Redefining" for this podcast's title. We're not just adapting to new technology; we're fundamentally reexamining what it means to be human in an age of artificial intelligence. This isn't about finding the easy button or achieving perfect efficiency—it's about embracing what makes us gloriously, imperfectly human.
Jeff's book launches August 19th, and while it won't literally be the last book written by a human, the title serves as both warning and invitation. If we don't actively choose to write our own story—if we don't rehumanize ourselves while consciously shaping AI's development—we might find ourselves spectators rather than authors of our own future.
Subscribe to continue these essential conversations about technology and society. Because in our rapidly evolving world, the most important question isn't what AI can do for us, but who we choose to become alongside it.
Subscribe wherever you get your podcasts, and join me on YouTube for the full experience.
Cheers,
Marco
⸻ Keywords ⸻
AI technology, artificial intelligence, future of AI, business podcast, entrepreneur interview, technology trends, tech entrepreneur, business mindset, innovation podcast, AI impact, startup founder, tech trends 2025, AI business, technology interview, entrepreneurship success
Watch on Youtube: https://youtu.be/OYBjDHKhZOM
_____________________________
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3
The First Smartphone Was a Transistor Radio — How a Tiny Device Rewired Youth Culture and Predicted Our Digital Future
A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli
I've been collecting vintage radios lately—just started, really—drawn to their analog souls in ways I'm still trying to understand. Each one I find reminds me of a small, battered transistor radio from my youth. It belonged to my father, and before that, probably my grandfather. The leather case was cracked, the antenna wobbled, and the dial drifted if you breathed on it wrong. But when I was sixteen, sprawled across my bedroom floor in that small town near Florence with homework scattered around me, this little machine was my portal to everything that mattered.
Late at night, I'd start by chasing the latest hits and local shows on FM, but then I'd venture into the real adventure—tuning through the static on AM and shortwave frequencies. Voices would emerge from the electromagnetic soup—music from London, news from distant capitals, conversations in languages I couldn't understand but somehow felt.
That radio gave me something I didn't even know I was missing: the profound sense of belonging to a world much bigger than my neighborhood, bigger than my small corner of Tuscany. What I didn't realize then—what I'm only now beginning to understand—is that I was holding the first smartphone in human history. Not literally, of course. But functionally? Sociologically? That transistor radio was the prototype for everything that followed: the first truly personal media device that rewired how young people related to the world, to each other, and to the adults trying to control both. But to understand why the transistor radio was so revolutionary, we need to trace radio's remarkable journey through the landscape of human communication—a journey that reveals patterns we're still living through today. When Radio Was the Family Hearth Before my little portable companion, radio was something entirely different. In the 1930s, radio was furniture—massive, wooden, commanding the living room like a shrine to shared experience. Families spent more than four hours a day listening together, with radio ownership reaching nearly 90 percent by 1940. From American theaters that wouldn't open until after "Amos 'n' Andy" to British families gathered around their wireless sets, from RAI broadcasts bringing opera into Tuscan homes—entire communities synchronized their lives around these electromagnetic rituals. Radio didn't emerge in a media vacuum, though. It had to find its place alongside the dominant information medium of the era: newspapers. The relationship began as an unlikely alliance. In the early 1920s, newspapers weren't threatened by radio—they were actually radio's primary boosters, creating tie-ins with broadcasts and even owning stations. Detroit's WWJ was owned by The Detroit News, initially seen as "simply another press-supported community service." But then came the "Press-Radio War" of 1933-1935, one of the first great media conflicts of the modern age. 
Newspapers objected when radio began interrupting programs with breaking news, arguing that instant news delivery would diminish paper sales. The 1933 Biltmore Agreement tried to restrict radio to just two five-minute newscasts daily—an early attempt at what we might now recognize as media platform regulation.Sound familiar? The same tensions we see today between traditional media and digital platforms, between established gatekeepers and disruptive technologies, were playing out nearly a century ago. Rather than one medium destroying the other, they found ways to coexist and evolve—a pattern that would repeat again and again.By the mid-1950s, when the transistor was perfected, radio was ready for its next transformation.The Real Revolution Was Social, Not TechnicalThis is where my story begins, but it's also where radio's story reaches its most profound transformation. The transistor radio didn't just make radio portable—it fundamentally altered the social dynamics of media consumption and youth culture itself.Remember, radio had spent its first three decades as a communal experience. Parents controlled what the family heard and when. But transistor radios shattered this control structure completely, arriving at precisely the right cultural moment. The post-WWII baby boom had created an unprecedented youth population with disposable income, and rock and roll was exploding into mainstream culture—music that adults often disapproved of, music that spoke directly to teenage rebellion and independence.For the first time in human history, young people had private, personal access to media. They could take their music to bedrooms, to beaches, anywhere adults weren't monitoring. 
They could tune into stations playing Chuck Berry, Elvis, and Little Richard without parental oversight—and in many parts of Europe, they could discover the rebellious thrill of pirate radio stations broadcasting rock and roll from ships anchored just outside territorial waters, defying government regulations and cultural gatekeepers alike. The transistor radio became the soundtrack of teenage autonomy, the device that let youth culture define itself on its own terms.The timing created a perfect storm: pocket-sized technology collided with a new musical rebellion, creating the first "personal media bubble" in human history—and the first generation to grow up with truly private access to the cultural forces shaping their identity.The parallels to today's smartphone revolution are impossible to ignore. Both devices delivered the same fundamental promise: the ability to carry your entire media universe with you, to access information and entertainment on your terms, to connect with communities beyond your immediate physical environment.But there's something we've lost in translation from analog to digital. My generation with transistor radios had to work for connection. We had to hunt through static, tune carefully, wait patiently for distant signals to emerge from electromagnetic chaos. We learned to listen—really listen—because finding something worthwhile required skill, patience, and analog intuition.This wasn't inconvenience; it was meaning-making. The harder you worked to find something, the more it mattered when you found it. The more skilled you became at navigating radio's complex landscape, the richer your discoveries became.What the Transistor Radio Taught Us About TomorrowRadio's evolution illustrates a crucial principle that applies directly to our current digital transformation: technologies don't replace each other—they find new ways to matter. Printing presses didn't become obsolete when radio arrived. Radio adapted when television emerged. 
Today, radio lives on in podcasts, streaming services, internet radio—the format transformed, but the essential human need it serves persists.When I was sixteen, lying on that bedroom floor with my father's radio pressed to my ear, I was doing exactly what teenagers do today with their smartphones: using technology to construct identity, to explore possibilities, to imagine myself into larger narratives.The medium has changed; the human impulse remains constant. The transistor radio taught me that technology's real power isn't in its specifications or capabilities—it's in how it reshapes the fundamental social relationships that define our lives.Every device that promises connection is really promising transformation: not just of how we communicate, but of who we become through that communication. The transistor radio was revolutionary not because it was smaller or more efficient than tube radios, but because it created new forms of human agency and autonomy.Perhaps that's the most important lesson for our current moment of digital transformation. As we worry about AI replacing human creativity, social media destroying real connection, or smartphones making us antisocial, radio's history suggests a different possibility: technologies tend to find their proper place in the ecosystem of human needs, augmenting rather than replacing what came before.As Marshall McLuhan understood, "the medium is the message"—to truly understand what's happening to us in this digital age, we need to understand the media themselves, not just the content they carry. And that's exactly the message I'll keep exploring in future newsletters—going deeper into how we can understand the media to understand the messages, and what that means for our hybrid analog-digital future.The frequency is still there, waiting. You just have to know how to tune in.__________ End of transmission.📬 Enjoyed this article? 
Follow the newsletter here: https://www.linkedin.com/newsletters/7079849705156870144/🌀 Let's keep exploring what it means to be human in this Hybrid Analog Digital Society. Share this newsletter and invite anyone you think would enjoy it! As always, let's keep thinking!— Marco https://www.marcociappelli.com___________________________________________________________Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society.
I had one of those conversations that reminded me why I'm so passionate about exploring the intersection of technology and society. Speaking with Mark Smith, a board member at IBC and co-lead of their accelerator program, I found myself transported back to my roots in communication and media studies, but with eyes wide open to what's coming next.Mark has spent over 30 years in media technology, including 23 years building Mobile World Congress in Barcelona. When someone with that depth of experience gets excited about what's happening now, you pay attention. And what's happening at IBC 2025 in Amsterdam this September is nothing short of a redefinition of how we create, distribute, and authenticate content.The numbers alone are staggering: 1,350 exhibitors across 14 halls, nearly 300 speakers, 45,000 visitors. But what struck me wasn't the scale—it's the philosophical shift happening in how we think about media production. We're witnessing television's centennial year, with the first demonstrations happening in 1925, and yet we're simultaneously seeing the birth of entirely new forms of creative expression.What fascinated me most was Mark's description of their Accelerator Media Innovation Program. Since 2019, they've run over 50 projects involving 350 organizations, creating what he calls "a safe environment" for collaboration. This isn't just about showcasing new gadgets—it's about solving real challenges that keep media professionals awake at night. In our Hybrid Analog Digital Society, the traditional boundaries between broadcaster and audience, between creator and consumer, are dissolving faster than ever.The AI revolution in media production particularly caught my attention. Mark spoke about "AI assistant agents" and "agentic AI" with the enthusiasm of someone who sees liberation rather than replacement. As he put it, "It's an opportunity to take out a lot of laborious processes." 
But more importantly, he emphasized that it's creating new jobs—who would have thought "AI prompter" would become a legitimate profession? This perspective challenges the dystopian narrative often surrounding AI adoption. Instead of fearing the technology, the media industry seems to be embracing it as a tool for enhanced creativity. Mark's excitement was infectious when describing how AI can remove the "boring" aspects of production, allowing creative minds to focus on what they do best—tell stories that matter. But here's where it gets really interesting from a sociological perspective: the other side of the screen. We talked about how streaming revolutionized content consumption, giving viewers unprecedented control over their experience. Yet Mark observed something I've noticed too—while the technology exists for viewers to be their own directors (choosing camera angles in sports, for instance), many prefer to trust the professional's vision. We're not necessarily seeking more control; we're seeking more relevance and authenticity. This brings us to one of the most critical challenges of our time: content provenance. In a world where anyone can create content that looks professional, how do we distinguish between authentic journalism and manufactured narratives? Mark highlighted their work on C2PA (the Coalition for Content Provenance and Authenticity), developing tools that can sign and verify media sources, tracking where content has been manipulated. This isn't just a technical challenge—it's a societal imperative. As Mark noted, YouTube is now the second most viewed platform in the UK. When user-generated content competes directly with traditional media, we need new frameworks for understanding truth and authenticity. The old editorial gatekeepers are gone; we need technological solutions that preserve trust while enabling creativity. What gives me hope is the approach I heard from Mark and his colleagues. 
They're not trying to control technology's impact on society—they're trying to shape it consciously. The IBC Accelerator Program represents something profound: an industry taking responsibility for its own transformation, creating spaces for collaboration rather than competition, focusing on solving real problems rather than just building cool technology.The Google Hackfest they're launching this year perfectly embodies this philosophy. Young broadcast engineers and software developers working together on real challenges, supported by established companies like Formula E. It's not about replacing human creativity with artificial intelligence—it's about augmenting human potential with technological tools.As I wrapped up our conversation, I found myself thinking about my own journey from studying sociology of communication in a pre-internet world to hosting podcasts about our digital transformation. Technology doesn't just change how we communicate—it changes who we are as communicators, as creators, as human beings sharing stories.IBC 2025 isn't just a trade show; it's a glimpse into how we're choosing to redefine our relationship with media technology. And that choice—that conscious decision to shape rather than simply react—gives me genuine optimism about our Hybrid Analog Digital Society.Subscribe to Redefining Society and Technology Podcast for more conversations exploring how we're consciously shaping our technological future. Your thoughts and reflections always enrich these discussions. Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com ______Title: Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli______Guest: Mats Larsson New book: "How Building the Future Really Works." Business developer, project manager and change leader – Speaker. I'm happy to connect!On LinkedIn: https://www.linkedin.com/in/matslarsson-author/Host: Marco CiappelliCo-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍WebSite: https://marcociappelli.comOn LinkedIn: https://www.linkedin.com/in/marco-ciappelli/_____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak: https://itspm.ag/itspbcweb_____________________________⸻ Podcast Summary ⸻ Swedish business consultant Mats Larsson reveals why the electric vehicle transition requires Apollo program-scale government investment. We explore the massive infrastructure gap between EV ambitions and reality, from doubling power generation to training electrification architects. This isn't about building better cars—it's about reimagining our entire transportation ecosystem in our Hybrid Analog Digital Society.______Listen to the Full Episodehttps://redefiningsocietyandtechnologypodcast.com/episodes/why-electric-vehicles-need-an-apollo-program-the-renweable-energy-infrastructure-reality-were-ignoring-a-conversation-with-mats-larsson-redefining-society-and-technology-podcast-with-marco-ciappelli__________________ Enjoy. Reflect. 
Share with your fellow humans.
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com ______Title: Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli______Guest: Mats Larsson New book: "How Building the Future Really Works." Business developer, project manager and change leader – Speaker. I'm happy to connect!On LinkedIn: https://www.linkedin.com/in/matslarsson-author/Host: Marco CiappelliCo-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍WebSite: https://marcociappelli.comOn LinkedIn: https://www.linkedin.com/in/marco-ciappelli/_____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak: https://itspm.ag/itspbcweb_____________________________⸻ Podcast Summary ⸻ Swedish business consultant Mats Larsson reveals why the electric vehicle transition requires Apollo program-scale government investment. We explore the massive infrastructure gap between EV ambitions and reality, from doubling power generation to training electrification architects. This isn't about building better cars—it's about reimagining our entire transportation ecosystem in our Hybrid Analog Digital Society.⸻ Article ⸻ When Reality Meets Electric Dreams: Lessons from the Apollo Mindset I had one of those conversations that stops you in your tracks. 
Mats Larsson, calling in from Stockholm while I connected from Italy, delivered a perspective on electric vehicles that shattered my comfortable assumptions about our technological transition."First of all, we need to admit that we do not know exactly how to build the future. And then we need to start building it." This wasn't just Mats being philosophical—it was a fundamental admission that our approach to electrification has been dangerously naive.We've been treating the electric vehicle transition like upgrading our smartphones—expecting it to happen seamlessly, almost magically, while we go about our daily lives. But as Mats explained, referencing the Apollo program, monumental technological shifts require something we've forgotten how to do: comprehensive, sustained, coordinated investment in infrastructure we can't even fully envision yet.The numbers are staggering. To electrify all US transportation, we'd need to double power generation—that's the equivalent of 360 nuclear reactors worth of electricity. For hydrogen? Triple it. While Tesla and Chinese manufacturers gained their decade-plus advantage through relentless investment cycles, traditional automakers treated electric vehicles as "defensive moves," showcasing capability without commitment.But here's what struck me most: we need entirely new competencies. "Electrification strategists and electrification architects," as Mats called them—professionals who can design power grids capable of charging thousands of logistics vehicles daily, infrastructure that doesn't exist in our current planning vocabulary.We're living in this fascinating paradox of our Hybrid Analog Digital Society. We've become so accustomed to frictionless technological evolution—download an update, get new features—that we've lost appreciation for transitions requiring fundamental systemic change. 
Electric vehicles aren't just different cars; they're a complete reimagining of energy distribution, urban planning, and even our relationship with mobility itself.This conversation reminded me why I love exploring the intersection of technology and society. It's not enough to build better batteries or faster chargers. We're redesigning civilization's transportation nervous system, and we're doing it while pretending it's just another product launch.What excites me isn't just the technological challenge—it's the human coordination required. Like the Apollo program, this demands that rare combination of visionary leadership, sustained investment, and public will that transcends political cycles and market quarters.Listen to my full conversation with Mats, and let me know: Are we ready to embrace the Apollo mindset for our electric future?Subscribe wherever you get your podcasts, and join me on YouTube for the full experience. Let's continue this conversation—because in our rapidly evolving world, these discussions shape the future we're building together.Cheers,Marco⸻ Keywords ⸻ Electric Vehicles, Technology And Society, Infrastructure, Innovation, Sustainable Transport, electric vehicles, society and technology, infrastructure development, apollo program, energy transition, government investment, technological transformation, sustainable mobility, power generation, digital society__________________ Enjoy. Reflect. 
Share with your fellow humans. End of transmission.
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com _____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak: https://itspm.ag/itspbcweb_____________________________A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3August 18, 2025The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes OptionalReflections from Black Hat USA 2025 on Deception, Disinformation, and the Marketing That Chose Fiction Over FactsBy Marco CiappelliSean Martin, CISSP just published his analysis of Black Hat USA 2025, documenting what he calls the cybersecurity vendor "echo chamber." Reviewing over 60 vendor announcements, Sean found identical phrases echoing repeatedly: "AI-powered," "integrated," "reduce analyst burden." The sameness forces buyers to sift through near-identical claims to find genuine differentiation.This reveals more than a marketing problem—it suggests that different technologies are being fed into the same promotional blender, possibly a generative AI one, producing standardized output regardless of what went in. When an entire industry converges on identical language to describe supposedly different technologies, meaningful technical discourse breaks down.But Sean's most troubling observation wasn't about marketing copy—it was about competence. When CISOs probe vendor claims about AI capabilities, they encounter vendors who cannot adequately explain their own technologies. 
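The phrase-convergence pass Sean ran over those 60 announcements is simple to reproduce in spirit. A minimal sketch in Python — the announcement texts and buzzword list below are invented stand-ins, not his actual corpus:

```python
from collections import Counter

# Hypothetical stand-ins for vendor press releases; Sean's real corpus
# was ~60 Black Hat USA 2025 announcements.
announcements = [
    "Our AI-powered, integrated platform helps reduce analyst burden.",
    "An integrated, AI-powered suite built to reduce analyst burden.",
    "Next-generation analytics with explainable scoring and open telemetry.",
]

# Phrases whose repetition across vendors signals the echo chamber.
buzzwords = ["ai-powered", "integrated", "reduce analyst burden"]

def phrase_frequencies(texts, phrases):
    """Count how many texts contain each phrase (case-insensitive)."""
    counts = Counter()
    for text in texts:
        lowered = text.lower()
        for phrase in phrases:
            if phrase in lowered:
                counts[phrase] += 1
    return counts

freqs = phrase_frequencies(announcements, buzzwords)
for phrase, n in freqs.most_common():
    # Each buzzword turns up in 2 of the 3 toy announcements here.
    print(f'"{phrase}" appears in {n} of {len(announcements)} announcements')
```

Run against real press releases instead of three toy strings, a count like this either confirms the "near-identical claims" pattern or surfaces the vendors who actually describe their technology in their own words.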
When conversations moved beyond marketing promises to technical specifics, answers became vague, filled with buzzwords about proprietary algorithms.Reading Sean's analysis while reflecting on my own Black Hat experience, I realized we had witnessed something unprecedented: an entire industry losing the ability to distinguish between authentic capability and generated narrative—precisely as that same industry was studying external "narrative attacks" as an emerging threat vector.The irony was impossible to ignore. Black Hat 2025 sessions warned about AI-generated deepfakes targeting executives, social engineering attacks using scraped LinkedIn profiles, and synthetic audio calls designed to trick financial institutions. Security researchers documented how adversaries craft sophisticated deceptions using publicly available content. Meanwhile, our own exhibition halls featured countless unverifiable claims about AI capabilities that even the vendors themselves couldn't adequately explain.But to understand what we witnessed, we need to examine the very concept that cybersecurity professionals were discussing as an external threat: narrative attacks. These represent a fundamental shift in how adversaries target human decision-making. Unlike traditional cyberattacks that exploit technical vulnerabilities, narrative attacks exploit psychological vulnerabilities in human cognition. Think of them as social engineering and propaganda supercharged by AI—personalized deception at scale that adapts faster than human defenders can respond. They flood information environments with false content designed to manipulate perception and erode trust, rendering rational decision-making impossible.What makes these attacks particularly dangerous in the AI era is scale and personalization. AI enables automated generation of targeted content tailored to individual psychological profiles. 
A single adversary can launch thousands of simultaneous campaigns, each crafted to exploit specific cognitive biases of particular groups or individuals.But here's what we may have missed during Black Hat 2025: the same technological forces enabling external narrative attacks have already compromised our internal capacity for truth evaluation. When vendors use AI-optimized language to describe AI capabilities, when marketing departments deploy algorithmic content generation to sell algorithmic solutions, when companies building detection systems can't detect the artificial nature of their own communications, we've entered a recursive information crisis.From a sociological perspective, we're witnessing the breakdown of social infrastructure required for collective knowledge production. Industries like cybersecurity have historically served as early warning systems for technological threats—canaries in the coal mine with enough technical sophistication to spot emerging dangers before they affect broader society.But when the canary becomes unable to distinguish between fresh air and poison gas, the entire mine is at risk.This brings us to something the literary world understood long before we built our first algorithm. Jorge Luis Borges, the Argentine writer, anticipated this crisis in his 1940s stories like "On Exactitude in Science" and "The Library of Babel"—tales about maps that become more real than the territories they represent and libraries containing infinite books, including false ones. In his fiction, simulations and descriptions eventually replace the reality they were meant to describe.We're living in a Borgesian nightmare where marketing descriptions of AI capabilities have become more influential than actual AI capabilities. 
When a vendor's promotional language about their AI becomes more convincing than a technical demonstration, when buyers make decisions based on algorithmic marketing copy rather than empirical evidence, we've entered that literary territory where the map has consumed the landscape. And we've lost the ability to distinguish between them. The historical precedent is Orson Welles's 1938 War of the Worlds radio broadcast, which created mass hysteria from fiction. But here's the crucial difference: Welles was human, the script was human-written, the performance required conscious participation, and the deception was traceable to human intent. Listeners had to actively choose to believe what they heard. Today's AI-generated narratives operate below the threshold of conscious recognition. They require no active participation—they work by seamlessly integrating into information environments in ways that make detection impossible even for experts. When algorithms generate technical claims that sound authentic to human evaluators, when the same systems create both legitimate documentation and marketing fiction, we face deception at a level Welles never imagined: the algorithmic manipulation of truth itself. The recursive nature of this problem reveals itself when you try to solve it. This creates a nearly impossible situation. How do you fact-check AI-generated claims about AI using AI-powered tools? How do you verify technical documentation when the same systems create both authentic docs and marketing copy? When the tools generating problems and solving problems converge into identical technological artifacts, conventional verification approaches break down completely. My first Black Hat article explored how we risk losing human agency by delegating decision-making to artificial agents. But this goes deeper: we risk losing human agency in the construction of reality itself. 
When machines generate narratives about what machines can do, truth becomes algorithmically determined rather than empirically discovered. Marshall McLuhan is famously credited with saying "We shape our tools, and thereafter they shape us." But he couldn't have imagined tools that reshape our perception of reality itself. We haven't just built machines that give us answers—we've built machines that decide what questions we should ask and how we should evaluate the answers. But the implications extend far beyond cybersecurity itself. If the sector responsible for detecting digital deception becomes the first victim of algorithmic narrative pollution, what hope do other industries have? Healthcare systems relying on AI diagnostics they can't explain. Financial institutions using algorithmic trading based on analyses they can't verify. Educational systems teaching AI-generated content whose origins remain opaque. When the industry that guards against deception loses the ability to distinguish authentic capability from algorithmic fiction, society loses its early warning system for the moment when machines take over truth construction itself. So where does this leave us? That moment may have already arrived. We just don't know it yet—and increasingly, we lack the cognitive infrastructure to find out. But here's what we can still do: We can start by acknowledging we've reached this threshold. We can demand transparency not just in AI algorithms, but in the human processes that evaluate and implement them. We can rebuild evaluation criteria that distinguish between technical capability and marketing narrative. And here's a direct challenge to the marketing and branding professionals reading this: it's time to stop relying on AI algorithms and data optimization to craft your messages. The cybersecurity industry's crisis should serve as a warning—when marketing becomes indistinguishable from algorithmic fiction, everyone loses. 
Social media has taught us that the most respected brands are those that choose honesty over hype, transparency over clever messaging. Brands that walk the walk and talk the talk, not those that let machines do the talking.

The companies that will survive this epistemological crisis are those whose marketing teams become champions of truth rather than architects of confusion. When your audience can no longer distinguish between human insight and machine-generated claims, authentic communication becomes your competitive advantage.

Most importantly, we can remember that the goal was never to build machines that think for us, but machines that help us think better.

The canary may be struggling to breathe, but it's still singing.
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

_____________________________

This Episode’s Sponsors

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb

_____________________________

A Musing On Society & Technology Newsletter
Written By Marco Ciappelli | Read by TAPE3
August 9, 2025

The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for Ourselves
Reflections from Black Hat USA 2025 on the Latest Tech Salvation Narrative

Walking the floors of Black Hat USA 2025 for what must be the 10th or 11th time as accredited media—honestly, I've stopped counting—I found myself witnessing a familiar theater. The same performance we've seen play out repeatedly in cybersecurity: the emergence of a new technological messiah promising to solve all our problems. This year's savior? Agentic AI.

The buzzword echoes through every booth, every presentation, every vendor pitch. Promises of automating 90% of security operations, platforms for autonomous threat detection, agents that can investigate novel alerts without human intervention. The marketing materials speak of artificial intelligence that will finally free us from the burden of thinking, deciding, and taking responsibility.

It's Talos all over again.

In Greek mythology, Hephaestus forged Talos, a bronze giant tasked with patrolling Crete's shores, hurling boulders at invaders without human intervention. Like contemporary AI, Talos was built to serve specific human ends—security, order, and control—and his value was determined by his ability to execute these ends flawlessly. The parallels to today's agentic AI promises are striking: autonomous patrol, threat detection, automated response.
Same story, different millennium.

But here's what the ancient Greeks understood that we seem to have forgotten: every artificial creation, no matter how sophisticated, carries within it the seeds of its own limitations and potential dangers.

Industry observers noted over a hundred announcements promoting new agentic AI applications, platforms, or services at the conference. That's more than one AI agent announcement per hour. The marketing departments have clearly been busy.

But here's what baffles me: why do we need to lie to sell cybersecurity? You can give away t-shirts, dress up as comic book superheroes with your logo slapped on their chests, distribute branded board games, and pretend to be a sports team all day long—that's just trade show theater, and everyone knows it. But when marketing pushes past the limits of what's even believable, when vendors make claims so grandiose that their own engineers can't explain them, something deeper is broken.

If marketing departments think CISOs are buying these lies, they have another thing coming. These are people who live with the consequences of failed security implementations, who get fired when breaches happen, who understand the difference between marketing magic and operational reality. They've seen enough "revolutionary" solutions fail to know that if something sounds too good to be true, it probably is.

Yet the charade continues, year after year, vendor after vendor. The real question isn't whether the technology works—it's why an industry built on managing risk has become so comfortable with the risk of overselling its own capabilities.

Something troubling emerges when you move beyond the glossy booth presentations and actually talk to the people implementing these systems. Engineers struggle to explain exactly how their AI makes decisions.
Security leaders warn that artificial intelligence might become the next insider threat, as organizations grow comfortable trusting systems they don't fully understand, checking their output less and less over time.

When the people building these systems warn us about trusting them too much, shouldn't we listen?

This isn't the first time humanity has grappled with the allure and danger of artificial beings making decisions for us. Mary Shelley's Frankenstein, published in 1818, explored the hubris of creating life—and intelligence—without fully understanding the consequences. The novel raises the same question we face today: what are humans allowed to do with this forbidden power of creation?

The question becomes more pressing when we consider what we're actually delegating to these artificial agents. It's no longer just pattern recognition or data processing—we're talking about autonomous decision-making in critical security scenarios. Conference presentations showcased significant improvements in proactive defense measures, but at what cost to human agency and understanding?

Here's where the conversation jumps from cybersecurity to something far more fundamental: what are we here for if not to think, evaluate, and make decisions?

From a sociological perspective, we're witnessing the construction of a new social reality where human agency is being systematically redefined. Survey data shared at the conference revealed that most security leaders feel the biggest internal threat is employees unknowingly giving AI agents access to sensitive data. But the real threat might be more subtle: the gradual erosion of human decision-making capacity as a social practice.

When we delegate not just routine tasks but judgment itself to artificial agents, we're not just changing workflows—we're reshaping the fundamental social structures that define human competence and authority.
We risk creating a generation of humans who have forgotten how to think critically about complex problems, not because they lack the capacity, but because the social systems around them no longer require or reward such thinking.

E.M. Forster saw this coming in 1909. In "The Machine Stops," he imagined a world where humanity becomes completely dependent on an automated system that manages all aspects of life—communication, food, shelter, entertainment, even ideas. People live in isolation, served by the Machine, never needing to make decisions or solve problems themselves. When someone suggests that humans should occasionally venture outside or think independently, they're dismissed as primitive. The Machine has made human agency unnecessary, and humans have forgotten they ever possessed it. When the Machine finally breaks down, civilization collapses because no one remembers how to function without it.

Don't misunderstand me—I'm not a Luddite. AI can and should help us manage the overwhelming complexity of modern cybersecurity threats. The technology demonstrations I witnessed showed genuine promise: reasoning engines that understand context, action frameworks that enable response within defined boundaries, learning systems that improve based on outcomes.

The problem isn't the technology itself but the social construction of meaning around it. What we're witnessing is the creation of a new techno-social myth—a collective narrative that positions agentic AI as the solution to human fallibility. This narrative serves specific social functions: it absolves organizations of the responsibility to invest in human expertise, justifies cost-cutting through automation, and provides a technological fix for what are fundamentally organizational and social problems.

The mythology we're building around agentic AI reflects deeper anxieties about human competence in an increasingly complex world.
Rather than addressing the root causes—inadequate training, overwhelming workloads, systemic underinvestment in human capital—we're constructing a technological salvation narrative that promises to make these problems disappear.

Vendors spoke of human-machine collaboration, of AI serving as a force multiplier for analysts, handling routine tasks while escalating complex decisions to humans. This is a more honest framing: AI as augmentation, not replacement. But the marketing materials tell a different story, one of autonomous agents operating independently of human oversight.

I've read posts on LinkedIn and spoken with people who know this topic far better than I do, and I get the same feeling. There's a troubling pattern emerging: many vendor representatives can't adequately explain their own AI systems' decision-making processes. When pressed on specifics—how exactly does your agent determine threat severity? What happens when it encounters an edge case it wasn't trained for?—the answers become vague, filled with marketing speak about proprietary algorithms and advanced machine learning.

This opacity is dangerous. If we're going to trust artificial agents with critical security decisions, we need to understand how they think—or more accurately, how they simulate thinking. Every machine learning system requires human data scientists to frame problems, prepare data, determine appropriate datasets, remove bias, and continuously update the software. The finished product may give the impression of independent learning, but human intelligence guides every step.

The future of cybersecurity will undoubtedly involve more automation, more AI assistance, more artificial agents handling routine tasks. But it should not involve the abdication of human judgment and responsibility. We need agentic AI that operates with transparency, that can explain its reasoning, that acknowledges its limitations. We need systems designed to augment human intelligence, not replace it.
Most importantly, we need to resist the seductive narrative that technology alone can solve problems that are fundamentally human in nature. The prevailing logic that tech fixes tech, and that AI will fix AI, is deeply unsettling. It's a recursive delusion that takes us further away from human wisdom and closer to a world where we've forgotten that the most important problems have always required human judgment, not algorithmic solutions.
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com

Title: Creative Storytelling in the Age of AI: When Machines Learn to Dream and the Last Stand of Human Creativity

Guest: Maury Rogow
CEO, Rip Media Group | I grow businesses with AI + video storytelling. Honored to have 70k+ professionals & 800+ brands grow by 2.5 Billion. Published: Inc, Entrepreneur, Forbes
On LinkedIn: https://www.linkedin.com/in/mauryrogow/

Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master's Degree in Political Science - Sociology of Communication | Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.
Website: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/

_____________________________

⸻ Podcast Summary ⸻

I sat across - metaversically speaking - from Maury Rogow, a man who's lived three lives—tech executive, Hollywood producer, storytelling evangelist—and watched him grapple with the same question haunting creators everywhere: Are we teaching our replacements to dream? In our latest conversation on Redefining Society and Technology, we explored whether AI is the ultimate creative collaborator or the final chapter in human artistic expression.

⸻ Article ⸻

I sat across from Maury Rogow—a tech exec, Hollywood producer, and storytelling strategist—and watched him wrestle with a question more and more of us are asking: Are we teaching our replacements to dream?

Our latest conversation on Redefining Society and Technology dives straight into that uneasy space where AI meets human creativity.
Is generative AI the ultimate collaborator… or the beginning of the end for authentic artistic expression?

I’ve had my own late-night battles with AI writing tools, struggling to coax a rhythm out of ChatGPT that didn’t feel like recycled marketing copy. Eventually, I slammed my laptop shut and thought: “Screw this—I’ll write it myself.” But even in that frustration, something creative happened. That tension? It’s real. It’s generative. And it’s something Maury deeply understands.

“Companies don’t know how to differentiate themselves,” he told me. “So they compete on cost or get drowned out by bigger brands. That’s when they fail.”

Now that AI is democratizing storytelling tools, the danger isn’t that no one can create—it’s that everyone’s content sounds the same. Maury gets AI-generated brand pitches daily that all echo the same structure, voice, and tropes—“digital ventriloquism,” as I called it.

He laughed when I told him about my AI struggles. “It’s like the writer that’s tired,” he said. “I just start a new session and tell it to take a nap.” But beneath the humor is a real fear: What happens when the tools meant to support us start replacing us?

Maury described a recent project where they recreated a disaster scene—flames, smoke, chaos—using AI compositing. No massive crew, no fire trucks, no danger. And no one watching knew the difference. Or cared.

We’re not just talking about job displacement. We’re talking about the potential erasure of the creative process itself—that messy, human, beautiful thing machines can mimic but never truly live.

And yet… there’s hope. Creativity has always been about connecting the dots only you can see. When Maury spoke about watching Becoming Led Zeppelin and reliving the memories, the people, the context behind the music—that’s the spark AI can’t replicate.
That’s the emotional archaeology of being human.

The machines are learning to dream.

But maybe—just maybe—we’re the ones who still know what dreams are worth having.

Cheers,
Marco

⸻ Keywords ⸻

artificial intelligence creativity, AI content creation, human vs AI storytelling, generative AI impact, creative industry disruption, AI writing tools, future of creativity, technology and society, AI ethics philosophy, human creativity preservation, storytelling in AI age, creative professionals AI, digital transformation creativity, AI collaboration tools, machine learning creativity, content creation revolution, artistic expression AI, creative industry jobs, AI generated content, human-AI creative partnership

__________________

Enjoy. Reflect. Share with your fellow humans.

And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144

You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.

End of transmission.

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your Company and in Sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.











