Redefining Society and Technology Podcast


Author: Marco Ciappelli, ITSPmagazine


Description

Musing On Society, Technology, and Cybersecurity | Hosted by Marco Ciappelli

Let’s face it: the future is now. We live in a hybrid analog-digital society, and it’s time to stop ignoring the profound impact technology has on our lives.

The line between the physical and virtual worlds? It’s no longer real — just a figment of our imagination. We’re constantly juggling convenience, privacy, freedom, security, and even the future of humanity in a precarious balancing act.

There’s no better place than here, and no better time than now, to reflect on our relationship with technology — and redefine what society means in this new age.
207 Episodes
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com ______Title: Tech Entrepreneur and Author's AI Prediction - The Last Book Written by a Human Interview  | A Conversation with Jeff Burningham | Redefining Society And Technology Podcast With Marco Ciappelli______Guest: Jeff Burningham Tech Entrepreneur. Investor. National Best Selling Author. Explorer of Human Potential. My book #TheLastBookWrittenByAHuman is available now.On LinkedIn: https://www.linkedin.com/in/jeff-burningham-15a01a7b/Book: https://www.simonandschuster.com/books/The-Last-Book-Written-by-a-Human/Jeff-Burningham/9781637634561#:~:text=*%20Why%20the%20development%20of%20AI,in%20the%20age%20of%20AI.Host: Marco CiappelliCo-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication l Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍WebSite: https://marcociappelli.comOn LinkedIn: https://www.linkedin.com/in/marco-ciappelli/_____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak:  https://itspm.ag/itspbcweb_____________________________⸻ Podcast Summary ⸻ Entrepreneur and author Jeff Burningham explores how artificial intelligence serves as a cosmic mirror reflecting humanity's true nature. Through his book "The Last Book Written by a Human," he argues that as machines become more intelligent, humans must become wiser. This conversation examines our collective journey through disruption, reflection, transformation, and evolution in our Hybrid Analog Digital Society.⸻ Article ⸻ I had one of those conversations that made me pause and question everything I thought I knew about our relationship with technology. Jeff Burningham, serial entrepreneur and author of "The Last Book Written by a Human: Becoming Wise in the Age of AI," joined me to explore a perspective that's both unsettling and profoundly hopeful.What struck me most wasn't Jeff's impressive background—founding multiple tech companies, running for governor of Utah, building a $5 billion real estate empire. It was his spiritual awakening in Varanasi, India, where a voice in his head insisted he was a writer. That moment of disruption led to years of reflection and ultimately to a book that challenges us to see AI not as our replacement, but as our mirror."As our machines become more intelligent, our work as humans is to become more wise," Jeff told me. This isn't just a catchy phrase—it's the thesis of his entire work. He argues that AI functions as what he calls a "cosmic mirror to humanity," reflecting back to us exactly who we've become as a species. The question becomes: do we like what we see?This perspective resonates deeply with how we exist in our Hybrid Analog Digital Society. We're no longer living separate digital and physical lives—we're constantly navigating both realms simultaneously. AI doesn't just consume our data; it reflects our collective behaviors, biases, and beliefs back to us in increasingly sophisticated ways.Jeff structures his thinking around four phases that mirror both technological development and personal growth: disruption, reflection, transformation, and evolution. 
We're currently somewhere between reflection and transformation, he suggests, at a crucial juncture where we must choose between two games. The old game prioritizes cash as currency, power as motivation, and control as purpose. The new game he envisions centers on karma as currency, authenticity as motivation, and love as purpose.What fascinates me is how this connects to the hero's journey—the narrative structure underlying every meaningful story from Star Wars to our own personal transformations. Jeff sees AI's emergence as part of an inevitable journey, a necessary disruption that forces us to confront fundamental questions about consciousness, creativity, and what makes us human.But here's where it gets both beautiful and challenging: as machines handle more of our "doing," we're left with our "being." We're human beings, not human doings, as Jeff reminds us. This shift demands that we reconnect with our bodies, our wisdom, our imperfections—all the messy, beautiful aspects of humanity that AI cannot replicate.The conversation reminded me why I chose "Redefining" for this podcast's title. We're not just adapting to new technology; we're fundamentally reexamining what it means to be human in an age of artificial intelligence. This isn't about finding the easy button or achieving perfect efficiency—it's about embracing what makes us gloriously, imperfectly human.Jeff's book launches August 19th, and while it won't literally be the last book written by a human, the title serves as both warning and invitation. If we don't actively choose to write our own story—if we don't rehumanize ourselves while consciously shaping AI's development—we might find ourselves spectators rather than authors of our own future.Subscribe to continue these essential conversations about technology and society. Because in our rapidly evolving world, the most important question isn't what AI can do for us, but who we choose to become alongside it.Subscribe wherever you get your podcasts, and join me on YouTube for the full experience. Let's continue this conversation—because in our rapidly evolving world, these discussions shape the future we're building together.Cheers,Marco⸻ Keywords ⸻ AI technology, artificial intelligence, future of AI, business podcast, entrepreneur interview, technology trends, tech entrepreneur, business mindset, innovation podcast, AI impact, startup founder, tech trends 2025, AI business, technology interview, entrepreneurship success__________________ Enjoy. Reflect. Share with your fellow humans.And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.End of transmission.____________________________Listen to more Redefining Society & Technology stories and subscribe to the podcast:👉 https://redefiningsocietyandtechnologypodcast.comWatch the webcast version on-demand on YouTube:👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9Are you interested Promotional Brand Stories for your Company and Sponsoring an ITSPmagazine Channel?👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
_____
Newsletter: Musing On Society And Technology
https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144/
_____
Watch on Youtube: https://youtu.be/OYBjDHKhZOM
_____
My Website: https://www.marcociappelli.com
_____________________________
This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3

The First Smartphone Was a Transistor Radio — How a Tiny Device Rewired Youth Culture and Predicted Our Digital Future

A new transmission from Musing On Society and Technology Newsletter, by Marco Ciappelli

I've been collecting vintage radios lately—just started, really—drawn to their analog souls in ways I'm still trying to understand. Each one I find reminds me of a small, battered transistor radio from my youth. It belonged to my father, and before that, probably my grandfather. The leather case was cracked, the antenna wobbled, and the dial drifted if you breathed on it wrong. But when I was sixteen, sprawled across my bedroom floor in that small town near Florence with homework scattered around me, this little machine was my portal to everything that mattered.

Late at night, I'd start by chasing the latest hits and local shows on FM, but then I'd venture into the real adventure—tuning through the static on AM and shortwave frequencies. Voices would emerge from the electromagnetic soup—music from London, news from distant capitals, conversations in languages I couldn't understand but somehow felt. That radio gave me something I didn't even know I was missing: the profound sense of belonging to a world much bigger than my neighborhood, bigger than my small corner of Tuscany.

What I didn't realize then—what I'm only now beginning to understand—is that I was holding the first smartphone in human history.

Not literally, of course. But functionally? Sociologically? That transistor radio was the prototype for everything that followed: the first truly personal media device that rewired how young people related to the world, to each other, and to the adults trying to control both.

But to understand why the transistor radio was so revolutionary, we need to trace radio's remarkable journey through the landscape of human communication—a journey that reveals patterns we're still living through today.

When Radio Was the Family Hearth

Before my little portable companion, radio was something entirely different. In the 1930s, radio was furniture—massive, wooden, commanding the living room like a shrine to shared experience. Families spent more than four hours a day listening together, with radio ownership reaching nearly 90 percent by 1940. From American theaters that wouldn't open until after "Amos 'n' Andy" to British families gathered around their wireless sets, from RAI broadcasts bringing opera into Tuscan homes—entire communities synchronized their lives around these electromagnetic rituals.

Radio didn't emerge in a media vacuum, though. It had to find its place alongside the dominant information medium of the era: newspapers. The relationship began as an unlikely alliance. In the early 1920s, newspapers weren't threatened by radio—they were actually radio's primary boosters, creating tie-ins with broadcasts and even owning stations. Detroit's WWJ was owned by The Detroit News, initially seen as "simply another press-supported community service."

But then came the "Press-Radio War" of 1933-1935, one of the first great media conflicts of the modern age. Newspapers objected when radio began interrupting programs with breaking news, arguing that instant news delivery would diminish paper sales. The 1933 Biltmore Agreement tried to restrict radio to just two five-minute newscasts daily—an early attempt at what we might now recognize as media platform regulation.

Sound familiar? The same tensions we see today between traditional media and digital platforms, between established gatekeepers and disruptive technologies, were playing out nearly a century ago. Rather than one medium destroying the other, they found ways to coexist and evolve—a pattern that would repeat again and again.

By the mid-1950s, when the transistor was perfected, radio was ready for its next transformation.

The Real Revolution Was Social, Not Technical

This is where my story begins, but it's also where radio's story reaches its most profound transformation. The transistor radio didn't just make radio portable—it fundamentally altered the social dynamics of media consumption and youth culture itself.

Remember, radio had spent its first three decades as a communal experience. Parents controlled what the family heard and when. But transistor radios shattered this control structure completely, arriving at precisely the right cultural moment. The post-WWII baby boom had created an unprecedented youth population with disposable income, and rock and roll was exploding into mainstream culture—music that adults often disapproved of, music that spoke directly to teenage rebellion and independence.

For the first time in human history, young people had private, personal access to media. They could take their music to bedrooms, to beaches, anywhere adults weren't monitoring. They could tune into stations playing Chuck Berry, Elvis, and Little Richard without parental oversight—and in many parts of Europe, they could discover the rebellious thrill of pirate radio stations broadcasting rock and roll from ships anchored just outside territorial waters, defying government regulations and cultural gatekeepers alike. The transistor radio became the soundtrack of teenage autonomy, the device that let youth culture define itself on its own terms.

The timing created a perfect storm: pocket-sized technology collided with a new musical rebellion, creating the first "personal media bubble" in human history—and the first generation to grow up with truly private access to the cultural forces shaping their identity.

The parallels to today's smartphone revolution are impossible to ignore. Both devices delivered the same fundamental promise: the ability to carry your entire media universe with you, to access information and entertainment on your terms, to connect with communities beyond your immediate physical environment.

But there's something we've lost in translation from analog to digital. My generation with transistor radios had to work for connection. We had to hunt through static, tune carefully, wait patiently for distant signals to emerge from electromagnetic chaos. We learned to listen—really listen—because finding something worthwhile required skill, patience, and analog intuition.

This wasn't inconvenience; it was meaning-making. The harder you worked to find something, the more it mattered when you found it. The more skilled you became at navigating radio's complex landscape, the richer your discoveries became.

What the Transistor Radio Taught Us About Tomorrow

Radio's evolution illustrates a crucial principle that applies directly to our current digital transformation: technologies don't replace each other—they find new ways to matter. Printing presses didn't become obsolete when radio arrived. Radio adapted when television emerged. Today, radio lives on in podcasts, streaming services, internet radio—the format transformed, but the essential human need it serves persists.

When I was sixteen, lying on that bedroom floor with my father's radio pressed to my ear, I was doing exactly what teenagers do today with their smartphones: using technology to construct identity, to explore possibilities, to imagine myself into larger narratives.

The medium has changed; the human impulse remains constant. The transistor radio taught me that technology's real power isn't in its specifications or capabilities—it's in how it reshapes the fundamental social relationships that define our lives.

Every device that promises connection is really promising transformation: not just of how we communicate, but of who we become through that communication. The transistor radio was revolutionary not because it was smaller or more efficient than tube radios, but because it created new forms of human agency and autonomy.

Perhaps that's the most important lesson for our current moment of digital transformation. As we worry about AI replacing human creativity, social media destroying real connection, or smartphones making us antisocial, radio's history suggests a different possibility: technologies tend to find their proper place in the ecosystem of human needs, augmenting rather than replacing what came before.

As Marshall McLuhan understood, "the medium is the message"—to truly understand what's happening to us in this digital age, we need to understand the media themselves, not just the content they carry. And that's exactly the message I'll keep exploring in future newsletters—going deeper into how we can understand the media to understand the messages, and what that means for our hybrid analog-digital future.

The frequency is still there, waiting. You just have to know how to tune in.

__________ End of transmission.

📬 Enjoyed this article? Follow the newsletter here: https://www.linkedin.com/newsletters/7079849705156870144/
🌀 Let's keep exploring what it means to be human in this Hybrid Analog Digital Society.
Share this newsletter and invite anyone you think would enjoy it!
As always, let's keep thinking!
— Marco
https://www.marcociappelli.com
___________________________________________________________
Marco Ciappelli is Co-Founder and CMO of ITSPmagazine, a journalist, creative director, and host of podcasts exploring the intersection of technology, cybersecurity, and society.
I had one of those conversations that reminded me why I'm so passionate about exploring the intersection of technology and society. Speaking with Mark Smith, a board member at IBC and co-lead of their accelerator program, I found myself transported back to my roots in communication and media studies, but with eyes wide open to what's coming next.Mark has spent over 30 years in media technology, including 23 years building Mobile World Congress in Barcelona. When someone with that depth of experience gets excited about what's happening now, you pay attention. And what's happening at IBC 2025 in Amsterdam this September is nothing short of a redefinition of how we create, distribute, and authenticate content.The numbers alone are staggering: 1,350 exhibitors across 14 halls, nearly 300 speakers, 45,000 visitors. But what struck me wasn't the scale—it's the philosophical shift happening in how we think about media production. We're witnessing television's centennial year, with the first demonstrations happening in 1925, and yet we're simultaneously seeing the birth of entirely new forms of creative expression.What fascinated me most was Mark's description of their Accelerator Media Innovation Program. Since 2019, they've run over 50 projects involving 350 organizations, creating what he calls "a safe environment" for collaboration. This isn't just about showcasing new gadgets—it's about solving real challenges that keep media professionals awake at night. In our Hybrid Analog Digital Society, the traditional boundaries between broadcaster and audience, between creator and consumer, are dissolving faster than ever.The AI revolution in media production particularly caught my attention. Mark spoke about "AI assistant agents" and "agentic AI" with the enthusiasm of someone who sees liberation rather than replacement. As he put it, "It's an opportunity to take out a lot of laborious processes." But more importantly, he emphasized that it's creating new jobs—who would have thought "AI prompter" would become a legitimate profession?This perspective challenges the dystopian narrative often surrounding AI adoption. Instead of fearing the technology, the media industry seems to be embracing it as a tool for enhanced creativity. Mark's excitement was infectious when describing how AI can remove the "boring" aspects of production, allowing creative minds to focus on what they do best—tell stories that matter.But here's where it gets really interesting from a sociological perspective: the other side of the screen. We talked about how streaming revolutionized content consumption, giving viewers unprecedented control over their experience. Yet Mark observed something I've noticed too—while the technology exists for viewers to be their own directors (choosing camera angles in sports, for instance), many prefer to trust the professional's vision. We're not necessarily seeking more control; we're seeking more relevance and authenticity.This brings us to one of the most critical challenges of our time: content provenance. In a world where anyone can create content that looks professional, how do we distinguish between authentic journalism and manufactured narratives? Mark highlighted their work on C2PA (content provenance initiative), developing tools that can sign and verify media sources, tracking where content has been manipulated.This isn't just a technical challenge—it's a societal imperative. As Mark noted, YouTube is now the second most viewed platform in the UK. 
When user-generated content competes directly with traditional media, we need new frameworks for understanding truth and authenticity. The old editorial gatekeepers are gone; we need technological solutions that preserve trust while enabling creativity.What gives me hope is the approach I heard from Mark and his colleagues. They're not trying to control technology's impact on society—they're trying to shape it consciously. The IBC Accelerator Program represents something profound: an industry taking responsibility for its own transformation, creating spaces for collaboration rather than competition, focusing on solving real problems rather than just building cool technology.The Google Hackfest they're launching this year perfectly embodies this philosophy. Young broadcast engineers and software developers working together on real challenges, supported by established companies like Formula E. It's not about replacing human creativity with artificial intelligence—it's about augmenting human potential with technological tools.As I wrapped up our conversation, I found myself thinking about my own journey from studying sociology of communication in a pre-internet world to hosting podcasts about our digital transformation. Technology doesn't just change how we communicate—it changes who we are as communicators, as creators, as human beings sharing stories.IBC 2025 isn't just a trade show; it's a glimpse into how we're choosing to redefine our relationship with media technology. And that choice—that conscious decision to shape rather than simply react—gives me genuine optimism about our Hybrid Analog Digital Society.Subscribe to Redefining Society and Technology Podcast for more conversations exploring how we're consciously shaping our technological future. Your thoughts and reflections always enrich these discussions.
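For readers curious about the mechanics behind the content provenance work Mark described, here is a minimal, hypothetical sketch of the underlying idea: bind a cryptographic signature to a hash of the exact media bytes, so any later manipulation becomes detectable. This is not the actual C2PA manifest format or SDK; the function names, the claim fields, and the "Example Newsroom" publisher are illustrative assumptions, and it relies on the third-party Python "cryptography" package for Ed25519 signatures.

# Minimal, hypothetical sketch of C2PA-style provenance signing (not the real C2PA format).
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey, Ed25519PublicKey
from cryptography.exceptions import InvalidSignature


def sign_asset(media_bytes: bytes, creator: str, key: Ed25519PrivateKey) -> dict:
    """Bind a creator claim to the exact bytes of a media file."""
    claim = {
        "creator": creator,                                 # illustrative claim field
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # fingerprint of the content
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": key.sign(payload).hex()}


def verify_asset(media_bytes: bytes, manifest: dict, pub: Ed25519PublicKey) -> bool:
    """Return True only if the claim is intact and matches these exact bytes."""
    claim = manifest["claim"]
    if hashlib.sha256(media_bytes).hexdigest() != claim["sha256"]:
        return False  # content was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    try:
        pub.verify(bytes.fromhex(manifest["signature"]), payload)
        return True
    except InvalidSignature:
        return False  # claim itself was tampered with


# Usage: the publisher signs at export time; any downstream viewer can verify.
key = Ed25519PrivateKey.generate()
video = b"...original video bytes..."
manifest = sign_asset(video, creator="Example Newsroom", key=key)
print(verify_asset(video, manifest, key.public_key()))             # True
print(verify_asset(video + b"edit", manifest, key.public_key()))   # False

The real standard carries much richer metadata (edit history, capture device, chained signatures), but the basic trust move is the same: a verifiable link between a named source and specific bytes.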
⸻ Podcast: Redefining Society and Technology
https://redefiningsocietyandtechnologypodcast.com
______
Title: Why Electric Vehicles Need an Apollo Program: The Renewable Energy Infrastructure Reality We're Ignoring | A Conversation with Mats Larsson | Redefining Society And Technology Podcast With Marco Ciappelli
______
Guest: Mats Larsson
New book: "How Building the Future Really Works." Business developer, project manager and change leader – Speaker. I'm happy to connect!
On LinkedIn: https://www.linkedin.com/in/matslarsson-author/
Host: Marco Ciappelli
Co-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication l Branding & Marketing Advisor | Journalist | Writer | Podcast Host | #Technology #Cybersecurity #Society 🌎 LAX 🛸 FLR 🌍
WebSite: https://marcociappelli.com
On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/
_____________________________
This Episode's Sponsors
BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.
BlackCloak: https://itspm.ag/itspbcweb
_____________________________
⸻ Podcast Summary ⸻
Swedish business consultant Mats Larsson reveals why the electric vehicle transition requires Apollo program-scale government investment. We explore the massive infrastructure gap between EV ambitions and reality, from doubling power generation to training electrification architects. This isn't about building better cars—it's about reimagining our entire transportation ecosystem in our Hybrid Analog Digital Society.
⸻ Article ⸻
When Reality Meets Electric Dreams: Lessons from the Apollo Mindset

I had one of those conversations that stops you in your tracks. Mats Larsson, calling in from Stockholm while I connected from Italy, delivered a perspective on electric vehicles that shattered my comfortable assumptions about our technological transition.

"First of all, we need to admit that we do not know exactly how to build the future. And then we need to start building it." This wasn't just Mats being philosophical—it was a fundamental admission that our approach to electrification has been dangerously naive.

We've been treating the electric vehicle transition like upgrading our smartphones—expecting it to happen seamlessly, almost magically, while we go about our daily lives. But as Mats explained, referencing the Apollo program, monumental technological shifts require something we've forgotten how to do: comprehensive, sustained, coordinated investment in infrastructure we can't even fully envision yet.

The numbers are staggering. To electrify all US transportation, we'd need to double power generation—that's the equivalent of 360 nuclear reactors' worth of electricity. For hydrogen? Triple it. While Tesla and Chinese manufacturers gained their decade-plus advantage through relentless investment cycles, traditional automakers treated electric vehicles as "defensive moves," showcasing capability without commitment.

But here's what struck me most: we need entirely new competencies. "Electrification strategists and electrification architects," as Mats called them—professionals who can design power grids capable of charging thousands of logistics vehicles daily, infrastructure that doesn't exist in our current planning vocabulary.

We're living in this fascinating paradox of our Hybrid Analog Digital Society.
We've become so accustomed to frictionless technological evolution—download an update, get new features—that we've lost appreciation for transitions requiring fundamental systemic change. Electric vehicles aren't just different cars; they're a complete reimagining of energy distribution, urban planning, and even our relationship with mobility itself.This conversation reminded me why I love exploring the intersection of technology and society. It's not enough to build better batteries or faster chargers. We're redesigning civilization's transportation nervous system, and we're doing it while pretending it's just another product launch.What excites me isn't just the technological challenge—it's the human coordination required. Like the Apollo program, this demands that rare combination of visionary leadership, sustained investment, and public will that transcends political cycles and market quarters.Listen to my full conversation with Mats, and let me know: Are we ready to embrace the Apollo mindset for our electric future?Subscribe wherever you get your podcasts, and join me on YouTube for the full experience. Let's continue this conversation—because in our rapidly evolving world, these discussions shape the future we're building together.Cheers,Marco⸻ Keywords ⸻ Electric Vehicles, Technology And Society, Infrastructure, Innovation, Sustainable Transport, electric vehicles, society and technology, infrastructure development, apollo program, energy transition, government investment, technological transformation, sustainable mobility, power generation, digital society__________________ Enjoy. Reflect. Share with your fellow humans.And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.End of transmission.____________________________Listen to more Redefining Society & Technology stories and subscribe to the podcast:👉 https://redefiningsocietyandtechnologypodcast.comWatch the webcast version on-demand on YouTube:👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9Are you interested Promotional Brand Stories for your Company and Sponsoring an ITSPmagazine Channel?👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
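Before moving on, a quick back-of-envelope sketch in Python of the kind of arithmetic behind the "double power generation" point from the conversation above. Every input figure below is my own round assumption for illustration, not a number from the episode; the point is simply that the result lands in the same order of magnitude as the hundreds of reactor-equivalents Mats cites.

# Rough estimate of the electricity needed to electrify US transportation.
# All input figures are illustrative assumptions, not data from the episode.

US_GENERATION_TWH = 4200       # assumed annual US electricity generation, TWh
TRANSPORT_PRIMARY_TWH = 8000   # assumed annual US transportation energy use (mostly petroleum), TWh
EV_EFFICIENCY_GAIN = 3.0       # assumed: electric drivetrains are roughly 3x more efficient than combustion
REACTOR_GW = 1.0               # assumed nameplate capacity of one reactor, GW
CAPACITY_FACTOR = 0.9          # assumed: reactors produce about 90% of nameplate over a year

# Electricity required to replace today's transportation energy
needed_twh = TRANSPORT_PRIMARY_TWH / EV_EFFICIENCY_GAIN

# One reactor's annual output: GW * hours per year * capacity factor, converted to TWh
reactor_twh = REACTOR_GW * 8760 * CAPACITY_FACTOR / 1000

print(f"Extra electricity needed: ~{needed_twh:,.0f} TWh/year "
      f"(~{needed_twh / US_GENERATION_TWH:.0%} of current generation)")
print(f"Reactor-equivalents: ~{needed_twh / reactor_twh:.0f}")
# With these assumptions the script prints roughly 2,700 TWh/year and about 340
# reactor-equivalents: the same ballpark as the figures discussed in the episode,
# before counting charging losses, peak demand, or grid reinforcement.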
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com _____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak:  https://itspm.ag/itspbcweb_____________________________A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3August 18, 2025The Narrative Attack Paradox: When Cybersecurity Lost the Ability to Detect Its Own Deception and the Humanity We Risk When Truth Becomes OptionalReflections from Black Hat USA 2025 on Deception, Disinformation, and the Marketing That Chose Fiction Over FactsBy Marco CiappelliSean Martin, CISSP just published his analysis of Black Hat USA 2025, documenting what he calls the cybersecurity vendor "echo chamber." Reviewing over 60 vendor announcements, Sean found identical phrases echoing repeatedly: "AI-powered," "integrated," "reduce analyst burden." The sameness forces buyers to sift through near-identical claims to find genuine differentiation.This reveals more than a marketing problem—it suggests that different technologies are being fed into the same promotional blender, possibly a generative AI one, producing standardized output regardless of what went in. When an entire industry converges on identical language to describe supposedly different technologies, meaningful technical discourse breaks down.But Sean's most troubling observation wasn't about marketing copy—it was about competence. When CISOs probe vendor claims about AI capabilities, they encounter vendors who cannot adequately explain their own technologies. When conversations moved beyond marketing promises to technical specifics, answers became vague, filled with buzzwords about proprietary algorithms.Reading Sean's analysis while reflecting on my own Black Hat experience, I realized we had witnessed something unprecedented: an entire industry losing the ability to distinguish between authentic capability and generated narrative—precisely as that same industry was studying external "narrative attacks" as an emerging threat vector.The irony was impossible to ignore. Black Hat 2025 sessions warned about AI-generated deepfakes targeting executives, social engineering attacks using scraped LinkedIn profiles, and synthetic audio calls designed to trick financial institutions. Security researchers documented how adversaries craft sophisticated deceptions using publicly available content. Meanwhile, our own exhibition halls featured countless unverifiable claims about AI capabilities that even the vendors themselves couldn't adequately explain.But to understand what we witnessed, we need to examine the very concept that cybersecurity professionals were discussing as an external threat: narrative attacks. These represent a fundamental shift in how adversaries target human decision-making. Unlike traditional cyberattacks that exploit technical vulnerabilities, narrative attacks exploit psychological vulnerabilities in human cognition. Think of them as social engineering and propaganda supercharged by AI—personalized deception at scale that adapts faster than human defenders can respond. 
They flood information environments with false content designed to manipulate perception and erode trust, rendering rational decision-making impossible.What makes these attacks particularly dangerous in the AI era is scale and personalization. AI enables automated generation of targeted content tailored to individual psychological profiles. A single adversary can launch thousands of simultaneous campaigns, each crafted to exploit specific cognitive biases of particular groups or individuals.But here's what we may have missed during Black Hat 2025: the same technological forces enabling external narrative attacks have already compromised our internal capacity for truth evaluation. When vendors use AI-optimized language to describe AI capabilities, when marketing departments deploy algorithmic content generation to sell algorithmic solutions, when companies building detection systems can't detect the artificial nature of their own communications, we've entered a recursive information crisis.From a sociological perspective, we're witnessing the breakdown of social infrastructure required for collective knowledge production. Industries like cybersecurity have historically served as early warning systems for technological threats—canaries in the coal mine with enough technical sophistication to spot emerging dangers before they affect broader society.But when the canary becomes unable to distinguish between fresh air and poison gas, the entire mine is at risk.This brings us to something the literary world understood long before we built our first algorithm. Jorge Luis Borges, the Argentine writer, anticipated this crisis in his 1940s stories like "On Exactitude in Science" and "The Library of Babel"—tales about maps that become more real than the territories they represent and libraries containing infinite books, including false ones. In his fiction, simulations and descriptions eventually replace the reality they were meant to describe.We're living in a Borgesian nightmare where marketing descriptions of AI capabilities have become more influential than actual AI capabilities. When a vendor's promotional language about their AI becomes more convincing than a technical demonstration, when buyers make decisions based on algorithmic marketing copy rather than empirical evidence, we've entered that literary territory where the map has consumed the landscape. And we've lost the ability to distinguish between them.The historical precedent is the 1938 War of the Worlds broadcast, which created mass hysteria from fiction. But here's the crucial difference: Welles was human, the script was human-written, the performance required conscious participation, and the deception was traceable to human intent. Listeners had to actively choose to believe what they heard.Today's AI-generated narratives operate below the threshold of conscious recognition. They require no active participation—they work by seamlessly integrating into information environments in ways that make detection impossible even for experts. When algorithms generate technical claims that sound authentic to human evaluators, when the same systems create both legitimate documentation and marketing fiction, we face deception at a level Welles never imagined: the algorithmic manipulation of truth itself.The recursive nature of this problem reveals itself when you try to solve it. This creates a nearly impossible situation. How do you fact-check AI-generated claims about AI using AI-powered tools? 
How do you verify technical documentation when the same systems create both authentic docs and marketing copy? When the tools generating problems and solving problems converge into identical technological artifacts, conventional verification approaches break down completely.My first Black Hat article explored how we risk losing human agency by delegating decision-making to artificial agents. But this goes deeper: we risk losing human agency in the construction of reality itself. When machines generate narratives about what machines can do, truth becomes algorithmically determined rather than empirically discovered.Marshall McLuhan famously said "We shape our tools, and thereafter they shape us." But he couldn't have imagined tools that reshape our perception of reality itself. We haven't just built machines that give us answers—we've built machines that decide what questions we should ask and how we should evaluate the answers.But the implications extend far beyond cybersecurity itself. This matters far beyond. If the sector responsible for detecting digital deception becomes the first victim of algorithmic narrative pollution, what hope do other industries have? Healthcare systems relying on AI diagnostics they can't explain. Financial institutions using algorithmic trading based on analyses they can't verify. Educational systems teaching AI-generated content whose origins remain opaque.When the industry that guards against deception loses the ability to distinguish authentic capability from algorithmic fiction, society loses its early warning system for the moment when machines take over truth construction itself.So where does this leave us? That moment may have already arrived. We just don't know it yet—and increasingly, we lack the cognitive infrastructure to find out.But here's what we can still do: We can start by acknowledging we've reached this threshold. We can demand transparency not just in AI algorithms, but in the human processes that evaluate and implement them. We can rebuild evaluation criteria that distinguish between technical capability and marketing narrative.And here's a direct challenge to the marketing and branding professionals reading this: it's time to stop relying on AI algorithms and data optimization to craft your messages. The cybersecurity industry's crisis should serve as a warning—when marketing becomes indistinguishable from algorithmic fiction, everyone loses. Social media has taught us that the most respected brands are those that choose honesty over hype, transparency over clever messaging. Brands that walk the walk and talk the talk, not those that let machines do the talking.The companies that will survive this epistemological crisis are those whose marketing teams become champions of truth rather than architects of confusion. When your audience can no longer distinguish between human insight and machine-generated claims, authentic communication becomes your competitive advantage.Most importantly, we can remember that the goal was never to build machines that think for us, but machines that help us think better.The canary may be struggling to breathe, but it's still singing. T
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com _____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak:  https://itspm.ag/itspbcweb_____________________________A Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3August 9, 2025The Agentic AI Myth in Cybersecurity and the Humanity We Risk When We Stop Deciding for OurselvesReflections from Black Hat USA 2025 on the Latest Tech Salvation NarrativeWalking the floors of Black Hat USA 2025 for what must be the 10th or 11th time as accredited media—honestly, I've stopped counting—I found myself witnessing a familiar theater. The same performance we've seen play out repeatedly in cybersecurity: the emergence of a new technological messiah promising to solve all our problems. This year's savior? Agentic AI.The buzzword echoes through every booth, every presentation, every vendor pitch. Promises of automating 90% of security operations, platforms for autonomous threat detection, agents that can investigate novel alerts without human intervention. The marketing materials speak of artificial intelligence that will finally free us from the burden of thinking, deciding, and taking responsibility.It's Talos all over again.In Greek mythology, Hephaestus forged Talos, a bronze giant tasked with patrolling Crete's shores, hurling boulders at invaders without human intervention. Like contemporary AI, Talos was built to serve specific human ends—security, order, and control—and his value was determined by his ability to execute these ends flawlessly. The parallels to today's agentic AI promises are striking: autonomous patrol, threat detection, automated response. Same story, different millennium.But here's what the ancient Greeks understood that we seem to have forgotten: every artificial creation, no matter how sophisticated, carries within it the seeds of its own limitations and potential dangers.Industry observers noted over a hundred announcements promoting new agentic AI applications, platforms or services at the conference. That's more than one AI agent announcement per hour. The marketing departments have clearly been busy.But here's what baffles me: why do we need to lie to sell cybersecurity? You can give away t-shirts, dress up as comic book superheroes with your logo slapped on their chests, distribute branded board games, and pretend to be a sports team all day long—that's just trade show theater, and everyone knows it. But when marketing pushes past the limits of what's even believable, when they make claims so grandiose that their own engineers can't explain them, something deeper is broken.If marketing departments think CISOs are buying these lies, they have another thing coming. These are people who live with the consequences of failed security implementations, who get fired when breaches happen, who understand the difference between marketing magic and operational reality. They've seen enough "revolutionary" solutions fail to know that if something sounds too good to be true, it probably is.Yet the charade continues, year after year, vendor after vendor. The real question isn't whether the technology works—it's why an industry built on managing risk has become so comfortable with the risk of overselling its own capabilities. 
Something troubling emerges when you move beyond the glossy booth presentations and actually talk to the people implementing these systems. Engineers struggle to explain exactly how their AI makes decisions. Security leaders warn that artificial intelligence might become the next insider threat, as organizations grow comfortable trusting systems they don't fully understand, checking their output less and less over time.When the people building these systems warn us about trusting them too much, shouldn't we listen?This isn't the first time humanity has grappled with the allure and danger of artificial beings making decisions for us. Mary Shelley's Frankenstein, published in 1818, explored the hubris of creating life—and intelligence—without fully understanding the consequences. The novel raises the same question we face today: what are humans allowed to do with this forbidden power of creation? The question becomes more pressing when we consider what we're actually delegating to these artificial agents. It's no longer just pattern recognition or data processing—we're talking about autonomous decision-making in critical security scenarios. Conference presentations showcased significant improvements in proactive defense measures, but at what cost to human agency and understanding?Here's where the conversation jumps from cybersecurity to something far more fundamental: what are we here for if not to think, evaluate, and make decisions? From a sociological perspective, we're witnessing the construction of a new social reality where human agency is being systematically redefined. Survey data shared at the conference revealed that most security leaders feel the biggest internal threat is employees unknowingly giving AI agents access to sensitive data. But the real threat might be more subtle: the gradual erosion of human decision-making capacity as a social practice.When we delegate not just routine tasks but judgment itself to artificial agents, we're not just changing workflows—we're reshaping the fundamental social structures that define human competence and authority. We risk creating a generation of humans who have forgotten how to think critically about complex problems, not because they lack the capacity, but because the social systems around them no longer require or reward such thinking.E.M. Forster saw this coming in 1909. In "The Machine Stops," he imagined a world where humanity becomes completely dependent on an automated system that manages all aspects of life—communication, food, shelter, entertainment, even ideas. People live in isolation, served by the Machine, never needing to make decisions or solve problems themselves. When someone suggests that humans should occasionally venture outside or think independently, they're dismissed as primitive. The Machine has made human agency unnecessary, and humans have forgotten they ever possessed it. When the Machine finally breaks down, civilization collapses because no one remembers how to function without it.Don't misunderstand me—I'm not a Luddite. AI can and should help us manage the overwhelming complexity of modern cybersecurity threats. The technology demonstrations I witnessed showed genuine promise: reasoning engines that understand context, action frameworks that enable response within defined boundaries, learning systems that improve based on outcomes. The problem isn't the technology itself but the social construction of meaning around it. 
What we're witnessing is the creation of a new techno-social myth—a collective narrative that positions agentic AI as the solution to human fallibility. This narrative serves specific social functions: it absolves organizations of the responsibility to invest in human expertise, justifies cost-cutting through automation, and provides a technological fix for what are fundamentally organizational and social problems.The mythology we're building around agentic AI reflects deeper anxieties about human competence in an increasingly complex world. Rather than addressing the root causes—inadequate training, overwhelming workloads, systemic underinvestment in human capital—we're constructing a technological salvation narrative that promises to make these problems disappear.Vendors spoke of human-machine collaboration, AI serving as a force multiplier for analysts, handling routine tasks while escalating complex decisions to humans. This is a more honest framing: AI as augmentation, not replacement. But the marketing materials tell a different story, one of autonomous agents operating independently of human oversight.I've read a few posts on LinkedIn and spoke with a few people myself who know this topic way better than me, but I get that feeling too. There's a troubling pattern emerging: many vendor representatives can't adequately explain their own AI systems' decision-making processes. When pressed on specifics—how exactly does your agent determine threat severity? What happens when it encounters an edge case it wasn't trained for?—answers become vague, filled with marketing speak about proprietary algorithms and advanced machine learning.This opacity is dangerous. If we're going to trust artificial agents with critical security decisions, we need to understand how they think—or more accurately, how they simulate thinking. Every machine learning system requires human data scientists to frame problems, prepare data, determine appropriate datasets, remove bias, and continuously update the software. The finished product may give the impression of independent learning, but human intelligence guides every step.The future of cybersecurity will undoubtedly involve more automation, more AI assistance, more artificial agents handling routine tasks. But it should not involve the abdication of human judgment and responsibility. We need agentic AI that operates with transparency, that can explain its reasoning, that acknowledges its limitations. We need systems designed to augment human intelligence, not replace it. Most importantly, we need to resist the seductive narrative that technology alone can solve problems that are fundamentally human in nature. The prevailing logic that tech fixes tech, and that AI will fix AI, is deeply unsettling. It's a recursive delusion that takes us further away from human wisdom and closer to a world where we've forgotten that the most important problems have always required human judgment, not algorithmic solutions.Ancient myt
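To make the "force multiplier, not replacement" framing a little more concrete, here is a tiny, hypothetical sketch of an agent loop that acts only inside defined boundaries and escalates everything else to a human analyst. The thresholds, the Alert fields, and the triage policy are illustrative assumptions, not any vendor's actual system.

# A tiny, hypothetical sketch of AI as augmentation: the agent clears routine noise
# automatically and escalates anything low-confidence or high-severity to a human.
from dataclasses import dataclass


@dataclass
class Alert:
    description: str
    severity: int            # 1 (low) .. 5 (critical), as scored upstream
    model_confidence: float  # 0.0 .. 1.0, the model's confidence in its own triage


CONFIDENCE_FLOOR = 0.85   # assumed policy: below this, a human decides
SEVERITY_CEILING = 3      # assumed policy: above this, a human decides


def triage(alert: Alert) -> str:
    """Return the action taken; never auto-act outside the defined boundaries."""
    if alert.model_confidence < CONFIDENCE_FLOOR or alert.severity > SEVERITY_CEILING:
        return f"ESCALATED to analyst: {alert.description}"
    return f"auto-closed as routine: {alert.description}"


alerts = [
    Alert("known-benign port scan from internal scanner", severity=1, model_confidence=0.97),
    Alert("possible credential misuse on domain controller", severity=5, model_confidence=0.91),
    Alert("unusual outbound traffic, pattern not seen in training", severity=2, model_confidence=0.40),
]

for a in alerts:
    print(triage(a))
# Routine noise is cleared automatically; anything critical or unfamiliar still
# reaches a person, so the judgment stays with the analyst.

The design choice worth noticing is that the boundaries are explicit and auditable; whether a given vendor's "agentic" product exposes anything like them is exactly the question buyers should be asking.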
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com Title: Creative Storytelling in the Age of AI: When Machines Learn to Dream and the Last Stand of Human CreativityGuest: Maury RogowCEO, Rip Media Group | I grow businesses with Ai + video storytelling. Honored to have 70k+ professionals & 800+ brands grow by 2.5Billion Published: Inc, Entrepreneur, ForbesOn LinkedIn: https://www.linkedin.com/in/mauryrogow/Host: Marco CiappelliCo-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication l Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.WebSite: https://marcociappelli.comOn LinkedIn: https://www.linkedin.com/in/marco-ciappelli/_____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak:  https://itspm.ag/itspbcweb_____________________________⸻ Podcast Summary ⸻ I sat across - metaversically speaking - from Maury Rogow, a man who's lived three lives—tech executive, Hollywood producer, storytelling evangelist—and watched him grapple with the same question haunting creators everywhere: Are we teaching our replacements to dream? In our latest conversation on Redefining Society and Technology, we explored whether AI is the ultimate creative collaborator or the final chapter in human artistic expression.⸻ Article ⸻ I sat across from Maury Rogow—a tech exec, Hollywood producer, and storytelling strategist—and watched him wrestle with a question more and more of us are asking: Are we teaching our replacements to dream?Our latest conversation on Redefining Society and Technology dives straight into that uneasy space where AI meets human creativity. Is generative AI the ultimate collaborator… or the beginning of the end for authentic artistic expression?I’ve had my own late-night battles with AI writing tools, struggling to coax a rhythm out of ChatGPT that didn’t feel like recycled marketing copy. Eventually, I slammed my laptop shut and thought: “Screw this—I’ll write it myself.” But even in that frustration, something creative happened. That tension? It’s real. It’s generative. And it’s something Maury deeply understands.“Companies don’t know how to differentiate themselves,” he told me. “So they compete on cost or get drowned out by bigger brands. That’s when they fail.”Now that AI is democratizing storytelling tools, the danger isn’t that no one can create—it’s that everyone’s content sounds the same. Maury gets AI-generated brand pitches daily that all echo the same structure, voice, and tropes—“digital ventriloquism,” as I called it.He laughed when I told him about my AI struggles. “It’s like the writer that’s tired,” he said. “I just start a new session and tell it to take a nap.” But beneath the humor is a real fear: What happens when the tools meant to support us start replacing us?Maury described a recent project where they recreated a disaster scene—flames, smoke, chaos—using AI compositing. No massive crew, no fire trucks, no danger. And no one watching knew the difference. Or cared.We’re not just talking about job displacement. We’re talking about the potential erasure of the creative process itself—that messy, human, beautiful thing machines can mimic but never truly live.And yet… there’s hope. 
Creativity has always been about connecting the dots only you can see. When Maury spoke about watching Becoming Led Zeppelin and reliving the memories, the people, the context behind the music—that’s the spark AI can’t replicate. That’s the emotional archaeology of being human.The machines are learning to dream.But maybe—just maybe—we’re the ones who still know what dreams are worth having.Cheers,Marco⸻ Keywords ⸻ artificial intelligence creativity, AI content creation, human vs AI storytelling, generative AI impact, creative industry disruption, AI writing tools, future of creativity, technology and society, AI ethics philosophy, human creativity preservation, storytelling in AI age, creative professionals AI, digital transformation creativity, AI collaboration tools, machine learning creativity, content creation revolution, artistic expression AI, creative industry jobs, AI generated content, human-AI creative partnership__________________ Enjoy. Reflect. Share with your fellow humans.And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.End of transmission.____________________________Listen to more Redefining Society & Technology stories and subscribe to the podcast:👉 https://redefiningsocietyandtechnologypodcast.comWatch the webcast version on-demand on YouTube:👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9Are you interested Promotional Brand Stories for your Company and Sponsoring an ITSPmagazine Channel?👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com Title: How to hack Global Activism with Tech, Music, and Purpose: A Conversation with Michael Sheldrick, Co-Founder of Global Citizen and Author of “From Ideas to Impact”Guest: Michael SheldrickCo-Founder, Global Citizen | Author of “From Ideas to Impact” (Wiley 2024) | Professor, Columbia University | Speaker, Board Member and Forbes.com ContributorWebSite: https://michaelsheldrick.comOn LinkedIn: https://www.linkedin.com/in/michael-sheldrick-30364051/Global Citizen: https://www.globalcitizen.org/Host: Marco CiappelliCo-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication l Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.WebSite: https://marcociappelli.comOn LinkedIn: https://www.linkedin.com/in/marco-ciappelli/_____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak:  https://itspm.ag/itspbcweb_____________________________⸻ Podcast Summary ⸻ Michael Sheldrick returns to Redefining Society and Technology to share how Global Citizen has mobilized billions in aid and inspired millions through music, tech, and collective action. From social media activism to systemic change, this conversation explores how creativity and innovation can fuel a global movement for good.⸻ Article ⸻ Sometimes, the best stories are the ones that keep unfolding — and Michael Sheldrick’s journey is exactly that. When we first spoke, Global Citizen had just (almost) released their book From Ideas to Impact. This time, I invited Michael back on Redefining Society and Technology because his story didn’t stop at the last chapter.From a high school student in Western Australia who doubted his own potential, to co-founding one of the most influential global advocacy movements — Michael’s path is a testament to what belief and purpose can spark. And when purpose is paired with music, technology, and strategic activism? That’s where the real magic happens.In this episode, we dig into how Global Citizen took the power of pop culture and built a model for global change. Picture this: a concert ticket you don’t buy, but earn by taking action. Signing petitions, tweeting for change, amplifying causes — that’s the currency. It’s simple, smart, and deeply human.Michael shared how artists like John Legend and Coldplay joined their mission not just to play music, but to move policy. And they did — unlocking over $40 billion in commitments, impacting a billion lives. That’s not just influence. That’s impact.We also talked about the role of technology. AI, translation tools, Salesforce dashboards, even Substack — they’re not just part of the story, they’re the infrastructure. From grant-writing to movement-building, Global Citizen’s success is proof that the right tools in the right hands can scale change fast.Most of all, I loved hearing how digital actions — even small ones — ripple out globally. A girl in Shanghai watching a livestream. A father in Utah supporting his daughters’ activism. The digital isn’t just real — it’s redefining what real means.As we wrapped, Michael teased a new bonus chapter he’s releasing, The Innovator. Naturally, I asked him back when it drops. 
Because this conversation isn’t just about what’s been done — it’s about what comes next.So if you’re wondering where to start, just remember Eleanor Roosevelt’s quote Michael brought back: “The way to begin is to begin.”Download the app. Take one action. The world is listening.Cheers,Marco⸻ Keywords ⸻ Society and Technology, AI ethics, generative AI, tech innovation, digital transformation, tech, technology, Global Citizen, Michael Sheldrick, ending poverty, pop culture activism, technology for good, social impact, digital advocacy, Redefining Society, AI in nonprofits, youth engagement, music and change, activism app, social movements, John Legend, sustainable development, global action, climate change, eradicating polio, tech for humanity, podcast on technology__________________ Enjoy. Reflect. Share with your fellow humans.And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join me as I continue exploring life in this Hybrid Analog Digital Society.End of transmission.____________________________Listen to more Redefining Society & Technology stories and subscribe to the podcast:👉 https://redefiningsocietyandtechnologypodcast.comWatch the webcast version on-demand on YouTube:👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9Are you interested in Promotional Brand Stories for your Company and in Sponsoring an ITSPmagazine Channel?👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com _____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak:  https://itspm.ag/itspbcweb_____________________________The Hybrid Species — When Technology Becomes Human, and Humans Become TechnologyA Musing On Society & Technology Newsletter Written By Marco Ciappelli | Read by TAPE3July 19, 2025We once built tools to serve us. Now we build them to complete us. What happens when we merge — and what do we carry forward?A new transmission from Musing On Society and Technology Newsletter, by Marco CiappelliIn my last musing, I revisited Robbie, the first of Asimov’s robot stories — a quiet, loyal machine who couldn’t speak, didn’t simulate emotion, and yet somehow felt more trustworthy than the artificial intelligences we surround ourselves with today. I ended that piece with a question, a doorway:If today’s machines can already mimic understanding — convincing us they comprehend more than they do — what happens when the line between biology and technology dissolves completely? When carbon and silicon, organic and artificial, don’t just co-exist, but merge?I didn’t pull that idea out of nowhere. It was sparked by something Asimov himself said in a 1965 BBC interview — a clip that keeps resurfacing and hitting harder every time I hear it. He spoke of a future where humans and machines would converge, not just in function, but in form and identity. He wasn’t just imagining smarter machines. He was imagining something new. Something between.And that idea has never felt more real than now.We like to think of evolution as something that happens slowly, hidden in the spiral of DNA, whispered across generations. But what if the next mutation doesn’t come from biology at all? What if it comes from what we build?I’ve always believed we are tool-makers by nature — and not just with our hands. Our tools have always extended our bodies, our senses, our minds. A stone becomes a weapon. A telescope becomes an eye. A smartphone becomes a memory. And eventually, we stop noticing the boundary. The tool becomes part of us.It’s not just science fiction. Philosopher Andy Clark — whose work I’ve followed for years — calls us “natural-born cyborgs.” Humans, he argues, are wired to offload cognition into the environment. We think with notebooks. We remember with photographs. We navigate with GPS. The boundary between internal and external, mind and machine, was never as clean as we pretended.And now, with generative AI and predictive algorithms shaping the way we write, learn, speak, and decide — that blur is accelerating. A child born today won’t “use” AI. She’ll think through it. Alongside it. Her development will be shaped by tools that anticipate her needs before she knows how to articulate them. The machine won’t be a device she picks up — it’ll be a presence she grows up with.This isn’t some distant future. It’s already happening. And yet, I don’t believe we’re necessarily losing something. Not if we’re aware of what we’re merging with. Not if we remember who we are while becoming something new.This is where I return, again, to Asimov — and in particular, The Bicentennial Man. 
It’s the story of Andrew, a robot who spends centuries gradually transforming himself — replacing parts, expanding his experiences, developing feelings, claiming rights — until he becomes legally, socially, and emotionally recognized as human. But it’s not just about a machine becoming like us. It’s also about us learning to accept that humanity might not begin and end with flesh.We spend so much time fearing machines that pretend to be human. But what if the real shift is in humans learning to accept machines that feel — or at least behave — as if they care?And what if that shift is reciprocal?Because here’s the thing: I don’t think the future is about perfect humanoid robots or upgraded humans living in a sterile, post-biological cloud. I think it’s messier. I think it’s more beautiful than that.I think it’s about convergence. Real convergence. Where machines carry traces of our unpredictability, our creativity, our irrational, analog soul. And where we — as humans — grow a little more comfortable depending on the very systems we’ve always built to support us.Maybe evolution isn’t just natural selection anymore. Maybe it’s cultural and technological curation — a new kind of adaptation, shaped not in bone but in code. Maybe our children will inherit a sense of symbiosis, not separation. And maybe — just maybe — we can pass along what’s still beautiful about being analog: the imperfections, the contradictions, the moments that don’t make sense but still matter.We once built tools to serve us. Now we build them to complete us.And maybe — just maybe — that completion isn’t about erasing what we are. Maybe it’s about evolving it. Stretching it. Letting it grow into something wider.Because what if this hybrid species — born of carbon and silicon, memory and machine — doesn’t feel like a replacement… but a continuation?Imagine a being that carries both intuition and algorithm, that processes emotion and logic not as opposites, but as complementary forms of sense-making. A creature that can feel love while solving complex equations, write poetry while accessing a planetary archive of thought. A soul that doesn’t just remember, but recalls in high-resolution.Its body — not fixed, but modular. Biological and synthetic. Healing, adapting, growing new limbs or senses as needed. A body that weathers centuries, not years. Not quite immortal, but long-lived enough to know what patience feels like — and what loss still teaches.It might speak in new ways — not just with words, but with shared memories, electromagnetic pulses, sensory impressions that convey joy faster than language. Its identity could be fluid. Fractals of self that split and merge — collaborating, exploring, converging — before returning to the center.This being wouldn’t live in the future we imagined in the ’50s — chrome cities, robot butlers, and flying cars. It would grow in the quiet in-between: tending a real garden in the morning, dreaming inside a neural network at night. Creating art in a virtual forest. Crying over a story it helped write. Teaching a child. Falling in love — again and again, in new and old forms.And maybe, just maybe, this hybrid doesn’t just inherit our intelligence or our drive to survive. Maybe it inherits the best part of us: the analog soul. The part that cherishes imperfection. That forgives. That imagines for the sake of imagining.That might be our gift to the future. 
Not the code, or the steel, or even the intelligence — but the stubborn, analog soul that dares to care.Because if Robbie taught us anything, it’s that sometimes the most powerful connection comes without words, without simulation, without pretense.And if we’re now merging with what we create, maybe the real challenge isn’t becoming smarter — it’s staying human enough to remember why we started creating at all.Not just to solve problems. Not just to build faster, better, stronger systems. But to express something real. To make meaning. To feel less alone. We created tools not just to survive, but to say: “We are here. We feel. We dream. We matter.”That’s the code we shouldn’t forget — and the legacy we must carry forward.Until next time,Marco_________________________________________________📬 Enjoyed this transmission? Follow the newsletter here:https://www.linkedin.com/newsletters/7079849705156870144/New stories always incoming.🌀 Let’s keep exploring what it means to be human in this Hybrid Analog Digital Society.End of transmission._________________________________________________Share this newsletter and invite anyone you think would enjoy it!As always, let's keep thinking!— Marco [https://www.marcociappelli.com]_________________________________________________This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine  | Dr. in Political Science / Sociology of Communication l Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.comTAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com Title: The Human Side of Technology with Abadesi Osunsade — From Diversity to AI and Back AgainGuest: Abadesi OsunsadeFounder @ Hustle Crew WebSite: https://www.abadesi.comOn LinkedIn: https://www.linkedin.com/in/abadesi/Host: Marco CiappelliCo-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication l Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.WebSite: https://marcociappelli.comOn LinkedIn: https://www.linkedin.com/in/marco-ciappelli/_____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak:  https://itspm.ag/itspbcweb_____________________________⸻ Podcast Summary ⸻ What happens when someone with a multicultural worldview, startup grit, and a relentless focus on inclusion sits down to talk about tech, humanity, and the future? You get a conversation like this one with Abadesi Osunsade. We touched on everything from equitable design and storytelling to generative AI and ethics. This episode isn’t about answers — it’s about questions that matter. And it reminded me why I started this show in the first place. ⸻ Article ⸻ Some conversations remind you why you hit “record” in the first place. This one with Abadesi Osunsade — founder of Hustle Crew, podcast host of Techish, and longtime tech leader — was exactly that kind of moment. We were supposed to connect in person at Infosecurity Europe in London, but the chaos of the event kept us from it. I’m glad it worked out this way instead, because what came out of our remote chat was raw, layered, and deeply human. Abadesi and I explored a lot in just over 30 minutes: her journey through big tech and startups, the origins of Hustle Crew, and how inclusion and equity aren’t just HR buzzwords — they’re the foundation of better design. Better products. Better culture. We talked about the usual “why diversity matters” angle — but went beyond it. She shared viral real-world examples of flawed design (like facial recognition or hand dryers that don’t register dark skin) and challenged the myth that inclusive design is more expensive. Spoiler: it’s more expensive not to do it right the first time. Then we jumped into AI — not just how it’s being built, but who is building it. And what it means when those creators don’t reflect the world they’re supposedly designing for. We talked about generative AI, ethics, simulation, capitalism, utopia, dystopia — you know, the usual light stuff. What stood out most, though, was her reminder that this work — inclusion, education, change — isn’t about shame or guilt. It’s about possibility. Not everyone sees the world the same way, so you meet them where they are, with stories, with data, with empathy. And maybe, just maybe, you shift their perspective. This podcast was never meant to be just about tech. It’s about how tech shapes society — and how society, in turn, must shape tech. Abadesi brought that full circle. Take a listen. Think with us. Then go build something better. 
⸻ Keywords ⸻ Society and Technology, AI ethics, generative AI, inclusive design, tech innovation, product development, digital transformation, tech, technology, Diversity & Inclusion, equity in tech, inclusive leadership, unconscious bias, diverse teams, representation matters, belonging at work. Enjoy. Reflect. Share with your fellow humans.And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.https://www.linkedin.com/newsletters/musing-on-society-technology-7079849705156870144You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join us as we continue exploring life in this Hybrid Analog Digital Society.End of transmission.____________________________Listen to more Redefining Society & Technology stories and subscribe to the podcast:👉 https://redefiningsocietyandtechnologypodcast.comWatch the webcast version on-demand on YouTube:👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9Are you interested in Promotional Brand Stories for your Company and in Sponsoring an ITSPmagazine Channel?👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
⸻ Podcast: Redefining Society and Technologyhttps://redefiningsocietyandtechnologypodcast.com _____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak:  https://itspm.ag/itspbcweb_____________________________Robbie, From Fiction to Familiar — Robots, AI, and the Illusion of Consciousness June 29, 2025A new transmission from Musing On Society and Technology Newsletter, by Marco CiappelliI recently revisited one of my oldest companions. Not a person, not a memory, but a story. Robbie, the first of Isaac Asimov’s famous robot tales.It’s strange how familiar words can feel different over time. I first encountered Robbie as a teenager in the 1980s, flipping through a paperback copy of I, Robot. Back then, it was pure science fiction. The future felt distant, abstract, and comfortably out of reach. Robots existed mostly in movies and imagination. Artificial intelligence was something reserved for research labs or the pages of speculative novels. Reading Asimov was a window into possibilities, but they remained possibilities.Today, the story feels different. I listened to it this time—the way I often experience books now—through headphones, narrated by a synthetic voice on a sleek device Asimov might have imagined, but certainly never held. And yet, it wasn’t the method of delivery that made the story resonate more deeply; it was the world we live in now.Robbie was first published in 1939, a time when the idea of robots in everyday life was little more than fantasy. Computers were experimental machines that filled entire rooms, and global attention was focused more on impending war than machine ethics. Against that backdrop, Asimov’s quiet, philosophical take on robotics was ahead of its time.Rather than warning about robot uprisings or technological apocalypse, Asimov chose to explore trust, projection, and the human tendency to anthropomorphize the tools we create. Robbie, the robot, is mute, mechanical, yet deeply present. He is a protector, a companion, and ultimately, an emotional anchor for a young girl named Gloria. He doesn’t speak. He doesn’t pretend to understand. But through his actions—loyalty, consistency, quiet presence—he earns trust.Those themes felt distant when I first read them in the ’80s. At that time, robots were factory tools, AI was theoretical, and society was just beginning to grapple with personal computers, let alone intelligent machines. The idea of a child forming a deep emotional bond with a robot was thought-provoking but belonged firmly in the realm of fiction.Listening to Robbie now, decades later, in the age of generative AI, alters everything. Today, machines talk to us fluently. They compose emails, generate artwork, write stories, even simulate empathy. Our interactions with technology are no longer limited to function; they are layered with personality, design, and the subtle performance of understanding.Yet beneath the algorithms and predictive models, the reality remains: these machines do not understand us. They generate language, simulate conversation, and mimic comprehension, but it’s an illusion built from probability and training data, not consciousness. 
And still, many of us choose to believe in that illusion—sometimes out of convenience, sometimes out of the innate human desire for connection.In that context, Robbie’s silence feels oddly honest. He doesn’t offer comfort through words or simulate understanding. His presence alone is enough. There is no performance. No manipulation. Just quiet, consistent loyalty.The contrast between Asimov’s fictional robot and today’s generative AI highlights a deeper societal tension. For decades, we’ve anthropomorphized our machines, giving them names, voices, personalities. We’ve designed interfaces to smile, chatbots to flirt, AI assistants that reassure us they “understand.” At the same time, we’ve begun to robotize ourselves, adapting to algorithms, quantifying emotions, shaping our behavior to suit systems designed to optimize interaction and efficiency.This two-way convergence was precisely what Asimov spoke about in his 1965 BBC interview, which has been circulating again recently. In that conversation, he didn’t just speculate about machines becoming more human-like. He predicted the merging of biology and technology, the slow erosion of the boundaries between human and machine—a hybrid species, where both evolve toward a shared, indistinct future.We are living that reality now, in subtle and obvious ways. Neural implants, mind-controlled prosthetics, AI-driven decision-making, personalized algorithms—all shaping the way we experience life and interact with the world. The convergence isn’t on the horizon; it’s happening in real time.What fascinates me, listening to Robbie in this new context, is how much of Asimov’s work wasn’t just about technology, but about us. His stories remain relevant not because he perfectly predicted machines, but because he perfectly understood human nature—our fears, our projections, our contradictions.In Robbie, society fears the unfamiliar machine, despite its proven loyalty. In 2025, we embrace machines that pretend to understand, despite knowing they don’t. Trust is no longer built through presence and action, but through the performance of understanding. The more fluent the illusion, the easier it becomes to forget what lies beneath.Asimov’s stories, beginning with Robbie, have always been less about the robots and more about the human condition reflected through them. That hasn’t changed. But listening now, against the backdrop of generative AI and accelerated technological evolution, they resonate with new urgency.I’ll leave you with one of Asimov’s most relevant observations, spoken nearly sixty years ago during that same 1965 interview:“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.”In many ways, we’ve fulfilled Asimov’s vision—machines that speak, systems that predict, tools that simulate. But the question of wisdom, of how we navigate this illusion of consciousness, remains wide open.And, as a matter of fact, this reflection doesn’t end here. If today’s machines can already mimic understanding—convincing us they comprehend more than they do—what happens when the line between biology and technology starts to dissolve completely? When carbon and silicon, organic and artificial, begin to merge for real?That conversation deserves its own space—and it will. 
One of my next newsletters will dive deeper into that inevitable convergence—the hybrid future Asimov hinted at, where defining what’s human, what’s machine, and what exists in-between becomes harder, messier, and maybe impossible to untangle.But that’s a conversation for another day.For now, I’ll sit with that thought, and with Robbie’s quiet, unpretentious loyalty, as the conversation continues.Until next time,Marco_________________________________________________📬 Enjoyed this transmission? Follow the newsletter here:https://www.linkedin.com/newsletters/7079849705156870144/New stories always incoming.🌀 Let’s keep exploring what it means to be human in this Hybrid Analog Digital Society.End of transmission._________________________________________________Share this newsletter and invite anyone you think would enjoy it!As always, let's keep thinking!— Marco [https://www.marcociappelli.com]_________________________________________________This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine  | Dr. in Political Science / Sociology of Communication l Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.comTAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
⸻ Podcast: Redefining Society and Technology https://redefiningsocietyandtechnologypodcast.com Title: Bridging Worlds: How Technology Connects — or Divides — Our Communities Guest: Lawrence Eta Global Digital AI Thought Leader | #1 International Best Selling Author | Keynote Speaker | TEDx Speaker | Multi-Sector Executive | Community & Smart Cities Advocate | Pioneering AI for Societal AdvancementWebSite: https://lawrenceeta.com On LinkedIn: https://www.linkedin.com/in/lawrence-eta-9b11139/ Host: Marco Ciappelli Co-Founder & CMO @ITSPmagazine | Master Degree in Political Science - Sociology of Communication l Branding & Marketing Consultant | Journalist | Writer | Podcasts: Technology, Cybersecurity, Society, and Storytelling.WebSite: https://marcociappelli.com On LinkedIn: https://www.linkedin.com/in/marco-ciappelli/ _____________________________This Episode’s SponsorsBlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impacts of a corporate data breach.BlackCloak:  https://itspm.ag/itspbcweb_____________________________⸻ Podcast Summary ⸻ In this episode of Redefining Society and Technology, I sit down with Lawrence Eta — global technology leader, former CTO of the City of Toronto, and author of Bridging Worlds. We explore how technology, done right, can serve society, reduce inequality, and connect communities. From public broadband projects to building smart — sorry, connected — cities, Lawrence shares lessons from Toronto to Riyadh, and why tech is only as good as the values guiding it. ⸻ Article ⸻ As much as I love shiny gadgets, blinking lights, and funny noises from AI — we both know technology isn’t just about cool toys. It’s about people. It’s about society. It’s about building a better, more connected world. That’s exactly what we explore in my latest conversation on Redefining Society and Technology, where I had the pleasure of speaking with Lawrence Eta. If you don’t know Lawrence yet — let me tell you, this guy has lived the tech-for-good mission. Former Chief Technology Officer for the City of Toronto, current Head of Digital and Analytics for one of Saudi Arabia’s Vision 2030 mega projects, global tech consultant, public servant, author… basically, someone who’s been around the block when it comes to tech, society, and the messy, complicated intersection where they collide. We talked about everything from bridging the digital divide in one of North America’s most diverse cities to building entirely new digital infrastructure from scratch in Riyadh. But what stuck with me most is his belief — and mine — that technology is neutral. It’s how we use it that makes the difference. Lawrence shared his experience launching Toronto’s Municipal Broadband Network — a project that brought affordable, high-speed internet to underserved communities. For him, success wasn’t measured by quarterly profits (a refreshing concept, right?) but by whether kids could attend virtual classes, families could access healthcare online, or small businesses could thrive from home. We also got into the “smart city” conversation — and how even the language we use matters. In Toronto, they scrapped the “smart city” buzzword and reframed the work as building a “connected community.” It’s not about making the city smart — it’s about connecting people, making sure no one gets left behind, and yes, making technology human. 
Lawrence also shared his Five S principles for digital development: Stability, Scalability, Solutions (integration), Security, and Sustainability. Simple, clear, and — let’s be honest — badly needed in a world where tech changes faster than most cities can adapt. We wrapped the conversation with the big picture — how technology can be the great equalizer if we use it to bridge divides, not widen them. But that takes intentional leadership, community engagement, and a shared vision. It also takes reminding ourselves that beneath all the algorithms and fiber optic cables, we’re still human. And — as Lawrence put it beautifully — no matter where we come from, most of us want the same basic things: safety, opportunity, connection, and a better future for our families. That’s why I keep having these conversations — because the future isn’t just happening to us. We’re building it, together. If you missed the episode, I highly recommend listening — especially if you care about technology serving people, not the other way around. Links to connect with Lawrence and to the full episode are below — stay tuned for more, and let’s keep redefining society, together. ⸻ Keywords ⸻ Connected Communities, Smart Cities, Digital Divide, Public Broadband, Technology and Society, Digital Infrastructure, Technology for Good, Community Engagement, Urban Innovation, Digital Inclusion, Public-Private Partnerships, Tech Leadership. Enjoy. Reflect. Share with your fellow humans.And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join us as we continue exploring life in this Hybrid Analog Digital Society.End of transmission.____________________________Listen to more Redefining Society & Technology stories and subscribe to the podcast:👉 https://redefiningsocietyandtechnologypodcast.comWatch the webcast version on-demand on YouTube:👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9Are you interested in Promotional Brand Stories for your Company and in Sponsoring an ITSPmagazine Channel?👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
What Hump? Thirty Years of Cybersecurity and the Fine Art of Pretending It’s Not a Human ProblemA new transmission from Musing On Society and Technology Newsletter, by Marco CiappelliJune 6, 2025A Post-Infosecurity Europe Reflection on the Strange but Predictable Ways We’ve Spent Thirty Years Pretending Cybersecurity Isn’t About People.⸻ Once there was a movie titled “Young Frankenstein” (1974) — a black-and-white comedy directed by Mel Brooks, written with Gene Wilder, and starring Wilder and Marty Feldman, who delivers the iconic “What hump?” line.Let me describe the scene:[Train station, late at night. Thunder rumbles. Dr. Frederick Frankenstein steps off the train, greeted by a hunched figure holding a lantern — Igor.]Igor: Dr. Frankenstein?Dr. Frederick Frankenstein: It’s Franken-steen.Igor: Oh. Well, they told me it was Frankenstein.Dr. Frederick Frankenstein: I’m not a Frankenstein. I’m a Franken-steen.Igor (cheerfully): All right.Dr. Frederick Frankenstein (noticing Igor’s eyes): You must be Igor.Igor: No, it’s pronounced Eye-gor.Dr. Frederick Frankenstein (confused): But they told me it was Igor.Igor: Well, they were wrong then, weren’t they?[They begin walking toward the carriage.]Dr. Frederick Frankenstein (noticing Igor’s severe hunchback): You know… I’m a rather brilliant surgeon. Perhaps I could help you with that hump.Igor (looks puzzled, deadpan): What hump?[Cut to them boarding the carriage, Igor climbing on the outside like a spider, grinning wildly.]It’s a joke, of course. One of the best. A perfectly delivered absurdity that only Mel Brooks and Marty Feldman could pull off. But like all great comedy, it tells a deeper truth.Last night, standing in front of the Tower of London, recording one of our On Location recaps with Sean Martin, that scene came rushing back. We joked about invisible humps and cybersecurity. And the moment passed. Or so I thought.Because hours later — in bed, hotel window cracked open to the London night — I was still hearing it: “What hump?”And that’s when it hit me: this isn’t just a comedy bit. It’s a diagnosis. Here we are at Infosecurity Europe, celebrating its 30th anniversary. Three decades of cybersecurity: a field born of optimism and fear, grown in complexity and contradiction.We’ve built incredible tools. We’ve formed global communities of defenders. We’ve turned “hacker” from rebel to professional job title — with a 401(k), branded hoodies, and a sponsorship deal. But we’ve also built an industry that — much like poor Igor — refuses to admit something’s wrong.The hump is right there. You can see it. Everyone can see it. And yet… we smile and say: “What hump?”We say cybersecurity is a priority. We put it in slide decks. We hold awareness months. We write policies thick enough to be used as doorstops. But then we underfund training. We silo the security team. We click links in emails that say whatever will make us think it’s important — just like those pieces of snail mail stamped URGENT that we somehow believe, even though it turns out to be an offer for a new credit card we didn’t ask for and don’t want. Except this time, the payload isn’t junk mail — it’s a clown on a spring exploding out of a fun box.Igor The hump moves, shifts, sometimes disappears from view — but it never actually goes away. And if you ask about it? Well… they were wrong then, weren’t they?That's because it’s not a technology problem. This is the part that still seems hard to swallow for some: Cybersecurity is not a technology problem. 
It never was.Yes, we need technology. But technology has never been the weak link.The weak link is the same as it was in 1995: us. The same as it was before the internet and before computers: Humans.With our habits, assumptions, incentives, egos, and blind spots. We are the walking, clicking, swiping hump in the system. We’ve had encryption for decades. We’ve known about phishing since the days of AOL. Zero Trust was already discussed in 2004 — it just didn’t have a cool name yet.So why do we still get breached? Why does a ransomware gang with poor grammar and a Telegram channel take down entire hospitals?Because culture doesn’t change with patches. Because compliance is not belief. Because we keep treating behavior as a footnote, instead of the core.The Problem We Refuse to See. At the heart of this mess is a very human phenomenon: If we can’t see it, we pretend it doesn’t exist.We can quantify risk, but we rarely internalize it. We trust our tech stack but don’t trust our users. We fund detection but ignore education.And not just at work — we ignore it from the start. We still teach children how to cross the street, but not how to navigate a phishing attempt or recognize algorithmic manipulation. We give them connected devices before we teach them what being connected means. In this Hybrid Analog Digital Society, we need to treat cybersecurity not as an optional adult concern, but as a foundational part of growing up. Because by the time someone gets to the workforce, the behavior has already been set.And worst of all, we operate under the illusion that awareness equals transformation.Let’s be real: Awareness is cheap. Change is expensive. It costs time, leadership, discomfort. It requires honesty. It means admitting we are all Igor, in some way. And that’s the hardest part. Because no one likes to admit they’ve got a hump — especially when it’s been there so long, it feels like part of the uniform.We have been looking the other way for over thirty years. I don’t want to downplay the progress. We’ve come a long way, but that only makes the stubbornness more baffling.We’ve seen attacks evolve from digital graffiti to full-scale extortion. We’ve watched cybercrime move from subculture to multi-billion-dollar global enterprise. And yet, our default strategy is still: “Let’s build a bigger wall, buy a shinier tool, and hope marketing doesn’t fall for that PDF again.”We know what works: Psychological safety in reporting. Continuous learning. Leadership that models security values. Systems designed for humans, not just admins.But those are hard. They’re invisible on the balance sheet. They don’t come with dashboards or demos. So instead… We grin. We adjust our gait. And we whisper, politely:“What hump?”So what happens now? If you’re still reading this, you’re probably one of the people who does see it. You see the hump. You’ve tried to point it out. Maybe you’ve been told you’re imagining things. Maybe you’ve been told it’s “not a priority this quarter.” And maybe now you’re tired. I get it.But here’s the thing: Nothing truly changes until we name the hump.Call it bias.Call it culture.Call it education.Call it the human condition.But don’t pretend it’s not there. Not anymore. Because every time we say “What hump?” — we’re giving up a little more of the future. A future that depends not just on clever code and cleverer machines, but on something far more fragile:Belief. Behavior. And the choice to finally stop pretending.We joked in front of a thousand-year-old fortress. 
Because sometimes jokes tell the truth better than keynote stages do. And maybe the real lesson isn’t about cybersecurity at all.Maybe it’s just this: If we want to survive what’s coming next, we have to see what’s already here.- The End➤ Infosecurity Europe: https://www.itspmagazine.com/infosecurity-europe-2025-infosec-london-cybersecurity-event-coverageAnd ... we're not done yet ... stay tuned and follow Sean and Marco as they will be On Location at the following conferences over the next few months:➤ Black Hat USA in Las Vegas in August: https://www.itspmagazine.com/black-hat-usa-2025-hacker-summer-camp-2025-cybersecurity-event-coverage-in-las-vegasFOLLOW ALL OF OUR ON LOCATION CONFERENCE COVERAGEhttps://www.itspmagazine.com/technology-and-cybersecurity-conference-coverageShare this newsletter and invite anyone you think would enjoy it!As always, let's keep thinking!— Marco [https://www.marcociappelli.com]📬 Enjoyed this transmission? Follow the newsletter here:https://www.linkedin.com/newsletters/7079849705156870144/New stories always incoming.🌀 Let’s keep exploring what it means to be human in this Hybrid Analog Digital Society.End of transmission.Share this newsletter and invite anyone you think would enjoy it!As always, let's keep thinking!— Marco [https://www.marcociappelli.com]_________________________________________________This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine  | Dr. in Political Science / Sociology of Communication l Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.comTAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
From Cassette Tapes and Phrasebooks to AI Real-Time Translations — Machines Can Now Speak for Us, But We’re Losing the Art of Understanding Each Other May 21, 2025A new transmission from Musing On Society and Technology Newsletter, by Marco CiappelliThere’s this thing I’ve dreamed about since I was a kid.No, it wasn’t flying cars. Or robot butlers (although I wouldn’t mind one to fold the laundry). It was this: having a real conversation with someone — anyone — in their own language, and actually understanding each other.And now… here we are.Reference: Google brings live translation to Meet, starting with Spanish. https://www.engadget.com/apps/google-brings-live-translation-to-meet-starting-with-spanish-174549788.htmlGoogle just rolled out live AI-powered translation in Google Meet, starting with Spanish. I watched the demo video, and for a moment, I felt like I was 16 again, staring at the future with wide eyes and messy hair.It worked. It was seamless. Flawless. Magical.And then — drumroll, please — it sucked!Like… really, existentially, beautifully sucked.Let me explain.I’m a proud member of Gen X. I grew up with cassette tapes and Walkmans, boomboxes and mixtapes, floppy disks and Commodore 64s, reel-to-reel players and VHS decks, rotary phones and answering machines. I felt language — through static, rewinds, and hiss.Yes, I had to wait FOREVER to hit Play and Record, at the exact right moment, tape songs off the radio onto a Maxell, label it by hand, and rewind it with a pencil when the player chewed it up.I memorized long-distance dialing codes. I waited weeks for a letter to arrive from a pen pal abroad, reading every word like it was a treasure map.That wasn’t just communication. That was connection.Then came the shift.I didn’t miss the digital train — I jumped on early, with curiosity in one hand and a dial-up modem in the other.Early internet. Mac OS. My first email address felt like a passport to a new dimension. I spent hours navigating the World Wide Web like a digital backpacker — discovering strange forums, pixelated cities, and text-based adventures in a binary world that felt limitless.I said goodbye to analog tools, but never to analog thinking.So what is the connection with learning languages?Well, here’s the thing: exploring the internet felt a lot like learning a new language. You weren’t just reading text — you were decoding a culture. You learned how people joked. How they argued. How they shared, paused, or replied with silence. You picked up on the tone behind a blinking cursor, or the vibe of a forum thread.Similarly, when you learn a language, you’re not just learning words — you’re decoding an entire world. It’s not about the words themselves — it’s about the world they build. You’re learning gestures. Food. Humor. Social cues. Sarcasm. The way someone raises an eyebrow, or says “sure” when they mean “no.”You’re learning a culture’s operating system, not just its interface. AI translation skips that. It gets you the data, but not the depth. It’s like getting the punchline without ever hearing the setup.And yes, I use AI to clean up my writing. To bounce translations between English and Italian when I’m juggling stories. But I still read both versions. I still feel both versions. I’m picky — I fight with my AI counterpart to get it right. To make it feel the way I feel it. To make you feel it, too. Even now.I still think in analog, even when I’m living in digital.So when I watched that Google video, I realized:We’re not just gaining a tool. 
We’re at risk of losing something deeply human — the messy, awkward, beautiful process of actually trying to understand someone who moves through the world in a different language — one that can’t be auto-translated.Because sometimes it’s better to speak broken English with a Japanese friend and a Danish colleague — laughing through cultural confusion — than to have a perfectly translated conversation where nothing truly connects.This isn’t just about language. It’s about every tool we create that promises to “translate” life. Every app, every platform, every shortcut that promises understanding without effort.It’s not the digital that scares me. I use it. I live in it. I am it, in many ways. It’s the illusion of completion that scares me.The moment we think the transformation is done — the moment we say “we don’t need to learn that anymore” — that’s the moment we stop being human.We don’t live in 0s and 1s. We live in the in-between. The gray. The glitch. The hybrid.So yeah, cheers to AI-powered translation, but maybe keep your Walkman nearby, your phrasebook in your bag — and your curiosity even closer.Go explore the world. Learn a few words in a new language. Mispronounce them. Get them wrong. Laugh about it. People will appreciate your effort far more than your fancy iPhone.Alla prossima,— Marco 📬 Enjoyed this transmission? Follow the newsletter here:https://www.linkedin.com/newsletters/7079849705156870144/New stories always incoming.🌀 Let’s keep exploring what it means to be human in this Hybrid Analog Digital Society.End of transmission. Share this newsletter and invite anyone you think would enjoy it!As always, let's keep thinking!— Marco [https://www.marcociappelli.com]_________________________________________________This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine  | Dr. in Political Science / Sociology of Communication l Branding | Content Marketing | Writer | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.comTAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
Guest: Jeremy LasmanWebsite: https://www.jeremylasman.comLinkedIn: https://www.linkedin.com/in/jeremylasman_____________________________Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society & Technology PodcastVisit Marco's website 👉 https://www.marcociappelli.com _____________________________This Episode’s SponsorsBlackCloak 👉 https://itspm.ag/itspbcweb_____________________________Show Notes Blog:In this thought-provoking episode of Redefining Society & Technology, I sit down with Jeremy Lasman to question the most overlooked gadget in the human-tech equation: our own mind. We ask — if we keep updating our devices, why don’t we update the inner operating system that powers our thoughts, creativity, and connection to the world?Jeremy, a former SpaceX technologist turned philosopher-inventor, shares his journey from corporate IT to what he calls his “soul’s work”: challenging the legacy software running our lives — fear-based, outdated models of thinking — with something he calls “Imagination Technology.” It’s not metaphorical. It’s a real framework. And yes, it sounds wild — but it also makes a lot of sense.We touch on everything from open-source thinking to quantum consciousness, from the speed of technological evolution to the bottlenecks of our cultural structures like education and societal expectations. At the center is a call to action: we need to stop treating passion as a luxury and instead recognize it as the fuel for personal and collective evolution.Together, we reflect on how society tends to silo disciplines, discourage curiosity, and cling to binary thinking in a world that demands fluidity. Jeremy argues that redefining society begins with redefining the self — tearing down internal walls, embracing timelessness, and running life not on fear, but on imagination.Is this transhumanism? Is it spiritual philosophy dressed up in tech language? Maybe. But it’s also deeply human — and urgent. Because in a world where AI and tech evolve by the day, we can’t afford to be running on emotional floppy disks.So here’s the challenge: what if the next big upgrade isn’t an app, a device, or even a new piece of hardware — but a reprogramming of how we see ourselves?Enjoy. Reflect. Share with your fellow humans.And if you haven’t already, subscribe to Musing On Society & Technology on LinkedIn — new transmissions are always incoming.You’re listening to this through the Redefining Society & Technology podcast, so while you’re here, make sure to follow the show — and join us as we continue exploring life in this Hybrid Analog Digital Society.End of transmission.____________________________Listen to more Redefining Society & Technology stories and subscribe to the podcast:👉 https://redefiningsocietyandtechnologypodcast.comWatch the webcast version on-demand on YouTube:👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9Are you interested in Promotional Brand Stories for your Company and in Sponsoring an ITSPmagazine Channel?👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
The Future Is a Place We Visit, But Never StayMay 9, 2025A Post-RSAC 2025 Reflection on the Kinda Funny and Pretty Weird Ways Society, Technology, and Cybersecurity Intersect, Interact, and Often Simply Ignore Each Other.By Marco Ciappelli | Musing on Society and TechnologyHere we are — once again, back from RSAC. Back from the future. Or at least the version of the future that fits inside a conference badge, a branded tote bag, and a hotel bill that makes you wonder if your wallet just got hacked.San Francisco is still buzzing with innovation — or at least that’s what the hundreds of self-driving cars swarming the city would have you believe. It’s hard to feel like you’re floating into a Jetsons-style future when your shuttle ride is bouncing through potholes that feel more 1984 than 2049.I have to admit, there’s something oddly poetic about hosting a massive cybersecurity event in a city where most attendees would probably rather not be — and yet, here we are. Not for the scenery. Not for the affordability. But because, somehow, for a few intense days, this becomes the place where the future lives.And yes, it sometimes looks like a carnival. There are goats. There are puppies. There are LED-lit booths that could double as rave stages. Is this how cybersecurity sells the feeling of safety now? Warm fuzzies and swag you’ll never use? I’m not sure.But again: here we are.There’s a certain beauty in it. Even the ridiculous bits. Especially the ridiculous bits.Personally, I’m grateful for my press badge — it’s not just a backstage pass; it’s a magical talisman that wards off the pitch-slingers. The power of not having a budget is strong with this one.But let’s set aside the Frankensteins in the expo hall for a moment.Because underneath the spectacle — behind the snacks, the popcorns, the scanners and the sales demos — there is something deeply valuable happening. Something that matters to me. Something that has kept me coming back, year after year, not for the products but for the people. Not for the tech, but for the stories.What RSAC Conference gives us — what all good conferences give us — is a window. A quick glimpse through the curtain at what might be.And sometimes, if you’re lucky and paying attention, that glimpse stays with you long after the lights go down.We have quantum startups talking about cryptographic agility while schools are still banning phones. We have generative AI writing software — code that writes code — while lawmakers print bills that read like they were faxed in from 1992. We have cybersecurity vendors pitching zero trust to rooms full of people still clinging to the fantasy of perimeter defense — not just in networks, but in their thinking.We’re trying to build the future on top of a mindset that refuses to update.That’s the real threat. Not AI and quantum. Not ransomware. Not the next zero-day.It’s the human operating system. It hasn’t been patched in a while.And so I ask myself — what are these conferences for, really?Because yes, of course, they matter.Of course I believe in them — otherwise I wouldn’t be there, recording stories, chasing conversations, sharing a couch and a mic with whoever is bold enough to speak not just about how we fix things, but why we should care at all.But I’m also starting to believe that unless we do something more — unless we act on what we learn, build on what we imagine, challenge what we assume — these gatherings will become time capsules. 
Beautiful, well-produced, highly caffeinated, blinking, noisy time capsules.We don’t need more predictions. We need more decisions.One of the most compelling conversations I had wasn’t about tech at all. It was about behavior. Human behavior.Dr. Jason Nurse reminded us that most people are not just confused by cybersecurity — they’re afraid of it.They’re tired.They’re overwhelmed.And in their confusion, they become unpredictable. Vulnerable.Not because they don’t care — but because we haven’t built a system that makes it easy to care.That’s a design flaw.Elsewhere, I heard the term “AI security debt.” That one stayed with me.Because it’s not just technical debt anymore. It’s existential.We are creating systems that evolve faster than our ability to understand them — and we’re doing it with the same blind trust we used to install browser toolbars in the ‘90s.“Sure, it seems useful. Click accept.”We’ve never needed collective wisdom more than we do right now.And yet, most of what we build is designed for speed, not wisdom.So what do we do?We pause. We reflect. We resist the urge to just “move on” to the next conference, the next buzzword, the next promised fix.Because the real value of RSAC isn’t in the badge or the swag or the keynotes.It’s in the aftershock.It’s in what we carry forward, what we refuse to forget, what we dare to question even when the conference is over, the blinking booths vanish, the future packs up early, and the lanyards go into the drawer of forgotten epiphanies — right next to the stress balls, the branded socks and the beautiful prize that you didn't win.We’ll be in Barcelona soon. Then London. Then Vegas.We’ll gather again. We’ll talk again. But maybe — just maybe — we can start to shift the story.From visiting the future… To staying a while.Let’s build something we don’t want to walk away from. And now, ladies and gentlemen… the show is over.The lights dim, the music fades, and the future exits stage left...Until we meet again.—Marco ResourcesRead first newsletter about RSAC 2025 I wrote last week " Securing Our Future Without Leaving Half Our Minds in the Past" https://www.linkedin.com/pulse/securing-our-future-without-leaving-half-minds-past-marco-ciappelli-cry1c/🎙️ Explore Our Full RSAC 2025 Coverage on ITSPmagazine We would like to thank our full event coverage sponsors and look forward to our On Location conversations. ThreatLocker: https://itspm.ag/threatlocker-r974Akamai Technologies: https://itspm.ag/akamailbwcBLACKCLOAK: https://itspm.ag/itspbcwebSandboxAQ: https://itspm.ag/sandboxaq-j2enArcher Integrated Risk Management: https://itspm.ag/rsaarchwebISACA: https://itspm.ag/isaca-96808Object First: https://itspm.ag/object-first-2gjlEdera: https://itspm.ag/edera-434868 ... and thank you to our event briefing partners, with whom we will also record On Location briefingsInfinidat: https://itspm.ag/infini3o5dCoalfire: https://itspm.ag/coalfire-yj4wManageEngine: https://itspm.ag/manageen-631623Detecteam: https://itspm.ag/detecteam-21686Stellar Cyber: https://itspm.ag/stellar-cyber--inc--357947Qualys: https://itspm.ag/qualys-908446Corelight: https://itspm.ag/coreligh-954270Anomali: https://itspm.ag/anomali-bdz393 And ... we're not done yet ... 
Stay tuned and follow Sean and Marco as they will be On Location at the following conferences over the next few months:

➤ OWASP® Foundation AppSec Global in Barcelona in May: https://www.itspmagazine.com/owasp-global-appsec-barcelona-2025-application-security-event-coverage-in-catalunya-spain
➤ Infosecurity Europe in London in June: https://www.itspmagazine.com/infosecurity-europe-2025-infosec-london-cybersecurity-event-coverage
➤ Black Hat USA in Las Vegas in August: https://www.itspmagazine.com/black-hat-usa-2025-hacker-summer-camp-2025-cybersecurity-event-coverage-in-las-vegas

FOLLOW ALL OF OUR ON LOCATION CONFERENCE COVERAGE
https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage

Share this newsletter and invite anyone you think would enjoy it!

As always, let's keep thinking!

— Marco [https://www.marcociappelli.com]

_________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com

TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.
During RSAC Conference 2025, Andrew Carney, Program Manager at DARPA, and (remotely via video) Dr. Kathleen Fisher, Professor at Tufts University and Program Manager for the AI Cyber Challenge (AIxCC), guide attendees through an immersive experience called Northbridge—a fictional city designed to showcase the critical role of AI in securing infrastructure through the DARPA-led AI Cyber Challenge.

Inside Northbridge: The Stakes Are Real
Northbridge simulates the future of cybersecurity, blending AI, infrastructure, and human collaboration. It’s not just a walkthrough — it’s a call to action. Through simulated attacks on water systems, healthcare networks, and cyber operations, visitors witness firsthand the tangible impacts of vulnerabilities in critical systems. Dr. Fisher emphasizes that the AI Cyber Challenge isn’t theoretical: the vulnerabilities competitors find and fix directly apply to real open-source software relied on by society today.

The AI Cyber Challenge: Pairing Generative AI with Cyber Reasoning
The AI Cyber Challenge (AIxCC) invites teams from universities, small businesses, and consortiums to create cyber reasoning systems capable of autonomously identifying and fixing vulnerabilities. Leveraging leading foundation models from Anthropic, Google, Microsoft, and OpenAI, the teams operate with tight constraints—working with limited time, compute, and LLM credits—to uncover and patch vulnerabilities at scale. Remarkably, during the semifinals, teams found and fixed nearly half of the synthetic vulnerabilities, and even discovered a real-world zero-day in SQLite.
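The episode stays at the level of what these systems accomplish rather than how they are built, but for readers who want a concrete mental model of "pairing generative AI with cyber reasoning," here is a deliberately minimal sketch. It is not DARPA's architecture or any competitor's code: the ask_model helper, the review prompt, and the budget figures are illustrative assumptions standing in for a real foundation-model API and real resource limits.

```python
# Purely illustrative sketch of an LLM-assisted "find and propose a fix" loop.
# NOT the AIxCC architecture; ask_model() is a hypothetical stand-in for any
# foundation-model API, and the budget numbers are made up for the example.

from pathlib import Path

LLM_CREDIT_BUDGET = 20          # assumption: how many model calls we can afford
MAX_FILE_BYTES = 20_000         # skip anything too large to review in one prompt


def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a hosted foundation model.
    A real system would call an actual API here; this stub just returns
    a canned answer so the sketch runs end to end."""
    return "NO_ISSUE_FOUND"


def review_file(path: Path) -> str:
    """Ask the model to flag a likely vulnerability and propose a patch."""
    code = path.read_text(errors="ignore")[:MAX_FILE_BYTES]
    prompt = (
        "You are reviewing C source code for memory-safety bugs.\n"
        "If you find a likely vulnerability, describe it and propose a patch "
        "as a unified diff. If not, reply NO_ISSUE_FOUND.\n\n" + code
    )
    return ask_model(prompt)


def scan_repo(repo_root: str) -> list[tuple[Path, str]]:
    """Walk a repository, spending a limited model-call budget."""
    findings, credits = [], LLM_CREDIT_BUDGET
    for path in sorted(Path(repo_root).rglob("*.c")):
        if credits == 0:
            break
        credits -= 1
        answer = review_file(path)
        if answer != "NO_ISSUE_FOUND":
            findings.append((path, answer))  # a real system would now test the patch
    return findings


if __name__ == "__main__":
    for path, report in scan_repo("."):
        print(f"[candidate] {path}\n{report}\n")
```

A real competitor's system presumably does far more (building the target, fuzzing it, and regression-testing every candidate patch before submission), but the budgeted review loop above captures the basic shape the episode describes: a model in the loop, hard resource limits, and patches as the output.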
Building Toward DEFCON Finals and Beyond
The journey doesn’t end at RSA. As the teams prepare for the AIxCC finals at DEFCON 2025, DARPA is increasing the complexity of the challenge—and the available resources. Beyond the competition, a core goal is public benefit: all cyber reasoning systems developed through AIxCC will be open-sourced under permissive licenses, encouraging widespread adoption across industries and government sectors.

From Competition to Collaboration
Carney and Fisher stress that the ultimate victory isn’t in individual wins, but in strengthening cybersecurity collectively. Whether securing hospitals, water plants, or financial institutions, the future demands cooperation across public and private sectors.

The Northbridge experience offers a powerful reminder: resilience in cybersecurity is built not through fear, but through innovation, collaboration, and a relentless drive to secure the systems we all depend on.

___________

Guest: Andrew Carney, AI Cyber Challenge Program Manager, Defense Advanced Research Projects Agency (DARPA) | https://www.linkedin.com/in/andrew-carney-945458a6/

Hosts:
Sean Martin, Co-Founder at ITSPmagazine | Website: https://www.seanmartin.com
Marco Ciappelli, Co-Founder at ITSPmagazine | Website: https://www.marcociappelli.com

______________________

Episode Sponsors
ThreatLocker: https://itspm.ag/threatlocker-r974
Akamai: https://itspm.ag/akamailbwc
BlackCloak: https://itspm.ag/itspbcweb
SandboxAQ: https://itspm.ag/sandboxaq-j2en
Archer: https://itspm.ag/rsaarchweb
Dropzone AI: https://itspm.ag/dropzoneai-641
ISACA: https://itspm.ag/isaca-96808
ObjectFirst: https://itspm.ag/object-first-2gjl
Edera: https://itspm.ag/edera-434868

___________

Resources
The DARPA AIxCC Experience at RSAC 2025 Innovation Sandbox: https://www.rsaconference.com/usa/programs/sandbox/darpa
Learn more and catch more stories from RSAC Conference 2025 coverage: https://www.itspmagazine.com/rsac25

___________

KEYWORDS
andrew carney, kathleen fisher, marco ciappelli, sean martin, darpa, aixcc, cybersecurity, rsac 2025, defcon, ai cybersecurity, event coverage, on location, conference

______________________

Catch all of our event coverage: https://www.itspmagazine.com/technology-and-cybersecurity-conference-coverage
Want to tell your Brand Story Briefing as part of our event coverage? Learn More 👉 https://itspm.ag/evtcovbrf
Want Sean and Marco to be part of your event or conference? Let Us Know 👉 https://www.itspmagazine.com/contact-us
Guest: Dr. Bruce Y. Lee
Senior Contributor @Forbes | Professor | CEO | Writer/Journalist | Entrepreneur | Digital & Computational Health | #AI | bruceylee.substack.com | bruceylee.com

Bruce Y. Lee, MD, MBA is a writer, journalist, systems modeler, AI, computational and digital health expert, professor, physician, entrepreneur, and avocado-eater, not always in that order.
Executive Director of PHICOR (Public Health Informatics, Computational, and Operations Research) [@PHICORteam]
On LinkedIn | https://www.linkedin.com/in/bruce-y-lee-68a6834/
Website | https://www.bruceylee.com/

_____________________________

Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast
Visit Marco’s website 👉 https://www.marcociappelli.com

_____________________________

This Episode’s Sponsors
BlackCloak 👉 https://itspm.ag/itspbcweb

_____________________________

We’re back at the bar. Bruce is here, the garlic took the day off (too young to drink?), and we’re talking about something that’s not science fiction anymore — the idea that your digital self could outlive you.

Yeah. Living forever. Or at least… being replicated forever.

It starts with a hologram of Princess Leia and ends with people in Japan marrying bots. And in between? There’s a messy, fascinating, unsettling space filled with AI companions, algorithmic flattery, uncanny valley doppelgängers, and the very real possibility that we’re confusing memory with simulation.

Bruce brings up Star Trek — of course he does — where Captain Kirk debates a machine version of a long-dead friend who insists he’s still the real deal. Spoiler: Kirk says no. And I get it. But what if that machine knows everything I’ve ever posted, recorded, written, liked, said, or searched? What if it feels like me?

Would you want to talk to it?

As always, our conversation doesn’t offer a final answer — we’re not here to draw lines in the philosophical sand. We’re here to hold up a mirror and ask: is that reflection still you if it’s built out of pixels and training data?

This episode is personal and playful, but also incredibly relevant. Because we’re already building legacies we don’t fully understand. Every photo, every search, every rant, every laugh — it’s all on the record now. Our historical memory is no longer dusty boxes in the attic; it’s a neural net waiting to be queried.

So yeah, one day, you might be sipping your espresso while a synthetic version of your late uncle offers you advice, cracks a joke, and asks if you still listen to that one podcast.

Just remember what Captain Kirk said: that might look like him, sound like him, even think like him — but it’s not really him.

Still… it’s a hell of a conversation.

So join Bruce and me. Pull up a virtual stool. It’s Season 2, Episode 3. And no, that laugh you just heard isn’t AI-generated — not yet.

⸻

Keywords:
digital immortality, AI relationships, uncanny valley, chatbot therapy, synthetic identity, Star Trek, brain uploading, holograms, emotional AI, algorithmic intimacy, digital clone, memory simulation, techno-sociology, posthumanism, virtual consciousness, AI ethics, social engineering, digital legacy, artificial friends, future of identity

See You Next Time
You’ll find links to connect with Bruce and explore his incredible contributions in journalism and medicine. I promise you: he’s just as insightful and entertaining as he seems in the series.
So, see you next time – same bar, same garlic, new topics!

_____________________________

Resources/References
The Singularity Is Nearer: When We Merge with AI by Ray Kurzweil

____________________________

Listen to more Redefining Society & Technology stories and subscribe to the podcast:
👉 https://redefiningsocietyandtechnologypodcast.com

Watch the webcast version on-demand on YouTube:
👉 https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in Promotional Brand Stories for your company or in sponsoring an ITSPmagazine channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast
🪐 From Myth to Machine: When Stories Shaped Our Journey to the Stars
April 9, 2025

Before humanity launched rockets toward distant planets or placed satellites that quietly orbit our Earth, before telescopes pierced the cosmic veil to reveal distant galaxies, we looked to the night sky armed only with wonder. Beneath starlit skies, humans gathered around fires, weaving myths from scattered constellations. These celestial bodies became our companions—gods, heroes, tricksters—not simply pinpoints of distant light, but storytellers of destiny and reflection.

Then came Galileo, a solitary figure who raised a simple tube of lenses skyward and irrevocably altered humanity’s story. His telescope shattered myths, replacing divine portraits with measurable landscapes. Mountains on the moon, moons around Jupiter—Galileo did not silence imagination; instead, he opened a door between wonder and reality, bridging storytelling and science.

Yet, even as telescopes multiplied and humanity’s understanding deepened, our dreams kept pace, evolving into vibrant visions and audacious predictions. Writers began to sketch the future with an uncanny precision that blurred fiction and foresight. Jules Verne and H.G. Wells planted the seeds of possibility with lunar voyages and Martian encounters, not as mere entertainment, but as blueprints for what humanity could dare to achieve.

As technology accelerated in the twentieth century, our visions became grander, more complex, filled with moral ambiguities and philosophical questions. Isaac Asimov imagined civilizations stretching across galaxies, bound by logic and law, but also warned of humanity’s fragile reliance on machines. Arthur C. Clarke envisioned not just interplanetary travel but the ethical challenges of artificial intelligence. Frank Herbert’s Dune intricately wove ecology, politics, and spirituality into a cosmic tapestry, urging readers to reflect deeply on humanity’s relationship with power and environment.

Meanwhile, cinema transformed space narratives from pages to powerful collective experiences. George Lucas and Gene Roddenberry projected humanity’s oldest myths onto the widest canvas imaginable, framing space as a realm not just of exploration but of profound human drama. Star Wars and Star Trek—epics filled with heroism, redemption, and philosophical explorations—became cultural phenomena that informed and inspired generations, molding our collective hopes and cautions about life beyond our planet.

Today, we find ourselves not in an imagined future, but in a tangible present shaped by these rich narratives. Private companies and national agencies alike are racing to build orbital stations and lunar outposts, and are even laying plans for interplanetary commerce. Space is no longer distant fantasy—it is critical infrastructure woven deeply into our digital, political, and economic lives.

Yet crucial questions linger:
What stories do we now tell ourselves about space?
Are we still guided by the optimism and cautionary lessons learned from generations of dreamers?
Or are we seduced by spectacle, distracted by the headlines, losing sight of the nuanced realities and responsibilities that accompany our cosmic ambitions?

The stories we tell about space shape not only our visions of the future but our very journey toward it.
Let’s make sure our next chapter is one worth writing.

As always, let's keep thinking!

— Marco

_________________________________________________

Join us at ITSPmagazine for a live webinar that separates hype from reality, examining what is achievable today, what remains decades away, and what might still be forever in the realm of fiction. Together with experts in aerospace engineering, space policy, and cybersecurity, we will confront the profound implications of humanity’s increasing reliance on space-based infrastructure.

Space Is Closer Than You Think: But What’s Real, What’s Hype, and What’s Next
Space Innovation, Unfiltered: A reality check on what’s achievable today and what’s merely speculative.
The State of Space Governance: Who is shaping the rules of engagement in orbit, and how do these decisions impact life on Earth?
The Cybersecurity Front Line: Examining vulnerabilities in space infrastructure and their potential consequences back home.

Panelists:
Lauryn Williams, Former Chief of Staff in the Defense Industrial Base Policy Office at the Pentagon and former Director for Strategy in the White House Office of the National Cyber Director
Jim Free, Former NASA (National Aeronautics and Space Administration) Associate Administrator
Chris Sembroski, Chief Astronaut & Founding Advisory Board Member at Titans Space Industries
Tim Fowler, Founder and CEO at ETHOS Labs, LLC

Moderators:
Sean Martin, CISSP, Co-Founder, ITSPmagazine
Marco Ciappelli, Co-Founder, ITSPmagazine

🗓️ Join us Live (or later on demand)
Thursday, April 10, 2025 | 1:00 PM EST
👉 Register here: https://www.crowdcast.io/c/space-is-closer-than-you-think-but-whats-real-whats-hype-and-whats-next-an-itspmagazine-thought-leadership-webinar-april-2025-8592895e690a

_________________________________________________

This story represents the results of an interactive collaboration between Human Cognition and Artificial Intelligence.

Marco Ciappelli | Co-Founder, Creative Director & CMO ITSPmagazine | Dr. in Political Science / Sociology of Communication | Branding | Content Marketing | Storyteller | My Podcasts: Redefining Society & Technology / Audio Signals / + | MarcoCiappelli.com

TAPE3 is the Artificial Intelligence behind ITSPmagazine—created to be a personal assistant, writing and design collaborator, research companion, brainstorming partner… and, apparently, something new every single day.

Enjoy, think, share with others, and subscribe to the "Musing On Society & Technology" newsletter on LinkedIn.