The AI Guardrail Podcast with Kuya Dev


Author: Rem "Kuya Dev" Lampa


Description

Decoding AI ethics and governance from the engineering front lines. Perspectives from the Global South.

ai.kuya.dev
4 Episodes
I’m back after a month away from the podcast, as life got pretty hectic lately. But I’m excited to share what I’ve been thinking about, especially a topic that sparked some interesting conversations on social media.

In this episode, I dive deep into a claim that’s been on my mind: vibe coding is fundamentally probabilistic, while traditional programming is deterministic. And no, before you conclude I’m against AI-assisted development: I’m not. What I want to do is help you understand the crucial difference between these approaches so you can make informed decisions about how to use them responsibly.

We’ll walk through what probabilistic and deterministic systems actually mean, examine how they apply to both vibe coding and manual code writing, and explore what that trade-off really costs you. I also share some personal updates about streamlining my machine learning workflow for graduate school.

Whether you agree with my take or not, I’d love to hear your thoughts. Drop them in the comments below!

Episode Timestamps:
- 0:00 – Introduction and update on my MSc in Responsible AI journey
- 6:55 – Main topic introduction: Vibe coding vs. traditional programming
- 8:08 – Defining probabilistic vs. deterministic systems
- 10:59 – How this applies to vibe coding
- 12:31 – How this applies to traditional programming
- 16:40 – Why people think traditional programming is probabilistic
- 20:24 – The accountability question and trade-offs
- 21:14 – Wrapping up the core argument

Thanks so much for tuning in; it means a lot to have you here. If this resonates with you, I’d really appreciate a like or share to help this conversation reach more people who care about responsible AI development.

Looking forward to hearing your perspective in the comments, and I’ll catch you on the next episode.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit ai.kuya.dev
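The probabilistic-vs-deterministic distinction at the heart of this episode can be sketched in a few lines of Python. This is an illustrative toy, not code from the episode: `add` behaves like hand-written code (same input, same output, every run), while `sample_completion` is a hypothetical stand-in for an AI assistant that *samples* its suggestion from a temperature-scaled softmax, so identical input can yield different output.

```python
import math
import random

def add(a, b):
    # Deterministic: hand-written code returns the same output
    # for the same inputs on every run.
    return a + b

def sample_completion(scores, temperature=1.0, rng=None):
    # Probabilistic: a toy stand-in for an LLM picking its next
    # suggestion by sampling from a temperature-scaled softmax over
    # candidate completions; repeated calls with identical input can differ.
    rng = rng or random.Random()
    tokens = list(scores)
    weights = [math.exp(scores[t] / temperature) for t in tokens]
    total = sum(weights)
    r = rng.random() * total
    for token, w in zip(tokens, weights):
        r -= w
        if r <= 0:
            return token
    return tokens[-1]

assert add(2, 3) == 5  # always 5, run after run

candidates = {"return x": 2.0, "return y": 1.0, "raise ValueError": 0.5}
print(sample_completion(candidates))  # may print a different key each run
```

The point of the contrast: with `add`, correctness can be verified once and holds forever; with `sample_completion`, every invocation is a fresh draw, which is exactly the accountability trade-off the episode digs into.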
I’m thrilled to share that I’ve officially become a certified AI Governance Professional (AIGP)!

In this episode, I walk you through my entire certification journey: why I pursued it, how I prepared, what makes the exam so challenging, and concrete tips if you’re thinking about taking it yourself.

Beyond the certification, I also dive into updates from my master’s program in responsible AI, including a fascinating session on real-time language translation from a Google engineer, and share my critical take on the latest open-source AI agent hype (Moltbot/OpenClaw).

Episode Highlights & Timestamps:
- 0:00 – Introduction and my AIGP certification announcement
- 7:20 – Graduate school updates: Computer vision, NLP, and learning from industry experts
- 12:00 – The Moltbot phenomenon: Why the hype doesn’t match reality
- 17:30 – Why I pursued AIGP certification (and why it matters beyond lawyers)
- 28:30 – The exam structure, difficulty, and why practice exams are essential
- 32:00 – Does the certification make you an expert?
- 35:40 – My top tips for aspiring AIGP candidates
- 40:30 – Who needs this certification and why governance bridges the gap between tech and law

Like I promised, here are resources to help you (these are NOT affiliate links):
* IAPP AIGP Certification (Purchase)
* AIGP Body of Knowledge (version 2.1, 2026)
* AIGP Study Guide
* IAPP Memberships
* Official AIGP Practice Exam
* Dr. Kyle’s AIGP Masterclass on his website
* Dr. Kyle’s AIGP Masterclass on Udemy (just purchase it from his website instead)
* Official IAPP AIGP Online Training

AI governance isn’t just for lawyers; it’s for engineers, product managers, and anyone building AI systems. This certification grounded me in the legal frameworks that actually constrain what’s possible, and that’s invaluable knowledge for anyone serious about responsible AI.

If you’re exploring AI, AI governance, or AI ethics, whether professionally or out of curiosity, I genuinely believe the AIGP path is worth considering.
I’m diving into three significant actions the Philippine Department of Information and Communications Technology (DICT) has taken recently, and I wanted to unpack them with you in this episode. These aren’t simple tech decisions; they touch on governance, privacy, free speech, and the responsibility that comes with government technology initiatives.

What we discuss:

The conversation covers three main areas: the DICT’s exaggerated claims about blockchain security for the national budget, the decision to ban Grok, and a new proposal requiring social media verification for Filipino citizens. I’ll walk through why I believe some of these approaches miss the mark, not because they’re trying to solve the wrong problems, but because the solutions themselves carry risks we need to think through carefully.

I also share a bit about my current journey in my Master’s program in Responsible AI and my preparation for the AI Governance Professional exam.

Episode Timestamps:
- 00:00 – Introduction and episode overview
- 01:44 – Personal introduction and career transition story
- 05:02 – Updates on my Master’s in Responsible AI
- 08:36 – Main topic: DICT’s recent actions begin
- 09:14 – The blockchain press conference and the "101% unhackable" claim
- 19:40 – What responsible communication should sound like
- 33:00 – The Grok ban and questions about implementation
- 43:13 – Social media verification proposal and privacy concerns
- 48:52 – The right to voice concerns (even without perfect solutions)
- 52:00 – Closing thoughts and call to action

Thank you so much for taking the time to listen. I know this episode ran long and touched on heavy topics; that’s partly because these issues matter, and partly because the DICT’s recent decisions have sparked important conversations across our tech community.

I don’t expect everyone to agree with me, and honestly, you don’t have to. What matters most is that we’re all thinking critically about how technology shapes our governance, our privacy, and our society.

If you found value in this episode, I’d genuinely appreciate it if you’d share it with others, leave a comment with your thoughts, or subscribe wherever you listen to podcasts, whether that’s YouTube, Spotify, Apple Podcasts, or here on Substack.
I'm thrilled to share the inaugural episode of The AI Guardrail Podcast with you: a platform dedicated to examining artificial intelligence through a critical and thoughtful lens.

In this episode, I discuss what motivated me to launch this podcast, tracing it back to a TEDx talk I gave in Davao where I first encountered the concept of "data colonialism." I explore why I felt compelled to become a voice for responsible AI, particularly within the Global South, where these conversations are still largely absent.

The episode also features my analysis of recent remarks by AMD CEO Lisa Su about scaling AI compute 100x and her projection that 5 billion people will be using AI by 2031. I examine what these ambitious claims mean for populations in developing nations and question whether such projections account for the billions living below the poverty line. Here's the link to her CES interview.

Throughout, I share my perspective on why we must push back against narratives that treat endless AI scaling as inevitable, and why small businesses and communities shouldn't feel pressured to adopt technology simply for the sake of growth.

Timestamps:
- 00:02 – Introduction and episode overview
- 02:34 – My TEDx talk and the genesis of this project
- 08:22 – Discovering data colonialism and AI's hidden costs
- 14:50 – Why I'm starting this podcast now
- 22:41 – AMD CEO Lisa Su's AI projections analyzed
- 27:55 – Who will actually use AI by 2031?
- 35:45 – Why not everything needs to scale
- 39:05 – Tech CEO bubble and the Global South perspective
- 41:38 – Closing remarks

Thank you for joining me on this first episode. Whether you're curious about AI ethics, governance, or simply want to hear a thoughtful critique of where the AI industry is heading, I hope you'll find value here.

I'm not here to dismiss artificial intelligence; it has genuinely transformative applications. Rather, I want to ensure that as AI advances, we don't lose sight of its human and environmental costs, and that voices from communities beyond Silicon Valley get heard.

I'd love to hear your thoughts. Drop a comment, share this with someone who'd appreciate the conversation, and subscribe to stay updated as we dive deeper into these critical topics.