The Tech Trek

Author: Elevano


Description

The Tech Trek is a podcast for founders, builders, and operators who are in the arena building world-class tech companies. Host Amir Bormand sits down with the people responsible for product, engineering, data, and growth, and digs into how they ship, who they hire, and what they do when things break. If you want a clear view into how modern startups really get built, from the first line of code to traction and scale, this show takes you inside the work.
628 Episodes
Anjali Jameson, Chief Product Officer at Arbiter, says the hard part is not gathering data. It is getting action across patients, providers, and payers without breaking what already works.

“Automating something that’s broken is not going to necessarily give us better outcomes.”

Arbiter is a care orchestration platform built for patients, providers, and payers together, not a single point solution. The operating spine ingests and makes actionable data across the patient journey, including provider directories, EMR integrations, claims, and financial and policy data from health plans, then connects it to highly personalized, multi-channel agentic outreach. You will hear why cross-system context matters, how total cost of care stays in view while each stakeholder chases different leading metrics, and what it looks like to move from automation into optimization, like going from a call center scheduling flow to 60 percent conversion and pushing toward 95 percent.

Timeline
00:40 Care orchestration platform, operating spine, data across the patient journey
04:33 Misaligned incentives, prior authorizations, 12 to 14 hours a week
09:42 Total cost of care, star metric, building for different metrics
12:25 Long-form personalized videos, transportation, education, medication management
15:02 Prior authorization from three to six days to almost instantaneous
22:07 COVID, provider messaging up two to three X, AI responds faster

Subscribe and share it with someone who is building in health tech.
Most data teams do not have a tooling problem. They have a customer service problem.

Mo Villagran, Associate Director of Insights, Analytics, and Data at Cambrex, argues that stakeholder expectation management is the difference between being a trusted advisor and being an order taker.

"In a simple word, it's really just customer service."

In this episode, Mo breaks down how to manage stakeholder expectations, define expected delivery value, and keep projects aligned to real business outcomes instead of chasing rebranded tools. She shares why simple solutions often win, how to show progress even when the work is plumbing, and why qualitative stakeholder testimony beats dashboard-count KPIs. You will also hear how she thinks about AI as a tool, when it works, when it is just a cool toy, and how to build trust by demoing in real time.

Timestamped Highlights
00:02:00 Stakeholder expectation management is customer service
00:03:00 Why skeleton teams can still deliver value
00:06:00 Who defines expected delivery value, and how to shape it
00:09:00 Negotiate expectations, do not become an order taker
00:18:00 How to show progress when there is nothing visual
00:21:00 Stop chasing quantitative KPIs, win with testimony

Subscribe and share this episode with anyone who is knee-deep in stakeholder management.
Ashok Krishnamurthi, Managing Partner at Great Point Ventures, says the biggest mistake in venture capital is confusing prediction with judgment. Early stage investing is not about perfect stories; it is about first principles and picking the founder who can execute when the story breaks. This episode is for startup founders and investors who want a cleaner filter for what matters.

“You have to learn to check your ego at the door because it’s a partnership.”

Ashok shares his path from engineering into building companies, then into venture capital, and explains how he forms an investment thesis when markets are noisy. We talk about founder evaluation, why picking the jockey matters more than the idea, and how first principles thinking shows up in real domains like healthcare data and cancer. We also get practical about artificial intelligence: why AI is not only a compute race, and how AI inference, energy efficiency, and cost shape what wins.

Timestamped Highlights
00:00 Why legacy matters more than VC metrics
02:28 Engineer to founder to venture capital
11:16 How to pick the jockey
14:21 First principles, cancer data, and AI constraints
23:24 AI is here to stay, keep your mind open
30:15 How to reach Ashok

If this episode helped, subscribe and share it with a builder or investor who will use it.
Aditya Agarwal did not plan to work in robotics. He got rejected from his first-choice major, joined a student club to keep his parents off his back, and stumbled into one of the fastest-growing fields in tech. Now he is Head of Robotics at Medra, a company building physical AI scientists that let researchers run experiments remotely at speeds a traditional lab cannot touch.

"Even the companies that have made the most progress haven't deployed at the scale of laptops, cars, or phones. So if you have experience scaling hardware products, that is super valuable at an early-stage robotics company."

What we get into: why the PhD requirement is mostly gone, how AI is shrinking the hardware development timeline, and the cheapest way to start building with robotics today if you cannot afford to go back to school or take a step back in your career.

Timestamped Highlights
01:19 The accidental path into robotics that actually worked
03:04 Whether you still need an engineering degree for hardware roles
04:48 Master's degree vs. early-stage startup: what gets you there faster
10:57 How AI is replacing the guesswork in hardware configuration
15:51 How to start learning robotics at home without spending much
18:38 Why rigid hiring processes are costing robotics teams good candidates

If this one lands, subscribe and share it with someone who has been thinking about making a move into the space.
Ronak Desai, Co-founder and CPTO at Payment Labs, breaks down a surprisingly hard problem that sits at the intersection of fintech, sports, and compliance. If you have ever assumed paying winners is just a simple payout flow, this episode will change that view fast.

Payment Labs helps tournament organizers, league operators, and modern sports businesses handle payouts plus tax compliance and support, all in one system. Ronak explains why spot payments are high risk, why manual workflows still dominate the space, and how stablecoins and AI are about to reshape fraud, identity, and trust.

Key Takeaways
• One-time payouts are a fraud magnet; inconsistent winners and risk-based rules make verification and compliance much harder than payroll
• Solving payments without solving tax and forms still leaves the biggest liability sitting with the organizer
• Many sports and esports operators still run payouts in a surprisingly analog way: checks, cash, and post-event cleanup
• AI is now good enough to pressure identity verification, and stablecoins make recovery harder because transfers are effectively final
• Product adoption depends on meeting users where they are; younger athletes expect texting and simple flows, not tickets and portals

Timestamped Highlights
00:29 What Payment Labs actually does: payouts plus tax compliance plus support for sports, esports, and creator economy use cases
01:15 The origin story: a real tax problem hit an esports operator and exposed how broken the payout workflow is
02:46 Why spot payments raise risk: random recipients, fraud pressure, and why bank partners treat this differently than payroll
04:58 The industry reality check: still running on checks and cash, and what digitizing the workflow unlocks next
06:58 AI fraud versus AI detection: how identity verification is getting bypassed and why stablecoin rails raise the stakes
11:55 The NIL wild west and the product lesson: meet athletes where they already live, including iMessage support

A Line Worth Repeating
“Now you have AI committing the fraud and then you have AI detecting the fraud.”

Pro Tips for Builders and Operators
• If your users are young and mobile-first, build support where they already communicate; texting beats ticketing for adoption
• Do not bolt on AI for a storyline; use it where it replaces manual work you already do and frees time for higher leverage decisions
• Map your tasks with the Eisenhower quadrant, then automate what is repetitive before you chase shiny features

Call to Action
If this episode helped you think differently about fintech, fraud, and modern payout infrastructure, follow the show and share it with a founder or operator who touches payments. For more conversations at the intersection of tech, data, and real world execution, connect with Amir on LinkedIn and subscribe to the Elevano newsletter.
Healey Cypher, CEO of BoomPop and COO at Atomic, breaks down what separates founders who win from founders who stall. You will hear a clear way to judge whether an idea is truly worth building, plus the trust mechanics that get investors, customers, and teammates to actually follow you.

This conversation is a practical map for tech builders who want to pick smarter problems, execute faster, and earn credibility without the founder theater.

Key Takeaways
• Founders matter most, but the idea is still a gate; the same great team can get wildly different outcomes depending on the market and timing
• VC-backed is a specific game; it requires not just big potential but fast scale, and the incentives are not the same as building a profitable lifestyle business
• A quick reality check for market size: if you need more than about five to seven percent penetration to hit meaningful revenue, it is usually a brutal path
• Painkillers beat vitamins; solve an urgent problem people feel right now, or you risk getting cut the moment budgets tighten
• Trust is built through authenticity, logic, and empathy; if one wobbles, people feel it fast, and progress slows everywhere

Timestamped Highlights
00:00:00 Healey’s background, why BoomPop, and what the episode is really about
00:02:00 The post-pandemic spend shift and the “why now” behind modern events and group travel
00:04:30 Founder versus idea: why execution dominates, but the opportunity still decides the ceiling
00:06:40 The VC reality: power law returns, speed, and why some good businesses are still a no for venture
00:09:15 A simple market math test: penetration levels that become a growth wall
00:19:00 Trust as a founder skill: the three ingredients and how to spot when one is missing
00:21:30 Vulnerability as a shortcut to real connection, plus the giver mindset that makes people want you to win

A line worth stealing
“If everyone wants you to win, it is a lot easier to win.”

Pro Tips for Tech Founders
• Ask yourself what you naturally look forward to doing; that is often your zone of strength, so hire around the tasks you dread
• Learn the financial basics early, especially cash flow; it is the scoreboard that keeps you alive long enough to win
• When trust is lagging, check the three levers: are you showing the real you, can people follow your reasoning, do they feel you care about their outcomes

What's next
If you build products, lead teams, or are thinking about starting something, follow the show so you do not miss episodes like this. Also connect with me on LinkedIn for short takeaways and clips from each conversation.
Ty Wang, cofounder and CEO of Angle Health, breaks down what it means to give back through public service, then shows how that same mindset drives his mission to modernize healthcare for small and midsize businesses. We get into why legacy health plans feel opaque and painful, what an AI-native health plan actually changes behind the scenes, and how better data and workflows can create real cost stability for employers.

Ty shares his path from a federal scholarship and national service work to Palantir, and why he chose one of the most regulated, least glamorous industries to build in. If you have ever wondered why healthcare feels impossible to navigate, or why renewals can blindside a company, this conversation will give you a clear mental model of the problem and a practical view of what modernization looks like when it actually ships.

Key Takeaways
• Healthcare feels broken because the infrastructure is fragmented, data is siloed, and even basic questions become hard to answer across inconsistent systems
• Modernizing healthcare is not just about a new app; it is about rebuilding the operational core so workflows, claims, underwriting, and member experience can run on integrated data
• Small and midsize businesses are hit hardest by cost volatility because they lack transparency, predictability, and negotiating leverage, yet health insurance is often a top line item after payroll
• A strong approach to regulated markets is collaborative: treat regulators as partners in consumer protection, not obstacles to work around
• Mission and impact can be a recruiting advantage, especially when the technical problems are genuinely hard and the outcomes touch real people fast

Timestamped Highlights
00:40 What Angle Health is, and what AI-native means in a real health plan
02:05 The scholarship path that pulled Ty into public service and set his trajectory
04:06 The personal story behind the mission, the American dream, and why access matters
09:38 Why healthcare infrastructure is so complex, and how siloed systems create bad experiences
11:33 Why SMBs get squeezed, and how manual administration blocks customization at scale
13:20 The real pain point for employers: cost volatility and zero predictability before renewal
16:55 Why the tech can expand beyond SMBs, but why the SMB market is already massive
19:51 Lessons from building in a regulated industry, and why credibility and funding matter
22:26 Hiring for high-agency, mission-driven talent in a world full of AI companies

A line that sticks
“Unless you are lucky enough to work for a big company, these modern healthcare services are still largely inaccessible to the vast majority of Americans.”

Pro Tips for tech operators and builders
• If you are modernizing a legacy industry, start with the infrastructure layer: fix the data model, integrate the systems, then automate workflows
• In regulated markets, build relationships early, show how your product improves consumer outcomes, and make compliance a design constraint, not a bolt-on
• When selling into SMBs, predictability beats perfection; give customers a clear breakdown of what drives costs and what they can control

What's next
If this episode helped you see healthcare and legacy modernization more clearly, follow the show on Apple Podcasts or Spotify and subscribe so you do not miss the next conversation. Also, share it with one operator or builder who is trying to modernize a messy industry.
Gabe Ravacci, CTO and co-founder at Internet Backyard, breaks down what the “compute economy” really looks like when you zoom in on data centers, billing, invoicing, and the financial plumbing nobody wants to touch. He shares how a rejected YC application, a finance stint, and a handful of hard lessons pushed him from hardware curiosity to building fintech infrastructure for compute.

If you care about where compute is headed, or you are early in your career and trying to find your path without overplanning it, this one will land.

Key Takeaways
• Startups often happen “by accident” when your competence meets the right problem at the right time
• Compute accessibility is not only a chip problem; it is also a finance and operations problem
• Rejection can be data, not a verdict; treat it as feedback to sharpen the craft
• A real online presence is less about networking and more about being genuinely useful in public
• Time blocking and single-task focus beat grinding when you are juggling school, work, and a startup

Timestamped Highlights
00:28 What Internet Backyard is building: fintech infrastructure for data center financial operations
01:37 The first startup attempt: cheaper compute via FPGA-based prototyping, and why investors passed
04:48 The pivot: from hardware tools to a finance-informed view of compute and transparency gaps
06:55 How Gabe reframed YC rejection: process over outcome, “a tree of failures” that builds skill
08:29 Building a digital brand on X: what he posted, how he learned in public, and why it worked
13:36 The real balancing act: dropping classes, finishing the degree well, and strict time blocking
20:00 Books that shaped his thinking: Siddhartha, The Art of Learning, Finite and Infinite Games

A line worth keeping
“The process is really more important than any outcome.”

Pro Tips for builders
• Treat learning like a skill; ask better questions before you chase better answers
• Make focus a system: set blocks, mute distractions, and do one thing at a time
• Share what you are learning in public, not to perform, but to be useful and find signal

Call to Action
If this episode sparked an idea, follow or subscribe so you do not miss the next one. Also check out Amir’s newsletter for more conversations at the intersection of people, impact, and technology.
Data leaders are being asked to ship real AI outcomes while the foundations are still messy. In this conversation, Dave Shuman, Chief Data Officer at Precisely, breaks down what actually determines whether AI adoption sticks, from hiring “comb-shaped” talent to building trusted data products that make AI outputs believable and usable.

If you are building in data, AI, or analytics, this episode is a practical map for what needs to be true before AI can move from demos to dependable, repeatable impact.

Key Takeaways
• Comb-shaped talent beats narrow specialization; AI work rewards people who can span multiple skills and collaborate well
• Adoption is a trust problem, and trust starts with data integrity, lineage, context, and a semantic layer that business users can understand
• Open source drives the innovation; commercialization makes it safe and usable at enterprise scale, especially around security and support
• Data must be fit for purpose; start every AI project by asking what data it needs, who curates it, and what the known warts are
• Humans are still the last mile; small workflow choices can make adoption jump, even when the model is already accurate

Timestamped Highlights
00:56 The shift from T-shaped to comb-shaped talent: what modern AI teams actually need to look like
05:36 Hiring for team fit over “world class” niche skills, and when to bring in trusted partners for depth
07:37 How open source sparks the ideas, and why enterprises still need hardened, supported versions to scale
11:31 Where AI adoption is today: why summarization is only the beginning, and what unlocks “AI 2.0”
13:39 The trust stack for AI: clean integrated data, lineage, context, catalog, semantic layer, then agents
19:26 A real adoption lesson from machine learning, and why the human experience decides if the system wins

A line worth stealing
“You do not just take generative AI and throw it at your chaos of data and expect it to make magic out of it.”

Pro Tips for data and AI leaders
• Hire and build teams like Tetris: fill skill voids across the group instead of chasing one perfect profile
• Use partners for the sharp edges, but require knowledge transfer so your team levels up every engagement
• Make adoption easier by designing for human behavior; sometimes the smallest workflow tweak beats more accuracy
• Build governed data products in a catalog, then validate AI outputs side by side with dashboards to earn trust fast

Call to Action
If this helped you think more clearly about AI adoption, talent, and data foundations, follow the show and turn on notifications so you do not miss the next episode. Also, share it with one data or engineering leader who is trying to get AI out of pilots and into real workflows.
Cloud bills are climbing, AI pipelines are exploding, and storage is quietly becoming the bottleneck nobody wants to own. Ugur Tigli, CTO at MinIO, breaks down what actually changes when AI workloads hit your infrastructure, and how teams can keep performance high without letting costs spiral.

In this conversation, we get practical about object storage, S3 as the modern standard, what open source really means for security and speed, and why “cloud” is more of an operating model than a place.

Key takeaways
• AI multiplies data, not just compute; training and inference create more checkpoints, more versions, more storage pressure
• Object storage and S3 are simplifying the persistence layer, even as the layers above it get more complex
• Open source can improve security feedback loops because the community surfaces regressions fast; the real risk is running unsupported, outdated versions
• Public cloud costs are often less about storage and more about variable charges like egress; many teams move data on-prem to regain predictability
• The bar for infrastructure teams is rising; Kubernetes, modern storage, and AI workflow literacy are becoming table stakes

Timestamped highlights
00:00 Why cloud and AI workloads force a fresh look at storage, operating models, and cost control
00:00 What MinIO is, and why high performance object storage sits at the center of modern data platforms
01:23 Why MinIO chose open source, and how they balance freedom with commercial reality
04:08 Open source and security: why faster feedback beats the closed source perception, plus the real risk factor
09:44 Cloud cost realities: egress, replication, and why “fixed costs” drive many teams back inside their own walls
15:04 The persistence layer is getting simpler: S3 becomes the standard, while the upper stack gets messier
18:00 Skills gap: why teams need DevOps plus AIOps thinking to run modern storage at scale
20:22 What happens to AI costs next: competition, software ecosystem maturity, and why data growth still wins

A line worth keeping
“Cloud is not a destination for us, it’s more of an operating model.”

Pro tips for builders and tech leaders
• If your AI initiative is still a pilot, track egress and data movement early; that is where “surprise” costs tend to show up
• Standardize around containerized deployment where possible; it reduces the gap between public and private environments, but plan for integration friction like identity and key management
• Treat storage as a performance system, not a procurement line item; the right persistence layer can unblock training, inference, and downstream pipelines

What's next
If you’re building with AI, running data platforms, or trying to get your cloud costs under control, follow the show and subscribe so you do not miss upcoming episodes. Share this one with a teammate who owns infrastructure, data, or platform engineering.
This is an early conversation I am bringing back because it feels even more relevant now: the intersection of AI and art is turning into a real cultural shift.

I sit down with Marnie Benney, independent curator at the intersection of contemporary art and technology, and co-founder of AIartists.org, a major community for artists working with AI. We talk about what AI art actually is beyond the headlines, where authorship gets messy, and why artists might be the best people to pressure-test the societal impact of machine learning.

Key takeaways
• AI in art is not a single thing; it is a spectrum of choices: dataset, process, medium, and intent
• The most interesting work treats AI as a collaborator, not a shortcut, a back-and-forth that reshapes the artist’s decisions
• Authorship is still unsettled; some artists see AI as a tool like an instrument, others treat it as a creative partner
• The fear that AI replaces creativity misses the point; artists can use the machine’s unexpected output to expand human expression
• Access matters; compute, tooling, and collaboration between artists and technologists will shape who gets to experiment at the frontier

Timestamped highlights
00:04:00 Curating science, climate, and public engagement: the path into tech-driven exhibitions
00:07:41 What AI art can mean in practice: datasets, iteration loops, and choosing an output medium
00:10:48 Who gets credit: tool versus collaborator, and the art world’s evolving rules
00:13:51 Fear, job displacement, and a healthier frame: human plus machine as a creative partnership
00:22:57 The new skill stack: what artists need to learn, and where collaboration beats handoffs
00:29:28 The pushback from traditional art circles: philosophy and intention versus novelty
00:37:17 Inside the New York exhibition: collaboration between human and machine, visuals, sculpture, and sound
00:48:16 The magic of the unknown: why the output can surprise even the artist

A line that stuck
“Artists are largely showing a mirror to society of what this technology is, for the positive and the negative.”

Pro tips for builders and operators
• Treat creative communities as an early signal; artists surface second-order effects before markets do
• If you are building AI products, study authorship debates; they map directly to credit, accountability, and trust
• Collaboration beats delegation; when domain experts and technologists iterate together, the work gets sharper fast

Call to action
If this episode hits for you, follow the show so you do not miss the next drop. And if you are building in data, AI, or modern tech teams, follow me on LinkedIn for more conversations that connect technology to real world impact.
Most teams are approaching AI from the wrong direction: either chasing the tech with no clear problem or spinning up endless pilots that never earn their keep. In this episode, Amir Bormand sits down with Steve Wunker, Managing Director at New Markets Advisors and co-author of AI and the Octopus Organization, to break down what actually works in enterprise AI.

You will hear why the real challenge is organizational, not technical, how IT and business have to co-own the outcome, and what it takes to keep AI systems valuable over time. If you are trying to move beyond experimentation and into real impact, this conversation gives you a practical blueprint.

Key takeaways
• Pick a handful of high-impact problems, not hundreds of small pilots; focus is what creates measurable ROI
• Treat AI as a workflow and change program, not a tool you bolt onto an existing process
• IT has to evolve from order taker to strategic partner, including stronger AI ops and ongoing evaluation
• Start with the destination: redefine the value proposition first, then redesign the operating model around it
• Ongoing ownership matters; AI is not a one-and-done delivery, it needs stewardship to stay useful

Timestamped highlights
00:39 What New Markets Advisors actually does: innovation with a capital I, plus AI in value props and operations
01:54 The two common mistakes: pushing AI everywhere and launching hundreds of disconnected pilots
04:19 Why IT cannot just take orders anymore, plus why AI ops is not the same as DevOps
07:56 Why the octopus is the perfect model for an AI-age organization: distributed intelligence and rapid coordination
11:08 The HelloFresh example: redesign the destination first, then let everything cascade from that
17:37 The line you will remember: AI is an ongoing commitment, not a project you ship and forget
20:50 A cautionary pattern from the dotcom era: avoid swinging from timid pilots to extreme headcount mandates

A line worth keeping
“You cannot date your AI system, you need to get married to it.”

Pro tips for leaders building real AI outcomes
• Define success metrics before you build, then measure pre and post; otherwise you are guessing
• Redesign the process; do not just swap one step for a model, aim for fewer steps, not faster steps
• Assign long-term ownership; budget for maintenance, evaluation, and model oversight from day one

Call to action
If this episode helped you rethink how to drive AI results, follow the show and subscribe so you do not miss the next conversation. Share it with a leader who is stuck in pilot mode and wants a path to production.
Manufacturing is getting faster, messier, and more expensive when quality slips. Daniel First, Founder and CEO at Axion, joins Amir to break down how AI is changing the way manufacturers detect issues in the field, trace root causes across messy data, and shorten the time from “customers are hurting” to “we fixed it.”

Episode Summary
Daniel explains why modern manufacturing is living in the bottom of the quality curve longer than ever, and how AI can help companies spot issues early, investigate faster, and actually close the loop before warranty costs and customer trust spiral. If you work anywhere near hardware, infrastructure, or complex systems, this is a sharp look at what “AI first” means when real products fail in the real world.

You will hear why quality is becoming a competitive weapon, how unstructured signals hide the truth, and what changes when AI agents start doing the detection, investigation, and coordination work humans have been drowning in.

What you will take away
• Quality is not just a defect problem; it is a speed and trust problem, especially when product cycles keep compressing
• AI creates leverage by pulling together signals across the full product life cycle, not by sprinkling a chatbot on one system
• The fastest teams win by finding issues earlier, scoping impact correctly, and fixing what matters before customers notice the pattern
• A clear ROI often lives in warranty cost avoidance and downtime reduction, not just “efficiency” metrics
• “AI first” gets real when strategy becomes operational, and contradictions in how teams prioritize issues get exposed

Timestamped highlights
00:00 Why manufacturing is a different kind of problem, and why speed is harder than it looks
01:10 What Axion does, and how it detects, investigates, and resolves customer-impacting issues
05:10 The new reality: faster product cycles mean living in the bottom of the quality curve
10:05 Why it can take hundreds of days to truly solve an issue, and where the time disappears
16:20 How to evaluate AI vendors in manufacturing: specialization, integrations, and cross-system workflows
22:40 The shift coming to quality teams: from reading data all day to making higher level decisions
28:10 What “AI first” looks like in practice, and how AI exposes misalignment across teams

A line worth repeating
“Humans are not that great at investigating tens of millions of unstructured data points, but AI can detect, scope, root cause, and confirm the fix.”

Pro tips you can apply
• When evaluating an AI solution, ask three questions up front: how specialized the AI must be, whether you need a full workflow solution or just an API, and whether the use case spans multiple systems and teams
• Treat early detection as a first-class objective; the longer the accumulation phase, the more cost and customer damage you silently absorb
• Align issue prioritization to strategy, not just frequency, cost, or the loudest internal voice

Follow
If this episode helped you think differently about quality, speed, and AI in the real world, follow the show on Apple Podcasts or Spotify so you do not miss the next one. If you want more conversations like this, subscribe to the newsletter and connect with Amir on LinkedIn.
Synthetic data is moving from a niche concept to a practical tool for shipping AI in the real world. In this episode, Amit Shivpuja, Director of Data Product and AI Enablement at Walmart, breaks down where synthetic data actually helps, where it can quietly hurt you, and how to think about it like a data leader, not a demo builder.

We dig into what blocks AI from reaching production, how regulated industries end up with an unfair advantage, and the simple test that tells you whether synthetic data belongs anywhere near a decision-making system.

Key Takeaways
• AI success still lives or dies on data quality, trust, and traceability, not model hype
• Synthetic data is best for exploration, stress testing, and prototyping, but it should not be the backbone of high stakes decisions
• If you cannot explain how an output was produced, synthetic-only pipelines become a risk multiplier fast
• Regulated industries often move faster with AI because their data standards, definitions, and documentation are already disciplined
• The smartest teams plan data early in the product requirements phase, including whether they need synthetic data, third party data, or better metadata

Timestamped Highlights
00:01 The real blockers to getting AI into production: data, culture, and unrealistic scale assumptions
03:40 The satellite launch pad analogy: why data is the enabling infrastructure for every serious AI effort
07:52 Regulated vs unregulated industries: why structure and standards can become a hidden advantage
10:47 A clean definition of synthetic data: what it is, and what it is not
16:56 The “explainability” yardstick: when synthetic data is reasonable and when it is a red flag
19:57 When to think about data in stakeholder conversations: why data literacy matters before the build starts

A line worth sharing
“AI is like launching satellites. Data is the launch pad.”

Pro Tips for tech leaders shipping AI
• Start data discovery at the same time you write product requirements, not after the prototype works
• Use synthetic data early, then set milestones to shift weight toward real world data as you approach production
• Sanity-check the solution; sometimes a report, an email, or a deterministic workflow beats an AI system

Call to Action
If this episode helped you think more clearly about data strategy and AI delivery, follow the show on Apple Podcasts and Spotify, and share it with a builder or leader who is trying to get AI out of pilot mode. You can also follow me on LinkedIn for more episodes and clips.
Tom Pethtel, VP of Engineering at Flock Safety, breaks down the real learning curve of moving from builder to manager, and how to keep your technical edge while scaling your impact through people.

You will hear how Tom’s path from rural Ohio to leading high stakes engineering teams shaped his approach to leadership, hiring, and staying close to the customer.

Key Takeaways
• Promotions usually come from doing your current job well, plus stepping into the work above you that is not getting done
• Great leaders do not fully detach from the craft, they stay close enough to the work to make good calls and keep context
• Put yourself where the real learning is happening, watch customers, go to the failure point, get proximity to the source of truth
• Hiring is not only pedigree, it is fundamentals plus grit, the willingness to solve what looks hard because it is “just software”
• As you scale to teams of teams, your job becomes time allocation, jump on the biggest business fire while still making rounds everywhere

Timestamped Highlights
00:32 What Flock Safety actually builds, from AI enabled devices to Drone as a First Responder
02:04 Dropping out of Georgia Tech, switching disciplines, and choosing software for speed and impact
03:30 A life threatening detour, learning you owe 18,000 dollars, and teaching yourself to build an iPhone app to survive
06:33 Why Tom values grit and non traditional backgrounds in hiring, and the “it is just software” mindset
08:46 Proximity and learning, go to the problem, plus the lessons he borrows from the Toyota Production System
09:55 A practical story of chasing expertise, from Kodak to Nokia, and hiring the right leader by going where the knowledge lives
14:27 The truth about becoming a manager, you rarely feel ready, you take the seat and learn fast
19:18 Leading teams of teams, you cannot be everywhere, so you go where the biggest fire is, without neglecting the rest
22:08 The promotion playbook, stop only doing your job, start solving the next job

A line worth stealing
“Do your job really well, plus go do the work above you that is not getting done, that’s how you rise.”

Pro Tips for engineers stepping into leadership
• Stay technical enough to keep your judgment sharp, even if it is only five or ten percent of your week
• If you want to grow, chase proximity, sit with the customer, sit with the failure, sit with the best people in the space
• Measure your impact as leverage, if a team of ten is producing ten times, your role is not less valuable, it is multiplied
• When you lead multiple disciplines, rotate your attention intentionally, do not camp on one fire for a full year

Call to Action
If this episode helped you rethink leadership, share it with one builder who is about to step into management. Subscribe on Apple Podcasts, Spotify, and YouTube, and follow Amir on LinkedIn for more conversations with operators building real teams in the real world.
Phil Freo, VP of Product and Engineering at Close, has lived the rare arc from founding engineer to executive leader. In this conversation, he breaks down why he stayed nearly 12 years, and what it takes to build a team that people actually want to grow with.

We get into retention that is earned, not hoped for, the culture choices that compound over time, and the practical systems that make remote work and knowledge sharing hold up at scale.

Key takeaways
• Staying for a decade is not about loyalty, it is about the job evolving and your scope evolving with it
• Strong retention is often a downstream effect of clear values, internal growth opportunities, and leaders who trust people to level up
• Remote can work long term when you design for it, hire for communication, and invest in real relationship building
• Documentation is not optional in remote work, and short lived chat history can force healthier knowledge capture
• Bootstrapped, customer funded growth can create stability and control that makes teams feel safer during chaotic markets

Timestamped highlights
00:02:13 The founders, the pivots, and why Phil joined before Close was even Close
00:06:17 Why he stayed so long, the role keeps changing, and the work gets more interesting as the team grows
00:10:54 “Build a house you want to live in”, how valuing tenure shapes culture, code quality, and decision making
00:14:14 Remote as a retention advantage, moving life forward without leaving the company behind
00:20:23 Over documenting on purpose, plus the Slack retention window that forces real knowledge capture
00:22:48 Bootstrapped versus VC backed, why steady growth can be a competitive advantage when markets tighten
00:28:18 The career accelerant most people underuse, initiative, and championing ideas before you are asked

One line worth stealing
“Inertia is really powerful. One person championing an idea can really make a difference.”

Practical ideas you can apply
• If you want growth where you are, do not wait for permission, propose the problem, the plan, and the first step
• If you lead a team, create parallel growth paths, management is not the only promotion ladder
• If you are remote, hire for writing, decision clarity, and follow through, not just technical depth
• If Slack is your company memory, it is not memory, move durable knowledge into docs, issues, and specs

Stay connected:
If this episode sparked an idea, follow or subscribe so you do not miss the next one. And if you want more conversations on building durable product and engineering teams, check out my LinkedIn and newsletter.
Pete Hunt, CEO of Dagster Labs, joins Amir Bormand to break down why modern data teams are moving past task based orchestration, and what it really takes to run reliable pipelines at scale. If you have ever wrestled with Apache Airflow pain, multi team deployments, or unclear data lineage, this conversation will give you a clearer mental model and a practical way to think about the next generation of data infrastructure.

Key Takeaways
• Data orchestration is not just scheduling, it is the control layer that keeps data assets reliable, observable, and usable
• Asset based thinking makes debugging easier because the system maps code directly to the data artifacts your business depends on
• Multi team data platforms need isolation by default, without it, shared dependencies and shared failures become a tax on every team
• Good software engineering practices reduce data chaos, and the tools can get simpler over time as best practices harden
• Open source makes sense for core infrastructure, with commercial layers reserved for features larger teams actually need

Timestamped Highlights
00:00:50 What Dagster is, and why orchestration matters for every data driven team
00:04:18 The origin story, why critical institutions still cannot answer basic questions about their data
00:07:02 The architectural shift, moving from task based workflows to asset based pipelines
00:08:25 The multi tenancy problem, why shared environments break down across teams, and what to do instead
00:11:21 The path out of complexity, why software engineering best practices are the unlock for data teams
00:17:53 Open source as a strategy, what belongs in the open core, and what belongs in the paid layer

A Line Worth Repeating
“Data orchestration is infrastructure, and most teams want their core infrastructure to be open source.”

Pro Tips for Data and Platform Teams
• If debugging feels impossible, you may be modeling your system around tasks instead of the data assets the business actually consumes
• If multiple teams share one codebase, isolate dependencies and runtime early, shared Python environments become a silent reliability risk
• Reduce cognitive load by tightening concepts, fewer new nouns usually means a smoother developer experience

Call to Action
If this episode helped you rethink data orchestration, follow the show on Apple Podcasts and Spotify, and subscribe so you do not miss future conversations on data, AI, and the infrastructure choices that shape real outcomes.
Sandesh Patnam, Managing Partner at Premji Invest, breaks down how long duration capital changes the way you evaluate companies, founders, and moats. We talk about what most growth investors miss, why product strength still matters, and how to separate real AI businesses from thin wrappers in a noisy market.

Premji Invest is a captive, evergreen fund built to grow an endowment that supports major education work, which gives the team flexibility on time horizon and partnership style. Sandesh shares how that shows up in diligence, how they think about backing contrarian founders, and why the best companies in this AI era may still be ahead of us.

Key Takeaways
• Focus on the long arc, not quarter by quarter optics, founders make better decisions when they are not trapped in short term metrics
• In growth investing, TAM models and KPI spreadsheets can distract from the core question, does the product have real strength and an expanding roadmap
• Enduring outcomes often come from backing a contrarian view early, then helping it move from contrarian to consensus over time
• Evergreen capital changes behavior, you can slow down, build relationships, and partner across private and public markets instead of treating IPO as the finish line
• In AI, separate the stack into data center, foundation models, and applications, then look for defensibility like vertical depth, data moats, and compounding usage value

Timestamped highlights
00:38 Premji Invest explained, evergreen structure, one LP, and why public markets can be part of the journey, not the exit
04:47 Two common growth investor lenses and what gets missed when product and roadmap do not lead the thesis
08:48 Partnership mindset, building trust, and being the first call when things get hard
12:48 The contrarian to consensus path, what creates alpha, and how to support founders through the lonely middle
19:54 Why rushing decisions is a trap, and how flexibility changes when and how you can partner with a company
20:55 AI investing framework, three layers, what looks frothy, what can endure, and where moats still exist
26:48 The cost of intelligence is collapsing, why this may still be the early internet moment, and what that implies for the next wave

A line that stuck with me
“We want to be the first port of call when the seas are turbulent.”

Practical moves you can steal
• Pressure test the roadmap, ask when product two ships, what adjacency comes next, and what tradeoffs change at scale
• When evaluating AI apps, demand a defensibility story beyond the model, look for proprietary data, vertical workflow depth, and value that improves with usage
• Treat speed as a risk factor, if you cannot complete your churn cycle of doubt and validation, step back rather than force certainty

Call to Action
If you liked this one, follow the show and share it with a founder, operator, or investor who is building in AI right now. For more conversations at the intersection of tech, business, and execution, subscribe and connect with me on LinkedIn.
Software engineering is changing fast, but not in the way most hot takes claim. Robert Brennan, Co-founder and CEO at OpenHands, breaks down what happens when you outsource the typing to the LLM and let software agents handle the repetitive grind, without giving up the judgment that keeps a codebase healthy. This is a practical conversation about agentic development, the real productivity gains teams are seeing, and which skills will matter most as the SDLC keeps evolving.

Key Takeaways
• AI in the IDE is now table stakes for most engineers, the bigger jump is learning when to delegate work to an agent
• The best early wins are the unglamorous tasks, fixing tests, resolving merge conflicts, dependency updates, and other maintenance work that burns time and attention
• Bigger output creates new bottlenecks, QA and code review can become the limiting factor if your workflow does not adapt
• Senior engineering judgment becomes more valuable, good architecture and clean abstractions make it easier to delegate safely and avoid turning the codebase into a mess
• The most durable human edge is empathy, for users, for teammates, and for your future self maintaining the system

Timestamped Highlights
00:40 What OpenHands actually is, a development agent that writes code, runs it, debugs, and iterates toward completion
02:38 The adoption curve, why most teams start with IDE help, and what “agent engineers” do differently to get outsized gains
06:00 If an engineer becomes 10x faster, where does the time go, more creative problem solving, less toil
15:01 A real example of the SDLC shifting, a designer shipping working prototypes and even small UI changes directly
16:51 The messy middle, why many teams see only moderate gains until they redraw the lines between signal and noise
20:42 Skills that last, empathy, critical thinking, and designing systems other people can understand
22:35 Why this is still early, even if models stopped improving today, most orgs have not learned how to use them well yet

A line worth sharing
“The durable competitive advantage that humans have over AI is empathy.”

Pro Tips for Tech Teams
• Start by delegating low creativity tasks, CI failures, dependency bumps, and coverage improvements are great training wheels
• Define “safe zones” for non engineers contributing, like UI tweaks, while keeping application logic behind clearer guardrails
• Invest in abstractions and conventions, you want a codebase an agent can work with, and a human can trust
• Track where throughput stalls, if PR review and QA are the bottleneck, productivity gains will not show up where you expect

Call to Action
If you got value from this one, follow the show and share it with an engineer or product leader who is sorting out what “agentic development” actually means in practice.
Deborah Hanus, Co-founder and CEO at Sparrow, joins Amir to unpack the founder journey from academia to building a scaled company. They dig into why leave management is still a messy, high stakes problem, and how Sparrow is turning it into a clean, guided experience for both HR and employees.

Sparrow helps companies provide employee leave across the United States and Canada, and Deborah shares what it really takes to scale a compliance driven business without slowing down. From founder resilience and early stage emotional swings to hiring, onboarding, and culture design, this one is packed with lessons for operators and builders.

Key takeaways
• Academia can be real founder training, especially for building resilience and hearing “no” without losing your edge
• Early stage startups feel brutal because you have too few data points, it is easy to overreact to every win or setback
• Compliance and leave are fundamentally data problems, the right info to the right person at the right time changes everything
• Scaling leadership is mostly communication and alignment, five people and 250 people require totally different systems
• Culture does not stay stable by accident, values must drive hiring, training, rewards, and performance management

Timestamped highlights
00:37 What Sparrow does, and the 300 million dollars in payroll cost savings milestone
01:37 Why academia can prepare you for founding, and how customer pain beats outside skepticism
03:40 The leave compliance mess, and why state by state rules made the problem explode
08:25 The two real ways startups die, and why morale matters as much as cash
12:55 Leading at scale, onboarding, clarity, and the feedback questions that keep teams aligned
19:54 “Scale intentionally” as a culture principle for a company that cannot afford to break things
25:48 Keeping values stable while everything else evolves as the team grows

A line worth sharing
“Companies end when you run out of cash or you run out of morale.”

Pro tips you can steal
• Treat the employee journey like a product journey, from recruiting through promotions and hard moments
• Before a big change, collect questions early so the message lands where people actually are
• After a meeting, ask “What were the main points?” to see what people heard, then tighten your messaging
• Invest in onboarding and goal clarity to prevent teams from drifting into competing priorities

Call to action
If you enjoyed this conversation, follow and subscribe so you do not miss what is next.