The Tech Trek
Author: Elevano
© Elevano
Description
The Tech Trek is a podcast for founders, builders, and operators who are in the arena building world class tech companies. Host Amir Bormand sits down with the people responsible for product, engineering, data, and growth and digs into how they ship, who they hire, and what they do when things break. If you want a clear view into how modern startups really get built, from the first line of code to traction and scale, this show takes you inside the work.
604 Episodes
Marco DeMeireles, co founder and managing partner at ANSA, breaks down how a modern VC firm wins by being focused, data driven, and allergic to hype. If you want a clearer view of how investors evaluate open source, mission critical industries, and AI categories, this is a practical, operator minded look behind the curtain.

Marco explains ANSA’s focus on what they call undercover markets, from open source and open core businesses to defense, intelligence, cybersecurity, healthcare IT, and infrastructure companies that become deeply embedded and rarely lose customers. We also get into how they raised their first fund, why portfolio concentration changes everything, and how they push founders toward efficiency and profitability without killing ambition.

Key Takeaways
• In open source, two things matter more than most people admit: founder DNA tied to the project, and what you put behind the paywall that enterprises will pay for
• Concentration forces rigor, fewer bets means deeper diligence, clearer underwriting, and more hands on support post investment
• Great early stage support is not just advice, it is people, capital planning, and operating help that changes outcomes
• AI investing gets easier when you start with category selection, avoid fickle demand, then hunt for non obvious wedges in real workflows
• Long term winners tend to show compounding growth, improving efficiency, real demand, durable business models, founder strength, and an asymmetric risk reward at the price

Timestamped Highlights
00:00 Marco’s quick intro and what ANSA invests in
00:36 Undercover markets, open source, and mission critical industries explained
01:54 The two open source filters that change how ANSA underwrites a deal
03:31 Why open source can work in defense, plus the Defense Unicorns example
05:29 How a new firm raises a first fund, and what the right LP partners look for
10:50 The three levers ANSA pulls with founders: people, capital, operations
15:22 Marco’s six part framework for evaluating investments
17:39 How to tell who wins in crowded AI categories, and why niche wedges matter
21:41 The first investment they will never forget, and the air gapped cloud problem

A line worth stealing
“You can’t outsource greatness. You can’t outsource people selection.”

Pro Tips
• If you are building open source, be intentional about what is free versus paid, security, compliance, and auditability tend to earn real pricing power
• If your business depends on paid acquisition, test a path to organic growth early, it can unlock profitability and give you leverage in fundraising and exits
• In crowded AI spaces, pick a wedge where documentation is heavy, complexity is low, and ROI is obvious, then expand once you own that lane

Call to Action
If this episode helped you think more clearly about investing and building, follow the show, subscribe, and share it with one founder or operator who is navigating funding, pricing, or go to market right now.
AI is everywhere, but most teams are stuck talking about efficiency and headcount. In this episode, Dave Edelman, executive advisor and best selling author, shares a sharper lens: how to use AI to create real customer value and real growth.

We get into the high road vs low road of AI, what personalization should look like now, and why data has to become an enterprise asset, not a bunch of disconnected departmental files.

Key Takeaways
• Efficiency is table stakes, the real win is using AI to build new experiences that customers actually want
• Start with customer friction, find the biggest compromises and frustrations in your category, then design around that
• Personalization is no longer limited by content scale in the same way, AI changes the economics of tailoring experiences
• You do not always need one giant database, modern tools can pull and connect data across systems in real time
• Treat data as an enterprise resource, getting cross functional alignment is often the hardest and most important step

Timestamped Highlights
• 00:46 Dave’s origin story, from early loyalty programs to Segment of One marketing
• 03:33 The high road and low road of AI, growth experiences vs spam at scale
• 06:51 Where to start, map the biggest customer frustrations, then build use cases from there
• 16:31 The data myth, why you may not need a single mega database to get value from AI
• 21:31 Data as a leadership problem, shifting from functional ownership to enterprise ownership
• 25:14 Strategy that actually sticks, balancing bottom up automation with top down customer led direction

A line worth stealing
“Use those efficiencies to invest in growth.”

Pro Tips you can apply this week
• List the top five customer frustrations in your category, pick one and design an AI powered fix that removes a compromise
• Audit your data reality, identify where the same customer facts live in multiple places, then decide what must be unified first
• Run a simple test and learn loop, create multiple variations of one experience, measure what works, and keep iterating
• Put strategy on the calendar, make room for a recurring discussion that is not just metrics and cost cutting

Call to Action
If this episode helped you think differently about AI and growth, follow the show, leave a quick rating, and share it with one operator who is building product, data, or customer experience right now.
Amanda Kahlow, CEO and founder of 1Mind, joins Amir to break down what AI changes in modern sales and go to market, and what it does not. If you lead revenue, product, or growth, this is a practical look at where AI creates leverage today, where humans still matter, and how teams actually adopt it without chaos.

Amanda shares how “go to market superhumans” can handle everything from early buyer conversations to demos, sales engineering support, and customer success. They also dig into trust, hallucinations, and why the bar for AI feels higher than the bar for people.

Key takeaways
• Most buyers want answers early, without the pressure that comes with talking to a salesperson
• AI can remove friction by turning static content into a two way conversation that helps buyers move faster
• The hardest part of adoption is not capability, it is change management and trust inside the team
• Humans still shine in relationship and nuance, but AI can outperform on recall, depth, and real time access to the right info
• As AI levels the selling experience, product quality matters more, and the best product has a clearer path to win

Timestamped highlights
00:31 What 1Mind builds, and what “go to market superhumans” actually do across the full buyer journey
02:00 The buyer lens, why early conversations matter, and how AI gives control back to the buyer
06:14 Why the SDR experience is frustrating for buyers, and where AI can improve both sides
09:42 Change management in the real world, why “everyone build an agent” gets messy fast
13:04 Why “swivel chair” AI fails, and what real time help should look like in live conversations
15:52 Hallucinations and trust, plus the blunt question every leader should ask about human error
22:26 Competitive advantage today, and why adoption eventually pushes markets toward “best product wins”

A line worth sharing
“Do your humans hallucinate, and how often do they do it?”

Pro tips you can use this week
• Start with low stakes usage, bring AI into calls quietly, then ask it for a summary and what you missed
• Build adoption top down, define what good looks like, otherwise you get a pile of similar agents and no clarity
• Focus AI on what it does best first, recall, context, and instant answers, then expand into workflow and process later

Call to action
If this episode sparked ideas for your sales team or your product led funnel, follow the show so you do not miss the next one. Share it with one revenue leader who is trying to modernize their go to market motion, and connect with Amir on LinkedIn for more clips and operator level takes.
What if the best people on your investing team are still in college? Peter Harris, Partner at University Growth Fund, breaks down how they run a roughly 100 million dollar venture fund with 50 to 60 students doing real diligence, real founder calls, and real deal work.

You will hear how their student led model stays disciplined with checks and balances, why repeat games matter in venture and in business, and how this approach creates a flywheel that helps founders, investors, and the next generation of operators win together.

Key Takeaways
• Student led does not mean unstructured, the process is built around clear stages, data room access, investment memos, student votes, and an advisory style investment committee, with final fiduciary responsibility held by the partners
• Real autonomy is the unlock, when interns are trusted with meaningful work, the best ones level up fast and start leading teams, not just supporting them
• The goal is win win win outcomes, founders get capital plus a high effort support network, investors get disciplined underwriting, students get experience that compounds into career leverage
• Repeat games beat short term incentives, the alumni network becomes a long term advantage, bringing the fund into high quality opportunities years later
• Mistakes are inevitable, the difference is containment and systems, avoiding errors big enough to break trust, then building process improvements so they do not repeat

Timestamped Highlights
00:32 A 100 million dollar fund powered by 50 to 60 students, and what empowered really means
01:43 The decision path, from founder screen to student memo to student vote to the advisory investment committee
06:44 Why most venture internships underdeliver, and how longer tenures change outcomes
10:37 Repeat games and the trust flywheel, how former students now pull the fund into top tier deals
13:55 What happens when something goes wrong, damage control, learning loops, and confidentiality as a core discipline
24:39 The bigger vision, expanding beyond venture into additional asset classes to create more student opportunities

A line worth stealing
“If you give people real autonomy, they’ll surprise you with what they do.”

Pro Tips
• If you are building an internship program, start by deciding what real ownership means, then build guardrails around it, not the other way around
• Treat trust like an asset, design your process so every stakeholder wants to work with you again

Call to Action
If you enjoyed this one, follow The Tech Trek and share it with a founder, operator, or student who cares about building real advantage through talent and process.
Yulun Wang, executive chairman and co founder at Sovato Health, joins Amir Bormand to unpack the next wave after telemedicine, procedural care at a distance. If you have ever wondered what it would take for a top surgeon to operate without being in the same room, this conversation gets practical fast, from the real bottlenecks inside operating rooms to the health system changes required to make remote robotics mainstream.

Key takeaways
• Better care can actually cost less when the right expertise reaches the right patient at the right time
• Telemedicine is already normalized, which sets the stage for faster adoption of remote procedures once infrastructure and workflows catch up
• Surgical robots already have two sides, the surgeon console and the patient side, today connected by a short cable, the leap is making that connection work reliably across hundreds or thousands of miles
• Volume drives proficiency, the outcomes gap between high volume specialists and low volume settings is one of the biggest reasons access matters
• Operating rooms spend more than half their time on steps around surgery, which creates room to dramatically increase surgeon throughput when workflows are redesigned

Timestamped highlights
• 00:42 What Sovato Health is building, bringing procedural expertise to patients without requiring travel
• 02:10 The early days of surgical robotics and the transatlantic gallbladder surgery on September 7, 2001
• 05:30 The counterintuitive idea, higher quality care can reduce total cost in healthcare
• 10:27 What actually changes for patients, local hospitals stay the destination, expertise becomes the thing that travels
• 14:57 Why repetition matters, the first question patients ask is still the right one
• 17:53 Inside the operating room schedule, where time is really spent and why productivity can jump

A line that sticks
“Healthcare is different, higher quality, if done right, costs less.”

Practical angles you can steal
• If you are building in regulated industries, adoption is rarely about the tech alone, it is about trust, workflows, and incentives
• If you sell into health systems, position the value around system level outcomes, access, quality, and margin improvement, not just novelty
• If you are designing new workflows, look for the hidden capacity, the biggest gains often sit outside the core task

Call to action
If you want more conversations like this at the intersection of tech, systems, and real world impact, follow The Tech Trek on Apple Podcasts and Spotify.
Moiz Kohari, VP of Enterprise AI and Data Intelligence at DDN, breaks down what it actually takes to get AI into production and keep it there. If your org is stuck in pilot mode, this conversation will help you spot the real blockers, from trust and hallucinations to data architecture and GPU bottlenecks.

Key takeaways
• GenAI success in the enterprise is less about the demo and more about trust, accuracy, and knowing when the system should say “I don’t know.”
• “Operationalizing” usually fails at the handoff, when humans stay permanently in the loop and the business never captures the full benefit.
• Data architecture is the multiplier. If your data is siloed, slow, or hard to access safely, your AI roadmap stalls, no matter how good your models are.
• GPU spend is only worth it if your pipelines can feed the GPUs fast enough. A lot of teams are IO bound, so utilization stays low and budgets get burned.
• The real win is better decisions, faster. Moving from end of day batch thinking to intraday intelligence can change risk, margin, and response time in major ways.

Timestamped highlights
00:35 What DDN does, and why data velocity matters when GPUs are the pricey line item
02:12 AI vs GenAI in the enterprise, and why “taking the human out” is where value shows up
08:43 Hallucinations, trust, and why “always answering” creates real production risk
12:00 What teams do with the speed gains, and why faster delivery shifts you toward harder problems
12:58 From hours to minutes, how GPU acceleration changes intraday risk and decision making in finance
20:16 Data architecture choices, POSIX vs object storage, and why your IO layer can make or break AI readiness

A line worth stealing
“Speed is great, but trust is the frontier. If your system can’t admit what it doesn’t know, production is where the project stops.”

Pro tips you can apply this week
• Pick one workflow where the output can be checked quickly, then design the path from pilot to production up front, including who approves what and how exceptions get handled.
• Audit your bottleneck before you buy more compute. If your GPUs are waiting on data, fix storage, networking, and pipeline throughput first.
• Build “confidence behavior” into the system. Decide when it should answer, when it should cite, and when it should escalate to a human.

Call to action
If you got value from this one, follow the show and turn on notifications so you do not miss the next episode.
New leaders face a choice fast. Do you adapt to the organization you inherit, or reshape it around the way you lead?

In this conversation, Amir sits down with Gian Perrone, engineering leader at Nav, to unpack how org design really works in the first 30 to 120 days, and how to drive change without spiking anxiety or losing trust.

You will hear how Gian treats leadership as triage, why “listen and learn” is rarely passive, and what separates a thoughtful reorg from one that feels chaotic.

Key takeaways
• Leaders almost always arrive with hypotheses, the real work is testing them without rushing to force a playbook
• A reorg is not automatically bad, perception turns negative when the why is unclear and people feel unsafe
• Over communicating helps, but thinking out loud too often can create noise, a structured comms plan keeps change steady
• A simple way to spot a collaborative culture is to disagree in the interview and see how they respond
• Managers are the front line in change, set clear expectations so teams hear a consistent story about what is changing and why

Timestamped highlights
00:01 What Nav does, and the real question behind org design for new leaders
01:59 Why “first 90 days” is usually triage, not passive observation
04:14 The reorg stopwatch, and why structure reflects your worldview
08:36 How to communicate change without destabilizing teams
12:54 A practical interview move to test whether a company truly collaborates
17:03 The manager layer, how Gian sets expectations so change lands well

A line worth repeating
“If you arrive and something is on fire, you are going to fix it.”

A few practical moves worth stealing
• When you are new, write down your hypotheses early, then use real signals to confirm or kill them
• Float a change as a real idea first, gather feedback, then come back with details before you finalize
• Create a simple comms map of who hears what, when, and from whom, then follow it
• Be matter of fact about changes, teams often mirror the tone you set

Call to action
If this episode helped you think more clearly about leadership and org design, follow the show and share it with one operator who is navigating change right now.
Nir Soudry, Head of R&D at 7AI, breaks down how teams can move from early experimentation to real production work fast, without shipping chaos. If you are building AI features or agent workflows, this conversation is a practical look at speed, safety, and what it actually takes to earn customer trust.

Nir shares how 7AI ships in tight loops with a real customer in mind, why pushing decisions closer to the engineers removes bottlenecks, and how guardrails and evaluation keep fast releases from turning into security risks. You will also hear a grounded take on human plus AI collaboration, and why “just hook up an LLM” falls apart at scale.

Key takeaways
• Speed starts with focus, pick one customer and ship something usable in two or three weeks, then iterate every couple of weeks based on real feedback
• If you want velocity, remove the meeting chain, get engineers in the room with customers and push decisions downstream
• Agent workflows are not automatically testable, you need scoped blast radius, strong input and output guardrails, and an evaluation plan that matches real production complexity
• “LLM as a judge” helps, but it is not magic, you still need humans reviewing, labeling, and tuning, especially once you have multi step workflows
• In security, trust is earned through side by side proof, run a real pilot against human outcomes, measure accuracy and thoroughness, then improve with tight feedback loops

Timestamped highlights
00:28 What 7AI is building, security alert fatigue, and why minutes matter
02:03 A fast shipping cadence, one customer, quick prototypes, rapid iterations
03:51 The velocity playbook, engineers plus sales in the same meetings, fewer bottlenecks
08:08 Shipping agents safely, blast radius, guardrails, and why testing is still hard
14:37 Human plus AI in practice, how ideas become working agents with review and monitoring
18:04 Why early AI adoption works for some customers, and how pilots build confidence
24:12 The startup reality, faster execution, traction, and why hiring still matters

A line worth sharing
“When it’s wrong, click a button, and next time it will be better.”

Pro tips you can steal
• Run a two to four week pilot with one real customer and ship weekly, the goal is learning speed, not perfect coverage
• Put engineers directly in customer conversations, keep leadership focused on unblocking, not gatekeeping
• Treat every agent like a product surface, define strict inputs and outputs, sanitize both, and limit what it can affect
• Build evaluation around real workflows, not single prompts, and combine automated checks with human review
• Add feedback buttons everywhere, route feedback to both model improvement and the team that tunes production behavior

Call to action
If you want more conversations like this on building real tech that ships, follow and subscribe to The Tech Trek.
B2B pricing is still way harder than it should be, even in 2026. In this conversation, Tina Kung, Founder and CTO at Nue.ai, breaks down why quote to revenue can take weeks, and how a flexible pricing engine can turn it into something closer to one click.

You will hear how fast changing pricing models, AI driven products, and new selling motions are forcing revenue teams to rethink the entire system, not just one tool in the stack.

Key takeaways
• B2B quoting is basically a shopping cart, but the real complexity is cross team workflow, accounting controls, and downstream revenue rules.
• Fragmented systems break the moment pricing changes, and in fast markets that can mean you only get one real pricing change per year.
• AI companies often evolve from simple subscriptions to usage, services, and even physical goods, which creates billing chaos without a unified backbone.
• Commit based models can make revenue more predictable while staying flexible for customers, but only if you can track entitlement, burn down, overspend, and approvals cleanly.
• The most useful AI in revenue ops is not just insight, it is action, meaning it can generate the right transaction safely inside a system of record.

Timestamped highlights
00:43 What Nue.ai actually does, one platform for billing, usage, and revenue ops with intelligence on top
02:43 Why a one minute checkout in B2C turns into weeks or months in B2B
05:28 The real reason quote to revenue stays broken, fragmentation and brittle integrations
08:03 How AI era pricing evolves, subscriptions to consumption, services, and physical goods
12:51 Why Tina designed for flexibility from day one, and what 70 plus customer calls revealed
19:42 Transactional intelligence, AI that can create the quote, route approvals, and move revenue work forward

A line worth keeping
“It should be as easy as one click.”

Practical moves you can steal
• Map every pricing change to the downstream work it triggers, quoting, billing, revenue recognition, and approvals, then measure how many handoffs exist today.
• If you sell both self serve and enterprise, design for multiple selling motions early, because the same objects can have totally different context and risk.
• Treat pricing as a product surface, if your systems make changes slow, you are giving up speed in the market.

Call to action
If you want more conversations like this on how modern tech companies actually operate, follow the show on Apple Podcasts or Spotify, and connect with me on LinkedIn for clips and episode takeaways.
Tim Bucher, CEO and cofounder of Agtonomy, joins Amir to break down what physical AI looks like when it leaves the lab and shows up on the farm. Tim shares how his sixth generation farming roots and a lucky intro computer science class led to a career that included Microsoft, Apple, and Dell, then back into agriculture with a mission that hits the real world fast.

This conversation is about building tech that earns its keep, delivers clear ROI, and improves quality of life for the people who keep the food supply moving.

Key takeaways
• Deep domain experience is a real advantage, especially in ag tech, you cannot fake the last mile of operations
• The win is ROI first, but quality of life is right behind it, less stress, more time, and fewer dangerous moments on the job
• Agtonomy focuses on autonomy software inside existing equipment ecosystems, not building tractors from scratch, because service networks and financing matter
• One operator can run multiple vehicles, shifting the role from tractor driver to tech enabled fleet operator
• Hiring can change when the work changes, some farms started attracting younger candidates by posting roles like ag tech operator

Timestamped highlights
00:42 What Agtonomy does, physical AI for off road equipment like tractors
01:45 Tim’s origin story, sixth generation farming roots and the class that changed his path
03:59 Lessons from Bill Gates, Steve Jobs, and Michael Dell, and how Tim filtered the mantras into his own leadership
05:53 The moment everything shifted, labor pressure, regulations, and the prototype built to save his own farm
09:17 The blunt advice for ag tech founders, if you do not have a farmer on the team, fix that
11:54 ROI in plain terms, one person operating a fleet from a phone or tablet
14:29 Why Agtonomy partners with equipment manufacturers instead of building new vehicles, dealers, parts, service, and financing are the backbone
17:39 The overlooked benefit, quality of life, reduced stress, and a more resilient food supply chain
20:18 How farms started hiring differently, “ag tech operator” roles and even “video game experience” as a signal

A line that stuck with me
“This is not just for Trattori farms. This is for the whole world. Let’s go save the world.”

Pro tips you can actually use
• If you are building in a physical industry, hire a real operator early, not just advisors, get someone who lives the workflow
• Write job posts that match the modern workflow, if the work is screen based, label it that way and recruit for it
• Design onboarding around familiar tools, if your UI feels like a phone app, training time can collapse

Call to action
If you got value from this one, follow the show and share it with a builder who cares about real world impact. For more conversations like this, subscribe and connect with Amir on LinkedIn.
Data and AI are everywhere right now, but most teams are still guessing where to start. In this episode, Cameran Hetrick, VP of Data and Insights at BetterUp, breaks down what actually works when you move from AI hype to real business impact. You will hear a practical way to choose AI and analytics projects, how to spot low risk wins, and why clean, governed data still decides what is possible. Cameran also shares a simple mindset shift: stop copying broken workflows, and start rethinking the outcome you are trying to create.

Key Takeaways
• AI is a catchall term right now, the best early wins usually come from “assist” use cases that boost speed and quality, not full replacement
• Start with low context, low complexity work, then earn your way into higher context projects as data quality and governance mature
• Pick use cases with an impact versus effort lens, quick wins create proof, buy in, and budget for bigger bets
• Stakeholders often ask for a data point or feature, but the real value comes from digging into the goal, and redesigning the workflow
• Data teams cannot stop at insights, adoption matters, if the next team cannot act on the output, the project stalls

Timestamped Highlights
00:40 BetterUp’s mission, building a human transformation platform for peak performance
01:57 AI as a “catchall,” where expectations are realistic, and where they are not
05:19 A useful way to think about AI work, context versus complexity, and why “intern level” framing helps
07:33 How to choose projects with an impact and level of effort calculator, and why trust in data is everything
10:33 The hard part, translating stakeholder requests into real outcomes, and reimagining workflows instead of automating bad ones
13:47 Systems thinking across handoffs, plus why teams need deeper business fluency, including P and L basics
16:59 The last mile problem, if the next stakeholder cannot act, the value never lands
20:27 The bottom line, AI does not change the fundamentals, it accelerates them

A Line Worth Saving
“AI is like an intern, it still needs direction from somebody who understands the mechanics of the business.”

Practical Moves You Can Use
• Run every idea through two quick questions, what business impact do we expect, and what level of effort will it take
• Look for a win you can explain in one minute, then use it to fund the harder work
• When someone asks for a metric or feature, ask why twice, then validate the workflow, then redesign the outcome
• Invest in governed data early, untrusted outputs kill adoption fast

Call to Action
If this episode helped you think more clearly about AI in the real world, follow the show, leave a quick review, and share it with one operator who is trying to move from experiments to impact. You can also follow Amir on LinkedIn for more clips and practical notes from each episode.
Yogi Goel, cofounder and CEO of Maxima AI, breaks down how he hires outlier talent, people who think like future founders and thrive when the plan changes fast. We get practical on what to look for beyond pedigree, how to assess it without relying on easy resume signals, and how culture scales when your team doubles.

Yogi also shares what Maxima AI is building, an agentic platform for enterprise accounting that automates day to day operations and month end work, and why the best teams win by pairing speed with real ownership.

Key takeaways
• Outlier candidates often look “non standard” on paper, the signal is founder mentality, fast thinking, grit, and a point to prove
• Hiring gets easier when it is always on, keep a living bench of great people long before you have a headcount
• Use long form conversations to assess how someone thinks, not just what they have done, ask for their life story and listen for the choices they highlight
• Train the specifics, but set a baseline for domain aptitude, then coach the narrow parts once the fundamentals are there
• Culture scales through leaders and through what you reward and penalize, not through posters and slogans

Timestamped highlights
00:39 What Maxima AI does and the real value of agentic accounting
01:38 Defining an outlier candidate as a future founder, and why school matters less than you think
07:34 The conveyor belt approach to recruiting, building an inventory of great people before you need them
11:35 Where to draw the line on training, test for general aptitude, coach the specifics
14:20 How diverse teams disagree productively, bring evidence, run small bets, then double down or pivot
18:25 Scaling culture with values driven leaders, and the simple rule of reward versus penalty

A line worth keeping
“Culture is two things, what you reward and what you penalize.”

Pro tips you can steal
• Keep a short list of the best people you have ever met for each function, update it constantly
• Ask candidates for their journey from day zero, then pay attention to what they choose to emphasize
• When the team disagrees, grab quick evidence, customer texts, small pulse checks, then place a small bet that will not kill the company
• Expect great people to want autonomy and scope, manage like a mentor, not a hovercraft

Call to action
If this episode helped you rethink hiring, share it with a founder or engineering leader who is building a team right now. Follow the show for more conversations on people, impact, and technology, and connect with Yogi Goel on LinkedIn by searching his name and Maxima AI.
Chandan Lodha, Co-founder at CoinTracker, joins Amir Bormand to unpack the real shift from big tech to building your own company. From Harvard to Google to Y Combinator, Chandan shares what pushed him to take the leap, how he found the right idea, and what he had to unlearn to lead at startup speed.

This conversation is for builders and leaders who want to grow faster, ship faster, and build teams that can actually execute.

Key Takeaways
• The early career advantage is learning velocity, optimize for environments that stretch you fast
• Managing the business is rarely the hardest part, people problems scale with headcount
• Big company habits can break you at a startup, especially around distribution, speed, and getting your first users
• YC helped most through peer proximity, being surrounded by real users and founders who move quickly
• Founder growth is a system, use feedback loops like reviews, 360 input, and personal goal tracking

Timestamped Highlights
00:00 From Harvard and Google to founder mode, what made him leave the safe path
00:35 CoinTracker in plain English, crypto taxes and accounting for individuals and businesses
03:32 Leap first, think later, the messy six month search for a real idea
05:00 Runway reality, setting a 12 to 18 month window to figure it out
06:09 Crypto skepticism to conviction, reading the Bitcoin white paper changed his frame
10:05 Leadership lessons at 100 people, why people issues become the main work
14:43 Y Combinator benefits, users everywhere and a practical playbook for early company building
17:55 Personal growth systems, performance feedback and personal OKRs, plus changing your mind on three issues each year
21:04 Becoming a new parent, structure, efficiency, and cutting non essentials
23:24 The two skills to build before you leap, building and selling

A line worth keeping
“Managing the business is easy, managing people is hard.”

Pro Tips
• Set a real runway window, then use it to iterate hard with users every week
• Expect to unlearn big company instincts, distribution and speed do not come for free
• Build a feedback cadence for yourself, not just your team, reviews and 360 input can surface blind spots
• Practice building and selling in small side projects now, those skills compound in any startup

Call to Action
If this episode helped you think differently about leadership and the founder path, follow The Tech Trek on Apple Podcasts or Spotify, and share it with one person who is building or thinking about making the leap.
Joel Dolisy, CTO at WellSky, joins the podcast to reveal why organizational design is the ultimate "operating system" for scaling tech companies. This conversation is a deep dive into how engineering leaders must adapt their strategies when moving between the hyper growth of Venture Capital and the disciplined profitability of Private Equity.
Building a high performing team is about much more than just hiring. Joel explains the necessity of maximizing the "multiplier effect," where the collective output far exceeds the sum of the individual parts. We explore the pragmatic reality of digital transformation, the "art" of timing disruptive technology adoption like Generative AI, and how to use the Three Horizons framework to keep your core business stable while chasing the next big innovation. Whether you are leading a team of ten or an organization of hundreds, these insights on design principles and leadership context are essential for navigating the complexities of modern software delivery.
Core Insights
• Shifting the perspective of software from a cost center to a core growth enabler is the fundamental requirement for any company aiming to be a true innovator.
• Private Equity environments require a specialized leadership approach because the "hold period" clock dictates when to prioritize aggressive growth versus EBITDA margin acceleration.
• Scaling successfully requires a "skeleton" of design principles, such as maintaining team sizes around eight people to ensure optimal communication flow and minimize overhead.
• The most critical role of a senior leader is providing constant context to the engineering org, ensuring teams understand the "why" behind shifting constraints as the company matures.
Timestamped Highlights
01:12 Defining the broad remit of a CTO, from infrastructure and security to the unusual addition of UX
04:44 Treating your organizational structure as a living operating system that must be upgraded as you grow
10:07 Why innovation must include internal efficiency gains to free up resources for new revenue streams
15:01 Navigating the massive waves of disruption, from the internet to mobile and now large language models
23:11 The tactical differences in funding engineering efforts during a five to seven year Private Equity hold period
28:57 Applying Team Topologies to create clear responsibilities across platform, feature, and enablement teams
Words to Lead By
"You are trying to optimize what a set of people can do together to create bigger and greater things than the sum of the individual parts there."
Expert Tactics for Tech Leaders
When evaluating new technology like AI, Joel suggests looking at "adoption curve compression." Unlike the mid nineties, when businesses had a decade to figure out the internet, the window to integrate modern disruptors is shrinking. Leaders should use the Three Horizons framework to move dollars from the core business (Horizon 1) to speculative innovation (Horizon 3) without making knee jerk reactions based solely on hype.
Join the Conversation
If you found these insights on organizational design helpful, please subscribe to the show on your favorite platform and share this episode with a fellow engineering leader. You can also connect with Joel Dolisy on LinkedIn to keep up with his latest thoughts on healthcare technology and leadership.
Stop chasing shiny objects and start driving real business outcomes. Marathon Health CTO Venkat Chittoor joins the show to explain why AI is the ultimate enabler for digital transformation, but only when it is anchored by a rock solid business strategy.
Essential Insights for Tech Leaders
• AI is not a standalone strategy. It is a powerful tool to accelerate a pre-existing business North Star.
• Success in digital transformation follows a specific maturity curve. Start with personal productivity, move to replacing mundane tasks, and eventually aim for cognitive automation.
• Governance must come before experimentation. Establishing guardrails for data privacy is critical before launching any AI pilot.
• Measure value through tangible efficiency gains. In healthcare, this means reducing administrative burden or "pajama time" so providers can focus on patient care.
• Don't let marketing speak fool you. Always validate vendor claims against your specific industry use cases.
Timestamped Highlights
00:50 Defining advanced primary care and the mission of Marathon Health
02:44 Why AI strategy is useless without a defined business strategy
05:01 The three steps of AI adoption, from productivity to cognition
12:14 How to define success metrics for a pilot versus a scaled V1 solution
16:40 Real world ROI, including call deflections and charting efficiency
21:43 Advice for leaders on data quality and avoiding vendor traps
A Perspective to Carry
"AI is actually enabling [efficiency], but without a solid business strategy, AI strategy is not useful."
Tactical Advice for the Field
When launching an AI initiative, focus heavily on the underlying data quality. Ensure your team accounts for data recency, accuracy, and potential biases, as these factors determine whether an experiment succeeds or fails. Start small with pilots to build muscle memory before attempting to scale complex systems.
Join the Conversation
If you found these insights helpful, subscribe to the podcast for more deep dives into the tech landscape. You can also connect with Venkat Chittoor on LinkedIn to follow his work in healthcare innovation.
Stop treating data governance as a "data cop" function and start using it as a high ROI offensive weapon. In this episode, Peter Kapur, Head of Data Governance and Data Quality at CarMax, breaks down how to move beyond defensive compliance to drive profitability, customer experience, and better data science outcomes.
Critical Insights for Leaders
• Shift from defense to offense. Data defense covers the mandatory regulatory and legal requirements like privacy and cybersecurity. Data offense involves everything else that hits your bottom line, such as investing in data quality to save or make money.
• Prioritize problems over frameworks. Avoid bringing rigid policies and "data geek" terminology to business leaders. Instead, spend time listening to their specific data struggles and apply governance capabilities as solutions to those problems.
• Data quality makes governance tangible. Without high quality data, governance is just a collection of abstract policies. Improving data quality empowers data scientists to produce better models and gives analytics teams the ability to discover and trust their data.
Key Moments in the Conversation
02:41 Defining the clear line between defensive regulation and offensive growth
06:03 Why data quality and data governance must sit together to be effective
11:00 Shifting from "data school" to "business school" to communicate value
13:12 Quantifying the ROI of data governance through customer wins and time savings
18:35 Actionable advice for starting an offensive strategy from scratch
Wisdom from the Episode
"If we meet the laws, we meet the regulations, we meet the legal, how do we leverage our data? It is a mindset shift versus, let me lock my data down, no one use it."
Tactical Advice for Implementation
• Ensure adoption through personalization. Design tools and processes that are personalized to specific roles so they feel like a natural part of the workflow rather than a burden.
• Focus on the eye of the consumer. Treat every person in the organization as a "data citizen" and remember that data quality is ultimately defined by the needs of the people consuming it.
Join the Conversation
Subscribe to the podcast on your favorite platform to catch every episode. Follow us on LinkedIn to stay updated on the latest trends in data leadership.
Afrooz Ansaripour, Director of Data Science at Walmart, joins the show to explain how global leaders are shifting from simple historical tracking to predicting psychological triggers and customer intent. This episode explores the evolution of customer intelligence and how Generative AI is turning massive data sets into personalized, value driven experiences. Listeners will learn how to balance hyper personalization with foundational privacy to build lasting consumer trust.
Key Insights
• Predict intent rather than just reporting past transactions to understand why a customer is with the brand.
• Use Generative AI as an explainability layer to transform complex data platforms from black boxes into conversational tools.
• Prioritize customer trust as a critical part of the user experience rather than just a legal requirement.
• Integrate digital and physical signals to create a 360 degree view that reveals insights which would otherwise be invisible.
• Focus on rapid technology adoption and curiosity as the primary drivers of success in modern AI teams.
Timestamped Highlights
01:51 Identifying the challenges and opportunities when managing millions of real time signals
06:43 Strategies for showing genuine value to the customer without making them feel like just a part of a sale
09:51 How LLMs are fundamentally changing the way data teams interpret unstructured feedback and behavioral patterns
14:42 Managing privacy and ethical data practices while building personalized conversational AI
19:14 Stitching together the online and offline journey to create a seamless customer experience
22:52 The necessary evolution of data science skills toward storytelling and execution bias
A Powerful Thought
"Personalization should never come at the expense of customer trust."
Tactical Steps
• Combat the garbage in, garbage out problem by refining cleaning processes to handle modern AI requirements.
• Build an interactive layer or chatbot on top of data products to make insights instantly accessible and automated.
• Translate technical insights into real world decisions to ensure customers actually benefit from data models.
Next Steps
Subscribe to the show for more insights into the future of tech. Share this episode with a peer who is currently navigating the complexities of customer data.
Shahryar Qadri, CTO of OneImaging, joins me to unpack a hard truth about healthcare tech: the goal is not to remove humans, it is to give them more room to be human.
We talk about where cost "optimization" actually helps patients, why radiology is a perfect fit for AI but still held back by data access, and how better workflows can improve trust, speed, and outcomes without losing the human touch.
OneImaging sits in the radiology benefits space, helping members book imaging in a national network with more transparency and a high touch booking experience, while helping employers cut imaging costs significantly.
Key takeaways
• The "human touch" in healthcare is not going away, the better play is using tech to increase capacity so caregivers can spend more time being caregivers
• Cost optimization is not always about paying less for expertise, it is often about wasting less human time, improving trust, and removing friction around services
• Healthcare still runs on outdated plumbing in places you would not expect, including fax based workflows that slow everything down
• Radiology is one of the best real world use cases for AI, but the bigger blocker is getting access to imaging data in usable form, not model capability
• Your health data is already "there," but it is not working for you yet. The next wave is tools that scan your longitudinal record and surface what to ask your doctor about, so you can be a stronger advocate for your own care
Timestamped highlights
• 00:36 What OneImaging actually does, and why "transparent imaging" is more than a pricing story
• 02:00 Why healthcare stays personal, and how tech should increase capacity instead of replacing care
• 03:36 The real definition of cost optimization, commodity versus service, and where trust matters
• 07:01 The surprising reality of imaging ops, why it still feels like 1998, and what gets digitized next
• 17:19 AI in radiology is real, but the data access and interoperability gap is the bottleneck
• 24:21 Your CDs are full of value, the problem is we do almost nothing with that data today
A line worth replaying
"These LLM models are the worst that they'll ever be today. They're only going to get better and better and better."
Call to action
If this episode sparked a new way of thinking about healthcare tech, follow The Tech Trek on your podcast app, share it with a friend in product or engineering, and connect with me on LinkedIn for more conversations like this.
Swarupa Mahambrey, Vice President of Software Engineering at The College Board, breaks down what tech debt really looks like in a mission critical environment, and how an engineering mindset can prevent it from quietly choking delivery. She shares a practical operating model for paying down debt without stopping the roadmap, and the cultural habits that make it stick.
You will hear how College Board carved out durable space for engineering excellence, how they use testing and automation to protect reliability at scale, and how to make the trade offs between features, simplicity, and user experience without slowing the team to a crawl.
Key Takeaways
• Tech debt behaves like financial debt, delay the payment and the interest compounds until even simple changes become painful
• A permanent allocation of capacity can work, dedicating 20 percent of every sprint to tech debt can reduce support load and improve delivery
• Shipping more features can slow you down, simplifying workflows and validating with real usage can increase velocity and reduce tickets
• Resilience is not about avoiding every failure, it is about designing for graceful degradation so spikes and outages become small blips instead of crises
• Automation is not "extra," it is part of the definition of done, including unit tests as acceptance criteria and clear code coverage expectations
Timestamped Highlights
• 00:00 Why tech debt is a mindset problem, not just a backlog problem
• 01:00 Tech debt explained with a real example, what happens when a proof of concept becomes production
• 03:45 The feature trap, how "powerful" workflows can overwhelm users and explode maintenance costs
• 11:03 Engineering Tuesday, one day a week to strengthen foundations, not ship features
• 14:39 Stability vs resilience, designing systems that bend instead of shatter
• 20:06 Testing and automation at scale, unit tests as a requirement and code coverage guardrails
A line worth keeping
"If we don't intentionally carve out space for engineering excellence, the urgent will always crowd out the important."
Practical moves you can steal
• Protect a fixed slice of capacity for tech debt, make it part of the operating model, not a one time cleanup
• Treat automation as acceptance criteria, no test, no merge, no release
• Use pilots and targeted releases to learn early, then iterate based on metrics and real user behavior
• Design for graceful degradation with retries, fallback paths, and clear failure visibility
Call to action
If this episode helped you think differently about tech debt and engineering culture, follow The Tech Trek, leave a quick rating, and share it with one engineer who is fighting fires right now.
Software is still eating the world, and AI is speeding up the clock. In this episode, Amir talks with Tariq Shaukat, co CEO at Sonar, about what it really takes for non tech companies to build like software companies, without breaking trust, security, or quality. Tariq shares how leaders can treat AI like a serious capability, not a shiny add on, and why clean code, governance, and smart pricing models are becoming board level topics.
Key Takeaways
• "Every company is a software company" does not mean selling SaaS, it means software is now core to differentiation, even in legacy industries.
• The hardest shift is not tools, it is mindset: moving from slow, capital style planning to fast iteration, test, learn, and ship.
• AI works best when leaders stay educated and involved, outsourcing the whole strategy is a real risk.
• "Trust but verify" needs to be a default posture, especially for code generation, security, and compliance.
• Pricing will keep moving toward value aligned consumption models, not simple per seat formulas.
Timestamped Highlights
• 00:56 What Sonar does, and why clean code is really about security, reliability, and maintainability
• 05:36 The Tesla lesson: mechanics commoditize, software becomes the experience people buy
• 09:11 Culture plus education: why software capability cannot live in one silo
• 14:21 Cutting through AI hype with program discipline and a "trust but verify" mindset
• 18:23 Boards, governance, and setting an "acceptable use" policy for AI before something goes wrong
• 25:18 How software pricing changes in an AI world, and why Sonar prices by lines of code analyzed
A line worth saving
"Define acceptable risk as opposed to no risk."
Pro Tips you can steal
• Write down what you want AI to achieve, the steps to get there, and the metric you will use to verify outcomes.
• For code generation, scan and review before shipping, treat AI output like a draft, not a final answer.
• Set clear rules for what is allowed with AI inside the company, then iterate as you learn.
Call to Action
If you want more conversations like this on software leadership, AI governance, and building real impact, follow The Tech Trek and subscribe on your favorite podcast app. If someone on your team is wrestling with AI rollout or developer productivity, share this episode with them.