The Tech Trek

Author: Elevano


Description

The Tech Trek is a podcast for founders, builders, and operators who are in the arena building world class tech companies. Host Amir Bormand sits down with the people responsible for product, engineering, data, and growth and digs into how they ship, who they hire, and what they do when things break. If you want a clear view into how modern startups really get built, from first line of code to traction and scale, this show takes you inside the work.
613 Episodes
Phil Freo, VP of Product and Engineering at Close, has lived the rare arc from founding engineer to executive leader. In this conversation, he breaks down why he stayed nearly 12 years, and what it takes to build a team that people actually want to grow with.

We get into retention that is earned, not hoped for, the culture choices that compound over time, and the practical systems that make remote work and knowledge sharing hold up at scale.

Key takeaways
• Staying for a decade is not about loyalty, it is about the job evolving and your scope evolving with it
• Strong retention is often a downstream effect of clear values, internal growth opportunities, and leaders who trust people to level up
• Remote can work long term when you design for it, hire for communication, and invest in real relationship building
• Documentation is not optional in remote work, and short lived chat history can force healthier knowledge capture
• Bootstrapped, customer funded growth can create stability and control that makes teams feel safer during chaotic markets

Timestamped highlights
00:02:13 The founders, the pivots, and why Phil joined before Close was even Close
00:06:17 Why he stayed so long, the role keeps changing, and the work gets more interesting as the team grows
00:10:54 "Build a house you want to live in", how valuing tenure shapes culture, code quality, and decision making
00:14:14 Remote as a retention advantage, moving life forward without leaving the company behind
00:20:23 Over documenting on purpose, plus the Slack retention window that forces real knowledge capture
00:22:48 Bootstrapped versus VC backed, why steady growth can be a competitive advantage when markets tighten
00:28:18 The career accelerant most people underuse, initiative, and championing ideas before you are asked

One line worth stealing
"Inertia is really powerful. One person championing an idea can really make a difference."

Practical ideas you can apply
• If you want growth where you are, do not wait for permission, propose the problem, the plan, and the first step
• If you lead a team, create parallel growth paths, management is not the only promotion ladder
• If you are remote, hire for writing, decision clarity, and follow through, not just technical depth
• If Slack is your company memory, it is not memory, move durable knowledge into docs, issues, and specs

Stay connected:
If this episode sparked an idea, follow or subscribe so you do not miss the next one. And if you want more conversations on building durable product and engineering teams, check out my LinkedIn and newsletter.
Pete Hunt, CEO of Dagster Labs, joins Amir Bormand to break down why modern data teams are moving past task based orchestration, and what it really takes to run reliable pipelines at scale. If you have ever wrestled with Apache Airflow pain, multi team deployments, or unclear data lineage, this conversation will give you a clearer mental model and a practical way to think about the next generation of data infrastructure.

Key Takeaways
• Data orchestration is not just scheduling, it is the control layer that keeps data assets reliable, observable, and usable
• Asset based thinking makes debugging easier because the system maps code directly to the data artifacts your business depends on
• Multi team data platforms need isolation by default, without it, shared dependencies and shared failures become a tax on every team
• Good software engineering practices reduce data chaos, and the tools can get simpler over time as best practices harden
• Open source makes sense for core infrastructure, with commercial layers reserved for features larger teams actually need

Timestamped Highlights
00:00:50 What Dagster is, and why orchestration matters for every data driven team
00:04:18 The origin story, why critical institutions still cannot answer basic questions about their data
00:07:02 The architectural shift, moving from task based workflows to asset based pipelines
00:08:25 The multi tenancy problem, why shared environments break down across teams, and what to do instead
00:11:21 The path out of complexity, why software engineering best practices are the unlock for data teams
00:17:53 Open source as a strategy, what belongs in the open core, and what belongs in the paid layer

A Line Worth Repeating
"Data orchestration is infrastructure, and most teams want their core infrastructure to be open source."

Pro Tips for Data and Platform Teams
• If debugging feels impossible, you may be modeling your system around tasks instead of the data assets the business actually consumes
• If multiple teams share one codebase, isolate dependencies and runtime early, shared Python environments become a silent reliability risk
• Reduce cognitive load by tightening concepts, fewer new nouns usually means a smoother developer experience

Call to Action
If this episode helped you rethink data orchestration, follow the show on Apple Podcasts and Spotify, and subscribe so you do not miss future conversations on data, AI, and the infrastructure choices that shape real outcomes.
Sandesh Patnam, Managing Partner at Premji Invest, breaks down how long duration capital changes the way you evaluate companies, founders, and moats. We talk about what most growth investors miss, why product strength still matters, and how to separate real AI businesses from thin wrappers in a noisy market.

Premji Invest is a captive, evergreen fund built to grow an endowment that supports major education work, which gives the team flexibility on time horizon and partnership style. Sandesh shares how that shows up in diligence, how they think about backing contrarian founders, and why the best companies in this AI era may still be ahead of us.

Key Takeaways
• Focus on the long arc, not quarter by quarter optics, founders make better decisions when they are not trapped in short term metrics
• In growth investing, TAM models and KPI spreadsheets can distract from the core question, does the product have real strength and an expanding roadmap
• Enduring outcomes often come from backing a contrarian view early, then helping it move from contrarian to consensus over time
• Evergreen capital changes behavior, you can slow down, build relationships, and partner across private and public markets instead of treating IPO as the finish line
• In AI, separate the stack into data center, foundation models, and applications, then look for defensibility like vertical depth, data moats, and compounding usage value

Timestamped highlights
00:38 Premji Invest explained, evergreen structure, one LP, and why public markets can be part of the journey, not the exit
04:47 Two common growth investor lenses and what gets missed when product and roadmap do not lead the thesis
08:48 Partnership mindset, building trust, and being the first call when things get hard
12:48 The contrarian to consensus path, what creates alpha, and how to support founders through the lonely middle
19:54 Why rushing decisions is a trap, and how flexibility changes when and how you can partner with a company
20:55 AI investing framework, three layers, what looks frothy, what can endure, and where moats still exist
26:48 The cost of intelligence is collapsing, why this may still be the early internet moment, and what that implies for the next wave

A line that stuck with me
"We want to be the first port of call when the seas are turbulent."

Practical moves you can steal
• Pressure test the roadmap, ask when product two ships, what adjacency comes next, and what tradeoffs change at scale
• When evaluating AI apps, demand a defensibility story beyond the model, look for proprietary data, vertical workflow depth, and value that improves with usage
• Treat speed as a risk factor, if you cannot complete your churn cycle of doubt and validation, step back rather than force certainty

Call to Action
If you liked this one, follow the show and share it with a founder, operator, or investor who is building in AI right now. For more conversations at the intersection of tech, business, and execution, subscribe and connect with me on LinkedIn.
Software engineering is changing fast, but not in the way most hot takes claim. Robert Brennan, Co-founder and CEO at OpenHands, breaks down what happens when you outsource the typing to the LLM and let software agents handle the repetitive grind, without giving up the judgment that keeps a codebase healthy. This is a practical conversation about agentic development, the real productivity gains teams are seeing, and which skills will matter most as the SDLC keeps evolving.

Key Takeaways
• AI in the IDE is now table stakes for most engineers, the bigger jump is learning when to delegate work to an agent
• The best early wins are the unglamorous tasks, fixing tests, resolving merge conflicts, dependency updates, and other maintenance work that burns time and attention
• Bigger output creates new bottlenecks, QA and code review can become the limiting factor if your workflow does not adapt
• Senior engineering judgment becomes more valuable, good architecture and clean abstractions make it easier to delegate safely and avoid turning the codebase into a mess
• The most durable human edge is empathy, for users, for teammates, and for your future self maintaining the system

Timestamped Highlights
00:40 What OpenHands actually is, a development agent that writes code, runs it, debugs, and iterates toward completion
02:38 The adoption curve, why most teams start with IDE help, and what "agent engineers" do differently to get outsized gains
06:00 If an engineer becomes 10x faster, where does the time go, more creative problem solving, less toil
15:01 A real example of the SDLC shifting, a designer shipping working prototypes and even small UI changes directly
16:51 The messy middle, why many teams see only moderate gains until they redraw the lines between signal and noise
20:42 Skills that last, empathy, critical thinking, and designing systems other people can understand
22:35 Why this is still early, even if models stopped improving today, most orgs have not learned how to use them well yet

A line worth sharing
"The durable competitive advantage that humans have over AI is empathy."

Pro Tips for Tech Teams
• Start by delegating low creativity tasks, CI failures, dependency bumps, and coverage improvements are great training wheels
• Define "safe zones" for non engineers contributing, like UI tweaks, while keeping application logic behind clearer guardrails
• Invest in abstractions and conventions, you want a codebase an agent can work with, and a human can trust
• Track where throughput stalls, if PR review and QA are the bottleneck, productivity gains will not show up where you expect

Call to Action
If you got value from this one, follow the show and share it with an engineer or product leader who is sorting out what "agentic development" actually means in practice.
Deborah Hanus, Co-founder and CEO at Sparrow, joins Amir to unpack the founder journey from academia to building a scaled company. They dig into why leave management is still a messy, high stakes problem, and how Sparrow is turning it into a clean, guided experience for both HR and employees.

Sparrow helps companies provide employee leave across the United States and Canada, and Deborah shares what it really takes to scale a compliance driven business without slowing down. From founder resilience and early stage emotional swings to hiring, onboarding, and culture design, this one is packed with lessons for operators and builders.

Key takeaways
• Academia can be real founder training, especially for building resilience and hearing "no" without losing your edge
• Early stage startups feel brutal because you have too few data points, it is easy to overreact to every win or setback
• Compliance and leave are fundamentally data problems, the right info to the right person at the right time changes everything
• Scaling leadership is mostly communication and alignment, five people and 250 people require totally different systems
• Culture does not stay stable by accident, values must drive hiring, training, rewards, and performance management

Timestamped highlights
00:37 What Sparrow does, and the 300 million dollars in payroll cost savings milestone
01:37 Why academia can prepare you for founding, and how customer pain beats outside skepticism
03:40 The leave compliance mess, and why state by state rules made the problem explode
08:25 The two real ways startups die, and why morale matters as much as cash
12:55 Leading at scale, onboarding, clarity, and the feedback questions that keep teams aligned
19:54 "Scale intentionally" as a culture principle for a company that cannot afford to break things
25:48 Keeping values stable while everything else evolves as the team grows

A line worth sharing
"Companies end when you run out of cash or you run out of morale."

Pro tips you can steal
• Treat the employee journey like a product journey, from recruiting through promotions and hard moments
• Before a big change, collect questions early so the message lands where people actually are
• After a meeting, ask "What were the main points?" to see what people heard, then tighten your messaging
• Invest in onboarding and goal clarity to prevent teams from drifting into competing priorities

Call to action
If you enjoyed this conversation, follow and subscribe so you do not miss what is next.
Max Bruner, Founder and CEO of Anzen, joins Amir Bormand to break down why insurance is quietly one of the biggest data and workflow opportunities in tech right now. They dig into Max's unconventional path from foreign policy to building an executive liability marketplace, and what it really takes to modernize a slow moving industry with AI.

If you care about building in real world markets, scaling with discipline, and using AI for more than content, this one will sharpen your thinking fast.

Key Takeaways
• Insurance is not flashy, but it is foundational, massive, profitable, and packed with repeatable workflows that software can improve
• The best tech opportunities are often in slow moving industries with lots of data and outdated systems
• Better decision making comes from predicting outcome impact and pressure testing your thinking with a strong community around you
• AI value is clearest when it drives real operations, faster transactions, lower costs, and better service
• Fundraising is a pipeline game now, treat it like sales, build the plan, hit the numbers, run a tight process

Timestamped Highlights
00:42 What Anzen actually does, a one stop marketplace for executive liability quotes across the US
02:29 From Arabic studies and foreign policy to discovering insurance through political risk
08:12 The curiosity engine, how deep research habits shaped his ability to build in new domains
11:23 Decision guardrails, learning from outcomes and using trusted people to keep you efficient
13:12 Why choose insurance, building in industries that make the world work, plus the profit reality
17:29 The startup advantage, modern infrastructure vs incumbent legacy systems, and why catching up takes time
20:36 Raising in today's market, what changed, what worked, and why the pitch volume mattered

A line worth stealing
"Sometimes in tech we miss the application, there are massive industries to go change if we apply technology in the right way." – Max Bruner

Pro Tips for builders
• Pick markets with repeatable workflows, you can ship measurable value faster
• Spend your time where the outcome impact is high, skip low ROI rabbit holes
• Build a real financial plan before fundraising, then operate close to it
• Run fundraising like a sales process, pipeline, volume, and discipline win

Call to Action
If you enjoyed this conversation, follow the show and leave a quick review, it helps more builders find it.
Stu Solomon, CEO of HUMAN, joins Amir to unpack a blind spot most teams underestimate: a huge share of online activity is not people at all, it is automated traffic. They break down how verification really works at internet scale, why agentic workflows change the rules, and what it will take to build trust when bots transact with bots.

If you have ever wondered how fraud, fake clicks, account abuse, and synthetic behavior get caught in real time, this episode is a clear, practical look behind the curtain.

Key takeaways
• Most of the internet is machine traffic now, the goal is no longer spotting bots, it is separating good machines from bad ones
• Trust is built by combining behavior, infrastructure signals, and identity or credential history into fast decisions at scale
• Agentic systems lower the barrier to entry for attackers, less skilled actors can now create outsized impact
• The hard part is accountability, when a machine acts with your authority, who owns the outcome
• Adoption follows convenience, but visibility matters, if it feels like a black box, people will not trust it

Timestamped highlights
00:33 HUMAN in plain English, making split second decisions about who is human, and whether they are safe
03:59 The trust stack, behavior signals, infrastructure clues, and identity or credential history
10:19 The real shift with AI, lower barriers for attackers, plus the rise of agentic autonomy
14:37 The cake story, an agent completes the task, then surprises you with a 750 dollar bill
17:22 Bots talking to bots, where accountability and liability get messy fast
24:18 Security builds trust, trust unlocks adoption, and society is already closer than it thinks

A line you will remember
"We have always operated on the notion that if you are human, you are good, and if you are a machine, you are bad. That is simply not the case anymore."

Practical ideas you can use
• Add guardrails when you delegate to tools, especially budgets, limits, and approval steps
• Watch for trust signals, not just identity checks, behavior plus infrastructure plus history beats any single data point
• Design for visibility, show users what the system did and why, so trust can compound over time

Follow:
If this episode helped you think more clearly about trust, fraud, and agentic systems, follow the show, subscribe for more conversations like this, and share it with a teammate who is building in ads, ecommerce, identity, security, or AI.
Mek Stittri, CTO at Stuut, breaks down a leadership skill that sounds simple but gets messy fast, trust, then verify. You will learn how to delegate without losing control, how to stay close to the work without becoming a micromanager, and how AI is changing what it means to review and own technical outcomes.

Key takeaways
• Trust and verify starts with alignment, define success clearly, then keep a real line of sight to outcomes
• Verification is not micromanagement, it is accountability, your team's results are your responsibility as a leader
• Use lightweight mechanisms like weekly reports, and stay ready to answer questions three levels deep when speed matters
• AI is pushing engineers toward system design and management skills, you will manage agents and outputs, not just code
• Fast feedback prevents slow damage, address issues early, praise in public, give direct feedback in private

Timestamped highlights
00:41 Stuut in one minute, agentic AI for finance ops, starting with collections and faster cash outcomes
01:54 Trust without verification becomes disconnect, why leaders still need to get close to the details
03:42 The three levels deep idea, how to keep situational awareness without hovering
06:33 The next five years, engineers managing teams of agents, system design as the differentiator
11:40 Feedback as a gift, why speed and privacy matter when coaching
16:54 The timing art, when to wait, when to jump in, using time and impact as your signal
19:43 Two leaders who shaped Mek's leadership style, letting people struggle, learn, and then win
23:29 Curiosity as the engine behind trust and verification

A line worth repeating
"Feedback is a blessing."

Practical coaching moves you can borrow
• Set the bar up front, define the end goal and what good looks like
• Build a steady cadence, short weekly updates beat occasional deep dives
• Calibrate your involvement, give space early, step in when time passes or impact expands
• Make feedback faster, smaller course corrections beat late big confrontations
• Use AI as a reviewer, get quick context on unfamiliar code and decisions so you can ask better questions

Call to action
If you found this useful, follow the show and share it with a leader who is leveling up from IC to manager. For more leadership and hiring insights in tech, subscribe and connect with Amir on LinkedIn.
Michael Topol, Co-founder and Co-CEO at MGT Insurance, explains why insurance is quietly becoming one of the most interesting data and AI problems in tech. We get practical about turning messy legacy data into usable signals, how agentic tools change decision making, and why culture and team design matter as much as the models.

MGT Insurance is building a fully verticalized AI and agentic native insurance company for small businesses, pairing experienced insurance operators with top tier technologists. Michael breaks down what changed in the last few years that makes real disruption possible now, and what modern product delivery looks like when prototyping is cheap and iteration is fast.

Key takeaways
• Insurance is a data business at its core, but most incumbents cannot use their data fast enough because it lives across silos, mainframes, and old systems.
• Modern AI lets teams combine internal data with public signals to speed up underwriting and improve consistency, without losing human judgement.
• Vibe coding and rapid prototyping collapse the gap between idea and implementation, bringing product, engineering, and the business closer together.
• Senior talent gets more leverage in an AI driven workflow, and small teams can ship faster by focusing on problem solving, not just building.
• Pod based teams, fixed outcome planning, and strong culture help regulated companies move quickly while staying inside the rules.

Timestamped highlights
00:44 What MGT Insurance is, and what "AI and agentic native" means in practice
02:09 Why small business insurance matters more than most people realize
06:06 The real blocker for incumbents, data exists but it is not usable
08:55 Vibe coding in a regulated industry, where it helps first
12:54 Requirements are shifting, prototypes bring teams closer to the real problem
17:26 The pod structure, plus the Basecamp inspired approach to scoping and shipping
20:52 Better, faster, cheaper, why AI finally makes all three possible
22:11 Where to connect, and who they are hiring

A line you will remember
"Insurance is really just a big data problem."

Pro tips you can steal
• Build cross functional pods early, include a domain expert, a technical product lead, and a senior engineer from day one.
• Scope for outcomes, not perfect specs, then let the team decide the depth as they build.
• Use AI to automate collection and synthesis, then keep humans focused on the decisions and trade offs.

Call to action
If you enjoyed this one, follow the show and share it with a builder who is trying to ship faster with a smaller team.
Marco DeMeireles, co-founder and managing partner at ANSA, breaks down how a modern VC firm wins by being focused, data driven, and allergic to hype. If you want a clearer view of how investors evaluate open source, mission critical industries, and AI categories, this is a practical, operator minded look behind the curtain.

Marco explains ANSA's focus on what they call undercover markets, from open source and open core businesses to defense, intelligence, cybersecurity, healthcare IT, and infrastructure companies that become deeply embedded and rarely lose customers. We also get into how they raised their first fund, why portfolio concentration changes everything, and how they push founders toward efficiency and profitability without killing ambition.

Key Takeaways
• In open source, two things matter more than most people admit: founder DNA tied to the project, and what you put behind the paywall that enterprises will pay for
• Concentration forces rigor, fewer bets means deeper diligence, clearer underwriting, and more hands on support post investment
• Great early stage support is not just advice, it is people, capital planning, and operating help that changes outcomes
• AI investing gets easier when you start with category selection, avoid fickle demand, then hunt for non obvious wedges in real workflows
• Long term winners tend to show compounding growth, improving efficiency, real demand, durable business models, founder strength, and an asymmetric risk reward at the price

Timestamped Highlights
00:00 Marco's quick intro and what ANSA invests in
00:36 Undercover markets, open source, and mission critical industries explained
01:54 The two open source filters that change how ANSA underwrites a deal
03:31 Why open source can work in defense, plus the Defense Unicorns example
05:29 How a new firm raises a first fund, and what the right LP partners look for
10:50 The three levers ANSA pulls with founders: people, capital, operations
15:22 Marco's six part framework for evaluating investments
17:39 How to tell who wins in crowded AI categories, and why niche wedges matter
21:41 The first investment they will never forget, and the air gapped cloud problem

A line worth stealing
"You can't outsource greatness. You can't outsource people selection."

Pro Tips
• If you are building open source, be intentional about what is free versus paid, security, compliance, and auditability tend to earn real pricing power
• If your business depends on paid acquisition, test a path to organic growth early, it can unlock profitability and give you leverage in fundraising and exits
• In crowded AI spaces, pick a wedge where documentation is heavy, complexity is low, and ROI is obvious, then expand once you own that lane

Call to Action
If this episode helped you think more clearly about investing and building, follow the show, subscribe, and share it with one founder or operator who is navigating funding, pricing, or go to market right now.
AI is everywhere, but most teams are stuck talking about efficiency and headcount. In this episode, Dave Edelman, executive advisor and best selling author, shares a sharper lens, how to use AI to create real customer value and real growth.

We get into the high road vs low road of AI, what personalization should look like now, and why data has to become an enterprise asset, not a bunch of disconnected departmental files.

Key Takeaways
• Efficiency is table stakes, the real win is using AI to build new experiences that customers actually want
• Start with customer friction, find the biggest compromises and frustrations in your category, then design around that
• Personalization is no longer limited by content scale in the same way, AI changes the economics of tailoring experiences
• You do not always need one giant database, modern tools can pull and connect data across systems in real time
• Treat data as an enterprise resource, getting cross functional alignment is often the hardest and most important step

Timestamped Highlights
• 00:46 Dave's origin story, from early loyalty programs to Segment of One marketing
• 03:33 The high road and low road of AI, growth experiences vs spam at scale
• 06:51 Where to start, map the biggest customer frustrations, then build use cases from there
• 16:31 The data myth, why you may not need a single mega database to get value from AI
• 21:31 Data as a leadership problem, shifting from functional ownership to enterprise ownership
• 25:14 Strategy that actually sticks, balancing bottom up automation with top down customer led direction

A line worth stealing
"Use those efficiencies to invest in growth."

Pro Tips you can apply this week
• List the top five customer frustrations in your category, pick one and design an AI powered fix that removes a compromise
• Audit your data reality, identify where the same customer facts live in multiple places, then decide what must be unified first
• Run a simple test and learn loop, create multiple variations of one experience, measure what works, and keep iterating
• Put strategy on the calendar, make room for a recurring discussion that is not just metrics and cost cutting

Call to Action
If this episode helped you think differently about AI and growth, follow the show, leave a quick rating, and share it with one operator who is building product, data, or customer experience right now.
Amanda Kahlow, CEO and founder of 1Mind, joins Amir to break down what AI changes in modern sales and go to market, and what it does not. If you lead revenue, product, or growth, this is a practical look at where AI creates leverage today, where humans still matter, and how teams actually adopt it without chaos.

Amanda shares how "go to market superhumans" can handle everything from early buyer conversations to demos, sales engineering support, and customer success. They also dig into trust, hallucinations, and why the bar for AI feels higher than the bar for people.

Key takeaways
• Most buyers want answers early, without the pressure that comes with talking to a salesperson
• AI can remove friction by turning static content into a two way conversation that helps buyers move faster
• The hardest part of adoption is not capability, it is change management and trust inside the team
• Humans still shine in relationship and nuance, but AI can outperform on recall, depth, and real time access to the right info
• As AI levels the selling experience, product quality matters more, and the best product has a clearer path to win

Timestamped highlights
00:31 What 1Mind builds, and what "go to market superhumans" actually do across the full buyer journey
02:00 The buyer lens, why early conversations matter, and how AI gives control back to the buyer
06:14 Why the SDR experience is frustrating for buyers, and where AI can improve both sides
09:42 Change management in the real world, why "everyone build an agent" gets messy fast
13:04 Why "swivel chair" AI fails, and what real time help should look like in live conversations
15:52 Hallucinations and trust, plus the blunt question every leader should ask about human error
22:26 Competitive advantage today, and why adoption eventually pushes markets toward "best product wins"

A line worth sharing
"Do your humans hallucinate, and how often do they do it?"

Pro tips you can use this week
• Start with low stakes usage, bring AI into calls quietly, then ask it for a summary and what you missed
• Build adoption top down, define what good looks like, otherwise you get a pile of similar agents and no clarity
• Focus AI on what it does best first, recall, context, and instant answers, then expand into workflow and process later

Call to action
If this episode sparked ideas for your sales team or your product led funnel, follow the show so you do not miss the next one. Share it with one revenue leader who is trying to modernize their go to market motion, and connect with Amir on LinkedIn for more clips and operator level takes.
What if the best people on your investing team are still in college? Peter Harris, Partner at University Growth Fund, breaks down how they run a roughly 100 million dollar venture fund with 50 to 60 students doing real diligence, real founder calls, and real deal work.

You will hear how their student led model stays disciplined with checks and balances, why repeat games matter in venture and in business, and how this approach creates a flywheel that helps founders, investors, and the next generation of operators win together.

Key Takeaways
• Student led does not mean unstructured, the process is built around clear stages, data room access, investment memos, student votes, and an advisory style investment committee, with final fiduciary responsibility held by the partners
• Real autonomy is the unlock, when interns are trusted with meaningful work, the best ones level up fast and start leading teams, not just supporting them
• The goal is win win win outcomes, founders get capital plus a high effort support network, investors get disciplined underwriting, students get experience that compounds into career leverage
• Repeat games beat short term incentives, the alumni network becomes a long term advantage, bringing the fund into high quality opportunities years later
• Mistakes are inevitable, the difference is containment and systems, avoiding errors big enough to break trust, then building process improvements so they do not repeat

Timestamped Highlights
00:32 A 100 million dollar fund powered by 50 to 60 students, and what empowered really means
01:43 The decision path, from founder screen to student memo to student vote to the advisory investment committee
06:44 Why most venture internships underdeliver, and how longer tenures change outcomes
10:37 Repeat games and the trust flywheel, how former students now pull the fund into top tier deals
13:55 What happens when something goes wrong, damage control, learning loops, and confidentiality as a core discipline
24:39 The bigger vision, expanding beyond venture into additional asset classes to create more student opportunities

A line worth stealing
"If you give people real autonomy, they'll surprise you with what they do."

Pro Tips
• If you are building an internship program, start by deciding what real ownership means, then build guardrails around it, not the other way around
• Treat trust like an asset, design your process so every stakeholder wants to work with you again

Call to Action
If you enjoyed this one, follow The Tech Trek and share it with a founder, operator, or student who cares about building real advantage through talent and process.
Yulun Wang, executive chairman and co founder at Sovato Health, joins Amir Bormand to unpack the next wave after telemedicine, procedural care at a distance. If you have ever wondered what it would take for a top surgeon to operate without being in the same room, this conversation gets practical fast, from the real bottlenecks inside operating rooms to the health system changes required to make remote robotics mainstream.

Key takeaways
• Better care can actually cost less when the right expertise reaches the right patient at the right time
• Telemedicine is already normalized, which sets the stage for faster adoption of remote procedures once infrastructure and workflows catch up
• Surgical robots already have two sides, the surgeon console and the patient side, today connected by a short cable, the leap is making that connection work reliably across hundreds or thousands of miles
• Volume drives proficiency, the outcomes gap between high volume specialists and low volume settings is one of the biggest reasons access matters
• Operating rooms spend more than half their time on steps around surgery, which creates room to dramatically increase surgeon throughput when workflows are redesigned

Timestamped highlights
• 00:42 What Sovato Health is building, bringing procedural expertise to patients without requiring travel
• 02:10 The early days of surgical robotics and the transatlantic gallbladder surgery on September 7, 2001
• 05:30 The counterintuitive idea, higher quality care can reduce total cost in healthcare
• 10:27 What actually changes for patients, local hospitals stay the destination, expertise becomes the thing that travels
• 14:57 Why repetition matters, the first question patients ask is still the right one
• 17:53 Inside the operating room schedule, where time is really spent and why productivity can jump

A line that sticks
“Healthcare is different, higher quality, if done right, costs less.”

Practical angles you can steal
• If you are building in regulated industries, adoption is rarely about the tech alone, it is about trust, workflows, and incentives
• If you sell into health systems, position the value around system level outcomes, access, quality, and margin improvement, not just novelty
• If you are designing new workflows, look for the hidden capacity, the biggest gains often sit outside the core task

Call to action
If you want more conversations like this at the intersection of tech, systems, and real world impact, follow The Tech Trek on Apple Podcasts and Spotify.
Moiz Kohari, VP of Enterprise AI and Data Intelligence at DDN, breaks down what it actually takes to get AI into production and keep it there. If your org is stuck in pilot mode, this conversation will help you spot the real blockers, from trust and hallucinations to data architecture and GPU bottlenecks.

Key takeaways
• GenAI success in the enterprise is less about the demo and more about trust, accuracy, and knowing when the system should say “I don’t know.”
• “Operationalizing” usually fails at the handoff, when humans stay permanently in the loop and the business never captures the full benefit.
• Data architecture is the multiplier. If your data is siloed, slow, or hard to access safely, your AI roadmap stalls, no matter how good your models are.
• GPU spend is only worth it if your pipelines can feed the GPUs fast enough. A lot of teams are IO bound, so utilization stays low and budgets get burned.
• The real win is better decisions, faster. Moving from end of day batch thinking to intraday intelligence can change risk, margin, and response time in major ways.

Timestamped highlights
00:35 What DDN does, and why data velocity matters when GPUs are the pricey line item
02:12 AI vs GenAI in the enterprise, and why “taking the human out” is where value shows up
08:43 Hallucinations, trust, and why “always answering” creates real production risk
12:00 What teams do with the speed gains, and why faster delivery shifts you toward harder problems
12:58 From hours to minutes, how GPU acceleration changes intraday risk and decision making in finance
20:16 Data architecture choices, POSIX vs object storage, and why your IO layer can make or break AI readiness

A line worth stealing
“Speed is great, but trust is the frontier. If your system can’t admit what it doesn’t know, production is where the project stops.”

Pro tips you can apply this week
• Pick one workflow where the output can be checked quickly, then design the path from pilot to production up front, including who approves what and how exceptions get handled.
• Audit your bottleneck before you buy more compute. If your GPUs are waiting on data, fix storage, networking, and pipeline throughput first.
• Build “confidence behavior” into the system. Decide when it should answer, when it should cite, and when it should escalate to a human.

Call to action
If you got value from this one, follow the show and turn on notifications so you do not miss the next episode.
New leaders face a choice fast. Do you adapt to the organization you inherit, or reshape it around the way you lead?

In this conversation, Amir sits down with Gian Perrone, engineering leader at Nav, to unpack how org design really works in the first 30 to 120 days, and how to drive change without spiking anxiety or losing trust.

You will hear how Gian treats leadership as triage, why “listen and learn” is rarely passive, and what separates a thoughtful reorg from one that feels chaotic.

Key takeaways
• Leaders almost always arrive with hypotheses, the real work is testing them without rushing to force a playbook
• A reorg is not automatically bad, perception turns negative when the why is unclear and people feel unsafe
• Over communicating helps, but thinking out loud too often can create noise, a structured comms plan keeps change steady
• A simple way to spot a collaborative culture is to disagree in the interview and see how they respond
• Managers are the front line in change, set clear expectations so teams hear a consistent story about what is changing and why

Timestamped highlights
00:01 What Nav does, and the real question behind org design for new leaders
01:59 Why “first 90 days” is usually triage, not passive observation
04:14 The reorg stopwatch, and why structure reflects your worldview
08:36 How to communicate change without destabilizing teams
12:54 A practical interview move to test whether a company truly collaborates
17:03 The manager layer, how Gian sets expectations so change lands well

A line worth repeating
“If you arrive and something is on fire, you are going to fix it.”

A few practical moves worth stealing
• When you are new, write down your hypotheses early, then use real signals to confirm or kill them
• Float a change as a real idea first, gather feedback, then come back with details before you finalize
• Create a simple comms map of who hears what, when, and from whom, then follow it
• Be matter of fact about changes, teams often mirror the tone you set

Call to action
If this episode helped you think more clearly about leadership and org design, follow the show and share it with one operator who is navigating change right now.
Nir Soudry, Head of R&D at 7AI, breaks down how teams can move from early experimentation to real production work fast, without shipping chaos. If you are building AI features or agent workflows, this conversation is a practical look at speed, safety, and what it actually takes to earn customer trust.

Nir shares how 7AI ships in tight loops with a real customer in mind, why pushing decisions closer to the engineers removes bottlenecks, and how guardrails and evaluation keep fast releases from turning into security risks. You will also hear a grounded take on human plus AI collaboration, and why “just hook up an LLM” falls apart at scale.

Key takeaways
• Speed starts with focus, pick one customer and ship something usable in two or three weeks, then iterate every couple of weeks based on real feedback
• If you want velocity, remove the meeting chain, get engineers in the room with customers and push decisions downstream
• Agent workflows are not automatically testable, you need scoped blast radius, strong input and output guardrails, and an evaluation plan that matches real production complexity
• “LLM as a judge” helps, but it is not magic, you still need humans reviewing, labeling, and tuning, especially once you have multi step workflows
• In security, trust is earned through side by side proof, run a real pilot against human outcomes, measure accuracy and thoroughness, then improve with tight feedback loops

Timestamped highlights
00:28 What 7AI is building, security alert fatigue, and why minutes matter
02:03 A fast shipping cadence, one customer, quick prototypes, rapid iterations
03:51 The velocity playbook, engineers plus sales in the same meetings, fewer bottlenecks
08:08 Shipping agents safely, blast radius, guardrails, and why testing is still hard
14:37 Human plus AI in practice, how ideas become working agents with review and monitoring
18:04 Why early AI adoption works for some customers, and how pilots build confidence
24:12 The startup reality, faster execution, traction, and why hiring still matters

A line worth sharing
“When it’s wrong, click a button, and next time it will be better.”

Pro tips you can steal
• Run a two to four week pilot with one real customer and ship weekly, the goal is learning speed, not perfect coverage
• Put engineers directly in customer conversations, keep leadership focused on unblocking, not gatekeeping
• Treat every agent like a product surface, define strict inputs and outputs, sanitize both, and limit what it can affect
• Build evaluation around real workflows, not single prompts, and combine automated checks with human review
• Add feedback buttons everywhere, route feedback to both model improvement and the team that tunes production behavior

Call to action
If you want more conversations like this on building real tech that ships, follow and subscribe to The Tech Trek.
B2B pricing is still way harder than it should be, even in 2026. In this conversation, Tina Kung, Founder and CTO at Nue.ai, breaks down why quote to revenue can take weeks, and how a flexible pricing engine can turn it into something closer to one click.

You will hear how fast changing pricing models, AI driven products, and new selling motions are forcing revenue teams to rethink the entire system, not just one tool in the stack.

Key takeaways
• B2B quoting is basically a shopping cart, but the real complexity is cross team workflow, accounting controls, and downstream revenue rules.
• Fragmented systems break the moment pricing changes, and in fast markets that can mean you only get one real pricing change per year.
• AI companies often evolve from simple subscriptions to usage, services, and even physical goods, which creates billing chaos without a unified backbone.
• Commit based models can make revenue more predictable while staying flexible for customers, but only if you can track entitlement, burn down, overspend, and approvals cleanly.
• The most useful AI in revenue ops is not just insight, it is action, meaning it can generate the right transaction safely inside a system of record.

Timestamped highlights
00:43 What Nue.ai actually does, one platform for billing, usage, and revenue ops with intelligence on top
02:43 Why a one minute checkout in B2C turns into weeks or months in B2B
05:28 The real reason quote to revenue stays broken, fragmentation and brittle integrations
08:03 How AI era pricing evolves, subscriptions to consumption, services, and physical goods
12:51 Why Tina designed for flexibility from day one, and what 70 plus customer calls revealed
19:42 Transactional intelligence, AI that can create the quote, route approvals, and move revenue work forward

A line worth keeping
“It should be as easy as one click.”

Practical moves you can steal
• Map every pricing change to the downstream work it triggers, quoting, billing, revenue recognition, and approvals, then measure how many handoffs exist today.
• If you sell both self serve and enterprise, design for multiple selling motions early, because the same objects can have totally different context and risk.
• Treat pricing as a product surface, if your systems make changes slow, you are giving up speed in the market.

Call to action
If you want more conversations like this on how modern tech companies actually operate, follow the show on Apple Podcasts or Spotify, and connect with me on LinkedIn for clips and episode takeaways.
Tim Bucher, CEO and cofounder of Agtonomy, joins Amir to break down what physical AI looks like when it leaves the lab and shows up on the farm. Tim shares how his sixth generation farming roots and a lucky intro computer science class led to a career that included Microsoft, Apple, and Dell, then back into agriculture with a mission that hits the real world fast.

This conversation is about building tech that earns its keep, delivers clear ROI, and improves quality of life for the people who keep the food supply moving.

Key takeaways
• Deep domain experience is a real advantage, especially in ag tech, you cannot fake the last mile of operations
• The win is ROI first, but quality of life is right behind it, less stress, more time, and fewer dangerous moments on the job
• Agtonomy focuses on autonomy software inside existing equipment ecosystems, not building tractors from scratch, because service networks and financing matter
• One operator can run multiple vehicles, shifting the role from tractor driver to tech enabled fleet operator
• Hiring can change when the work changes, some farms started attracting younger candidates by posting roles like ag tech operator

Timestamped highlights
00:42 What Agtonomy does, physical AI for off road equipment like tractors
01:45 Tim’s origin story, sixth generation farming roots and the class that changed his path
03:59 Lessons from Bill Gates, Steve Jobs, and Michael Dell, and how Tim filtered the mantras into his own leadership
05:53 The moment everything shifted, labor pressure, regulations, and the prototype built to save his own farm
09:17 The blunt advice for ag tech founders, if you do not have a farmer on the team, fix that
11:54 ROI in plain terms, one person operating a fleet from a phone or tablet
14:29 Why Agtonomy partners with equipment manufacturers instead of building new vehicles, dealers, parts, service, and financing are the backbone
17:39 The overlooked benefit, quality of life, reduced stress, and a more resilient food supply chain
20:18 How farms started hiring differently, “ag tech operator” roles and even “video game experience” as a signal

A line that stuck with me
“This is not just for Trattori farms. This is for the whole world. Let’s go save the world.”

Pro tips you can actually use
• If you are building in a physical industry, hire a real operator early, not just advisors, get someone who lives the workflow
• Write job posts that match the modern workflow, if the work is screen based, label it that way and recruit for it
• Design onboarding around familiar tools, if your UI feels like a phone app, training time can collapse

Call to action
If you got value from this one, follow the show and share it with a builder who cares about real world impact. For more conversations like this, subscribe and connect with Amir on LinkedIn.
Data and AI are everywhere right now, but most teams are still guessing where to start. In this episode, Cameran Hetrick, VP of Data and Insights at BetterUp, breaks down what actually works when you move from AI hype to real business impact. You will hear a practical way to choose AI and analytics projects, how to spot low risk wins, and why clean, governed data still decides what is possible. Cameran also shares a simple mindset shift, stop copying broken workflows, and start rethinking the outcome you are trying to create.

Key Takeaways
• AI is a catchall term right now, the best early wins usually come from “assist” use cases that boost speed and quality, not full replacement
• Start with low context, low complexity work, then earn your way into higher context projects as data quality and governance mature
• Pick use cases with an impact versus effort lens, quick wins create proof, buy in, and budget for bigger bets
• Stakeholders often ask for a data point or feature, but the real value comes from digging into the goal, and redesigning the workflow
• Data teams cannot stop at insights, adoption matters, if the next team cannot act on the output, the project stalls

Timestamped Highlights
00:40 BetterUp’s mission, building a human transformation platform for peak performance
01:57 AI as a “catchall,” where expectations are realistic, and where they are not
05:19 A useful way to think about AI work, context versus complexity, and why “intern level” framing helps
07:33 How to choose projects with an impact and level of effort calculator, and why trust in data is everything
10:33 The hard part, translating stakeholder requests into real outcomes, and reimagining workflows instead of automating bad ones
13:47 Systems thinking across handoffs, plus why teams need deeper business fluency, including P and L basics
16:59 The last mile problem, if the next stakeholder cannot act, the value never lands
20:27 The bottom line, AI does not change the fundamentals, it accelerates them

A Line Worth Saving
“AI is like an intern, it still needs direction from somebody who understands the mechanics of the business.”

Practical Moves You Can Use
• Run every idea through two quick questions, what business impact do we expect, and what level of effort will it take
• Look for a win you can explain in one minute, then use it to fund the harder work
• When someone asks for a metric or feature, ask why twice, then validate the workflow, then redesign the outcome
• Invest in governed data early, untrusted outputs kill adoption fast

Call to Action
If this episode helped you think more clearly about AI in the real world, follow the show, leave a quick review, and share it with one operator who is trying to move from experiments to impact. You can also follow Amir on LinkedIn for more clips and practical notes from each episode.