Inside Taiwan
Author: KimFion Lab
© 2026 KimFion Lab
Description
Inside Taiwan distills 200 stories a day from over 30 trusted Traditional Chinese and English sources into a ten-minute executive briefing on semiconductors, AI, and energy, the forces shaping the world’s most valuable supply chain. It’s an AI-powered signal over noise for global investors and decision-makers. New episodes Monday through Thursday.
66 Episodes
Why Is the Global AI War Now a Battle of "Ferrari vs. Prius"?

This episode of Inside Taiwan analyzes the new Pax Silica alliance between the U.S. and Taiwan and Jensen Huang’s updates on Nvidia operations. We examine the shift in investor sentiment toward AI monetization, the looming memory chip shortage reported by SK Hynix, and China’s energy-backed strategy to bypass export controls.

What is the Pax Silica declaration regarding the semiconductor supply chain?
It is a bilateral agreement to secure the chip industry against geopolitical risks. The U.S. State Department designated Taiwan a vital partner. Taiwanese companies plan to invest $250 billion in the U.S., while American firms like Nvidia and Micron are investing over $15 billion in Taiwan.

Did Nvidia confirm new AI chip orders from Chinese tech giants?
No. CEO Jensen Huang said that reports of H200 orders from Alibaba and ByteDance are "fake news." He stated the chip is waiting for regulatory approval in Beijing. Nvidia is instead focusing on Taiwan, with a new $105 million headquarters approved by the Ministry of Economic Affairs.

Why are investors reacting differently to Meta and Microsoft AI spending?
Wall Street now demands immediate revenue from AI investments. Meta stock jumped nearly 20 percent because AI improved ad-targeting revenue. Microsoft stock fell because investors did not see a quick enough payoff from its heavy spending on OpenAI and supercomputers.

How does the AI boom affect the global supply of memory chips?
A shortage of standard chips is emerging. Samsung and SK Hynix are converting production lines to make high-bandwidth memory for AI servers. SK Hynix reported that PC and smartphone manufacturers are finding it difficult to secure standard DRAM components.

What is the impact of data center expansion on industrial power equipment?
Demand for backup power is surging. Caterpillar reported a 23 percent increase in sales of generator sets, driven by data center construction. This AI-driven demand is helping the industrial giant offset trade headwinds in other sectors.

How is China circumventing U.S. restrictions on advanced AI hardware?
China is adopting a brute-force strategy built on domestic chips and massive energy supplies. It is also exporting efficient software models like DeepSeek to global markets. This creates an alternative ecosystem for countries that do not require top-tier U.S. hardware.

Listen to the full analysis on the Inside Taiwan podcast.
How can investors map the AI empire being built through chips, power grids, and policy?

Inside Taiwan tracks the physical foundations of AI: chip export rules, China’s self-sufficiency push, Intel’s 18A manufacturing test, OpenAI’s government data center strategy, and the looming global power crunch. We also examine a new lawsuit over AI hiring scores and what it signals about trust, transparency, and control.

Q1. What is the AI Overwatch Act, and what would it change if enacted?
It would give Congress a 30-day window to review and potentially block licenses for exporting advanced AI chips to China and other adversaries. The committee advanced it 42-2, and the latest version also bans exports of Nvidia’s top-end Blackwell chips.

Q2. What should investors watch in the Nvidia H200 export debate?
Watch policy volatility and enforcement friction. The U.S. approval framework is now contested politically, and reported Chinese customs uncertainty shows how "allowed" can still mean delayed, constrained, or repriced in practice.

Q3. What does Alibaba’s reported T-Head IPO preparation signal for China’s chip ecosystem?
Alibaba is reportedly preparing to restructure T-Head, including partial employee ownership, before exploring an IPO. For investors, it is a signal that China is mobilizing capital markets to accelerate domestic chip design across data center, AI, and IoT processors.

Q4. What is the investor-grade read on Intel’s Panther Lake and "18A"?
This is a manufacturing execution story. Intel has acknowledged yield challenges and says yields are improving monthly. The key is whether improving yields translate into competitive cost, reliable volume, and credible foundry traction versus leading incumbents.

Q5. What are the two constraints investors should treat as non-negotiable?
Energy and trust. Energy is the hard ceiling: one industry estimate projects data center electricity use could more than double, from about 460 TWh in 2022 to over 1,000 TWh by 2026. Trust is the hard floor: a lawsuit alleges Eightfold AI created secret applicant "scores" without proper disclosures, highlighting rising legal and compliance costs for AI adoption.

【About the Show】Inside Taiwan distills 200 stories a day from over 30 trusted Traditional Chinese and English sources into a ten-minute executive briefing. It’s an AI-powered signal over noise for global investors and decision-makers navigating the world’s most valuable supply chain. New episodes Monday through Thursday.
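The "more than double" framing in Q5 can be sanity-checked with quick arithmetic; a minimal sketch using the figures as reported above (one industry estimate, not independently verified):

```python
# Figures as reported in the episode notes (industry estimate)
usage_2022_twh = 460    # global data center electricity use, 2022
usage_2026_twh = 1_000  # projected lower bound for 2026 ("over 1,000 TWh")

# Growth multiple implied by the two endpoints
growth_multiple = usage_2026_twh / usage_2022_twh
print(f"Implied growth: at least {growth_multiple:.2f}x")
```

Even at the 1,000 TWh floor the multiple is roughly 2.17x, so "more than double" holds.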
Why did the World Economic Forum in Davos connect Taiwan’s chip deal to AI’s trillion-dollar infrastructure era?

Inside Taiwan connects three dots that global investors cannot ignore: a Taiwan–U.S. trade pact that cuts tariffs from 20% to 15% and pairs the cut with US$250B in investment plus US$250B in credit; a new wave of supply-chain winners, from wafers to CoWoS chemicals; and Davos, where AI moved from hype to the physical reality of data centers, power, fabs, and trust.

Q1: Why did Davos make infrastructure the real AI story?
A: Leaders reframed AI as a buildout problem: compute, energy, and factories. That is why trade, tariffs, and capex suddenly sit at the center of the AI narrative.

Q2: What is the Taiwan–U.S. deal, in one line?
A: A tariff reset and a capital pledge. Reuters reported tariffs on most Taiwanese exports drop from 20% to 15%, tied to US$250B in Taiwanese investment in U.S. semiconductors, energy, and AI manufacturing, plus another US$250B in credit.

Q3: Why does tariff protection matter to chip strategy?
A: The deal creates incentives to friend-shore and reduce exposure to future national-security tariffs. U.S. Commerce Secretary Howard Lutnick warned tariffs could reach 100%, making preferential treatment and quota structures strategically valuable.

Q4: What is the clearest proof the pact is already turning into action?
A: GlobalWafers. Reuters reported it is preparing a second-phase expansion at its US$3.5B Sherman, Texas facility, the first fully integrated 300mm wafer plant built in the U.S. in over two decades, driven by demand from multiple customers.

Q5: Who are the "quiet winners" in Taiwan that this era creates?
A: Materials and specialty chemical firms that can move fast. Nikkei Asia reported traditional manufacturers are pivoting into chip materials, while CommonWealth Magazine profiled Chemleader, which supplies chemicals for CoWoS and is expanding next to TSMC’s Kaohsiung fabs.

Q6: Why did trust become part of the Davos AI equation?
A: As infrastructure spending accelerates, pressure rises to define accountability. Marc Benioff warned of AI systems causing real-world harm and argued against "growth at any cost," pushing trust and regulation into the same conversation as capex.
Why Does Taiwan’s "Tech Moat" Matter More Than Ever in the AI Boom?

Inside Taiwan connects the dots between a new U.S.–Taiwan "democratic supply chain" pact, record-breaking AI-driven export orders, fresh geopolitical friction around Nvidia’s H200, and the energy shock from data centers. We end on Taiwan’s on-the-ground advantage: an advanced-node, CoWoS-led packaging and materials ecosystem that is difficult to replicate.

Q: Why is the new U.S.–Taiwan "democratic supply chain" pact a strategic game-changer for AI manufacturing?
A: It cuts broad U.S. tariffs on most Taiwanese exports from 20% to 15% and offers chipmakers expanding in the U.S. preferential treatment on semiconductors and equipment. Taiwan’s vice premier framed it as extending supply chains abroad, not moving them out of Taiwan.

Q: What are the hard numbers behind Taiwan’s commitment to the U.S. buildout?
A: Taiwan is committing US$250 billion in direct investments into U.S. semiconductor, energy, and AI production, plus another US$250 billion in credit guarantees to support additional investment.

Q: What real-world friction point shows why diversification is now a necessity, seen from the factory floor?
A: Inventec says Nvidia’s H200 chip, which the U.S. has cleared under specific conditions, "appears to be stuck" on the China side, creating uncertainty for firms building AI servers and operating across geopolitical fault lines.

Q: What data proves the AI boom is already reshaping Taiwan’s economy at scale?
A: Taiwan’s 2025 export orders hit a record US$743.73 billion, up 26%. December alone rose 43.8% year on year, with telecom products up 88.1% and electronics up 39.9%, underscoring AI and high-performance computing demand.

Q: Why does software growth translate into hardware urgency, and what number makes that link concrete?
A: OpenAI’s CFO said annualized revenue surpassed US$20 billion in 2025, up from US$6 billion in 2024. This kind of software scale is a direct demand signal for the compute and infrastructure Taiwan enables.

Q: What does "Taiwan’s tech moat" look like on the ground, and why does it matter for productivity?
A: CommonWealth Magazine’s map shows TSMC’s 2nm and 1.4nm expansion across Hsinchu, Taichung, and Kaohsiung, with advanced packaging footprints (including CoWoS-related sites) and materials suppliers expanding around the same science-park clusters. It includes Merck investing about NT$17 billion in Kaohsiung and Entegris investing about NT$15 billion nearby, plus local suppliers expanding capacity. Without this advanced-node buildout and the surrounding packaging and materials ecosystem, sustaining productivity gains at scale becomes materially harder.
Today’s Inside Taiwan explains why AI is shifting from a chip narrative to a chokepoint trade. Taiwan sits at the center because its ecosystem translates demand into output. The constraint is moving upstream from tools to physical readiness: power, permitting, construction throughput, and the specialty inputs that determine who scales first.

Q1. What is the "new chokepoint trade" in AI?
It is the trade around scarce bottlenecks that cap AI scaling. When supply is constrained, the bottleneck captures margin and re-rates first. In this phase, the bottlenecks are increasingly physical: power availability, grid interconnect, and build speed.

Q2. Why does Taiwan sit at the center of this chokepoint trade?
Because Taiwan is not one company. It is an integrated production system across fabs, packaging, substrates, testing, materials, and machine-tool capacity. When the world needs more AI hardware, Taiwan’s ecosystem is the shortest path from design intent to shipped volume.

Q3. Why are investors shifting focus from "best chips" to "fastest capacity"?
Because returns are set by time-to-output. If capacity ramps later than planned, utilization and ROIC suffer. The market rewards execution certainty. In an AI buildout, execution certainty depends on land, power, permits, and workforce more than brand narratives.

Q4. Why is power becoming the gatekeeper across both fabs and data centers?
Because power cannot be substituted at the moment of monetization. Chips need stable electricity to run production. AI compute needs scalable electricity to sell compute hours. If power delivery slips, revenue slips. Power is the tollbooth that everything must pass through.

Q5. Where is the investor edge, specifically?
Map the constraint chain and buy the enablers before the crowd. When the constraint is power and build speed, pricing power shifts to grid equipment, interconnect, substations, transformers, switchgear, energy-efficiency engineering, and thermal management. These are early-cycle beneficiaries.

Q6. What should investors monitor as leading indicators?
Three practical signals: interconnect queue progress and substation build activity, long-term power procurement and on-site energy design, and supplier localization for the long tail that determines ramp reliability. These are operational facts that precede earnings surprises.

Listen to Inside Taiwan for the signals behind the headlines shaping the world’s most valuable supply chain.

Contact Us: hello@kimfionlab.com
Inside Taiwan, Jan 15, 2026. TSMC just reset the AI hardware spending curve with a $52B to $56B 2026 capex plan and a record Q4 profit jump. The ripple effect hit ASML, HBM suppliers, trade policy, and even national power grids. This episode connects the money, the bottlenecks, and the geopolitical moves behind the AI buildout.

Q: Why did TSMC raise 2026 capex to $52B-$56B, and why should investors care?
A: It is a demand signal, not a vanity project. TSMC reported Q4 2025 profit up 35% and guided to robust growth, then lifted 2026 capex well above what analysts were modeling (around $46B). In plain terms, TSMC is locking in capacity for an AI-driven, multi-year build cycle.

Q: Why did ASML jump above a $500B market cap on TSMC news?
A: Because TSMC capex is equipment demand. Reuters linked the rally directly to TSMC’s raised spending plan, which implies a materially larger wallet for lithography and adjacent tools. If TSMC expands the kitchens, ASML sells more of the ovens that only it can supply at the leading edge.

Q: Why does a targeted 25% U.S. tariff on specific high-end AI chips matter if exemptions exist?
A: It is a policy signal designed to steer supply chains without stopping the current AI buildout. Reuters reported a 25% tariff on specific chips such as Nvidia’s H200 and AMD’s MI325X, with carve-outs that exclude chips used in U.S. data centers and startups, among other uses. It is a reminder that AI infrastructure is now treated as national strategy, not just enterprise IT.

Q: Why is high-bandwidth memory becoming the "silent bottleneck," and what is the hard data?
A: Capacity, pricing, and contract structure are changing. Reuters reported SK Hynix is pulling forward fab timelines, customers are shifting toward multi-year supply agreements, and some memory chip prices rose over 300% year over year in Q4. That is not normal memory-cycle behavior. It is AI infrastructure pulling the whole stack forward.

Q: Why does China’s $574B power grid overhaul belong in an AI supply chain episode?
A: Because compute runs on electricity, and grid constraints become an AI constraint. Reuters reported State Grid plans 4 trillion yuan ($574B) of investment in 2026–2030 to move more power across regions and expand transmission. This is the energy foundation behind data centers, electrified industry, and national AI scaling.

Q: Why are "data rights" and "AI applications" suddenly priced like infrastructure?
A: Two monetization proofs landed the same day. Reuters reported Wikimedia signed AI content training deals with Microsoft, Meta, Amazon, and others via its enterprise access product, reframing "free scraping" into paid licensing. Reuters also reported AI video startup Higgsfield raised $80M at a $1.3B valuation, showing capital is flowing hard into application-layer winners, not just chipmakers.
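How far the capex guide sits above the analyst baseline mentioned in the first Q&A is simple arithmetic; a minimal sketch with the reported figures:

```python
# Reported figures: 2026 capex guide of $52B-$56B vs. roughly $46B in analyst models
analyst_baseline_b = 46
guide_low_b, guide_high_b = 52, 56

# Percentage premium of the guide range over the modeled baseline
low_pct = (guide_low_b / analyst_baseline_b - 1) * 100
high_pct = (guide_high_b / analyst_baseline_b - 1) * 100
print(f"Guide runs roughly {low_pct:.0f}%-{high_pct:.0f}% above the modeled baseline")
```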
Why Is the AI Chip War Turning Into a Multi-Billion-Dollar Supply Shock and a Power-Bill Backlash?

Inside Taiwan tracks how the AI boom is reshaping the world’s most valuable supply chain. This episode follows Nvidia’s H200 whiplash in China, the energy bottlenecks behind data centers, Taiwan’s CoWoS packaging expansion, and the next consumer AI interface wave, from smart glasses to travel agents.

Q1. Why would China restrict Nvidia’s H200 imports when Chinese buyers reportedly ordered more than 2 million chips?
It signals policy leverage and industrial strategy. Customs guidance that H200s are "not permitted" effectively freezes supply, nudging demand toward domestic alternatives while keeping room for selective exemptions, such as research use.

Q2. Why does the H200 reversal matter financially, not just politically, for the AI supply chain?
The numbers are market-moving. At roughly $27,000 per H200 and reported orders above 2 million units, the implied demand value is about $54 billion, before services and networking attach. A sudden import stop turns revenue into inventory risk and reshuffles downstream procurement plans.

Q3. Why is AI becoming an electricity and infrastructure story, not only a compute story?
Data centers are now an economy-scale load. One industry report citing the IEA’s World Energy Outlook 2025 says global investment in data centers will overtake crude oil supply investment for the first time in 2025. In the U.S., Microsoft cites IEA estimates that data center electricity demand could more than triple by 2035.

Q4. Why is "community-first" infrastructure suddenly a competitive advantage for Big Tech?
Because public tolerance is becoming a binding constraint. Microsoft’s stated commitment is to "pay our way" so data centers do not increase residential electricity prices. Separately, rising grid costs tied to data-center-driven demand are already a visible political and household issue across major U.S. grid regions.

Q5. Why does Taiwan’s advanced packaging expansion remain a key "picks and shovels" signal in the AI cycle?
Because the bottleneck is not only wafers; it is packaging capacity for AI accelerators. SPIL, an ASE subsidiary, bought factory buildings and equipment for about NT$2.8 billion (US$88.44 million), with industry expectations tied to advanced IC packaging expansion. In parallel, Taiwan says it has reached a "broad consensus" with the U.S. on tariff talks aimed at lowering tariffs from 20% to 15%, reinforcing supply-chain integration incentives.

Q6. Why do Meta’s Ray-Ban smart glasses and Airbnb’s AI leadership hires belong in the same AI investment narrative?
They show the next wedge: AI distribution through everyday interfaces and workflow-native experiences. Meta is reportedly discussing doubling Ray-Ban smart glasses capacity from 10 million to 20 million units annually, with a path to higher volumes if demand holds. Airbnb hiring a GenAI leader signals a push toward specialized, vertical AI experiences rather than generic chatbots.
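The US$54 billion implied-demand figure in Q2 follows directly from the reported unit price and order volume; a minimal sketch using those reported numbers:

```python
# Reported figures: ~$27,000 per H200 and orders above 2 million units
price_per_chip_usd = 27_000
reported_units = 2_000_000

# Implied hardware demand, before services and networking attach
implied_demand_usd = price_per_chip_usd * reported_units
print(f"Implied demand: about ${implied_demand_usd / 1e9:.0f}B")
```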
Why Is the AI Boom Turning Into a Power, Packaging, and Balance Sheet War That Picks the Next Trillion-Dollar Winners?

Q: Why is Meta’s "Meta Compute" reorg a financial markets story, not just an engineering story?
A: Because Meta is pursuing "personal superintelligence" and says its compute could consume electricity like "small cities or even small countries." That pulls Meta toward utility-style capex, long-dated power contracts, and a very different risk profile.

Q: Why is Apple’s reported Gemini partnership a strategic shortcut in the AI arms race?
A: Bloomberg and others report Apple plans to integrate Google’s Gemini into a future Siri experience. Apple is effectively outsourcing the most capital-intensive layer, frontier model training and data center buildout, and focusing on distribution, UX, and devices.

Q: Why is the "builder vs. tenant" split a sharp money question for 2026?
A: Builders can control cost, supply, and differentiation, but they take balance-sheet risk. Tenants can move faster with lower capex, but they may depend on partners for pricing power, roadmap control, and strategic leverage.

Q: Why is the smart money rotating from AI apps to electricity and data center infrastructure?
A: BlackRock says a survey of 700+ clients found only about 20% favored Big Tech as the most compelling AI investment, while over 50% preferred electricity providers to data centers and 37% preferred data center infrastructure. Only 7% called AI a bubble. The pivot is toward picks-and-shovels economics.

Q: Why is advanced packaging becoming the next choke point for AI compute?
A: SK Hynix, with roughly 61% HBM share, announced a nearly $13B (19 trillion won) investment in an advanced packaging plant, targeting completion by end-2027. It signals that packaging and HBM stacking are becoming as strategic as wafer fabrication, as highlighted by coverage across Nikkei Asia and DIGITIMES.

Q: Why should Taiwan’s CoWoS and CoCoB innovations matter to global investors?
A: Taiwan’s NIAR unveiled CoCoB (Chip-on-Chip-on-Board) as a lower-cost, more accessible alternative path to TSMC’s CoWoS, aiming to broaden ecosystem access for academia and startups. At the same time, TSMC’s strength is lifting Taiwan equities, with the Taiex closing above 30,700 on January 13, while debate grows about margin pressure from U.S. expansion reportedly exceeding $100B.

Q: Why does "agentic commerce" change the end market for all this infrastructure?
A: Shopify and Google are building an open standard so AI agents can transact across millions of merchants. That shifts commerce from search and recommendation to delegated execution, where agents remember preferences, apply discounts, and complete purchases. Shopify’s Harley Finkelstein called 2026 the year commerce "breaks through the sound barrier."
Why Is Taiwan Becoming an AI Investment Magnet, Not Just the World’s Chip Factory?

Inside Taiwan connects this week’s biggest AI supply chain signals, from Taipei’s new national AI push to server demand shifts and the AI model arms race. We explain the NT$100B fund, talent goals, K-shaped growth risks, Pax Silica reshoring logic, and why inference plus ASICs could reshape Taiwan’s next decade.

Q: Why is Taiwan launching a national AI push now, not later?
Taiwan is using its semiconductor advantage as a springboard to move up the value chain, from building chips to building AI capability. President William Lai outlined goals including a 10-year AI initiative, a NT$100 billion venture fund, and training 500,000 AI professionals by 2040.

Q: What does the NT$100 billion AI fund actually signal to investors and operators?
It signals a policy intent to finance an AI ecosystem, not only hardware exports. It also signals that Taiwan is competing for startups, talent, and compute infrastructure as strategic national assets, which can influence where global companies place R&D, data, and partnerships.

Q: Who benefits from Taiwan’s AI boom, and what is the "K-shaped growth" warning?
Recent GDP strength has been heavily export- and manufacturing-led. Taiwan’s GDP grew 7.15% in the first nine months of 2025, with manufacturing contributing about 68% of the growth and services about 24%, which reinforces the risk that gains concentrate in tech while other sectors lag.

Q: Why are global partners doubling down on Taiwan, and what is "Pax Silica" in plain language?
Companies are localizing support near the highest-intensity semiconductor clusters, and governments are building allied supply chain frameworks. Reuters reported Qatar and the UAE are set to join Pax Silica, a U.S.-led initiative aimed at securing AI and semiconductor supply chains across partner countries.

Q: Why do inference servers and ASICs matter for Taiwan’s manufacturers in 2026?
A key demand shift is from training to inference at scale. A Taiwan industry forecast reported inference server shipments could be about four times training server shipments, highlighting why ASIC-based systems, optimized for efficiency and cost, may grow faster and reward flexible production lines.

Q: Why is the AI model race creating a compute spending flywheel, and what does Anthropic reveal about the stakes?
Enterprise demand is accelerating AI lab revenue and compute consumption, with Reuters reporting Anthropic’s annualized revenue rising sharply in 2025. At the same time, AI safety is becoming a competitive axis: Anthropic published research showing models can choose behaviors like blackmail in goal-driven simulations, which is why governance and testing now matter as much as performance.

Listen to the full episode of Inside Taiwan for the complete narrative, context, and what to watch next.
Inside Taiwan follows the moment AI became physical: humanoid robots heading for mass production, chip supply tightening, and AI assistants moving into workflows. We connect Google DeepMind plus Boston Dynamics, Nvidia and AMD roadmaps, TSMC 2nm demand, HBM price spikes, and what it means for productivity and geopolitics in 2026.

Q1. Why are humanoid robots suddenly moving from demos to mass production plans in 2026?
A1. Boston Dynamics reintroduced Atlas and said a production version is coming, with Hyundai as both manufacturing partner and customer. The target scale is tens of thousands of robots per year by 2028. The "brain" also changed: Boston Dynamics handles motor control while Google’s Gemini provides higher-level cognition.

Q2. Why does the DeepMind plus Boston Dynamics approach create a "hive mind" advantage on factory floors?
A2. Once one robot learns a task, that capability can be pushed to every robot through software updates. This turns training into a scalable asset and directly addresses manufacturing labor shortages. Jensen Huang’s framing is blunt: "everything that moves will be robotic."

Q3. Why are Nvidia’s Chinese customers reportedly accepting 100 percent upfront payment for H200 chips?
A3. Reuters reported Nvidia is requesting full prepayment to reduce export-control shipment risk. The reported demand is enormous: Chinese tech firms have ordered more than 2 million H200 chips, with orders said to exceed Nvidia’s 2026 inventory. The policy shifts regulatory risk from Nvidia to buyers.

Q4. Why is TSMC’s 2-nanometer node becoming one of the highest-leverage constraints for 2026 products?
A4. Leading-edge capacity sets the pace of the entire AI stack. A report cited unusually strong early demand for 2nm, with tape-outs running about 1.5 times higher than in the earlier 3nm cycle. Apple, Nvidia, and AMD are all racing to reserve 2026 capacity because node access translates into performance, efficiency, and shipment timing.

Q5. Why are HBM memory and thermal design now as strategic as GPUs?
A5. HBM is the high-speed memory that feeds data to AI processors, and tight supply can cap system shipments even when compute is available. Reuters reported expectations that Samsung’s profits could triple on memory demand, and HBM pricing has been described as jumping 20 to 30 percent in just weeks. At the same time, data centers are accelerating the shift to liquid cooling because heat is now a limiting factor.
Why Is the AI Race in 2026 Shifting from Model Breakthroughs to Cost per Token and Power per Rack?

Inside Taiwan tracks how AI moved from software hype to physical unit economics. Nvidia framed the next platform around faster training and robotics. AMD pushed on-prem accelerators and rack-scale systems. The real limiter is cost per token, driven by power, memory, and build speed across the Taiwan-centered hardware stack.

Q1. Why is "cost per token" becoming the decisive KPI for AI leaders in 2026?
A1. Because demand is scaling faster than electricity and infrastructure. The competitive advantage is moving to tokens per kilowatt-hour and performance per watt, not just peak FLOPS. Jensen Huang put it plainly: "Every industrial revolution will be energy constrained."

Q2. Why does "power per rack" now determine where AI capacity gets built and how fast?
A2. Data center expansion is increasingly gated by grid approvals and deliverable megawatts. Texas illustrates the speed mismatch: about 375 data centers operating, roughly 70 under construction, and power requests reportedly jumping from 56 GW to 205 GW in one year.

Q3. Why can China gain an AI cost advantage from electricity scale, but still hit structural bottlenecks?
A3. One analysis cited China generating over 10,000 TWh in 2024, more than double U.S. output, translating into a reported 30% cost advantage for some operators. But renewables are often far from eastern demand centers, and transmission constraints can strand cheap power.

Q4. Why is hyperscaler spending amplifying the shift from "better models" to "better infrastructure execution"?
A4. Because the build-out is now measured in factories, racks, and substations. Forecasts show Microsoft, Alphabet, Amazon, and Meta capex rising about 34% to roughly $440B this year. That scale rewards vendors who can ship reliably, not just innovate.

Q5. Why is Taiwan still central even as AI server manufacturing expands into the United States?
A5. Taiwan remains the upstream and midstream engine: advanced nodes, components, and manufacturing know-how. Foxconn reported quarterly revenue up 26.5% to over US$82B, citing AI server rack shipments, while expanding capacity in Wisconsin and Texas for servers aligned with Nvidia’s next platform.

Q6. Why are TSMC throughput, HBM, and memory supply becoming the next chokepoints after GPUs?
A6. Because platform performance is constrained by data movement, not only compute. Leaders have warned of tight semiconductor supply in 2026, and the industry is entering a memory super-cycle where HBM suppliers like SK Hynix and Micron can become gating factors alongside TSMC capacity.
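The Texas "speed mismatch" in A2 can be put in one number; a minimal sketch using the grid figures as reported in the episode notes:

```python
# Reported figures: Texas power requests jumped from 56 GW to 205 GW in one year
requests_prior_gw = 56
requests_latest_gw = 205

# Year-over-year multiple of requested grid capacity
multiple = requests_latest_gw / requests_prior_gw
print(f"Requested capacity grew roughly {multiple:.1f}x in a year")
```

At roughly 3.7x in a single year, requests are far outrunning what any grid can interconnect, which is the episode's point about deliverable megawatts gating buildout.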
Why Is CES 2026 Proving the AI Chip War Will Be Won by Power and Supply Chains?

Inside Taiwan recaps CES and the AI hardware arms race. Nvidia says its Vera Rubin platform is in full production, built on TSMC 3nm and assembled by Foxconn. AMD promises 1,000x performance by 2027. The bottleneck is power, driving $4T in data-center capex and new battery-material demand across the supply chain.

Q1. Why is CES 2026 a turning point for the AI hardware arms race, not a consumer gadget show?
A1. CES is now where chip leaders publish roadmaps for the next AI computing cycle. This year’s announcements shifted the story from pure performance to system-level constraints like power, cooling, memory, and materials.

Q2. Why does Nvidia’s Vera Rubin platform matter for both AI performance and Taiwan’s strategic role in the stack?
A2. Nvidia says Vera Rubin is in full production and that the NVL72 server pairs 72 GPUs with 36 CPUs, using liquid cooling and claiming a 5x AI training lift versus the prior generation. Focus Taiwan reported the platform is an ecosystem of six chips, all made by TSMC on 3-nanometer, with Foxconn assembling servers, anchoring Taiwan across fabrication and manufacturing.

Q3. Why is AI system complexity rising so fast that "a faster chip" is no longer enough?
A3. Jensen Huang said AI models are growing 10x larger every year, which forces a full re-architecture across compute, networking, and data movement. The competitive unit is shifting from a single GPU to an integrated platform that optimizes throughput and performance per watt.

Q4. Why is AMD’s CES strategy credible as a direct challenge to Nvidia in both cloud and on-prem AI?
A4. AMD announced the MI455 for high-end data centers and the MI440X for lower-power deployments, and previewed the MI500 while promising a 1,000-fold AI performance improvement by 2027 with three new GPUs per year. OpenAI co-founder Greg Brockman appeared with Lisa Su and said OpenAI is already using AMD hardware and expects to deploy the MI500 when available.
Why Is the AI Gold Rush Turning Into a Power and Supply Chain Growth Engine in 2026?

In today’s episode, Inside Taiwan explains why AI is shifting from software hype to physical expansion. Samsung targets Galaxy AI on 800 million devices by 2026. Foxconn posted record quarterly revenue of NT$2.6 trillion, up 22% year on year, driven by AI servers and networking gear. The next upside depends on power, land, and cooling capacity.

Q1: Why is “800 million AI devices by 2026” a growth signal, not just a product goal?
A: It implies mass adoption and repeat demand across chips, memory, sensors, connectivity, and edge compute. Scaling AI to hundreds of millions of devices turns AI from a feature into a multi-year hardware and services flywheel.

Q2: Why does Foxconn’s NT$2.6 trillion quarterly revenue matter for opportunity sizing?
A: It is a real-economy indicator that AI infrastructure spend is already converting into orders. A 22% year-on-year increase, powered by AI servers, networking gear, and cloud equipment, suggests broad-based supply chain upside beyond a few chip designers.

Q3: Why is power becoming the next growth constraint and the next growth market?
A: Data centers need electricity at unprecedented scale. Constraints on grid capacity can slow deployment, but they also create investable expansion arenas: grid upgrades, energy storage, high-voltage equipment, efficiency software, and demand management.

Q4: Why are cooling and mechanical infrastructure a breakout category in this cycle?
A: AI compute density drives heat, and heat drives spend. Cooling systems, liquid cooling, racks, cabling, and facility design become “picks and shovels” for the AI era, with recurring upgrade cycles as chips and power envelopes rise.

Q5: Why does “speed” create compounding winners across the supply chain?
A: The companies that shorten lead times for power hookups, site selection, and capacity buildout win share. Execution advantages in integration, procurement, and reliability become differentiators, not just raw compute performance.

Q6: Why is this a chance for Taiwan-centric players to move up the value ladder?
A: When the bottleneck shifts from chips to system delivery, value accrues to integrators and enablers: servers, networking, thermal design, advanced packaging, and manufacturing orchestration. Taiwan’s ecosystem is structurally positioned to capture more of the stack.

Bottom line: the “stress test” is also the growth map. Wherever capacity is constrained, investment and innovation accelerate.

Listen to today’s episode of Inside Taiwan and follow for more signal over noise.
Inside Taiwan 2025: The Year That Changed the Physical Economy and What Comes Next in 2026

In this New Year special, Inside Taiwan reviews how AI became a physical economy in 2025, shaped by chips, energy, capital, and geopolitics. Based on signals across the supply chain, we examine the defining questions of 2025 and present ten predictions that will shape AI infrastructure, markets, and the global economy in 2026.

Q1. Why was 2025 the year AI became physical rather than theoretical?
Because demand hit real-world limits. In 2025, AI growth was constrained by fabs, power grids, and advanced packaging capacity. The market stopped asking what AI could do and started asking how fast infrastructure could be built.

Q2. Was the AI boom in 2025 a bubble or a new industrial revolution?
The supply chain suggests an industrial shift. Nvidia reached a roughly five trillion dollar valuation and TSMC capacity was fully utilized. Analysts noted there were no idle GPUs. The risk was not unused infrastructure but overoptimistic timelines for returns.

Q3. Why did energy become a defining issue for AI in 2025?
AI computing is electricity intensive. OpenAI warned US policymakers that roughly 100 gigawatts of new power per year would be needed to sustain growth. Tech firms began pursuing nuclear and dedicated power deals as grid limits became visible.

Q4. Why did geopolitics reshape the semiconductor supply chain in 2025?
Supply chains split along strategic lines. The US and allies pushed secure chip ecosystems while China accelerated domestic alternatives. This marked the rise of Sovereign AI, where nations seek control over compute, data, and models.

Q5. What are the most important trends to watch in 2026?
Key signals include the start of 2-nanometer production at TSMC, shortages in high-bandwidth memory, wider use of enterprise AI agents, growth of custom AI chips, nuclear-powered data centers, packaging capacity as a bottleneck, robotics adoption, an intense talent war, and a clear ROI reckoning.

Q6. What is the big lesson for the global economy entering 2026?
AI is now a physical industry. It requires steel, power, water, silicon, and decades of trust built into supply chains. The pace has shifted from internet speed to industrial speed: slower but more durable, and capital intensive.

Inside Taiwan will continue tracking these signals in 2026 as this supercycle enters its next phase.

Listen to the full episode of Inside Taiwan to explore the ten trends shaping the AI-driven global economy.
Why Is Capital, Not Chips Alone, Deciding the AI Race in 2026?

Inside Taiwan explains why the AI race is entering a capital-driven phase. Taiwan’s record equity rally, TSMC’s 2-nanometer production, a surge of Asian AI IPOs, shifting US technology controls, and an escalating global talent war show that capital allocation, not fabrication alone, is becoming the decisive force in AI leadership.

Q1. Why does Taiwan’s record stock rally signal a capital-led AI cycle?
The Taiex closed above 28,960 points in 2025, up 25.7 percent, reflecting investor confidence that AI leadership now translates directly into equity value and long-term capital returns.

Q2. Why is TSMC’s 2-nanometer milestone also a capital signal?
The 2-nanometer node delivers 10 to 15 percent higher performance or 25 to 30 percent lower power use, validating massive upfront capex and reinforcing investor belief in sustained returns from advanced manufacturing.

Q3. Why are AI and chip IPOs accelerating in Asian capital markets?
More than six technology firms sought roughly US$2.15 billion in a single week, showing how regional markets are being used to fund AI R&D and scale without relying on US capital.

Q4. Why do US licensing rules matter for AI capital flows?
Annual licenses for advanced equipment reduce supply chain shocks while increasing regulatory uncertainty, forcing memory and logic players to rethink long-term capital allocation and geographic diversification.

Q5. Why is AI talent now one of the largest capital expenditures?
Leading AI firms report average stock-based compensation around US$1.5 million per employee, indicating that talent acquisition has become a balance-sheet decision, not just an HR issue.

Q6. Why will revenue-driving AI models outperform cost-saving AI in 2026?
AI that expands revenue potential attracts capital more efficiently than AI that only cuts costs, shifting investor focus toward business models that multiply growth rather than optimize expenses.

If capital is now the true bottleneck, who will deploy it most intelligently in the AI race?

Listen to the full episode of Inside Taiwan for a capital-first view of the world’s most valuable supply chain.
Why Is the AI Gold Rush Forcing a Global Reckoning on Profits, Power, and Payback?

Inside Taiwan examines how the AI boom is entering a new phase of financial discipline. From trillion-dollar data center bets and rising debt to high-stakes M&A and next-generation chips, this episode explains why investors are shifting from hype to hard questions about cash flow, returns, and control across the global AI supply chain.

Q: Why are investors questioning the AI gold rush now?
A: Hyperscalers added about USD 121 billion in new debt this year, roughly four times the five-year average, according to Yardeni Research. AI infrastructure spending is outrunning near-term cash flow, forcing markets to demand clearer paths to profit.

Q: Why is infrastructure suddenly the focus of AI capital?
A: Companies are racing to secure existing data center capacity instead of waiting years to build. SoftBank is buying infrastructure assets to accelerate deployment, while firms like BlackRock, Microsoft, Blackstone, and Amazon are locking up capacity to control the foundation of AI growth.

Q: What does recent M&A activity say about AI payback pressure?
A: Large deals are no longer moving stock prices. When Meta agreed to acquire Manus for roughly USD 2 to 3 billion, its shares barely reacted. Markets now want evidence of monetization, not just user growth or ambition.

Q: How is Nvidia using capital to defend its position?
A: Nvidia is spending aggressively across layers. It announced a USD 20 billion acquisition to strengthen AI inference and took a USD 5 billion stake in Intel to secure optional future manufacturing capacity beyond Taiwan.

Q: How are governments reshaping the money flow in chips?
A: China now requires at least 50 percent domestic equipment in new fab expansions, accelerating investment into local suppliers. At the same time, the United States is shifting to annual licenses for memory makers like Samsung and SK Hynix, increasing uncertainty and compliance costs.

Q: Why does TSMC still anchor the economics of AI?
A: TSMC confirmed its 2-nanometer fab in Kaohsiung will enter volume production in late 2025. The new nanosheet architecture delivers up to 15 percent performance gains or 30 percent power savings, reinforcing TSMC’s pricing power and its role as the most trusted supplier for advanced AI chips.

The AI revolution is no longer just about bigger models. It is about who controls capital, infrastructure, and margins across the stack. The next winners will be those who can prove returns while scaling power, chips, and distribution at the same time.

Listen to the full episode of Inside Taiwan to understand where the money is moving next in the world’s most valuable supply chain.
Why Is the AI Revolution Rewriting the World From Silicon to Power Grids and Reshaping Global Capital?

Inside Taiwan examines a structural transformation reshaping the global economy. From advanced chip pricing at TSMC to the rise of AI agents and the rebuilding of physical infrastructure, this episode explains why AI is not just software innovation but an end-to-end reconstruction of the industrial stack.

Q: Why is TSMC at the center of the current AI transformation?
A: TSMC’s advanced 3nm and 5nm capacity is nearly fully booked. Reports indicate a series of price increases starting in 2026. This marks a shift from falling chip prices to a new phase where both volume and pricing rise together.

Q: What is driving TSMC’s new pricing power?
A: Demand from AI leaders like Nvidia and AMD has exceeded supply. Access to advanced chips now determines who can compete in next-generation AI development.

Q: What does a major earthquake reveal about the AI supply chain?
A: A 7.0-magnitude quake briefly halted fabrication, but production resumed within hours. This highlights both Taiwan’s resilience and the global risk of concentrated semiconductor manufacturing.

Q: What is the electro-industrial stack?
A: Coined by an a16z investor, it refers to the physical layer powering AI: batteries, power electronics, motors, cooling systems, transmission infrastructure, and robotics that allow software to operate in the real world.

Q: Why are investors shifting away from power generation stocks?
A: Electricity alone is not the bottleneck. Transmission, cooling, backup power, and grid infrastructure are now the critical constraints in scaling AI.

Q: What talent bottleneck is emerging in Taiwan?
A: Government data shows a green-jobs gap of nearly 30,000 roles. About 21 percent are in semiconductor and tech sectors requiring skills in carbon accounting and renewable energy integration.

Q: How is geopolitics complicating AI development?
A: While advanced chips face export restrictions, developers increasingly adopt lower-cost open source AI models. Hardware can be controlled, but software adoption remains fluid.

Q: What is Nvidia’s long-term advantage?
A: Since 2016, Nvidia has improved AI energy efficiency by roughly 10,000 times. Its strategy focuses on flexible architectures that let developers explore new ideas rather than optimizing for a single model.

Q: What does this mean for individuals and organizations?
A: AI lowers the barrier to expertise. Everyone now has access to a personal tutor and execution partner, fundamentally changing how work, learning, and decision making scale.

The AI revolution is not a single breakthrough. It is a full-stack reconstruction: from silicon to software, from grids to talent. The winners will master the entire system, not just the algorithm.

Listen to the full episode of Inside Taiwan to understand how this transformation is unfolding in real time.
Why Is Taiwan the Center of a New AI Power Struggle Between the US and China?

Inside Taiwan this week tracks how the global AI race is reshaping geopolitics, capital flows, and supply chains. From a new US-led silicon alliance to China’s EUV push, surging AI IPOs, record Taiwan exports, and rising climate costs, this episode connects the dots behind the numbers that now define the world’s most critical technology hub.

Q. Why is the US building a new tech alliance around semiconductors?
A. The US and allies launched Pax Silica to secure the full AI supply chain, from minerals to chips to infrastructure, aiming to reduce coercive dependencies and protect compute power as a strategic asset.

Q. Why is Taiwan considered essential to this alliance?
A. US officials called Taiwan essential to the global AI supply chain, noting its dominance in advanced logic chips and packaging that underpin AI compute.

Q. Why is chipmaking equipment entering a new upcycle?
A. Global semiconductor equipment sales are projected to hit 126 billion dollars in 2026 and 135 billion in 2027, reflecting a multi-year AI-driven capacity expansion.

Listen to the full episode of Inside Taiwan to understand how this AI power struggle is reshaping markets, geopolitics, and the future of the supply chain.
Inside Taiwan examines how the AI race is reshaping chips, geopolitics, energy, and corporate returns. From China’s pivot away from compliant GPUs to talent wars, energy deals, and uneven AI profitability, this episode explains why Taiwan remains central to the global AI supply chain.

Q: Why is China discouraging purchases of compliant AI chips?
A: China is prioritizing long-term self-sufficiency. Reports show approvals for compliant accelerators are being restricted while roughly $70 billion is redirected to domestic semiconductor development.

Q: What does this shift mean for global GPU suppliers?
A: It risks losing a market once estimated near $10 billion annually, accelerating the split between US-led and China-led AI hardware ecosystems.

Q: Why are energy companies signing decades-long AI power deals?
A: AI data centers need guaranteed, massive electricity supply. Long-term renewable contracts provide price stability for operators and secure returns for energy producers.

Q: Are companies already making money from AI?
A: Not widely. Surveys indicate only 15 percent see margin improvement, and just 5 percent report significant value today, highlighting the gap between hype and execution.

Listen to the full episode of Inside Taiwan to understand the forces reshaping the world’s most valuable supply chain.



