The Data Center Frontier Show
Author: Endeavor Business Media
Copyright Data Center Frontier LLC © 2019
Description
Welcome to The Data Center Frontier Show podcast, telling the story of the data center industry and its future. Our podcast is hosted by the editors of Data Center Frontier, who are your guide to the ongoing digital transformation, explaining how next-generation technologies are changing our world, and the critical role the data center industry plays in creating this extraordinary future.
194 Episodes
Subzero Engineering is pleased to announce the acquisition of the Dissolvable Air Barrier (DAB) Panels product line from Cambridge R&D, further expanding Subzero’s portfolio of data center containment solutions and reinforcing its commitment to safety, performance, and turnkey system delivery.
DAB Panels are a unique overhead containment solution designed to provide effective airflow separation during normal data center operation while dissolving within seconds when exposed to water during sprinkler activation. This dissolvable design helps eliminate falling panel hazards and supports safer fire suppression outcomes—addressing a critical challenge found in traditional rigid overhead containment systems.
“With this acquisition, we’re strengthening our ability to deliver truly integrated, safety-driven containment solutions,” said Shane Kilfoil, President of Subzero Engineering. “DAB Panels complement our existing containment portfolio and give our customers another proven option to address airflow management and fire safety without compromise.”
DAB Panels are engineered for both hot aisle and cold aisle containment applications and offer a combination of airflow performance, safety, and installation flexibility. Made from EPA-certified, plant-based cellulose materials, the panels achieve Class A fire and smoke performance, producing low heat and minimal smoke while maintaining visibility for emergency personnel.
Despite their dissolvable design, DAB Panels remain durable during normal operation—withstanding high static air pressure and maintaining airflow separation where it matters most. Panels can be easily modified in the field to accommodate varying cabinet heights and existing infrastructure, eliminating the need to relocate sprinkler heads and reducing installation time and cost.
DAB Panels integrate seamlessly across Subzero’s full portfolio of data center containment products, including aisle frames, doors, roofs, and airflow management systems. This unified approach enables Subzero to deliver turnkey containment solutions engineered for performance, safety, and long-term scalability—backed by a single partner and a coordinated system designed to work together.
In this episode of the Data Center Frontier Show, DCF Editor-in-Chief Matt Vincent speaks with Michael Siteman, President of Prodigious Proclivities and a long-time leader and board member within 7x24 Exchange International, about how data center development is being reshaped by AI, power scarcity, network strategy, and community resistance.
Siteman explains how site selection has evolved from a traditional real estate exercise into a far more complex infrastructure challenge.
“The business used to be a pure real estate play,” Siteman says. “Now it’s a systems engineering problem. It’s power, network topology, the real estate itself, and political risk.”
The conversation explores the growing dominance of power in development strategy, including the rapid rise of behind-the-meter generation as utilities struggle to keep pace with demand. Siteman notes that attitudes toward onsite generation have shifted dramatically in just the past few months.
“Six months ago, people would say, ‘If you don’t have grid interconnection, we’re not interested,’” he says. “In the last 30 days, it’s completely different.”
Vincent and Siteman also discuss the balance between network access and power access, the risks of pre-leasing capacity before buildings are completed, and the growing importance of local politics and government relations in getting projects approved.
The episode closes with a look at the widening gap between traditional hyperscale facilities and AI factories, the question of whether AI infrastructure is heading toward a bubble, and the industry’s urgent workforce shortage.
“Data centers don’t run themselves,” Siteman says. “We simply don’t have enough people to build and operate the infrastructure that’s coming.”
This is a grounded, field-level conversation about what is really driving data center development in the AI era, and what the industry will need to solve next.
The AI infrastructure boom is rapidly reshaping how the data center industry thinks about power. What was once a relatively straightforward utility procurement exercise is evolving into a complex strategy spanning onsite generation, fuel logistics, financing, and system architecture.
That reality framed a recent special edition of The Data Center Frontier Show Podcast, which recast and updated a pivotal DCF Trends Summit 2025 session: From Grid to Onsite Powering: Optimizing Energy Behind the Meter for Data Centers.
Moderated by Fengrong Li, Senior Managing Director at FTI Consulting, the panel explored how operators are responding as interconnection timelines stretch and AI workloads surge. Li’s framing emphasized a core shift: onsite power is moving from contingency planning to critical-path infrastructure.
From the OEM perspective, David Blank of Siemens Energy noted that behind-the-meter deployments have accelerated sharply over the past year as developers confront multi-year waits for firm utility capacity.
“Everyone would prefer grid power,” Blank said. “But in many cases, reliable access isn’t available for five, ten, even ten-plus years.”
Panelists agreed that AI’s scale and speed are driving a structural rethink. Brian Gitt of Oklo described the moment as a return to industrial roots, with large loads once again building dedicated generation to meet growth timelines.
At the same time, new technical pressures are emerging. AI clusters can produce sharp load swings, forcing developers to deploy fast-response buffering technologies such as batteries, flywheels, and supercapacitors to maintain stability.
Despite differing technology paths—including gas turbines, hydrogen fuel cells, and advanced nuclear—the panel aligned on one common theme: modularity. Phased power blocks increasingly mirror how AI campuses are actually built and financed.
The discussion also highlighted the growing importance of contract structures. Long-term offtake commitments, capacity reservations, and credit support are increasingly required to unlock equipment queues and fuel supply.
Other panelists included Marty Trivette of AlphaStruxure and Yuval Bachar of ECL. The event was hosted by Data Center Frontier’s Matt Vincent.
The takeaway was clear: in the AI era, energy strategy has moved to the critical path—and for many operators, that path now runs behind the meter.
The data center industry is racing into the AI era with bigger campuses, tighter timelines, and unprecedented infrastructure complexity. But in this episode of The Data Center Frontier Show Podcast, 7x24 Exchange International founding member and Mission Critical Global Alliance (MCGA) board member Dennis Cronin argues the industry’s biggest constraint may be the one it talks about least: people.
Cronin’s message is direct: the “talent cliff” isn’t coming; it’s already here. Based on recent research into open roles, he estimates 467,000 to 498,000 openings in core data center positions (facilities and ops leadership, electrical, generator/UPS, HVAC, controls), plus another ~514,000 emerging roles tied to AI infrastructure, sustainability, and cyber-physical security—bringing the total to roughly one million jobs the industry needs to fill.
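A quick back-of-the-envelope check of those totals (figures as cited in the episode) shows how the “roughly one million” rounding follows directly:

```python
# Workforce figures as cited in the episode (estimates, not official statistics).
core_low, core_high = 467_000, 498_000  # openings in core data center roles
emerging = 514_000                      # ~emerging AI, sustainability, security roles

total_low = core_low + emerging         # lower bound of combined openings
total_high = core_high + emerging       # upper bound of combined openings
print(f"Total openings: {total_low:,} to {total_high:,}")
# → Total openings: 981,000 to 1,012,000
```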
A major driver is what Cronin calls the “five-year experience trap”: employers require five years of experience even for entry-level roles, but newcomers can’t get experience without being hired. The result is widespread talent poaching, with workers jumping from site to site for 10–20% raises without expanding the overall labor pool.
Cronin also highlights a frequently missed reality in public policy debates: the job multiplier effect. While data centers may have lean direct staffing, they support a much larger ecosystem of contractors, service providers, and manufacturers, from generator and UPS technicians to security integrators and the electrical/mechanical supply chain, many of whom are already scrambling to hire.
On training, Cronin explains why company-run programs and commercial training aren’t enough on their own. Internal academies often produce siloed specialists trained for a single operator’s environment, while commercial courses, often ~$1,000 per day per person, are typically designed to upskill people already in the industry, not onboard new entrants.
MCGA’s strategy focuses on community colleges as the most scalable on-ramp: affordable programs, scholarships, and hands-on labs that can produce strong technicians in two-year degrees. Cronin cites programs at Cleveland Community College (NC), Northern Virginia Community College, and Southside Community College (VA), noting that dozens of schools are exploring data center curricula but funding remains a barrier.
Cronin’s proposed solution is a true workforce ecosystem: outreach, standardized curriculum, certification labs, structured apprenticeships, and employer commitments. He also advocates replacing the “five years” requirement with an entry-level certification that proves foundational knowledge: industry acronyms and language, reading one-line diagrams, SOPs/MOPs, and, crucially, safety and situational awareness in electrical and mechanical environments.
Finally, Cronin tackles the money question. With $60B in data centers announced this year, he says the industry needs a major, shared investment across operators, vendors, contractors, and manufacturers to fund training and scholarships at scale. The stakes are operational: in an era of gigawatt AI facilities and shrinking margins for error, workforce readiness is now a mission-critical issue.
In the latest episode of The DCF Show Podcast, Data Center Frontier founder Rich Miller joins current DCF Editor in Chief Matt Vincent and Senior Editor David Chernicoff to examine where the data center industry stands as AI infrastructure moves from announcement to execution.
Miller also discusses his new Data Center Richness podcast and Substack project, which explores how data center professionals consume content and learn about the rapidly evolving industry. With information overload now a reality, Miller’s goal is to distill the most important signals shaping infrastructure decisions.
The conversation then turns to what defines 2026 for data centers: execution. After a year filled with megaproject announcements, the industry now faces the harder task of actually delivering campuses at AI scale—often under severe power constraints.
With utilities struggling to keep pace, on-site generation is shifting from temporary solution to long-term strategy, as developers seek reliable ways to power projects while easing community concerns about grid impacts.
Public resistance has also become a major factor. Miller notes that community opposition is now delaying or halting billions of dollars in projects, forcing operators to rethink how they engage with local stakeholders. Issues like power pricing and water usage are increasingly central to project approval.
On the technology front, Nvidia’s roadmap continues to reshape infrastructure planning, with rack densities rising sharply, liquid cooling becoming standard, and new power distribution models emerging to support AI factories. At the same time, Miller expects the market to stratify, with some operators specializing in AI factories while others serve cloud and enterprise demand.
The discussion also touches on nuclear power’s future role, with data centers positioning themselves as anchor customers, though meaningful SMR deployment remains years away.
Ultimately, Miller argues that the industry is moving faster than ever, and 2026 will reveal how well today’s massive investments translate into real deployments.
As he concludes: the next phase belongs to those who can deliver.
In this installment of Nomads at the Frontier, Data Center Frontier Editor-in-Chief Matt Vincent checks in with Nomad Futurist founders Nabeel Mahmood and Phillip Koblence for on-the-ground reflections from PTC 2026 in Hawaii, and a clear signal that the digital infrastructure market is shifting from hype to delivery.
Mahmood says PTC 2026 reaffirmed the move toward integrated digital infrastructure, with attendance continuing to grow and conversations increasingly translating into real progress. But the defining theme across AI, investment, and deployments was power. As Koblence puts it, “all of those questions are power”—and unlike prior years, the tone has moved from speculative site talk to “show me the money, show me the power,” with real timelines and secured capacity.
The episode digs into the industry’s evolving stance on behind-the-meter generation, which is increasingly treated as the most viable medium-term path to getting online as grid bureaucracy and interconnection delays become the “long pole in the tent.” The discussion also tackles the sustainability tension in that shift: why the industry often kicks the can down the road, what alternative options (fuel cells, hydrogen) may offer, and why nuclear timelines don’t solve the near-term gap.
Mahmood and Koblence also emphasize that the buildout isn’t just a power story; it’s a people and community story. Workforce shortages remain structural and long-lived, and community acceptance is now central to the industry’s “license to build.” Nomad Futurist’s mission, they argue, is becoming a bridge between digital infrastructure and the public, demystifying what the industry is, why it matters, and how the next generation can enter it.
Finally, the conversation pressure-tests the AI boom: Mahmood predicts the “mega-scale AI factory” bubble will burst within three to five years, with growth shifting toward inferencing closer to users, but he still expects the sector to normalize into sustained double-digit expansion. And on Nvidia’s roadmap, both founders call for realism: megawatt racks may be coming, but as Koblence notes, “there are zero facilities” today that can support a 1–1.5 MW rack at scale.
In the latest episode of the Data Center Frontier Show Podcast, Editor in Chief Matt Vincent speaks with Sailesh Krishnamurthy, VP of Engineering for Databases at Google Cloud, about the real challenge facing enterprise AI: connecting powerful models to real-world operational data.
While large language models continue to advance rapidly, many organizations still struggle to combine unstructured data (e.g., documents, images, and logs) with structured operational systems like customer databases and transaction platforms. Krishnamurthy explains how vector search and hybrid database approaches are helping bridge this gap, allowing enterprises to query structured and unstructured data together without creating new silos.
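As an illustrative sketch of the idea (not Google Cloud’s actual API), a hybrid query can be thought of as a structured filter followed by a vector-similarity ranking over document embeddings; the record names and embedding values below are invented:

```python
import math

# Toy records combining structured fields with a precomputed document embedding.
records = [
    {"customer": "acme", "region": "us", "embedding": [0.9, 0.1, 0.0]},
    {"customer": "acme", "region": "eu", "embedding": [0.2, 0.8, 0.1]},
    {"customer": "zeta", "region": "us", "embedding": [0.8, 0.2, 0.1]},
]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def hybrid_query(query_vec, region):
    """Structured predicate first, then rank candidates by vector similarity."""
    candidates = [r for r in records if r["region"] == region]
    return sorted(candidates,
                  key=lambda r: cosine(query_vec, r["embedding"]),
                  reverse=True)

results = hybrid_query([1.0, 0.0, 0.0], region="us")
print([r["customer"] for r in results])  # → ['acme', 'zeta']
```

In a production database the filter and the similarity ranking would run in a single query plan over indexed data; the point of the sketch is only that the two kinds of predicate compose without moving data into a separate silo.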
The conversation highlights a growing shift in mindset: modern data teams must think more like search engineers, optimizing for relevance and usefulness rather than simply exact database results. At the same time, governance and trust are becoming foundational requirements, ensuring AI systems access accurate data while respecting strict security controls.
Operating at Google scale also reinforces the need for reliability, low latency, and correctness, pushing infrastructure toward unified storage layers rather than fragmented systems that add complexity and delay.
Looking toward 2026, Krishnamurthy argues the top priority for CIOs and data leaders is organizing and governing data effectively, because AI systems are only as strong as the data foundations supporting them.
The takeaway: AI success depends not just on smarter models, but on smarter data infrastructure.
🎧 Listen to the full episode to explore how enterprises can operationalize AI at scale.
The data center industry is changing faster than ever. Artificial intelligence, cloud expansion, and high-density workloads are driving record-breaking energy and cooling demands. But behind every megawatt of compute capacity lies an equally critical resource: water.
As data halls evolve from static infrastructure to dynamic, service-driven ecosystems, cooling has emerged as one of the most powerful levers for efficiency, reliability, and sustainability. In this episode, Ecolab explores how Cooling as a Service (CaaS) is transforming data center operations, shifting cooling from a capital expense to a measurable, performance-based service that drives uptime, reliability, and environmental stewardship.
Tune in to hear experts discuss how data centers can future-proof their operations through a smarter, service-oriented approach to thermal management. From proactive analytics to commissioning best practices, this conversation dives into the technologies, partnerships, and business models redefining how cooling is managed and measured across the world’s most advanced digital infrastructure.
Applied Digital CEO Wes Cummins joins Data Center Frontier Editor-in-Chief Matt Vincent to break down what it takes to build AI data centers that can keep pace with Nvidia-era infrastructure demands and actually deliver on schedule.
Cummins explains Applied Digital’s “maximum flexibility” design philosophy, including higher-voltage delivery, mixed density options, and even more floor space to future-proof facilities as power and cooling requirements evolve.
The conversation digs into the execution reality behind the AI boom: long-lead power gear, utility timelines, and the tight MEP supply chain that will cause many projects to slip in 2026–2027.
Cummins outlines how Applied Digital locked in key components 18–24 months ago and scaled from a single 100 MW “field of dreams” building to roughly 700 MW under construction, using fourth-generation designs and extensive off-site MEP assembly—“LEGO brick” skids—to boost speed and reduce on-site labor risk.
On cooling, Cummins pulls back the curtain on operating direct-to-chip liquid cooling at scale in Ellendale, North Dakota, including the extra redundancy layers—pumps, chillers, dual loops, and thermal storage—required to protect GPUs and hit five-nines reliability.
He also discusses aligning infrastructure with Nvidia’s roadmap (from 415V toward 800V and eventually DC), the customer demand surge pushing capacity planning into 2028, and partnerships with ABB and Corintis aimed at next-gen power distribution and liquid cooling performance.
In this episode of the Data Center Frontier Show, Matt Vincent is joined by Liam Weld, Head of Data Centers for Meter, to discuss why connectivity is so often the overlooked piece of data center planning.
AI data centers are no longer just buildings full of racks. They are tightly coupled systems where power, cooling, IT, and operations all depend on each other, and where bad assumptions get expensive fast.
On the latest episode of The Data Center Frontier Show, Editor-in-Chief Matt Vincent talks with Sherman Ikemoto of Cadence about what it now takes to design an “AI factory” that actually works.
Ikemoto explains that data center design has always been fragmented. Servers, cooling, and power are designed by different suppliers, and only at the end does the operator try to integrate everything into one system. That final integration phase has long relied on basic tools and rules of thumb, which is risky in today’s GPU-dense world.
Cadence is addressing this with what it calls “DC elements”: digitally validated building blocks that represent real systems, such as NVIDIA’s DGX SuperPOD with GB200 GPUs. These are not just drawings; they model how systems really behave in terms of power, heat, airflow, and liquid cooling. Operators can assemble these elements in a digital twin and see how an AI factory will actually perform before it is built.
A key shift is designing directly to service-level agreements. Traditional uncertainty forced engineers to add large safety margins, driving up cost and wasting power. With more accurate simulation, designers can shrink those margins while still hitting uptime and performance targets, critical as rack densities move from 10–20 kW to 50–100 kW and beyond.
Cadence validates its digital elements using a star system. The highest level, five stars, requires deep validation and supplier sign-off. The GB200 DGX SuperPOD model reached that level through close collaboration with NVIDIA.
Ikemoto says the biggest bottleneck in AI data center buildouts is not just utilities or equipment; it is knowledge. The industry is moving too fast for old design habits. Physical prototyping is slow and expensive, so virtual prototyping through simulation is becoming essential, much like in aerospace and automotive design.
Cadence’s Reality Digital Twin platform uses a custom CFD engine built specifically for data centers, capable of modeling both air and liquid cooling and how they interact. It supports “extreme co-design,” where power, cooling, IT layout, and operations are designed together rather than in silos. Integration with NVIDIA Omniverse is aimed at letting multiple design tools share data and catch conflicts early.
Digital twins also extend beyond commissioning. Many operators now use them in live operations, connected to monitoring systems. They test upgrades, maintenance, and layout changes in the twin before touching the real facility. Over time, the digital twin becomes the operating platform for the data center.
Running real AI and machine-learning workloads through these models reveals surprises. Some applications create short, sharp power spikes in specific areas. To be safe, facilities often over-provision power by 20–30%, leaving valuable capacity unused most of the time. By linking application behavior to hardware and facility power systems, simulation can reduce that waste, crucial in an era where power is the main bottleneck.
The episode also looks at Cadence’s new billion-cycle power analysis tools, which allow massive chip designs to be profiled with near-real accuracy, feeding better system- and facility-level models.
Cadence and NVIDIA have worked together for decades at the chip level. Now that collaboration has expanded to servers, racks, and entire AI factories. As Ikemoto puts it, the data center is the ultimate system—where everything finally comes together—and it now needs to be designed with the same rigor as the silicon inside it.
AI is reshaping the data center industry faster than any prior wave of demand. Power needs are rising, communities are paying closer attention, and grid timelines are stretching. On the latest episode of The Data Center Frontier Show, Page Haun of Cologix explains what sustainability really looks like in the AI era, and why it has become a core design requirement, not a side initiative.
Haun describes today’s moment as a “perfect storm,” where AI-driven growth meets grid constraints, community scrutiny, and regulatory pressure. The industry is responding through closer collaboration among operators, utilities, and governments, sharing long-term load forecasts and infrastructure plans. But one challenge remains: communication. Data centers still struggle to explain their essential role in the digital economy, from healthcare and education to entertainment and AI services.
Cologix’s Montreal 8 facility, which recently achieved LEED Gold certification, shows how sustainable design is becoming standard practice. The project focused on energy efficiency, water conservation, responsible materials, and reduced waste, lowering both environmental impact and operating costs. Those lessons now shape how Cologix approaches future builds.
High-density AI changes everything inside the building. Liquid cooling is becoming central because it delivers tighter thermal control with better efficiency, but flexibility is the real priority. Facilities must support multiple cooling approaches so they don’t become obsolete as hardware evolves. Water stewardship is just as critical. Cologix uses closed-loop systems that dramatically reduce consumption, achieving an average WUE of 0.203, far below the industry norm.
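For context, WUE (Water Usage Effectiveness) is commonly defined, per The Green Grid metric, as annual site water usage divided by IT equipment energy. The figures below are hypothetical, chosen only to show how a 0.203 L/kWh result would be computed:

```python
# WUE = annual site water usage (liters) / IT equipment energy (kWh)
# The inputs below are illustrative, not Cologix's actual operating data.
annual_water_liters = 2_030_000   # hypothetical annual site water usage
it_energy_kwh = 10_000_000        # hypothetical annual IT equipment energy

wue = annual_water_liters / it_energy_kwh
print(f"WUE = {wue:.3f} L/kWh")   # → WUE = 0.203 L/kWh
```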
Sustainability also starts with where you build. In Canada, Cologix leverages hydropower in Montreal and deep lake water cooling in Toronto. In California, natural air cooling cuts energy use. Where geography doesn’t help, partnerships do. In Ohio, Cologix is deploying onsite fuel cells to operate while new transmission lines are built, covering the full cost so other utility customers aren’t burdened.
Community relationships now shape whether projects move forward. Cologix treats communities as long-term partners, not transactions, by holding town meetings, working with local leaders, and supporting programs like STEM education, food drives, and disaster relief.
Transparency ties it all together. In its 2024 ESG report, Cologix reported 65% carbon-free energy use, strong PUE and WUE performance, and expanded environmental certifications. As AI scales, openness about impact is becoming a competitive advantage.
Haun closed with three non-negotiables for AI-era data centers: flexible power and cooling design, holistic resource management, and a real plan for renewable energy, backed by strong community engagement. In the age of AI, sustainability isn’t a differentiator anymore. It’s the baseline.
In this episode of the Data Center Frontier Show, Matt Vincent, Editor-in-Chief of Data Center Frontier, talks to Axel Bokiba, General Manager of Data Center Cooling for MOOG, about what it takes to deliver liquid cooling reliably at hyperscale.
In this episode of The Data Center Frontier Show, DCF Editor in Chief Matt Vincent speaks with Kevin Ooley, CFO of DataBank, about how the operator is structuring capital to support disciplined growth amid accelerating AI and enterprise demand.
Ooley explains the rationale behind DataBank’s expansion of its development credit facility from $725 million to $1.6 billion, describing it as a strong signal of lender confidence in data centers as long-duration, mission-critical real estate assets.
Central to that strategy is DataBank’s “Devco facility,” a pooled, revolving financing vehicle designed to support multiple projects at different stages of development, from land and site work through construction, leasing, and commissioning.
The conversation explores how DataBank translates capital into concrete expansion across priority U.S. markets, including Northern Virginia, Dallas, and Atlanta, with nearly 20 projects underway through 2025 and 2026. Ooley details how recent deployments, including fully pre-leased capacity, feed a development pipeline supported by both debt and roughly $2 billion in equity raised in late 2024.
Vincent and Ooley also dig into how DataBank balances rapid growth with prudent leverage, managing interest-rate volatility through hedging and refinancing stabilized assets into fixed-rate securitizations.
In the AI era, Ooley emphasizes DataBank’s focus on “NFL cities,” serving enterprise and hyperscale customers that need proximity, reliability, and scale; DataBank delivers the power, buildings, and uptime, while customers source their own GPUs.
The episode closes with a look at DataBank’s long-term sponsorship by DigitalBridge, its deep banking relationships, and the market signals—pricing, absorption, and customer demand—that will ultimately dictate the pace of growth.
DCF Trends Summit 2025 Session Recap
As the data center industry accelerates into an AI-driven expansion cycle, the fundamentals of site selection and investment are being rewritten. In this session from the Data Center Frontier Trends Summit 2025, Ed Socia of datacenterHawk moderated a discussion with Denitza Arguirova of Provident Data Centers, Karen Petersburg of PowerHouse Data Centers, Brian Winterhalter of DLA Piper, Phill Lawson-Shanks of Aligned Data Centers, and Fred Bayles of Cologix on how power scarcity, entitlement complexity, and community scrutiny are reshaping where—and how—data centers get built.
A central theme of the conversation was that power, not land, now drives site selection. Panelists described how traditional assumptions around transmission timelines and flat electricity pricing no longer apply, pushing developers toward Tier 2 and Tier 3 markets, power-first strategies, and closer partnerships with utilities. On-site generation, particularly natural gas, was discussed as a short-term bridge rather than a permanent substitute for grid interconnection.
The group also explored how entitlement processes in mature markets have become more demanding. Economic development benefits alone are no longer sufficient; jurisdictions increasingly expect higher-quality design, sensitivity to surrounding communities, and tangible off-site investments. Panelists emphasized that credibility—earned through experience, transparency, and demonstrated follow-through—has become essential to securing approvals.
Sustainability and ESG considerations remain critical, but the discussion took a pragmatic view of scale. Meeting projected data center demand will require a mix of energy sources, with renewables complemented by transitional solutions and evolving PPA structures. Community engagement was highlighted as equally important, extending beyond environmental metrics to include workforce development, education, and long-term social investment.
Artificial intelligence added another layer of complexity. While large AI training workloads can operate in remote locations, monetized AI applications increasingly demand proximity to users. Rapid hardware cycles, megawatt-scale racks, and liquid-cooling requirements are driving more modular, adaptable designs—often within existing data center portfolios.
The session closed with a look at regional opportunity and investor expectations, with markets such as Pennsylvania, Alabama, Ohio, and Oklahoma cited for their utility relationships and development readiness. The overarching conclusion was clear: the traditional data center blueprint still matters—but power strategy, flexibility, and authentic community integration now define success.
As the data center industry enters the AI era in earnest, incremental upgrades are no longer enough. That was the central message of the Data Center Frontier Trends Summit 2025 session “AI Is the New Normal: Building the AI Factory for Power, Profit, and Scale,” where operators and infrastructure leaders made the case that AI is no longer a specialty workload; it is redefining the data center itself.
Panelists described the AI factory as a new infrastructure archetype: purpose-built, power-intensive, liquid-cooled, and designed for constant change. Rack densities that once hovered in the low teens have now surged past 50 kilowatts and, in some cases, toward megawatt-scale configurations. Facilities designed for yesterday’s assumptions simply cannot keep up.
Ken Patchett of Lambda framed AI factories as inherently multi-density environments, capable of supporting everything from traditional enterprise racks to extreme GPU deployments within the same campus. These facilities are not replacements for conventional data centers, he noted, but essential additions; and they must be designed for rapid iteration as chip architectures evolve every few months.
Wes Cummins of Applied Digital extended the conversation to campus scale and geography. AI demand is pushing developers toward tertiary markets where power is abundant but historically underutilized. Training and inference workloads now require hundreds of megawatts at single sites, delivered in timelines that have shrunk from years to little more than a year. Cost efficiency, ultra-low PUE, and flexible shells are becoming decisive competitive advantages.
Liquid cooling emerged as a foundational requirement rather than an optimization. Patrick Pedroso of Equus Compute Solutions compared the shift to the automotive industry’s move away from air-cooled engines. From rear-door heat exchangers to direct-to-chip and immersion systems, cooling strategies must now accommodate fluctuating AI workloads while enabling energy recovery—even at the edge.
For Kenneth Moreano of Scott Data Center, the AI factory is as much a service model as a physical asset. By abstracting infrastructure complexity and controlling the full stack in-house, his company enables enterprise customers to move from AI experimentation to production at scale, without managing the underlying technical detail.
Across the discussion, panelists agreed that the industry’s traditional design and financing playbook is obsolete. AI infrastructure cannot be treated as a 25-year depreciable asset when hardware cycles move in months. Instead, data centers must be built as adaptable, elemental systems: capable of evolving as power, cooling, and compute requirements continue to shift.
The session concluded with one obvious takeaway: AI is not a future state to prepare for. It is already shaping how data centers are built, where they are located, and how they generate value. The AI factory is no longer theoretical—and the industry is racing to build it fast enough.
As AI workloads push data center infrastructure in both centralized and distributed directions, the industry is rethinking where compute lives, how data moves, and who controls the networks in between. This episode captures highlights from The Distributed Data Frontier: Edge, Interconnection, and the Future of Digital Infrastructure, a panel discussion from the 2025 Data Center Frontier Trends Summit.
Moderated by Scott Bergs of Dark Fiber and Infrastructure, the panel brought together leaders from DartPoints, 1623 Farnam, Duos Edge AI, ValorC3 Data Centers, and 365 Data Centers to examine how edge facilities, interconnection hubs, and regional data centers are adapting to rising power densities, AI inference workloads, and mounting connectivity constraints.
Panelists discussed the rapid shift from legacy 4–6 kW rack designs to environments supporting 20–60 kW and beyond, while noting that many AI inference applications can be deployed effectively at moderate densities when paired with the right connectivity. Hospitals, regional enterprises, and public-sector use cases are emerging as key drivers of distributed AI infrastructure, particularly in tier 3 and tier 4 markets.
The conversation also highlighted connectivity as a defining bottleneck. Permitting delays, middle-mile fiber constraints, and the need for early carrier engagement are increasingly shaping site selection and time-to-market outcomes. As data centers evolve into network-centric platforms, operators are balancing neutrality, fiber ownership, and long-term upgradability to ensure today’s builds remain relevant in a rapidly changing AI landscape.
In this episode of the Data Center Frontier Show, DCF Editor in Chief Matt Vincent speaks with Uptime Institute research analyst Max Smolaks about the infrastructure forces reshaping AI data centers, from power and racks to cooling, economics, and the question of whether the boom is sustainable.
Smolaks unpacks a surprising on-ramp to today’s AI buildout: former cryptocurrency mining operators that “discovered” underutilized pockets of power in nontraditional locations—and are now pivoting into AI campuses as GPU demand strains conventional markets. The conversation then turns to what OCP 2025 revealed about rack-scale AI: heavier, taller, more specialized racks; disaggregated “compute/power/network” rack groupings; and a white space that increasingly looks purpose-built for extreme density.
From there, Vincent and Smolaks explore why liquid cooling is both inevitable and still resisted by many operators—along with the software, digital twins, CFD modeling, and new commissioning approaches emerging to manage the added complexity. On the power side, they discuss the industry’s growing alignment around 800V DC distribution and what it signals about Nvidia’s outsized influence on next-gen data center design.
Finally, the conversation widens into load volatility and the economics of AI infrastructure: why “spiky” AI power profiles are driving changes in UPS systems and rack-level smoothing, and why long-term growth may hinge less on demand (which remains strong) than on whether AI profits broaden beyond a few major buyers—especially as GPU hardware depreciates far faster than the long-lived fiber built during past tech booms.
A sharp, grounded look at the AI factory era—and the engineering and business realities behind the headlines.
In this Data Center Frontier Trends Summit 2025 session—moderated by Stu Dyer (CBRE) with panelists Aad den Elzen (Solar Turbines/Caterpillar), Creede Williams (Exigent Energy Partners), and Adam Michaelis (PointOne Data Centers)—the conversation centered on a hard truth of the AI buildout: power is now the limiting factor, and the grid isn’t keeping pace.
Dyer framed how quickly the market has escalated, from “big” 48MW campuses a decade ago to today’s expectations of 500MW-to-gigawatt-scale capacity. With utility timelines stretched and interconnection uncertainty rising, the panel argued that natural gas has moved from taboo to toolkit—often the fastest route to firm power at meaningful scale.
Williams, speaking from the IPP perspective, emphasized that speed-to-power requires firm fuel and financeable infrastructure, warning that “interruptible” gas or unclear supply economics can undermine both reliability and underwriting. Den Elzen noted that gas is already a proven solution across data center deployments, and in many cases is evolving from a “bridge” to a durable complement to the grid—especially when modular approaches improve resiliency and enable phased buildouts. Michaelis described how operators are building internal “power plant literacy,” hiring specialists and partnering with experienced power developers because data center teams can’t assume they can self-perform generation projects.
The panel also demystified key technology choices—reciprocating engines vs. turbines—as tradeoffs among lead time, footprint, ramp speed, fuel flexibility, efficiency, staffing, and long-term futureproofing. On AI-era operations, the group underscored that extreme load swings can’t be handled by rotating generation alone, requiring system-level design with controls, batteries, capacitors, and close coordination with tenant load profiles.
Audience questions pushed into public policy and perception: rate impacts, permitting, and the long-term mix of gas, grid, and emerging options like SMRs. The panel’s consensus: behind-the-meter generation can help shield ratepayers from grid-upgrade costs, but permitting remains locally driven and politically sensitive—making industry communication and advocacy increasingly important.
Bottom line: in the new data center reality, natural gas is here—often not as a perfect answer, but as the one that matches the industry’s near-term demands for speed, scale, and firm power.
In this episode, we crack open the world of ILA (In-Line Amplifier) huts, the unassuming shelters quietly powering fiber connectivity. Like mini utility substations of the fiber world, these small, secure, and distributed facilities keep internet, voice, and data networks running reliably, especially over long distances or in developing areas. From the analog roots of signal amplification to today’s digital optical technologies, this conversation explores how ILAs are redefining long-haul fiber transport.
We’ll discuss how these compact, often rural, mini data centers are engineered and built to boost light signals across vast distances. But it’s not just about the tech. There are real-world challenges to deploying ILAs, from acquiring land in varied environments to coordinating civil construction at sites often built in isolation. You’ll learn why site selection is as much about geology and permitting as it is about signal loss, and what factors can make or break an ILA deployment.
We also explore the growing role of hyperscalers and colocation providers in driving ILA expansion, adjacent revenue opportunities, and what ILA facilities can mean for the future of rural connectivity.
Tune in to find out how the pulse of long-haul fiber is beating louder than ever.