The Data Center Frontier Show
Author: Endeavor Business Media
Copyright Data Center Frontier LLC © 2019
Description
Welcome to The Data Center Frontier Show podcast, telling the story of the data center industry and its future. Our podcast is hosted by the editors of Data Center Frontier, who are your guide to the ongoing digital transformation, explaining how next-generation technologies are changing our world, and the critical role the data center industry plays in creating this extraordinary future.
185 Episodes
In this episode of the Data Center Frontier Show, Matt Vincent is joined by Liam Weld, Head of Data Centers at Meter, to discuss why connectivity is so often the overlooked piece of data center design.
AI data centers are no longer just buildings full of racks. They are tightly coupled systems where power, cooling, IT, and operations all depend on each other, and where bad assumptions get expensive fast.
On the latest episode of The Data Center Frontier Show, Editor-in-Chief Matt Vincent talks with Sherman Ikemoto of Cadence about what it now takes to design an “AI factory” that actually works.
Ikemoto explains that data center design has always been fragmented. Servers, cooling, and power are designed by different suppliers, and only at the end does the operator try to integrate everything into one system. That final integration phase has long relied on basic tools and rules of thumb, which is risky in today’s GPU-dense world.
Cadence is addressing this with what it calls “DC elements”: digitally validated building blocks that represent real systems, such as NVIDIA’s DGX SuperPOD with GB200 GPUs. These are not just drawings; they model how systems really behave in terms of power, heat, airflow, and liquid cooling. Operators can assemble these elements in a digital twin and see how an AI factory will actually perform before it is built.
A key shift is designing directly to service-level agreements. Traditional uncertainty forced engineers to add large safety margins, driving up cost and wasting power. With more accurate simulation, designers can shrink those margins while still hitting uptime and performance targets, which is critical as rack densities move from 10–20 kW to 50–100 kW and beyond.
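As a back-of-the-envelope illustration of that margin math, here is a minimal Python sketch; the 80 MW hall load and the 30% versus 10% margins are assumed figures for illustration, not numbers from the episode.

# Illustrative margin math for SLA-driven design. All figures are
# hypothetical; only the idea (tighter simulation => smaller margin)
# comes from the episode.

def provisioned_mw(peak_it_load_mw: float, margin: float) -> float:
    """Capacity to build: peak IT load plus a design safety margin."""
    return peak_it_load_mw * (1.0 + margin)

peak_mw = 80.0                                  # assumed AI hall peak load
rule_of_thumb = provisioned_mw(peak_mw, 0.30)   # legacy 30% margin (assumed)
simulated = provisioned_mw(peak_mw, 0.10)       # simulation-backed 10% (assumed)

print(f"Rule-of-thumb design: {rule_of_thumb:.0f} MW")
print(f"Simulation-backed design: {simulated:.0f} MW")
print(f"Capacity freed for compute: {rule_of_thumb - simulated:.0f} MW")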
Cadence validates its digital elements using a star system. The highest level, five stars, requires deep validation and supplier sign-off. The GB200 DGX SuperPOD model reached that level through close collaboration with NVIDIA.
Ikemoto says the biggest bottleneck in AI data center buildouts is not just utilities or equipment; it is knowledge. The industry is moving too fast for old design habits. Physical prototyping is slow and expensive, so virtual prototyping through simulation is becoming essential, much like in aerospace and automotive design.
Cadence’s Reality Digital Twin platform uses a custom CFD engine built specifically for data centers, capable of modeling both air and liquid cooling and how they interact. It supports “extreme co-design,” where power, cooling, IT layout, and operations are designed together rather than in silos. Integration with NVIDIA Omniverse is aimed at letting multiple design tools share data and catch conflicts early.
Digital twins also extend beyond commissioning. Many operators now use them in live operations, connected to monitoring systems. They test upgrades, maintenance, and layout changes in the twin before touching the real facility. Over time, the digital twin becomes the operating platform for the data center.
Running real AI and machine-learning workloads through these models reveals surprises. Some applications create short, sharp power spikes in specific areas. To be safe, facilities often over-provision power by 20–30%, leaving valuable capacity unused most of the time. By linking application behavior to hardware and facility power systems, simulation can reduce that waste, which is crucial in an era when power is the main bottleneck.
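To make the over-provisioning point concrete, here is a hedged sketch with a synthetic load trace; the 60 MW base load, spike magnitude, and blanket 25% buffer are invented for illustration, with only the 20–30% over-provisioning range coming from the episode.

import random

random.seed(42)

# Synthetic one-second load trace: steady training draw with short,
# sharp spikes. Shape and magnitudes are assumed for illustration.
base_mw = 60.0
trace = [base_mw + (25.0 if random.random() < 0.02 else random.uniform(-3.0, 3.0))
         for _ in range(10_000)]

blanket = max(trace) * 1.25                       # worst case plus a 25% buffer
p999 = sorted(trace)[int(0.999 * len(trace))]     # simulated 99.9th percentile

print(f"Blanket provisioning: {blanket:.1f} MW")
print(f"Percentile-based sizing: {p999:.1f} MW")
print(f"Average draw: {sum(trace) / len(trace):.1f} MW")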
The episode also looks at Cadence’s new billion-cycle power analysis tools, which allow massive chip designs to be profiled with near-real accuracy, feeding better system- and facility-level models.
Cadence and NVIDIA have worked together for decades at the chip level. Now that collaboration has expanded to servers, racks, and entire AI factories. As Ikemoto puts it, the data center is the ultimate system—where everything finally comes together—and it now needs to be designed with the same rigor as the silicon inside it.
AI is reshaping the data center industry faster than any prior wave of demand. Power needs are rising, communities are paying closer attention, and grid timelines are stretching. On the latest episode of The Data Center Frontier Show, Page Haun of Cologix explains what sustainability really looks like in the AI era, and why it has become a core design requirement, not a side initiative.
Haun describes today’s moment as a “perfect storm,” where AI-driven growth meets grid constraints, community scrutiny, and regulatory pressure. The industry is responding through closer collaboration among operators, utilities, and governments, sharing long-term load forecasts and infrastructure plans. But one challenge remains: communication. Data centers still struggle to explain their essential role in the digital economy, from healthcare and education to entertainment and AI services.
Cologix’s Montreal 8 facility, which recently achieved LEED Gold certification, shows how sustainable design is becoming standard practice. The project focused on energy efficiency, water conservation, responsible materials, and reduced waste, lowering both environmental impact and operating costs. Those lessons now shape how Cologix approaches future builds.
High-density AI changes everything inside the building. Liquid cooling is becoming central because it delivers tighter thermal control with better efficiency, but flexibility is the real priority. Facilities must support multiple cooling approaches so they don’t become obsolete as hardware evolves. Water stewardship is just as critical. Cologix uses closed-loop systems that dramatically reduce consumption, achieving an average WUE of 0.203, far below the industry norm.
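For readers unfamiliar with the metric, WUE is simply annual site water use divided by IT energy, in liters per kilowatt-hour. The sketch below shows what Cologix's 0.203 figure means in practice; the 50 GWh annual IT load and the 1.8 L/kWh comparison point are assumptions for illustration only.

def annual_water_megaliters(wue_l_per_kwh: float, it_energy_kwh: float) -> float:
    """WUE is site water use (liters) divided by IT energy (kWh)."""
    return wue_l_per_kwh * it_energy_kwh / 1e6

it_kwh = 50_000_000    # hypothetical 50 GWh/year of IT load

# Only the 0.203 figure is from the episode; 1.8 L/kWh is an assumed
# comparison point for conventional evaporative cooling.
for label, wue in (("Cologix closed-loop", 0.203), ("evaporative (assumed)", 1.8)):
    print(f"{label}: {annual_water_megaliters(wue, it_kwh):.0f} ML/year")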
Sustainability also starts with where you build. In Canada, Cologix leverages hydropower in Montreal and deep lake water cooling in Toronto. In California, natural air cooling cuts energy use. Where geography doesn’t help, partnerships do. In Ohio, Cologix is deploying onsite fuel cells to operate while new transmission lines are built, covering the full cost so other utility customers aren’t burdened.
Community relationships now shape whether projects move forward. Cologix treats communities as long-term partners, not transactions, by holding town meetings, working with local leaders, and supporting programs like STEM education, food drives, and disaster relief.
Transparency ties it all together. In its 2024 ESG report, Cologix reported 65% carbon-free energy use, strong PUE and WUE performance, and expanded environmental certifications. As AI scales, openness about impact is becoming a competitive advantage.
Haun closed with three non-negotiables for AI-era data centers: flexible power and cooling design, holistic resource management, and a real plan for renewable energy, backed by strong community engagement. In the age of AI, sustainability isn’t a differentiator anymore. It’s the baseline.
In this episode of the Data Center Frontier Show, Matt Vincent, Editor-in-Chief of Data Center Frontier, talks with Axel Bokiba, General Manager of Data Center Cooling at MOOG, about what it takes to deliver liquid cooling reliably at hyperscale.
In this episode of The Data Center Frontier Show, DCF Editor in Chief Matt Vincent speaks with Kevin Ooley, CFO of DataBank, about how the operator is structuring capital to support disciplined growth amid accelerating AI and enterprise demand.
Ooley explains the rationale behind DataBank’s expansion of its development credit facility from $725 million to $1.6 billion, describing it as a strong signal of lender confidence in data centers as long-duration, mission-critical real estate assets.
Central to that strategy is DataBank’s “Devco facility,” a pooled, revolving financing vehicle designed to support multiple projects at different stages of development, from land and site work through construction, leasing, and commissioning.
The conversation explores how DataBank translates capital into concrete expansion across priority U.S. markets, including Northern Virginia, Dallas, and Atlanta, with nearly 20 projects underway through 2025 and 2026. Ooley details how recent deployments, including fully pre-leased capacity, feed a development pipeline supported by both debt and roughly $2 billion in equity raised in late 2024.
Vincent and Ooley also dig into how DataBank balances rapid growth with prudent leverage, managing interest-rate volatility through hedging and refinancing stabilized assets into fixed-rate securitizations.
In the AI era, Ooley emphasizes DataBank’s focus on “NFL cities,” serving enterprise and hyperscale customers that need proximity, reliability, and scale: DataBank delivers the power, buildings, and uptime, while customers source their own GPUs.
The episode closes with a look at DataBank’s long-term sponsorship by DigitalBridge, its deep banking relationships, and the market signals—pricing, absorption, and customer demand—that will ultimately dictate the pace of growth.
DCF Trends Summit 2025 Session Recap
As the data center industry accelerates into an AI-driven expansion cycle, the fundamentals of site selection and investment are being rewritten. In this session from the Data Center Frontier Trends Summit 2025, Ed Socia of datacenterHawk moderated a discussion with Denitza Arguirova of Provident Data Centers, Karen Petersburg of PowerHouse Data Centers, Brian Winterhalter of DLA Piper, Phill Lawson-Shanks of Aligned Data Centers, and Fred Bayles of Cologix on how power scarcity, entitlement complexity, and community scrutiny are reshaping where—and how—data centers get built.
A central theme of the conversation was that power, not land, now drives site selection. Panelists described how traditional assumptions around transmission timelines and flat electricity pricing no longer apply, pushing developers toward Tier 2 and Tier 3 markets, power-first strategies, and closer partnerships with utilities. On-site generation, particularly natural gas, was discussed as a short-term bridge rather than a permanent substitute for grid interconnection.
The group also explored how entitlement processes in mature markets have become more demanding. Economic development benefits alone are no longer sufficient; jurisdictions increasingly expect higher-quality design, sensitivity to surrounding communities, and tangible off-site investments. Panelists emphasized that credibility—earned through experience, transparency, and demonstrated follow-through—has become essential to securing approvals.
Sustainability and ESG considerations remain critical, but the discussion took a pragmatic view of scale. Meeting projected data center demand will require a mix of energy sources, with renewables complemented by transitional solutions and evolving PPA structures. Community engagement was highlighted as equally important, extending beyond environmental metrics to include workforce development, education, and long-term social investment.
Artificial intelligence added another layer of complexity. While large AI training workloads can operate in remote locations, monetized AI applications increasingly demand proximity to users. Rapid hardware cycles, megawatt-scale racks, and liquid-cooling requirements are driving more modular, adaptable designs—often within existing data center portfolios.
The session closed with a look at regional opportunity and investor expectations, with markets such as Pennsylvania, Alabama, Ohio, and Oklahoma cited for their utility relationships and development readiness. The overarching conclusion was clear: the traditional data center blueprint still matters—but power strategy, flexibility, and authentic community integration now define success.
As the data center industry enters the AI era in earnest, incremental upgrades are no longer enough. That was the central message of the Data Center Frontier Trends Summit 2025 session “AI Is the New Normal: Building the AI Factory for Power, Profit, and Scale,” where operators and infrastructure leaders made the case that AI is no longer a specialty workload; it is redefining the data center itself.
Panelists described the AI factory as a new infrastructure archetype: purpose-built, power-intensive, liquid-cooled, and designed for constant change. Rack densities that once hovered in the low teens have now surged past 50 kilowatts and, in some cases, toward megawatt-scale configurations. Facilities designed for yesterday’s assumptions simply cannot keep up.
Ken Patchett of Lambda framed AI factories as inherently multi-density environments, capable of supporting everything from traditional enterprise racks to extreme GPU deployments within the same campus. These facilities are not replacements for conventional data centers, he noted, but essential additions, and they must be designed for rapid iteration as chip architectures evolve every few months.
Wes Cummins of Applied Digital extended the conversation to campus scale and geography. AI demand is pushing developers toward tertiary markets where power is abundant but historically underutilized. Training and inference workloads now require hundreds of megawatts at single sites, delivered in timelines that have shrunk from years to little more than a year. Cost efficiency, ultra-low PUE, and flexible shells are becoming decisive competitive advantages.
Liquid cooling emerged as a foundational requirement rather than an optimization. Patrick Pedroso of Equus Compute Solutions compared the shift to the automotive industry’s move away from air-cooled engines. From rear-door heat exchangers to direct-to-chip and immersion systems, cooling strategies must now accommodate fluctuating AI workloads while enabling energy recovery—even at the edge.
For Kenneth Moreano of Scott Data Center, the AI factory is as much a service model as a physical asset. By abstracting infrastructure complexity and controlling the full stack in-house, his company enables enterprise customers to move from AI experimentation to production at scale, without managing the underlying technical detail.
Across the discussion, panelists agreed that the industry’s traditional design and financing playbook is obsolete. AI infrastructure cannot be treated as a 25-year depreciable asset when hardware cycles move in months. Instead, data centers must be built as adaptable, elemental systems, capable of evolving as power, cooling, and compute requirements continue to shift.
The session concluded with one obvious takeaway: AI is not a future state to prepare for. It is already shaping how data centers are built, where they are located, and how they generate value. The AI factory is no longer theoretical—and the industry is racing to build it fast enough.
As AI workloads push data center infrastructure in both centralized and distributed directions, the industry is rethinking where compute lives, how data moves, and who controls the networks in between. This episode captures highlights from The Distributed Data Frontier: Edge, Interconnection, and the Future of Digital Infrastructure, a panel discussion from the 2025 Data Center Frontier Trends Summit.
Moderated by Scott Bergs of Dark Fiber and Infrastructure, the panel brought together leaders from DartPoints, 1623 Farnam, Duos Edge AI, ValorC3 Data Centers, and 365 Data Centers to examine how edge facilities, interconnection hubs, and regional data centers are adapting to rising power densities, AI inference workloads, and mounting connectivity constraints.
Panelists discussed the rapid shift from legacy 4–6 kW rack designs to environments supporting 20–60 kW and beyond, while noting that many AI inference applications can be deployed effectively at moderate densities when paired with the right connectivity. Hospitals, regional enterprises, and public-sector use cases are emerging as key drivers of distributed AI infrastructure, particularly in tier 3 and tier 4 markets.
The conversation also highlighted connectivity as a defining bottleneck. Permitting delays, middle-mile fiber constraints, and the need for early carrier engagement are increasingly shaping site selection and time-to-market outcomes. As data centers evolve into network-centric platforms, operators are balancing neutrality, fiber ownership, and long-term upgradability to ensure today’s builds remain relevant in a rapidly changing AI landscape.
In this episode of the Data Center Frontier Show, DCF Editor in Chief Matt Vincent speaks with Uptime Institute research analyst Max Smolaks about the infrastructure forces reshaping AI data centers, from power and racks to cooling, economics, and the question of whether the boom is sustainable.
Smolaks unpacks a surprising on-ramp to today’s AI buildout: former cryptocurrency mining operators that “discovered” underutilized pockets of power in nontraditional locations—and are now pivoting into AI campuses as GPU demand strains conventional markets. The conversation then turns to what OCP 2025 revealed about rack-scale AI: heavier, taller, more specialized racks; disaggregated “compute/power/network” rack groupings; and a white space that increasingly looks purpose-built for extreme density.
From there, Vincent and Smolaks explore why liquid cooling is both inevitable and still resisted by many operators—along with the software, digital twins, CFD modeling, and new commissioning approaches emerging to manage the added complexity. On the power side, they discuss the industry’s growing alignment around 800V DC distribution and what it signals about Nvidia’s outsized influence on next-gen data center design.
Finally, the conversation widens into load volatility and the economics of AI infrastructure: why “spiky” AI power profiles are driving changes in UPS systems and rack-level smoothing, and why long-term growth may hinge less on demand (which remains strong) than on whether AI profits broaden beyond a few major buyers—especially as GPU hardware depreciates far faster than the long-lived fiber built during past tech booms.
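As a rough illustration of the rack-level smoothing idea, the sketch below clamps what the meter sees to a grid cap while a small battery absorbs the spikes; all parameters (load shape, 60 kW cap, 0.5 kWh buffer) are hypothetical, not drawn from the episode.

def smooth(load_kw, grid_cap_kw, batt_kwh, step_h=1 / 3600):
    """Per-step grid draw: the battery shaves spikes above the cap and
    recharges, up to the cap, whenever the load dips below it."""
    soc = batt_kwh / 2                      # start half charged
    meter = []
    for p in load_kw:
        if p > grid_cap_kw and soc > 0:
            shave = min(p - grid_cap_kw, soc / step_h)   # kW the battery covers
            soc -= shave * step_h
            meter.append(p - shave)
        else:
            headroom = max(0.0, grid_cap_kw - p)
            charge = min(headroom, (batt_kwh - soc) / step_h)
            soc += charge * step_h
            meter.append(p + charge)
    return meter

# 1 Hz trace: 5-second 100 kW bursts every minute over a 40 kW base.
spiky = [100.0 if i % 60 < 5 else 40.0 for i in range(600)]
meter = smooth(spiky, grid_cap_kw=60.0, batt_kwh=0.5)
print(f"Raw peak: {max(spiky):.0f} kW, peak at the meter: {max(meter):.0f} kW")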
A sharp, grounded look at the AI factory era—and the engineering and business realities behind the headlines.
In this Data Center Frontier Trends Summit 2025 session—moderated by Stu Dyer (CBRE) with panelists Aad den Elzen (Solar Turbines/Caterpillar), Creede Williams (Exigent Energy Partners), and Adam Michaelis (PointOne Data Centers)—the conversation centered on a hard truth of the AI buildout: power is now the limiting factor, and the grid isn’t keeping pace.
Dyer framed how quickly the market has escalated, from “big” 48 MW campuses a decade ago to today’s expectations of 500 MW-to-gigawatt-scale capacity. With utility timelines stretched and interconnection uncertainty rising, the panel argued that natural gas has moved from taboo to toolkit—often the fastest route to firm power at meaningful scale.
Williams, speaking from the IPP perspective, emphasized that speed-to-power requires firm fuel and financeable infrastructure, warning that “interruptible” gas or unclear supply economics can undermine both reliability and underwriting. Den Elzen noted that gas is already a proven solution across data center deployments, and in many cases is evolving from a “bridge” to a durable complement to the grid—especially when modular approaches improve resiliency and enable phased buildouts. Michaelis described how operators are building internal “power plant literacy,” hiring specialists and partnering with experienced power developers because data center teams can’t assume they can self-perform generation projects.
The panel also “de-mystified” key technology choices—reciprocating engines vs. turbines—as tradeoffs among lead time, footprint, ramp speed, fuel flexibility, efficiency, staffing, and long-term futureproofing. On AI-era operations, the group underscored that extreme load swings can’t be handled by rotating generation alone, requiring system-level design with controls, batteries, capacitors, and close coordination with tenant load profiles.
Audience questions pushed into public policy and perception: rate impacts, permitting, and the long-term mix of gas, grid, and emerging options like SMRs. The panel’s consensus: behind-the-meter generation can help shield ratepayers from grid-upgrade costs, but permitting remains locally driven and politically sensitive—making industry communication and advocacy increasingly important.
Bottom line: in the new data center reality, natural gas is here—often not as a perfect answer, but as the one that matches the industry’s near-term demands for speed, scale, and firm power.
In this episode, we crack open the world of ILA (In-Line Amplifier) huts, the unassuming shelters quietly powering fiber connectivity. Like mini utility substations of the fiber world, these small, secure, and distributed facilities keep internet, voice, and data networks running reliably, especially over long distances or in developing areas. From the analog roots of signal amplification to today’s digital optical technologies, this conversation explores how ILAs are redefining long-haul fiber transport.
We’ll discuss how these compact, often rural, mini data centers are engineered and built to boost light signals across vast distances. But it’s not just about the tech. There are real-world challenges to deploying ILAs, from acquiring land in varied environments to coordinating civil construction at sites that are often built in isolation. You’ll learn why site selection is as much about geology and permitting as it is about signal loss, and what factors can make or break an ILA deployment.
We also explore the growing role of hyperscalers and colocation providers in driving ILA expansion, adjacent revenue opportunities, and what ILA facilities can mean for the future of rural connectivity.
Tune in to find out how the pulse of long-haul fiber is beating louder than ever.
In this panel session from the 2025 Data Center Frontier Trends Summit (Aug. 26-28) in Reston, Va., JLL’s Sean Farney moderates a high-energy panel on how the industry is fast-tracking AI capacity in a world of power constraints, grid delays, and record-low vacancy.
Under the banner “Scaling AI: The Role of Adaptive Reuse and Power-Rich Sites in GPU Deployment,” the discussion dives into why U.S. colocation vacancy is hovering near 2%, how power has become the ultimate limiter on AI revenue, and what it really takes to stand up GPU-heavy infrastructure at speed.
Schneider Electric’s Lovisa Tedestedt, Aligned Data Centers’ Phill Lawson-Shanks, and Sapphire Gas Solutions’ Scott Johns unpack the real-world strategies they’re deploying today—from adaptive reuse of industrial sites and factory-built modular systems, to behind-the-fence natural gas, microgrids, and emerging hydrogen and RNG pathways. Along the way, they explore the coming “AI inference edge,” the rebirth of the enterprise data center, and how AI is already being used to optimize data center design and operations.
During this talk, you’ll learn:
* Why record-low vacancy and long interconnection queues are reshaping AI deployment strategy.
* How adaptive reuse of legacy industrial and commercial real estate can unlock gigawatt-scale capacity and community benefits.
* The growing role of liquid cooling, modular skids, and grid-to-chip efficiency in getting more power to GPUs.
* How behind-the-meter gas, virtual pipelines, and microgrids are bridging multi-year grid delays.
* Why many experts expect a renaissance of enterprise data centers for AI inference at the edge.
Moderator:
Sean Farney, VP, Data Centers, Jones Lang LaSalle (JLL)
Panelists:
Tony Grayson, General Manager, Northstar
Lovisa Tedestedt, Strategic Account Executive – Cloud & Service Providers, Schneider Electric
Phill Lawson-Shanks, Chief Innovation Officer, Aligned Data Centers
Scott Johns, Chief Commercial Officer, Sapphire Gas Solutions
Recorded live at the 2025 Data Center Frontier Trends Summit in Reston, VA, this panel brings together leading voices from the utility, IPP, and data center worlds to tackle one of the defining issues of the AI era: power.
Moderated by Buddy Rizer, Executive Director of Economic Development for Loudoun County, the session features:
Jeff Barber, VP Global Data Centers, Bloom Energy
Bob Kinscherf, VP National Accounts, Constellation
Stan Blackwell, Director, Data Center Practice, Dominion Energy
Joel Jansen, SVP Regulated Commercial Operations, American Electric Power
David McCall, VP of Innovation, QTS Data Centers
Together they explore how hyperscale and AI workloads are stressing today’s grid, why transmission has become the critical bottleneck, and how on-site and behind-the-meter solutions are evolving from “bridge power” into strategic infrastructure.
The panel dives into the role of gas-fired generation and fuel cells, emerging options like SMRs and geothermal, the realities of demand response and curtailment, and what it will take to recruit the next generation of engineers into this rapidly changing ecosystem.
If you want a grounded, candid look at how energy providers and data center operators are working together to unlock new capacity for AI campuses, this conversation is a must-listen.
Live from the Data Center Frontier Trends Summit 2025 – Reston, VA
In this episode, we bring you a featured panel from the Data Center Frontier Trends Summit 2025 (Aug. 26-28), sponsored by Schneider Electric. DCF Editor in Chief Matt Vincent moderates a fast-paced, highly practical conversation on what “AI for good” really looks like inside the modern data center—both in how we build for AI workloads and how we use AI to run facilities more intelligently.
Expert panelists included:
Steve Carlini, VP, Innovation and Data Center Energy Management Business, Schneider Electric
Sudhir Kalra, Chief Data Center Operations Officer, Compass Datacenters
Andrew Whitmore, VP of Sales, Motivair
Together they unpack:
How AI is driving unprecedented scale—from megawatt data halls to gigawatt AI “factories” and 100–600 kW rack roadmaps
What Schneider and NVIDIA are learning from real-world testing of Blackwell and NVL72-class reference designs
Why liquid cooling is no longer optional for high-density AI, and how to retrofit thousands of brownfield, air-cooled sites
How Compass is using AI, predictive analytics, and condition-based maintenance to cut manual interventions and OPEX
The shift from “constructing” to assembling data centers via modular, prefab approaches
The role of AI in grid-aware operations, energy storage, and more sustainable build and operations practices
Where power architectures, 800V DC, and industry standards will take us over the next five years
If you want a grounded, operator-level view into how AI is reshaping data center design, cooling, power, and operations—beyond the hype—this DCF Trends Summit session is a must-listen.
On this episode of The Data Center Frontier Show, Editor in Chief Matt Vincent sits down with Rob Campbell, President of Flex Communications, Enterprise & Cloud, and Chris Butler, President of Flex Power, to unpack Flex’s bold new integrated data center platform as unveiled at the 2025 OCP Global Summit.
Flex says the AI era has broken traditional data center models, pushing power, cooling, and compute to the point where they can no longer be engineered separately. Their answer is a globally manufactured, pre-engineered platform that unifies these components into modular pods and skids, designed to cut deployment timelines by up to 30 percent and support gigawatt-scale AI campuses.
Rob and Chris explain how Flex is blending JetCool’s chip-level liquid cooling with scalable rack-level CDUs; how higher-voltage DC architectures (400V today, 800V next) will reshape power delivery; and why Flex’s 110-site global manufacturing footprint gives it a unique advantage in speed and resilience.
They also explore Flex’s lifecycle intelligence strategy, the company’s circular-economy approach to modular design, and their view of the “data center of 2030”—a landscape defined by converged power and IT, liquid cooling as default, and modular units capable of being deployed in 30–60 days.
It’s a deep look at how one of the world’s largest manufacturers plans to redefine AI-scale infrastructure.
Artificial intelligence is completely changing how data centers are built and operated. What used to be relatively stable IT environments are now turning into massive power ecosystems. The main reason is simple — AI workloads need far more computing power, and that means far more energy.
We’re already seeing a sharp rise in total power consumption across the industry, but what’s even more striking is how much power is packed into each rack. Not long ago, most racks were designed for 5 to 15 kilowatts. Today, AI-heavy setups are hitting 50 to 70 kW, and the next generation could reach up to 1 megawatt per rack. That’s a huge jump — and it’s forcing everyone in the industry to rethink power delivery, cooling, and overall site design.
At those levels, traditional AC power distribution starts to reach its limits. That’s why many experts are already discussing a move toward high-voltage DC systems, possibly around 800 volts. DC systems can reduce conversion losses and handle higher densities more efficiently, which makes them a serious option for the future.
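The loss argument is straightforward: for the same delivered power, current falls as voltage rises, and conduction loss falls with the square of the current. The sketch below treats both feeds as simple DC paths for comparison; the 120 kW rack and 5 milliohm feed resistance are assumed values, not figures from this discussion.

def conduction_loss_kw(power_kw: float, volts: float, r_ohms: float) -> float:
    """I = P / V, then loss = I^2 * R (treated as a simple DC feed)."""
    amps = power_kw * 1000.0 / volts
    return amps ** 2 * r_ohms / 1000.0

rack_kw, feed_r = 120.0, 0.005      # assumed 120 kW rack, 5 mOhm feed path
for volts in (415, 800):
    loss = conduction_loss_kw(rack_kw, volts, feed_r)
    print(f"{volts} V: {loss:.2f} kW lost ({100.0 * loss / rack_kw:.2f}%)")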
But with all this growth comes a big question: how do we stay responsible? Data centers are quickly becoming some of the largest power users on the planet. Society is starting to pay attention, and communities near these sites are asking fair questions — where will all this power come from, and how will it affect the grid or the environment? Building ever-bigger data centers isn’t enough; we need to make sure they’re sustainable and accepted by the public.
The next challenge is feasibility. Supplying hundreds of megawatts to a single facility is no small task. In many regions, grid capacity is already stretched, and new connections take years to approve. Add the unpredictable nature of AI power spikes, and you’ve got a real engineering and planning problem on your hands. The only realistic path forward is to make data centers more flexible — to let them pull energy from different sources, balance loads dynamically, and even generate some of their own power on-site.
That’s where ComAp’s systems come in. We help data center operators manage this complexity by making it simple to connect and control multiple energy sources — from renewables like solar or wind, to backup generators, to grid-scale connections. Our control systems allow operators to build hybrid setups that can adapt in real time, reduce emissions, and still keep reliability at 100%.
Just as importantly, ComAp helps with the grid integration side. When a single data center can draw as much power as a small city, it’s no longer just a “consumer” — it becomes part of the grid ecosystem. Our technology helps make that relationship smoother, allowing these large sites to interact intelligently with utilities and maintain overall grid stability.
And while today’s discussion is mostly around AC power, ComAp is already prepared for the DC future. The same principles and reliability that have powered AC systems for decades will carry over to DC-based data centers. We’ve built our solutions to be flexible enough for that transition — so operators don’t have to wait for the technology to catch up.
In short, AI is driving a complete rethink of how data centers are powered. The demand and density will keep rising, and the pressure to stay responsible and sustainable will only grow stronger. The operators who succeed will be those who find smart ways to integrate different energy sources, keep efficiency high, and plan for the next generation of infrastructure.
That’s the space where ComAp is making a real difference.
In this episode of the DCF Show podcast, Data Center Frontier Editor in Chief Matt Vincent sits down with Bill Severn, CEO of 1623 Farnam, to explore how the Omaha carrier hotel is becoming a critical aggregation hub for AI, cloud, and regional edge growth. A featured speaker on The Distributed Data Frontier panel at the 2025 DCF Trends Summit, Severn frames the edge not as a location but as the convergence of eyeballs, network density, and content—a definition that underpins Farnam’s strategy and rise in the Midwest.
Since acquiring the facility in 2018, 1623 Farnam has transformed an underappreciated office tower on the 41st parallel into a thriving interconnection nexus with more than 40 broadband providers, 60+ carriers, and growing hyperscale presence. The AI era is accelerating that momentum: over 5,000 new fiber strands are being added into the building, with another 5,000 strands expanding Meet-Me Room capacity in 2025 alone. Severn remains bullish on interconnection for the next several years as hyperscalers plan deployments out to 2029 and beyond.
The conversation also dives into multi-cloud routing needs across the region—where enterprises increasingly rely on Farnam for direct access to Google Central, Microsoft ExpressRoute, and global application-specific cloud regions. Energy efficiency has become a meaningful differentiator as well, with the facility operating below a 1.5 PUE, thanks to renewable chilled water, closed-loop cooling, and extensive free cooling cycles.
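For context, PUE is total facility energy divided by IT energy, so operating below 1.5 means less than 50% overhead on top of the IT load. A minimal sketch with hypothetical monthly figures; only the 1.5 threshold comes from the episode.

def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """PUE: total facility energy divided by IT equipment energy."""
    return total_facility_kwh / it_kwh

it_kwh = 2_000_000                       # assumed monthly IT energy
for overhead in (0.45, 0.90):            # assumed cooling + power-train overhead
    total = it_kwh * (1.0 + overhead)
    print(f"{overhead:.0%} overhead -> PUE {pue(total, it_kwh):.2f}")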
Severn highlights a growing emphasis on strategic content partnerships that help CDNs and providers justify regional expansion, pointing to past co-investments that rapidly scaled traffic from 100G to more than 600 Gbps. Meanwhile, AI deployments are already arriving at pace, requiring collaborative engineering to accommodate cabinet weight, elevator limitations, and 40–50 kW rack densities within a non–purpose-built structure.
As AI adoption accelerates and interconnection demand surges across the heartland, 1623 Farnam is positioning itself as one of the Midwest’s most important digital crossroads—linking hyperscale backbones, cloud onramps, and emerging AI inference clusters into a cohesive regional edge.
In this episode, Matt Vincent, Editor in Chief at Data Center Frontier, is joined by Rob Macchi, Vice President of Data Center Solutions at Wesco, to explore how companies can stay ahead of the curve with smarter, more resilient construction strategies. From site selection to integrating emerging technologies, Wesco helps organizations build data centers that are not only efficient but future-ready. Listen now to learn more!
In this episode of the Data Center Frontier Show, we sit down with Ryan Mallory, the newly appointed CEO of Flexential, following a coordinated October leadership transition from Chris Downie.
Mallory outlines Flexential's strategic focus on the AI-driven future, positioning the company at the critical "inference edge" where enterprise CPU meets AI GPU. He breaks down the AI infrastructure boom into a clear three-stage build cycle and explains why the enterprise "killer app"—Agentic AI—plays directly into Flexential's strengths in interconnection and multi-tenant solutions.
We also dive into:
Power Strategy: How Flexential's modular, 36–72 MW build strategy avoids community strain and wins utility favor.
Product Roadmap: The evolution to Gen 5 and Gen 6 data centers, blending air and liquid cooling for mixed-density AI workloads.
The Bold Bet: Mallory's vision for the next 2–3 years, which involves "bending the physics curve" with geospatial energy and transmission to overcome terrestrial limits.
Tune in for an insightful conversation on power, planning, and the future of data center infrastructure.
On this episode of the Data Center Frontier Show, DartPoints CEO Scott Willis joins Editor in Chief Matt Vincent to discuss why regional data centers are becoming central to the future of AI and digital infrastructure. Fresh off his appearance on the Distributed Edge panel at the 2025 DCF Trends Summit, Willis breaks down how DartPoints is positioning itself in non-tier-one markets across the Midwest, Southeast, and South Central regions—locations he believes will play an increasingly critical role as AI workloads move closer to users.
Willis explains that DartPoints’ strategy hinges on a deeply interconnected regional footprint built around carrier-rich facilities and strong fiber connectivity. This fabric is already supporting latency-sensitive workloads such as AI inference and specialized healthcare applications, and Willis expects that demand to accelerate as enterprises seek performance closer to population centers.
Following a recent recapitalization with NOVA Infrastructure and Orion Infrastructure Capital, DartPoints has launched four new expansion sites designed from the ground up for higher-density, AI-oriented workloads. These facilities target rack densities from 30 kW to 120 kW and are sized in the 10–50 MW range—large enough for meaningful HPC and AI deployments but nimble enough to move faster than hyperscale builds constrained by long power queues.
Speed to market is a defining advantage for DartPoints. Willis emphasizes the company’s focus on brownfield opportunities where utility infrastructure already exists, reducing deployment timelines dramatically. For cooling, DartPoints is designing flexible environments that leverage advanced air systems for 30–40 kW racks and liquid cooling for higher densities, ensuring the ability to support the full spectrum of enterprise, HPC, and edge-adjacent AI needs.
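A simple way to picture that design rule is as a density threshold; the sketch below encodes the rough cutoffs described here, where the exact boundary and cooling labels are illustrative, not DartPoints specifications.

def cooling_for(rack_kw: float) -> str:
    """Rough density cutoff: advanced air to ~40 kW per rack, liquid above."""
    return "advanced air" if rack_kw <= 40 else "liquid (direct-to-chip or rear-door)"

for density in (15, 35, 60, 120):
    print(f"{density:>4} kW rack -> {cooling_for(density)}")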
Willis also highlights the importance of community partnership. DartPoints’ facilities have smaller footprints and lower power impact than hyperscale campuses, allowing the company to serve as a local economic catalyst while minimizing noise and aesthetic concerns.
Looking ahead to 2026, Willis sees the industry entering a phase where AI demand becomes broader and more distributed, making regional markets indispensable. DartPoints plans to continue expanding through organic growth and targeted M&A while maintaining its focus on interconnection, high-density readiness, and rapid, community-aligned deployment.
Tune in to hear how DartPoints is shaping the next chapter of distributed digital infrastructure—and why the market is finally moving toward the regional edge model Willis has championed.





