Artificial intelligence is completely changing how data centers are built and operated. What used to be relatively stable IT environments are now turning into massive power ecosystems. The main reason is simple — AI workloads need far more computing power, and that means far more energy. We’re already seeing a sharp rise in total power consumption across the industry, but what’s even more striking is how much power is packed into each rack. Not long ago, most racks were designed for 5 to 15 kilowatts. Today, AI-heavy setups are hitting 50 to 70 kW, and the next generation could reach up to 1 megawatt per rack. That’s a huge jump — and it’s forcing everyone in the industry to rethink power delivery, cooling, and overall site design.

At those levels, traditional AC power distribution starts to reach its limits. That’s why many experts are already discussing a move toward high-voltage DC systems, possibly around 800 volts. DC systems can reduce conversion losses and handle higher densities more efficiently, which makes them a serious option for the future.

But with all this growth comes a big question: how do we stay responsible? Data centers are quickly becoming some of the largest power users on the planet. Society is starting to pay attention, and communities near these sites are asking fair questions — where will all this power come from, and how will it affect the grid or the environment? Building ever-bigger data centers isn’t enough; we need to make sure they’re sustainable and accepted by the public.

The next challenge is feasibility. Supplying hundreds of megawatts to a single facility is no small task. In many regions, grid capacity is already stretched, and new connections take years to approve. Add the unpredictable nature of AI power spikes, and you’ve got a real engineering and planning problem on your hands.
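The scale of that density jump is easy to quantify. Here is a rough, illustrative sketch (the 10 MW IT load is a hypothetical round number, and the per-rack densities are picked from the ranges above):

```python
import math

def racks_needed(total_it_load_kw: float, rack_density_kw: float) -> int:
    """Minimum number of whole racks required to host a given IT load."""
    return math.ceil(total_it_load_kw / rack_density_kw)

TOTAL_LOAD_KW = 10_000  # hypothetical 10 MW IT load

# Representative densities from the ranges discussed above
for label, density_kw in [("legacy", 10), ("AI today", 60), ("next-gen", 1_000)]:
    count = racks_needed(TOTAL_LOAD_KW, density_kw)
    print(f"{label:>8}: {count:>5} racks at {density_kw} kW/rack")
```

The same 10 MW that once filled a thousand legacy racks fits in well under two hundred AI racks today, and in just ten racks at megawatt density — which is exactly why power delivery and cooling, not floor space, become the binding constraints.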
The only realistic path forward is to make data centers more flexible — to let them pull energy from different sources, balance loads dynamically, and even generate some of their own power on-site. That’s where ComAp’s systems come in. We help data center operators manage this complexity by making it simple to connect and control multiple energy sources — from renewables like solar or wind, to backup generators, to grid-scale connections. Our control systems allow operators to build hybrid setups that can adapt in real time, reduce emissions, and still keep reliability at 100%.

Just as importantly, ComAp helps with the grid integration side. When a single data center can draw as much power as a small city, it’s no longer just a “consumer” — it becomes part of the grid ecosystem. Our technology helps make that relationship smoother, allowing these large sites to interact intelligently with utilities and maintain overall grid stability.

And while today’s discussion is mostly around AC power, ComAp is ready for the DC future. The same principles and reliability that have powered AC systems for decades will carry over to DC-based data centers. We’ve built our solutions to be flexible enough for that transition — so operators don’t have to wait for the technology to catch up.

In short, AI is driving a complete rethink of how data centers are powered. The demand and density will keep rising, and the pressure to stay responsible and sustainable will only grow stronger. The operators who succeed will be those who find smart ways to integrate different energy sources, keep efficiency high, and plan for the next generation of infrastructure. That’s the space where ComAp is making a real difference.
In this episode of the DCF Show podcast, Data Center Frontier Editor in Chief Matt Vincent sits down with Bill Severn, CEO of 1623 Farnam, to explore how the Omaha carrier hotel is becoming a critical aggregation hub for AI, cloud, and regional edge growth. A featured speaker on The Distributed Data Frontier panel at the 2025 DCF Trends Summit, Severn frames the edge not as a location but as the convergence of eyeballs, network density, and content—a definition that underpins Farnam’s strategy and rise in the Midwest. Since acquiring the facility in 2018, 1623 Farnam has transformed an underappreciated office tower on the 41st parallel into a thriving interconnection nexus with more than 40 broadband providers, 60+ carriers, and growing hyperscale presence. The AI era is accelerating that momentum: over 5,000 new fiber strands are being added into the building, with another 5,000 strands expanding Meet-Me Room capacity in 2025 alone. Severn remains bullish on interconnection for the next several years as hyperscalers plan deployments out to 2029 and beyond. The conversation also dives into multi-cloud routing needs across the region—where enterprises increasingly rely on Farnam for direct access to Google Central, Microsoft ExpressRoute, and global application-specific cloud regions. Energy efficiency has become a meaningful differentiator as well, with the facility operating below a 1.5 PUE, thanks to renewable chilled water, closed-loop cooling, and extensive free cooling cycles. Severn highlights a growing emphasis on strategic content partnerships that help CDNs and providers justify regional expansion, pointing to past co-investments that rapidly scaled traffic from 100G to more than 600 Gbps. Meanwhile, AI deployments are already arriving at pace, requiring collaborative engineering to fit cabinet weight, elevator limitations, and 40–50 kW rack densities within a non–purpose-built structure. 
As AI adoption accelerates and interconnection demand surges across the heartland, 1623 Farnam is positioning itself as one of the Midwest’s most important digital crossroads—linking hyperscale backbones, cloud onramps, and emerging AI inference clusters into a cohesive regional edge.
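For reference on the efficiency figure above: PUE (Power Usage Effectiveness) is simply total facility power divided by the power that reaches IT equipment, so a value below 1.5 means less than half a watt of cooling and distribution overhead per watt of compute. A minimal sketch of the ratio, using hypothetical numbers rather than Farnam's actual loads:

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """PUE = total facility power / IT equipment power (ideal value is 1.0)."""
    return total_facility_kw / it_load_kw

# Hypothetical example: a 7 MW total facility draw supporting a 5 MW IT load
print(round(pue(7_000, 5_000), 2))  # 1.4 — under the 1.5 mark cited above
```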
In this episode, Matt Vincent, Editor in Chief at Data Center Frontier, is joined by Rob Macchi, Vice President of Data Center Solutions at Wesco, to explore how companies can stay ahead of the curve with smarter, more resilient construction strategies. From site selection to integrating emerging technologies, Wesco helps organizations build data centers that are not only efficient but future-ready. Listen now to learn more!
In this episode of the Data Center Frontier Show, we sit down with Ryan Mallory, the newly appointed CEO of Flexential, following a coordinated leadership transition in October from Chris Downie. Mallory outlines Flexential's strategic focus on the AI-driven future, positioning the company at the critical "inference edge" where enterprise CPU meets AI GPU. He breaks down the AI infrastructure boom into a clear three-stage build cycle and explains why the enterprise "killer app"—Agentic AI—plays directly into Flexential's strengths in interconnection and multi-tenant solutions. We also dive into:

- Power Strategy: How Flexential's modular, 36-72 MW build strategy avoids community strain and wins utility favor.
- Product Roadmap: The evolution to Gen 5 and Gen 6 data centers, blending air and liquid cooling for mixed-density AI workloads.
- The Bold Bet: Mallory's vision for the next 2-3 years, which involves "bending the physics curve" with geospatial energy and transmission to overcome terrestrial limits.

Tune in for an insightful conversation on power, planning, and the future of data center infrastructure.
On this episode of the Data Center Frontier Show, DartPoints CEO Scott Willis joins Editor in Chief Matt Vincent to discuss why regional data centers are becoming central to the future of AI and digital infrastructure. Fresh off his appearance on the Distributed Edge panel at the 2025 DCF Trends Summit, Willis breaks down how DartPoints is positioning itself in non-tier-one markets across the Midwest, Southeast, and South Central regions—locations he believes will play an increasingly critical role as AI workloads move closer to users. Willis explains that DartPoints’ strategy hinges on a deeply interconnected regional footprint built around carrier-rich facilities and strong fiber connectivity. This fabric is already supporting latency-sensitive workloads such as AI inference and specialized healthcare applications, and Willis expects that demand to accelerate as enterprises seek performance closer to population centers. Following a recent recapitalization with NOVA Infrastructure and Orion Infrastructure Capital, DartPoints has launched four new expansion sites designed from the ground up for higher-density, AI-oriented workloads. These facilities target rack densities from 30 kW to 120 kW and are sized in the 10–50 MW range—large enough for meaningful HPC and AI deployments but nimble enough to move faster than hyperscale builds constrained by long power queues. Speed to market is a defining advantage for DartPoints. Willis emphasizes the company’s focus on brownfield opportunities where utility infrastructure already exists, reducing deployment timelines dramatically. For cooling, DartPoints is designing flexible environments that leverage advanced air systems for 30–40 kW racks and liquid cooling for higher densities, ensuring the ability to support the full spectrum of enterprise, HPC, and edge-adjacent AI needs. Willis also highlights the importance of community partnership. 
DartPoints’ facilities have smaller footprints and lower power impact than hyperscale campuses, allowing the company to serve as a local economic catalyst while minimizing noise and aesthetic concerns. Looking ahead to 2026, Willis sees the industry entering a phase where AI demand becomes broader and more distributed, making regional markets indispensable. DartPoints plans to continue expanding through organic growth and targeted M&A while maintaining its focus on interconnection, high-density readiness, and rapid, community-aligned deployment. Tune in to hear how DartPoints is shaping the next chapter of distributed digital infrastructure—and why the market is finally moving toward the regional edge model Willis has championed.
In this episode of the Data Center Frontier Show, DCF Editor-in-Chief Matt Vincent speaks with Ed Nichols, President and CEO of Expanse Energy / RRPT Hydro, and Gregory Tarver, Chief Electrical Engineer, about a new kind of hydropower built for the AI era. RRPT Hydro’s piston-driven gravity and buoyancy system generates electricity without dams or flowing rivers—using the downward pull of gravity and the upward lift of buoyancy in sealed cylinders. Once started, the system runs self-sufficiently, producing predictable, zero-emission power. Designed for modular, scalable deployment—from 15 kW to 1 GW—the technology can be installed underground or above ground, enabling data centers to power themselves behind the meter while reducing grid strain and even selling excess energy back to communities. At an estimated Levelized Cost of Energy of $3.50/MWh, RRPT Hydro could dramatically undercut traditional renewables and fossil power. The company is advancing toward commercial readiness (TRL 7–9) and aims to build a 1 MW pilot plant within 12–15 months. Nichols and Tarver describe this moonshot innovation, introduced at the 2025 DCF Trends Summit, as a “Wright Brothers moment” for hydropower—one that could redefine sustainable baseload energy for data centers and beyond. Listen now to explore how RRPT Hydro’s patented piston-driven system could reshape the physics, economics, and deployment model of clean energy.
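For context on the headline number above: a Levelized Cost of Energy such as the cited $3.50/MWh is conventionally computed as discounted lifetime cost divided by discounted lifetime energy. A minimal sketch of that formula (all inputs below are hypothetical placeholders, not RRPT Hydro figures):

```python
def lcoe(capex: float, annual_opex: float, annual_mwh: float,
         years: int, discount_rate: float) -> float:
    """Levelized Cost of Energy in $/MWh:
    discounted lifetime cost / discounted lifetime energy."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, years + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, years + 1))
    return costs / energy

# Hypothetical plant: $1M capex, $20k/yr opex, 8,000 MWh/yr output,
# 25-year life, 7% discount rate
print(f"${lcoe(1_000_000, 20_000, 8_000, 25, 0.07):.2f}/MWh")
```

Plugging in real capex, output, and lifetime figures is what determines whether a technology can actually undercut incumbent renewables the way the episode suggests.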
At this year’s Data Center Frontier Trends Summit, Honghai Song, founder of Canyon Magnet Energy, presented his company’s breakthrough superconducting magnet technology during the “6 Moonshot Trends for the 2026 Data Center Frontier” panel—showcasing how high-temperature superconductors (HTS) could reshape both fusion energy and AI data-center power systems. In this episode of the Data Center Frontier Show, Editor in Chief Matt Vincent speaks with Song about how Canyon Magnet Energy—founded in 2023 and based in New Jersey, with roots at Stony Brook University—is bridging fusion research and AI infrastructure through next-generation magnet and energy-storage technology. Song explains how HTS magnets, made from REBCO (Rare Earth Barium Copper Oxide), operate at 77 Kelvin with zero electrical resistance, opening the door to new kinds of super-efficient power transmission, storage, and distribution. The company’s SMASH (Superconducting Magnetic Storage Hybrid) system is designed to deliver instant bursts of energy—within milliseconds—to stabilize GPU-driven AI workloads that traditional batteries and grids can’t respond to fast enough. Canyon Magnet Energy is currently developing small-scale demonstration projects pairing SMES systems with AI racks, exploring integration with DC power architectures and liquid-cooling infrastructure. The long-term roadmap envisions multi-mile superconducting DC lines connecting renewables to data centers—and ultimately, fusion power plants providing virtually unlimited clean energy. Supported by an NG Accelerate grant from New Jersey, the company is now seeking data-center partners and investors to bring these technologies from the lab into the field.
Who is Packet Power?

Since 2008, Packet Power has been at the forefront of energy and environmental monitoring, pioneering wireless solutions that helped define the modern Internet of Things (IoT). Built on the belief that energy is the new cost frontier of computation, Packet Power enables organizations to understand exactly where, when, and how energy is used—and at what cost. As AI-driven workloads push energy demand to record levels, Packet Power’s mission of complete energy traceability has never been more critical. Their systems are trusted worldwide for providing secure, out-of-band monitoring that remains fully independent of operational data networks.

Introducing the All-New High-Density Power Monitor

Packet Power’s newest innovation, the High-Density Power Monitor, is redefining what’s possible in energy monitoring. At just under 6 cubic inches, it’s the smallest and most scalable multi-circuit power monitoring system on the market, capable of tracking 120 circuits in a space smaller than the inside of a standard light switch. The High-Density Power Monitor eliminates bulky hardware, complex wiring, and lengthy installations. It’s plug-and-play simple, seamlessly integrates with Packet Power’s EMX software or any third-party monitoring platform, and supports both wired and wireless connectivity—including secure, air-gapped environments.

Solving the Challenges of Modern Power Monitoring

The High-Density Power Monitor is engineered for the next generation of high-performance systems and facilities. It tackles five key challenges:

- Power Density: Monitors high-load environments with unmatched precision.
- Circuit Density: Tracks more circuits per module than any competitor.
- Physical Density: Fits anywhere, from PDUs to sub-panels to embedded devices.
- Installation Simplicity: Snaps into place—no tools, no complexity.
- Connection Flexibility: Wireless, wired, LAN, cloud, or cellular—you can mix and match freely.
Whether managing a single rack or thousands of devices, Packet Power ensures monitoring 1 device is as easy as monitoring 1,000.

Why It Matters Now

Today’s computing environments are experiencing an energy density arms race—with systems consuming megawatts of power in a single cabinet. New cooling methods, extreme power densities, and evolving form factors demand monitoring solutions that can keep up. Packet Power’s new High-Density Power Monitor meets that challenge head-on, offering the scalability, adaptability, and visibility needed to manage energy use in the AI era.

Perfect for Any Application

This solution is ideal for:

- High-density servers and compute cabinets
- Distribution panels, PDUs, and busway components
- Embedded monitoring in OEM systems
- Large-scale deployments requiring fleet-level simplicity
- And more!

Whether for new installations or retrofits of existing buildings, Packet Power systems deliver vendor-agnostic integration and proven scalability, with unmatched turn times and products made in the USA for BABA compliance.

Learn More!

Discover the true meaning of small & mighty:
👉 Visit PacketPower.com/high-density-power-monitor
📧 Contact sales@packetpower.com
In this episode of The Data Center Frontier Show, DCF Editor-in-Chief Matt Vincent talks with Yuval Boger, Chief Commercial Officer at QuEra Computing, about the fast-evolving intersection of quantum and AI-accelerated supercomputing. QuEra, a Boston-based pioneer in neutral-atom quantum computers, recently expanded its $230 million funding round with new investment from NVentures (NVIDIA’s venture arm) and announced a Nature-published breakthrough in algorithmic fault tolerance that dramatically cuts runtime overhead for error-corrected quantum algorithms. Boger explains how QuEra’s systems, operating at room temperature and using identical rubidium atoms as qubits, offer scalable, power-efficient performance for HPC and cloud environments. He details the company's collaborations with NVIDIA, AWS, and global supercomputing centers integrating quantum processors alongside GPUs, and outlines why neutral-atom architectures could soon deliver practical, fault-tolerant quantum advantage. Listen as Boger discusses QuEra’s technology roadmap, market position, and the coming inflection point where hybrid quantum-classical systems move from the lab into the data center mainstream.
Matt Vincent, Editor-in-Chief of Data Center Frontier, sits down with Angela Capon, Vice President of Marketing at EdgeConneX, to discuss the groundbreaking collaboration between EdgeConneX and the Duke of Edinburgh's International Award Program.
Charting the Future of AI Storage Infrastructure

In this episode, Solidigm Director of Strategic Planning Brian Jacobosky guides listeners through a tech-forward conversation on how storage infrastructure is helping redefine the AI-era data center. The discussion frames storage as more than just a cost factor; it's also a strategic building block for performance, efficiency, and savings.

Storage Moves to the Center of AI Data Infrastructure

Jacobosky explains how, in the AI-driven era, storage is being elevated from an afterthought judged on “dollars per gigabyte” to a core priority: maximizing GPU utilization, managing soaring power draw, and unlocking space savings. He illustrates how every watt and every square inch counts. As GPU compute scales dramatically, storage efficiency is being engineered to enable maximum density and throughput.

High-Capacity SSDs as a Game-Changer

Jacobosky spotlights Solidigm D5-P5336 122TB SSDs as emblematic of the shift. Rather than a simple technical refresh, these drives represent a tectonic realignment in how data centers are being designed for huge capacity and optimized performance. With all-flash deployments offering up to nine times the space savings of hybrid architectures, Jacobosky underscores how SSD density can enable more GPU scale within fixed power and space budgets. This trajectory could even put a 1-petabyte SSD within reach by the end of the decade.

Embedded Efficiency

The episode brings environmental considerations to the forefront. Jacobosky shares how an “all-SSD” strategy can dramatically slash physical footprints as well as energy consumption. From data center buildout through end-of-lifecycle drive retirement, efficiency is driving both operational cost savings and ESG benefits—helping reduce concrete and steel usage, power draw, and e-waste.
Pioneering Storage Architectures and Cooling Innovation

Listeners learn how AI-first innovators like neocloud-style providers and sovereign AI operators lead the charge in deploying next-generation storage. Jacobosky also previews the Solidigm PS-1010 in the E1.S form factor, a solution for NVIDIA fanless server designs that enables direct-to-chip, cold-plate-cooled SSDs integrated into GPU servers. He predicts that this systems-level integration will become a standard for high-density AI infrastructure.

Storage as a Strategic Investment

Solidigm challenges the notion that high-capacity storage is cost prohibitive. Within the framework of the AI token economy, Jacobosky explains, the true measures become cost per token and time to first token; when storage is optimized for performance, capacity, and efficiency, the total cost of ownership (TCO) often proves favorable on closer evaluation.

Looking Ahead: Memory Wall, Inference Workloads, Liquid Cooling

Jacobosky ends with a look ahead to where storage innovation will lead in the next five years. As AI models grow in size and complexity, he argues, storage is increasingly acting as an extension of memory, breaking through the “memory wall” for large inference workloads. Companies will design infrastructure from the ground up with liquid cooling and future-scalable storage that supports massive model deployments without compromising latency.

This episode is essential listening for data center architects, AI infrastructure strategists, and sustainability leaders looking to understand how storage is fast becoming a defining factor in AI-ready data centers of the future.
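Solidigm's "nine times" space-savings figure folds in enclosure and power density as well, but raw drive-count arithmetic already tells part of the story. A back-of-the-envelope sketch (the 10 PB target and the 24 TB HDD comparison point are assumptions for illustration, not Solidigm numbers):

```python
import math

TARGET_PB = 10            # hypothetical raw-capacity target
SSD_TB = 122              # per-drive capacity of the 122TB SSDs discussed above
HDD_TB = 24               # assumed per-drive capacity for a nearline HDD tier

ssd_drives = math.ceil(TARGET_PB * 1_000 / SSD_TB)
hdd_drives = math.ceil(TARGET_PB * 1_000 / HDD_TB)

print(f"SSDs needed: {ssd_drives}, HDDs needed: {hdd_drives}, "
      f"drive-count ratio ~{hdd_drives / ssd_drives:.1f}x")
```

Fewer drives means fewer slots, enclosures, and watts devoted to storage, which is the budget that gets handed back to GPUs.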
Florida is emerging as one of the most promising new frontiers for data center growth — combining power availability, policy alignment, and strategic geography in ways that mirror the early success of Northern Virginia. In this episode of The Data Center Frontier Show, Editor-in-Chief Matt Vincent sits down with Buddy Rizer, Executive Director of Loudoun County Economic Development, and Lila Jaber, Founder of the Florida’s Women in Energy Leadership Forum and former Chair of the Florida Public Service Commission. Together, they explore how Florida is building the foundation for large-scale digital infrastructure and AI data center investment.

Episode Highlights:

- Energy Advantage: While Loudoun County faces a 600-megawatt deficit and rising demand, Florida enjoys excess generation capacity, proactive utilities, and growing renewable integration. Utilities like FPL and Duke Energy are preparing for hyperscale and AI-driven loads with new tariff structures and grid-hardening investments.
- Tax Incentives & Workforce: Florida’s extended data center sales tax exemption through 2037 and its raised 100-megawatt IT load threshold signal a commitment to hyperscale development. The state’s universities and workforce programs are aligned with this tech growth, producing top talent in engineering and applied sciences.
- Strategic Location: As a digital gateway to Latin America and the Caribbean, Florida’s connectivity advantage—especially around Miami—is attracting hyperscale and AI operators looking to expand globally.
- Market Outlook: Industry insiders predict that within the next year, a major data center player will establish a significant footprint in Florida. Multiple campuses are expected to follow, driven by the state’s power resilience, policy stability, and collaborative approach between utilities, developers, and government leaders.
Why It Matters: Florida’s combination of energy abundance, policy foresight, and strategic geography positions it as the next great growth market for digital infrastructure and AI-ready data centers in North America.
This podcast explores the rapidly evolving thermal and water challenges facing today’s data centers as AI workloads push rack densities to unprecedented levels. The discussion highlights the risks and opportunities tied to liquid cooling—from pre-commissioning practices and real-time monitoring to system integration and water stewardship. Ecolab’s innovative approaches to thermal management can not only solve operational constraints but also deliver competitive advantage by improving efficiency, reducing resource consumption, and strengthening sustainability commitments.
Join Bill Tierney of The Data Center Construction Alliance, as he discusses some of the emerging challenges facing data center development today. Topics will include how increasing collaboration between OEMs, owners, contractors, and sub-contractors is leading to some exciting and innovative solutions in the design and construction of data centers. He will also share some examples of how collaboration has led to new ideas and methodologies in the field.
AI networks are driving dramatic changes in data center design, especially around power, cooling, and connectivity. Modern GPU-powered AI data centers require far more energy and generate much more heat than traditional CPU-based setups, pushing cabinets to new power densities and necessitating advanced cooling solutions like liquid direct-to-chip cooling. These environments also demand significantly more fiber cabling to handle increased data flows, with deeper cabinets and complex layouts that make traditional rear-access cabling impractical.
In this DCF Trends-Nomads at the Summit Podcast episode, the hosts from Data Center Frontier and Nomad Futurist sit down with Adrienne Pierce, CEO of New Sun Road, to explore the emerging frontier of sovereign and renewable energy solutions for modular data center deployment. With over 1,500 microgrids under management via the company’s Stellar platform, Pierce brings a field-tested perspective on how flexible, AI-driven energy controls can empower edge and sub-10 MW data center systems—especially in regions where traditional grid infrastructure can’t keep up with AI-era demands. This discussion dives into the real-world opportunities for modular, microgrid-powered data centers to unlock new markets, reduce energy costs, and create more resilient and autonomous compute infrastructure at the edge and beyond. Expect sharp insights into what it means to decouple data center growth from utility bottlenecks—and how the right energy intelligence can accelerate both sustainability and scalability.
In this DCF Trends-Nomads at the Summit Podcast episode, the hosts of Data Center Frontier and Nomad Futurist sit down with UVA Darden MBA candidates Tosin Fashola and Albert Odum for an energizing conversation about next-generation data infrastructure—and why they believe Africa is poised to be its future epicenter. With professional backgrounds spanning data center strategy at KPMG and government-led implementations in Ghana, Tosin and Albert bring fresh, globally-minded perspectives on AI infrastructure, regional power strategy, and the role of connectivity in economic transformation. Expect a wide-ranging dialogue on the untapped potential of African markets, the roadmap to building sovereign cloud capacity and IXPs, and how a new generation of leaders is preparing to close the global digital divide—one hyperscale project at a time.
In this DCF Trends-Nomads at the Summit Podcast episode, Data Center Frontier editors and Nomad Futurist hosts sit down with Greg Stover, Vertiv’s Global Director, Hi-Tech Development. The discussion delves into Stover’s work at the intersection of advanced cooling technologies, hyperscale growth, and AI-driven infrastructure design. Drawing on his experience guiding Vertiv’s strategy for high-density deployments, liquid cooling adoption, and close collaboration with hyperscalers and chipmakers, Stover offers a forward-looking perspective on how evolving compute architectures, thermal management innovations, and market forces are redefining the competitive edge in the data center industry.
In this DCF Trends-Nomads at the Summit Podcast episode, the ever-curious, future-focused podcast hosts from Data Center Frontier and Nomad Futurist reunite with Infrastructure Masons CEO Santiago Suinaga for a timely, in-depth follow-up to his impactful debut on the DCF Show. With AI infrastructure growth hitting warp speed, the conversation will dig deeper into Suinaga’s vision for how the digital infrastructure community can scale responsibly—without losing sight of net zero goals, workforce development, or supply chain accountability. Expect a candid, high-level exchange on emerging regulatory pressures, the embodied carbon challenge, and why flexible cooling and modular design must be table stakes for the AI-powered data center of the future. Suinaga will also share the latest on iMasons' Climate Accord, job-matching platform, and new cross-sector partnerships—all aimed at fostering sustainability, equity, and innovation in an industry racing to keep pace with exponential demand.
In this DCF Trends-Nomads at the Summit Podcast episode, Chris James, CEO of NoesisAI, delivers a sweeping, insight-rich overview of how different classes of AI models—from LLMs and RAG to vision AI and scientific workloads—are driving a new wave of infrastructure decisions across the data center landscape. With a sharp focus on the diverging needs of training vs. inference, James breaks down what it takes to support today’s AI—from GPU-intensive clusters with high-speed interconnects and liquid cooling to inference-optimized, edge-deployed accelerators. He also explores the rapidly shifting hardware ecosystem, including the rise of custom silicon, heterogeneous computing, and where the battle between NVIDIA, AMD, Intel, and hyperscaler-designed chips is headed. Whether you're designing for scalability, sustainability, or the bleeding edge, this conversation offers a field guide to the infrastructure behind intelligent computing.