The Data Center Frontier Show

Author: Endeavor Business Media


Description

Welcome to the Data Center Frontier Show podcast, telling the story of the data center industry and its future. Our podcast is hosted by the editors of Data Center Frontier, who are your guide to the ongoing digital transformation, explaining how next-generation technologies are changing our world, and the critical role the data center industry plays in creating this extraordinary future.
57 Episodes
For this episode of the Data Center Frontier Show Podcast, DCF Editor in Chief Matt Vincent sits down for an instructive chat with Phillip Koblence, a strategic executive and ubiquitous thought leader in the data center and network space. Koblence co-founded NYI in 1996 and has successfully navigated an ever-shifting infrastructure landscape, growing the company from a single data center in Lower Manhattan to a robust network with executional capabilities in key national and international markets. His leadership, focus on customer experience, and ability to cut through complexity and hype have positioned NYI as an industry leader in high-touch infrastructure solutions. Koblence is also CEO of Critical Ventures, a consulting agency offering a range of services to help clients, owners and investors optimize the value of critical infrastructure assets. Koblence sits on the DE-CIX North America Advisory Board as well as on the Board of OIX (formerly Open-IX). He is co-founder of the Nomad Futurist Foundation and podcast, designed to demystify the world of critical infrastructure and inspire younger generations to join the industry. The interview begins with a discussion of NYI's entry into 60 Hudson Street and the challenges of retrofitting legacy buildings for modern data center needs, emphasizing the importance of connectivity and collaboration in the digital infrastructure industry and highlighting the rapid pace of technological advancements such as AI. Here's a timeline of the podcast's highlights: 2:03 - Koblence discusses NYI's entry into Manhattan's historic colocation and interconnection hub, 60 Hudson Street, emphasizing the importance of connectivity in New York City's digital infrastructure evolution.
6:20 - Koblence elaborates on the challenges and considerations when retrofitting legacy buildings like 60 Hudson for modern data center needs, highlighting the importance of creative solutions and understanding the nuances of different deployments. 11:38 - The discussion turns to an exploration of deploying data centers in skyscrapers, the evolving criticality of digital infrastructure, and the need for redundancy and a "data center mindset" in reckoning with society's reliance on connectivity. 20:02 - Remarks on the rapid pace of technological advancements, specifically the increasing densities of GPUs such as Nvidia's H100, H200, Grace Hopper, and Blackwell chips. 20:32 - More on the exponential increase in densities within the digital infrastructure community and predictions of a future "flattening out" of density growth. 23:59 - Koblence emphasizes the continued relevance of legacy facilities such as 2 megawatt (MW) or 5 MW data centers in modern deployments, particularly in major connectivity hubs. The concept of the edge is also discussed in the context of facilitating connectivity with AI sites. 26:59 - Koblence elaborates on the importance of collaboration and creating cohesive solutions across various data center facilities, while emphasizing the role of NYI as a solutions facilitator and discussing partnerships with Hudson IX and other providers. 31:22 - Koblence describes the mission of the Nomad Futurist Foundation to demystify the world of digital infrastructure, highlighting the simplicity of the industry beneath the technical complexities, and emphasizing transparency and accessibility in making connectivity and digital infrastructure understandable and available.
Recent DCF Show Podcast Episodes:

• DCF Show: Data Center PR Practice Fosters Coalitions, Community Outreach to Reduce Development Backlash
• DCF Show: Data Center Construction and Dallas Market Talk with Burns & McDonnell
• DCF Show: The Top 5 Data Center Industry Stories of Q4
• DCF Show: Steve Madden, Equinix VP of Digital Transformation and Segmentation Marketing
• DCF Show: 8 Key Data Center Industry Themes for 2024, Part 3
As recorded on March 22, 2024, this episode of the Data Center Frontier Show Podcast featured the following participants:

• Matt Vincent, Editor in Chief and Podcast Host, Data Center Frontier
• Ali Heydari, Technical Director and Distinguished Engineer, NVIDIA
• Marcus Hopwood, Product Management Director, Equinix
• Bernie Malouin, CEO and Founder, JetCool

The podcast discussion begins with a focus on NVIDIA's latest insights, as imparted by Heydari, in the context of products, partnerships, and trend-leadership, as revealed at the recent NVIDIA GTC 2024 AI Conference (Mar. 18-21). The conversation opens up to look at broader implications and developments within the tech and data center industries, such as Equinix's plans to enable liquid cooling at more than 100 data centers globally, and facets of their latest partnership with NVIDIA, as characterized by Hopwood. The discussion turns to JetCool's history of providing innovative liquid cooling solutions for high-density chipsets, underlining the critical role of cooling technologies in support of the rapid growth of AI applications in data centers. The talk also explores ways of advancing efficiency and sustainability in high-powered clusters through warm coolants and heat reuse, considering energy efficiency directives in the EU and UK. View a timeline of the podcast's highlights and read the full article about the podcast.
For this episode of the DCF Show podcast, we interview Jason Carolan, Chief Innovation Officer at data center operator Flexential. He’s a 25-year veteran of the enterprise IT industry, with experience leading companies through technological evolutions like the one we’re experiencing right now. Carolan believes there is a bigger story to uncover from the sheer dollar amount of Nvidia’s recent blockbuster valuation. In response to Nvidia’s market dominance in AI and data centers, Carolan wanted to discuss larger trends that may follow from this specific news moment. According to Carolan: “Nvidia's earnings results and forecasts for a continued AI boom doesn't come as too much of a surprise with the volume of businesses that are increasingly testing and utilizing the technology. Nvidia's data center business is a combination of GPU and their network technologies, which further showcases the importance of high performance architectures that can support next generation AI demands. The company is currently forecasted to ship 4-5 times more GPUs this coming year – indicating another trend line with little competition in sight. As inference matures, we will see more diversity in chip suppliers but that is a ways off. The bottom line is that, now with accelerating AI rollouts, companies will need more compute capacity, ultra-high bandwidth and very low latency in order to succeed.”
This January, Milldam Public Relations announced the launch of its Data Center Community Relations Service, which the company's President and Founder Adam Waitkunas claims is the first community relations service exclusively serving the data center space and the digital infrastructure sector. In addition to tailor-made communication strategies, Adam contends that data center community relations will require coalition building and garnering influence with local officials and stakeholders. He says the new service has been launched in response to the recent widespread backlash to data center development and the lack of tools to combat this within the data center industry. Personally overseeing the new service offering, Adam is a public relations professional with nearly twenty years of data center industry experience and a background in politics and public affairs, including extensive experience in media relations, marketing strategy, business development and strategic partnerships. Prior to founding Milldam Public Relations in 2005, Adam was the manager of Doug Stevenson's 14th Middlesex District State Representative campaign, which set a record for fundraising for a challenger in a Massachusetts State Representative race. Concord, Massachusetts-based Milldam Public Relations is a full-service public relations firm that provides competitively priced strategic communications, media relations and event management to a diverse array of clients throughout the country. The firm has solidified its position as the go-to public relations firm for companies in the critical infrastructure space. Clients from Boston to Los Angeles include: The Association of Information Technology Professionals-Los Angeles, OpTerra Energy Services, The Critical Facilities Summit, Hurricane Electric, Instor Solutions, Inc., and RF Code.
Under Adam's direction, Milldam has helped technology clients across the country secure articles in publications such as: The Wall Street Journal, The New York Times, CFO Magazine, Data Center Knowledge, Green Tech Media, The Boston Business Journal, Mission Critical Magazine, The Silicon Valley Business Journal and Capacity Magazine, among others. Additionally, in his career Adam has helped businesses become thought leaders in their fields and a valued resource for industry-specific media, helping them to increase sales, promote awareness and become attractive targets for M&A. Data Center Community Relations Service The new service is premised on the reality that, for many years, the data center industry has frequently operated under the radar, but has become more visible within the last few years. Certain communities throughout North America have taken notice and have started pushing back municipally against proposed developments, most notably in Virginia and Arizona. For example, in recent months, a number of Virginia environmental groups formed a coalition calling for more oversight of the data center industry. And in January, King George County, Virginia officials voted to renegotiate a prior agreement for a large cloud provider's $6B Virginia data center campus. The reversal is partly due to growing local political opposition to data center development. With the launch of Milldam's Data Center Community Relations Service, Waitkunas contends that the digital infrastructure sector now has access to an offering that will equip it with the tools necessary to articulate the benefits of data centers to the local community while proactively addressing local concerns such as traffic infrastructure management and noise, helping to ensure a smoother path to success for the development. Critical infrastructure plays a predominant role in most people's daily lives throughout North America, driving the need for data center operators.
Waitkunas points out that strong community engagement is essential for data centers to properly communicate their value and successfully navigate the complexity of community relations. To help data center developers achieve their goals, Milldam's community relations practice offers the following services:

• Establishing partnerships with third-party organizations such as Chambers of Commerce.
• Communicating the numerous benefits of data centers in the community, including economic development, infrastructure improvements, and job creation.
• Developing and providing key talking points.
• Ensuring that local decision-makers hear the client's messages.
• Implementing a wide variety of grassroots campaigns and community outreach.
• Enabling local supporters to serve as ambassadors and equipping them with the tools to communicate the benefits of proposed developments.
• Building coalitions.
• Gauging the pulse of public opinion.

"If the industry fails to properly engage with localities, years of industry progress will be in jeopardy," said Waitkunas. "It's imperative that developers and operators implement community relations to help ensure a seamless development process." Here's a timeline of key discussion points on the podcast: 2:35 - Adam explains that the idea for the practice came from his background in public affairs and politics, and that it involves building coalitions and partnerships with third-party organizations to help data centers overcome obstacles they face when moving into suburban areas. 4:41 - Adam discusses the importance of having individual community members form coalitions with data center developers to speak on their behalf and push issues forward. 8:09 - Adam reveals that the firm is currently working with two developers and has proposals out to other organizations since launching the practice in mid-January.
9:16 - On the importance of timing in getting ahead of community concerns and identifying cheerleaders for data center projects. 10:37 - The PR practice wants the local community to be the main cheerleader for data center projects and will help manage the coalition. 13:01 - Adam notes there is still a lot of community education needed on data centers regarding the ins and outs of countering noise and environmental concerns. 15:10 - Adam explains how the PR practice has been doing outreach to large players in the data center industry and tailoring campaigns for each community's concerns. 23:18 - On the necessity for developers to put together community relations plans and crisis communications plans for their data center projects.

Here are links to some related DCF articles:

• The NIMBY Challenge: A Way Forward for the Data Center Industry
• Rezoning for PW Digital Gateway Data Centers Approved By Virginia's Prince William County Supervisors
• Keeping Your Cool While Getting Your Work Done
• iMasons Sharpen Focus on the Community Impact of Data Centers
• Being a Good Neighbor Means Considering Community Impact During Site Selection
• Data Center Development Spurs More Debate in Prince William County
For this episode of the DCF Show podcast, Data Center Frontier's Editor in Chief Matt Vincent and Senior Editor David Chernicoff speak with Burns & McDonnell's Robert Bonar, PE, LEED AP, Vice President, Mission Critical Facilities, and Christine Wood, Vice President leading the firm's Dallas-Fort Worth Global Facilities practice. Burns & McDonnell is a provider of engineering, architecture, construction, environmental and consulting solutions, which, as part of its mission-critical and data center practice, is brought in to help plan, design, permit, construct and manage client projects in the space. Bonar and Wood begin the podcast by providing an overview of the company and their roles there, along with their backgrounds in the industry. An overarching theme of the discussion is how a client's selection of a data center and mission critical consultant is based on more than just an ability to meet service needs. The discussion also covers current data center industry construction trends, especially in the areas of siting and power, while probing the similarities and differences in planning data center builds for enterprise, colocation and hyperscale clients.

D-FW Data Center Market Focus

Cushman & Wakefield’s 2023 Dallas-Fort Worth Data Center Report stated that the Dallas-Fort Worth data center markets saw record absorption of 386 megawatts in 2023 -- a nearly 7x increase since 2020 -- driven by exponential growth in demand for cloud computing and AI/machine learning applications. Cushman & Wakefield further reported the Dallas-Fort Worth market's vacancy to be at an all-time low of 3.73% last year, with colocation rents and data center land prices there continuing to rise. The commercial real estate services company added: "Despite a robust construction pipeline – 1.4 million square feet that can provide 225 MW – the vast majority of the market’s new data center supply for 2024 and 2025 has been pre-leased.
Cloud providers securing large campuses through pre-leasing and AI/ML companies leasing the market’s few remaining pockets of available space are the primary drivers of DFW’s record demand." DCF asked Wood and Bonar about the D-FW data center market and Burns & McDonnell's role in it, including the firm's background and present developments there, as well as the location's future roadmap regarding power, interconnectivity, and workforce factors. Here's a timeline of key discussion points on the podcast: 2:27 - After introductions and table-setting, the Burns & McDonnell experts emphasize the importance of looking at data center client needs holistically and getting ahead of what they need for a given project. 4:53 - Discussion turns to the impact of generative AI on the data center industry and the uptick in demand for first-of-a-kind designs. 8:44 - Further exploration of how the rapid pace of change in the data center industry has bred increased demand in the market for qualities such as speed-to-market and first-of-a-kind design. 9:22 - DCF inquires about planning for different types of data center builds, and the differences between enterprise, colocation, and hyperscale developments, as well as the impact of AI support, are explored. 14:34 - The discussion further illuminates challenges and changes in the data center industry, including the influence of AI technology on new designs and in future-proofing facilities. 15:04 - Burns & McDonnell's Wood discusses the D-FW data center market, highlighting its growth potential due to its central location, low real estate costs, and robust power availability. 20:25 - To conclude, DCF's editors circle back to the topic of renewables and solar consulting in relation to data centers, leading to a discussion on combining solar with battery storage for future data center needs.
Here are links to some related DCF articles:

• The Current State of Power Constraints for New Data Center Construction
• Skybox Plans 300-Megawatt Campus South of Dallas
• Building Greener: Compass Seeks Sustainability in its Construction, Supply Chain
• Dallas Sees Record Data Center Leasing Activity in 2022
• The Big City Edge: Dallas is a Hotbed for Edge Computing
• Power Infrastructure and Tax Incentives Drive Dallas Data Center Market
For this episode of the Data Center Frontier Show podcast, it's financial earnings call season, so Editor in Chief Matt Vincent and Senior Editor David Chernicoff take the opportunity to discuss DCF's top 5 most popular data center and cloud computing industry stories for the fourth quarter of 2023, which were as follows:  1. Dominion: Virginia’s Data Center Cluster Could Double in Size Dominion Energy says it has customer contracts that could double the amount of data center capacity in Virginia by 2028 and is planning new power lines to support this growth. Virginia is already the world’s largest market for cloud computing infrastructure. Despite the current power constraints around Ashburn, the data center market in Virginia is positioned to grow much larger. The utility says it has received customer orders that could double the amount of data center capacity in Virginia by 2028, with a projected market size of 10 gigawatts by 2035. That represents a huge increase from current data center power use, which reached 2.67 gigawatts in 2022. The utility’s projections mean that Virginia will continue to experience tensions between the growth of the Internet and the infrastructure to support it. Data Center Frontier's Founder and Editor at Large, Rich Miller, reports. 2. Microsoft Unveils Custom-Designed Data Center AI Chips, Racks and Liquid Cooling At Microsoft Ignite last November, the company unveiled two custom-designed chips and integrated systems resulting from a multi-step process for meticulously testing its homegrown silicon, the fruits of a method the company's engineers have been refining in secret for years, as revealed at its Source blog. The end goal is an Azure hardware system that offers maximum flexibility and can also be optimized for power, performance, sustainability or cost, said Rani Borkar, corporate vice president for Azure Hardware Systems and Infrastructure (AHSI). “Software is our core strength, but frankly, we are a systems company. 
At Microsoft we are co-designing and optimizing hardware and software together so that one plus one is greater than two,” Borkar said. “We have visibility into the entire stack, and silicon is just one of the ingredients.” The newly introduced Microsoft Azure Maia AI Accelerator chip is optimized for artificial intelligence (AI) tasks and generative AI. For its part, the Microsoft Azure Cobalt CPU is an Arm-based processor chip tailored to run general purpose compute workloads on the Microsoft Cloud. Microsoft said the new chips will begin to appear by early this year in its data centers, initially powering services such as Microsoft Copilot, an AI assistant, and its Azure OpenAI Service. They will join a widening range of products from the company's industry partners geared toward customers eager to take advantage of the latest cloud and AI technology breakthroughs. 3. The Eight Trends That Will Shape the Data Center Industry in 2023 Rich Miller predicted that 2023 would be a year of dueling cross currents that could constrain or accelerate business activity in the sector. DCF's Vincent and Chernicoff briefly review last year's trends, remarking on how so many of them are still in full effect for the industry right now. Scorecard: Looking Back at Data Center Frontier’s 2023 Industry Predictions 4.  Google Is Now Reducing Data Center Energy Use During Local Power Emergencies Last October, Google shared details of a system optimized to reduce the energy use of data centers when there is a local power emergency. Core functions of the system, which has the hallmarks of a universally applicable technology, include postponing low-priority workloads, and moving others to other regions that are less constrained. 
Regarding the system, Michael Terrell, Google's Senior Director for Energy and Climate, explained in a LinkedIn post how the new demand response capability can temporarily reduce power consumption from Google data centers when it’s needed, and provide flexibility to the local grids that power its data center operations. Demand response helps grid operators serve their customers reliably during times of need, such as supply constraints or extreme weather events. Terrell's post emphasized that "demand response can be a big tool to help grids run more cost-effectively and efficiently, and it can accelerate system-wide grid decarbonization." Google’s Climate and Energy teams created the new system, which Terrell called an important development toward running the company's data centers "intelligently, efficiently and carbon-free." 5. Cloudflare Outage: There’s Plenty Of Blame To Go Around The Cloudflare outage in the first week of November drew quite a bit of attention, not only because Cloudflare’s services are extremely popular, so their failure was quickly noticed, but also because of the rapid explanation of the problem posted in the Cloudflare Blog shortly after the incident. This explanation placed a significant portion of the blame squarely on Flexential and their response to the issues with electricity provider PGE, and potential issues that PGE was having. Cloudflare was able to restore most of its services in 8 hours at its disaster recovery facility. It runs its primary services at three data centers in the Hillsboro, Oregon area, geolocated in such a way that natural disasters are unlikely to impact more than a single data center.
DCF's David Chernicoff noted, "While almost all of the coverage of this incident starts off by focusing on the problems that might have been caused by Flexential, I find that I have to agree with the assessment of Cloudflare CEO Matthew Prince: To start, this never should have happened."

Here are links to some related DCF articles:

• DCF Show: Data Center Frontier's Rich Miller Returns For a Visit
• DCF Tours: Flexential Dallas-Plano Data Center, 18 MW Colocation Facility
• Meta Previews New Data Center Design for an AI-Powered Future
• For Leading Cloud Platforms, AI Presents a Major Opportunity
• AI Propels Cloud Growth, Digital Infrastructure Investment to New Heights
Even in a month when Equinix very notably rolled out its fully managed private cloud service enabling enterprises to easily acquire and manage their own NVIDIA DGX AI supercomputing infrastructure, the better to build and run custom generative AI models, there was yet another, not unrelated, announcement from the foundational provider of colocation data centers and digital transformation solutions. It was in the context of the AI platform rollout with NVIDIA that Equinix this month also issued its annual Global Interconnection Index (GXI) 2024 Report, which uncovers digital infrastructure trends driving the decision-making of both enterprises and service providers. The Equinix statement announcing managed services for the NVIDIA DGX AI supercomputing platform noted that the service includes the NVIDIA DGX systems, NVIDIA networking and the NVIDIA AI Enterprise software platform. For the platform offering, Equinix installs and operates each customer's privately owned NVIDIA infrastructure and can deploy services on their behalf in key locations of its International Business Exchange (IBX) data centers globally. Equinix also emphasized that its NVIDIA DGX service offers high-speed private network access to global network service providers, enabling quick generative AI information retrieval across corporate wide area networks. In addition, the service provides private, high-bandwidth interconnections to cloud services and enterprise service providers to facilitate AI workloads while meeting data security and compliance requirements. Through its offering of NVIDIA DGX AI supercomputing infrastructure services, Equinix contends that enterprises can scale their infrastructure operations to achieve the level of AI performance needed to develop and run massive models.
The company also revealed that early access companies using the service have included leaders in sectors including biopharma, financial services, software, automotive and retail, many of whom are building AI Centers of Excellence to provide a strategic foundation for a broad range of rapidly developing LLM use cases. The operator's GXI Report, a related study Equinix commissions each year, surveys global IT leaders to gather insight on what’s behind the digital economy. Based on the study's latest findings, Equinix stated its belief that the industry has hit a tipping point in resourcing decisions, vis-à-vis the notion that buying dedicated IT hardware now puts customers at a competitive disadvantage. For this episode of the DCF Show podcast, Data Center Frontier editors Matt Vincent and David Chernicoff met with Steve Madden, Equinix VP of Digital Transformation and Segment Marketing, to discuss some of the GXI 2024 report's more meaningful findings related to current data center trends and predictions in digital transformation, IT and spending, including the operator's nearly concurrent AI managed services offering. For instance, the GXI report found that enterprises are growing at a 39% CAGR -- 25% faster than service providers -- reaching 12,908 Tbps of total capacity. DCF asked Madden: Since the global pandemic, how much have enterprises leaned on digital providers to focus on responding to business needs, and does Equinix expect such trends to continue going forward? Also, the GXI report found that 80% of enterprises will design and run new digital IT infrastructure using subscription-based services by 2026. We asked Madden: What does that mean for data centers? The report also found that by 2025, 85% of global companies will have expanded multicloud access across several regions. We asked: How will data centers best be able to manage such demand?
In his remarks, Madden pointed out that Equinix has the most cloud on-ramps of any data center operator in the world, and predicted that the majority of multinational enterprises will be multi-cloud connected in multiple regions around the world in the near future. Madden noted that nowadays -- i.e. in the post-pandemic age of AI -- enterprises are looking for strategic partners, not just vendors, in composing their infrastructure, and seek to do so with a set of key providers to help them move more quickly in their digital transformations.
This month on the Data Center Frontier Show podcast, we read down site founder and Editor at Large Rich Miller's annual data center industry trends forecast. This week's article read looks at how AI is driving design updates for power and cooling, why air permitting at scale is a hot potato for the industry, and optimal site selection for Green MegaCampuses. Rich Miller has delivered his annual article containing his top data center industry forecasts, predictions and insights for the year ahead. Of chief concern among the 8 key themes forecasted to define the year is how the AI boom will ripple through the digital infrastructure sector in 2024, impacting the availability of data center space, the supply chain, and factors of pricing, cooling, power and design. Since our industry coverage at DCF throughout the year will frequently refer back to this forecast article, we've decided to enumerate all eight themes throughout several podcast episodes this month.  For this episode, we read down the article's themes 6 through 8: 6.  AI Drives Design Updates for Power and Cooling 7.  Air Permitting at Scale is a Hot Potato 8.  Site Selection Optimizes for Green MegaCampuses "Artificial intelligence is hot," writes Miller. "So hot that the AI boom is creating a resource-constrained world, driving stupendous demand for GPUs, data centers and AI expertise. All three are likely to be in short supply, but none so much as wholesale data center space. This is the trend that dominates our annual forecast." Read the full forecast: The Eight Themes That Will Shape the Data Center Industry in 2024
For this episode of the DCF Show podcast, Data Center Frontier spoke with Sam Rabinowitz, CEO of Lantana, a supplier and provider of LED luminaires for the data center industry -- especially for hyperscalers, but also for energy-efficiency retrofits in mature facilities. Key discussion points include the following: 0:15 - Lantana broke into the data center industry by working with a hyperscaler customer to design and implement rapid deployment prototypes for their initial data center builds on the interior structure, including lighting. 3:14 - Lantana's LED fixtures run cool and are energy-efficient, achieving up to 90% efficiency over nearly a decade of use. The LED lighting fixtures are UL certified for elevated ambient operating temperatures, providing operational flexibility for data centers in hot environments. 5:45 - Sam explains how Lantana's focus on energy efficiency and materials efficiency can lead to cost savings and a positive impact on the environment. 13:26 - Sam emphasizes the importance of a "micro to macro" approach in greening data centers, starting with individual components, and scaling up to entire campuses and programs. 15:46 - Data Center Frontier Editor in Chief Matt Vincent asks for takes regarding the impact of AI on the data center industry. In response, Sam discusses the need for new products and approaches to designing and engineering data centers to accommodate chip-level heat. 19:32 - Matt asks about Lantana's plans for 2024. In response, Sam describes Lantana's new products as being tailored for digital infrastructure and expansion of the hyperscalers, as well as furnishing renovations for increased energy efficiency in data centers of all sizes. 26:46 - Sam emphasizes the importance of lighting in data centers for safety and functionality, and the discussion compares it to cabling as a core, fundamental element of every data center. Visit Data Center Frontier.
This month on the Data Center Frontier Show podcast, we read down site founder and Editor at Large Rich Miller's annual data center industry trends forecast.  Since our industry coverage at DCF throughout the year will frequently refer back to this forecast, we've decided to enumerate all eight themes throughout several podcast episodes this month.  Today's read looks at how pricing for AI capacity will probably only continue to trend higher, and how data center supply chain relationships will matter more than ever in 2024. We also examine why the prefabricated ethos of modular data centers should gain more momentum in the coming year. "Artificial intelligence is hot," writes Miller. "So hot that the AI boom is creating a resource-constrained world, driving stupendous demand for GPUs, data centers and AI expertise. All three are likely to be in short supply, but none so much as wholesale data center space. This is the trend that dominates our annual forecast." For this episode, we read down the article's themes 3 through 5: 3.  Pricing for AI Capacity Will Continue Higher 4.  Supply Chain: Relationships Matter More Than Ever 5.  More Momentum for Modular Read the full forecast: The Eight Themes That Will Shape the Data Center Industry in 2024
Data Center Frontier's founder and Editor at Large Rich Miller has delivered his annual article containing his top data center industry forecasts, predictions and insights for the year ahead.  Of chief concern is how the AI boom will ripple through the digital infrastructure sector in 2024, impacting the availability of data center space, the supply chain, and factors of pricing, cooling, power and design. Since our industry coverage at DCF throughout the year will frequently refer back to this forecast, we've decided to enumerate all eight themes throughout several podcast episodes this month.  For this episode, we read down the article's first two themes: 1. The AI Boom Creates a Data Center Space Crunch 2. Rethinking Power on Every Level  Read the full forecast at Data Center Frontier: The Eight Themes That Will Shape the Data Center Industry in 2024
For this episode of the Data Center Frontier Show podcast, DCF's editors sat down with James Walker, BEng, MSc, CEng, PEng, CEO and board member of Nano Nuclear Energy Inc., and Jay Jiang Yu, Nano Nuclear Energy's founder, executive chairman and president, for a discussion regarding industry news and technology updates surrounding small modular reactor (SMR) and microreactor nuclear onsite power generation systems for data centers. James Walker is a nuclear physicist and was the project lead and manager for constructing the new Rolls-Royce Nuclear Chemical Plant; he was the UK Subject Matter Expert for the UK Nuclear Material Recovery Capabilities, and was the technical project manager for constructing the UK reactor core manufacturing facilities. Walker has extensive experience in engineering and project management, particularly within nuclear engineering, mining engineering, mechanical engineering, construction, manufacturing, engineering design, infrastructure, and safety management. He has executive experience in several public companies, as well as acquiring and re-developing the only fluorspar mine in the U.S. Jay Jiang Yu is a serial entrepreneur and has over 16 years of capital markets experience on Wall Street. He is a private investor in a multitude of companies and has advised many private and public company executives with corporate advisory services such as capital funding, mergers and acquisitions, structured financing, IPO listings, and other business development services. He is a self-taught private investor whose relentless passion for international business has helped him develop key, strategic and valuable relationships throughout the world. Yu leads the corporate structuring, capital financings, executive level recruitment, governmental relationships and international brand growth of Nano Nuclear Energy Inc. 
Previously, he worked as an analyst as part of the Corporate & Investment Banking Division at Deutsche Bank in New York City. Here's a timeline of key points discussed during the podcast: 0:22 - Nano Nuclear Energy Expert Introductions 1:38 - Topic Set-up Re: DCF Senior Editor David Chernicoff's recent data center microreactor and SMR explorations. 1:59 - How microreactors might impact the data center industry. (Can time-to-market hurdles be shrunk?) 2:20 - Chernicoff begins the interview with James and Jay. How the NuScale project difficulties in the SMR segment resulted in the DoD pulling back on preliminary microreactor contracts in Alaska due to market uncertainties directly related to NuScale.  3:23 - Perspectives on NuScale and nuclear power. 4:21 - James Walker on NuScale vs. microreactor prospects:  "They have a very good technology. They're still the only licensed company out there, and they probably will bounce back from this. It's not good optics when people are expecting product to come out of the market. And NuScale was to be the first, but market conditions and the structure of SPACs and the lack of U.S. infrastructure can all complicate what they want to do. Half the reason for them taking so long is because the infrastructure was not in place to support what they wanted to do.  But even hypothetically, even if the SMR market, as an example, was to collapse, microreactors are really targeting a very different area of the market. SMRs are looking to power cities and big things like that. Microreactors, you're looking at mine sites, charging stations, free vehicles, disaster relief areas, military bases, remote habitation, where they principally fund all their energy using diesel. It's kind of hitting a different market. So even if the SMR market goes away, there's still a huge, tremendous upside, potential untapped market in the microreactor space." 
5:39 - DCF Editor in Chief Matt Vincent asks, "What are the pros and cons of the prospects for microreactors versus what we're commonly thinking about in terms of SMR for data centers?" 5:51 - Nano Nuclear's James Walker responds:  "I would start with the advantages of microreactors over SMR. It's smaller, it'll be cheaper, it'll be safer, it'll be more deployable, you'll have far more economies of scale of producing hundreds of these things. They're easier to decommission, remove, they're easier to take apart.  I mean, logistically, shipping these things around the world as if they were diesel generators is a very feasible prospect. Opex cost will be far lower. Personnel that need to be involved in the day to day physical operation will be negligible.  Where the disadvantage of a microreactor is, is that SMRs would provide a cheaper form of electricity. But as SMRs are providing for cities, microreactors are more for remote locations, remote industrial projects, remote data centers, those kinds of things.  You're really competing with sort of the high costs of remote diesel.  As an example, we were speaking with some Canadian government officials and they were saying [with] some of their remote habitations, they can have a community of 800 people, but it still costs $10 million US in fuel alone, ignoring all of the logistical costs of bringing that fuel in on a daily basis, just to power those remote communities that have no possibility of being hooked up to a grid because it's too far.  And that would be the same for all sorts of things, like if you want a remote data center, remote or mining operations, remote industrial projects, oil and gas things, then microreactors aren't really competing with SMRs on cost." 7:33 - Data Center Frontier's David Chernicoff asks: "We're a data center publication, so that obviously is a lot of interest to us, and you pointed out how diesel is the primary methodology for backup power for data centers.  
I realize no one has actually shipped a microreactor yet in this form factor. But one of the advantages, for example, that comes from Project PELE from the US DoD was the decision to standardize on Tristructural Isotropic (TRISO) fuel so that for anybody building one, now, the whole issue of building infrastructure to provide the fuel is significantly simplified.  Realistically (and obviously we're asking you to make a projection here, but), when you're able to deliver microreactors at any sort of scale, will they be competitive with diesel generators in the data center space? And I would also allow for you to say, well, diesel generators also have to deal with all the emissions issues, environmental concerns, greenhouse gases, et cetera, that are not issues with a containerized nuclear power plant. So will there be a realistic model there?" 8:45 - James Walker compares the financing costs of diesel generators vs. microreactors. 9:28 - Walker offers this forecast: "With competing with diesel generators, once the infrastructure [for nuclear] is built back up, and you have deconversion facilities and enrichment facilities able to produce High-Assay Low-Enriched Uranium (HALEU) fuel, and companies are able to source this stuff very readily, the capital costs come down markedly. And that'll be the same for people like NuScale. Then there'll be an optimization period, typically, I would expect over an eight-year period of launch. So, say microreactors launch in 2030, nearing 2040, I believe the cost will be competitive with diesel by that point. Because the optimization will kick in, the infrastructure will all be in place. And the economies of scale over which these things are being produced means that, yes, you'll essentially have a nuclear battery that can compete with diesel, that can give you 15 years of clean energy, at a cheaper rate. That's what the projections show currently." 
10:31 - Discussion point clarifying that nuclear microreactors for battery backup are positioned to replace diesel generation, as distinct from SMR power plant options. 12:00 - Walker explains how the power range of microreactors can vary. SMRs will give you 100 MW of power for enormous data centers and AI, but microreactors allow for data centers to be sited anywhere. If more power for a larger facility is needed, multiple microreactors can feed into the microgrid at the location. 12:50 - Nano Nuclear's Jay Jiang Yu notes, "We've been contacted by Bitcoin mining companies as well, because they want to actually power their data centers in cold environments like Alaska. We've been contacted many times, actually, and there is like a trending topic on 'Bitcoin nuclear.'"  13:28 - Regarding microreactors being employed in conjunction with microgrids, DCF's Chernicoff asks: "Do you see this eventually being sort of a package deal -- not just for data centers (obviously data centers will be a big consumer of this) -- but for deployable microgrids where you have battery power, microreactors providing primary power sources, integrating the microgrid with the local utility grids to allow for providing power back to the grid in times of need, pull power from the grid when it's cheap, that kind of whole microgrid active partner model?" 14:19 - Walker holds forth on nuclear investment stakes, and where microreactor and microgrid technology fits in. 16:16 - On the compactness of microreactors, occupying less than an acre. 17:33 - Asking again about the US DoD's Project PELE, how microreactors were instrumental, and what the project's implications might be for data centers. 18:14 - Walker explains how Project PELE was a microreactor program developed by the US DoD to create a 1.5 megawatt electric microreactor to serve the US military in a wider capacity in remote areas, such as Iraq or Afghanistan, where forces have had to rely entirely on diesel power generation.  
Walker adds, "Project PELE, even though it began as a military thing, is probably going to have enormous benefits for the wider microreactor market, because there's a lot of development work that can feed into and inform commercial and civil designs." 19:58 - DCF's Chernicoff notes: "I presume that one of the biggest factors that PELE brought
For this episode of the Data Center Frontier Show podcast, we sit down with Brian Kennedy, Director of Business Development and Marketing at Natron Energy. As recounted by Kennedy in the course of our talk, Colin Wessells founded Natron Energy as a Stanford PhD student in 2012. His vision in building the company, which started in a garage in Palo Alto, was to deliver ultra-safe, high-power batteries.  As stated on the company's website, "After countless hours of development with an ever expanding team of scientists and engineers, Natron now operates a state of the art pilot production line for sodium-ion batteries in Santa Clara, California." The company notes that most industrial power utilizes decades-old, more environmentally hazardous battery technology such as lead-acid and lithium-ion.  In contrast, Natron says its "revolutionary sodium-ion battery leverages Prussian Blue electrode materials to deliver a high power, high cycle life, completely fire-safe battery solution without toxic materials, rare earth elements, or conflict minerals." In 2020, Natron's product became the world's first sodium-ion battery to achieve a UL 1973 listing, and commercial shipments to customers in the data center, forklift, and EV fast-charging markets soon began.  Natron notes that its technology leverages standard, existing li-ion manufacturing techniques, allowing the company to scale quickly. With U.S. and Western-based supply chain and factory agreements in place, Natron says it saw its manufacturing capacity increase 200x in 2022.  In the course of the podcast discussion, Natron's Kennedy provides an update on Natron's data center industry doings this year and into next year. 
Here's a timeline of key points discussed: :29 - 7x24 Fall Conference Memories :51 - Teeing Up Sodium Ion 1:18 - Talking Pros and Cons, Sustainability 2:15 - Handing It Over to Brian 2:30 - Background on Natron Energy and founder/CEO Colin Wessells 2:55 - Background on Sodium Ion Technology 3:11 - Perfecting a New Sodium Ion Chemistry and Manufacturing with 34 International Patents In Play 3:28 - The Prominent Feature of Sodium-Ion Technology Is Its Inherent Safety; Eliminates Risk of Thermal Runaway 3:51 - U.S. Government ARPA-E Advanced Technology Grants Have Been Pivotal Funding for Natron 4:13 - Sodium Ion Battery Technology Comparison and Value Proposition 5:28 - How Often Is A Data Center's Battery Punctured? Ever Seen a Forklift Driven Through One? 6:10 - On The Science of the Natron Cell's Extremely High Power Density, Fast Discharge and Recharge 6:55 - Comparing Sodium-Ion to Most of the Lithium Chemistries 7:25 - The Meaning of UL Tests 8:00 - Natron Has Published Unredacted UL Test Results 8:35 - On the Longevity of Sodium Ion Batteries 9:51 - "There's No Maintenance Involved." 10:18 - Natron Blue Rack: Applications 10:52 - How Natron Is In the Process of Launching Three Standard Battery Cabinets 11:20 - Performance Enhancements Will Take Standard Data Center Cabinets "Well North" of 250 kW 11:45 - Though Data Centers are Its Largest Market, Natron Also Serves the Oil and Gas Peak Load Shaving and Industrial Spaces  12:21 - Sustainability Advantages 12:51 - ESG Is About More Than Just Direct Emissions 13:15 - The Importance of Considering the Sourcing and Mining of Battery Elements 14:09 - "The Fact That You May Be Pushing [Certain] Atrocities Up the Supply Chain Where You Can't See Them, Doesn't Make It OK" 14:34 - Notes On Supply Chain Security with Secure, U.S.-Based Manufacturing 15:45 - Wrapping Up: Global UPS Manufacturer Selects Natron Battery Cabinet; Looking Ahead to 2024. 
Here are links to some related DCF articles: Will Battery Storage Solutions Replace Generators? New NFPA Battery Standard Could Impact Data Center UPS Designs Microsoft Taps UPS Batteries to Help Add Wind Power to Ireland’s Grid Data Center of the Future: Equinix Test-Drives New Power, Cooling Solutions Corscale Will Use Nickel-Zinc Batteries in New Data Center Campus
In this episode of the Data Center Frontier Show podcast, Matt Vincent, Editor-in-Chief of Data Center Frontier, and Steven Carlini, Vice President of Innovation and Data Centers for Schneider Electric, break down the challenges of AI for each physical infrastructure category including power, cooling, racks, and software management.
For this episode of the Data Center Frontier Show podcast, DCF's Editor in Chief Matt Vincent chats with Brian Green, EVP Operations, Engineering and Project Management, for EdgeConneX. The discussion touches on data center operations, sustainable implementations/deployments, renewable power strategies, and ways to operationalize renewables in the data center. Under Brian's leadership, the EdgeConneX Houston data center completed a year-long project measuring the viability of 24/7 carbon-free energy utilizing AI-enabled technology. With this approach, EdgeConneX ensured the data center is powered with 100% renewable electricity, and proved that even if the power grid operates on fossil-fueled electricity generation, real-time hourly increments can be applied to new and existing data centers. As a result, for every given hour, EdgeConneX and its customers can operate throughout the year without emitting any CO2 with zero reliance on fossil standby generation during dark or cloudy periods. This innovative program will be duplicated at other EdgeConneX facilities globally. Another real-world example discussed is related to a facility where the local community complained about the noise of the fans. Brian's team worked to improve the noise level by changing fan speeds, and as a result, the data center and the local community realized multiple benefits, including enhanced community relations by removing the noise disturbance, increased efficiencies, and reduced power usage, a big cost-saver for the data center. Along the way, Brian explains how he and the EdgeConneX team are big believers in the company's motto: Together, we can innovate for good.
For this special episode of the DCF Show podcast, Data Center Frontier's founder and present Editor at Large, Rich Miller, returns for a visit. Tune in to hear Rich engage with the site's daily editors, Matt Vincent and David Chernicoff, in a discussion covering a range of current data center industry news and views. Topics include: Dominion Energy's transmission line expansion in Virginia; Aligned Data Centers' market exit in Maryland over a rejected plan for backup diesel generators; an update on issues surrounding Virginia's proposed Prince William Digital Gateway project; Rich's take on the recent Flexential/Cloudflare outages in Hillsboro, Oregon; and more. Here's a timeline of key points discussed on the podcast: :10 - For those concerned that the inmates might be running the asylum, the doctor is now in: Rich discusses his latest beat as DCF Editor at Large. 1:30 -  We look at the power situation in Northern Virginia as explained by one of Rich's latest articles, vis a vis what's going to be required to support growth already in the pipeline, in the form of contracts that Dominion Energy has for power. "Of course, the big issue there is transmission lines," adds Miller. "That's the real constraint on data center power delivery right now. You can build local lines and even substations much more quickly than you can transmission at the regional level. That's really where the bottlenecks are right now." 3:00 - Senior Editor David Chernicoff asks for Rich's take on Aligned Data Centers' recent market exit in Maryland, related to its rejected plan for backup diesel generators. "Is this really going to be the future of how large-scale data center projects are going to have to be approached, with more focus put on dealing with permission to build?" wonders Chernicoff, adding, "And are we going to see a more structured data center lobbying effort on the local level beyond what, say, the DCC [Data Center Coalition] currently does?" 
5:19 - In the course of his response, Rich says he thinks we'll see just about every data center company realizing the importance of doing their research on the full range of permissions required to build these megascale campuses, which are only getting bigger. 6:12 - Rich adds that he thinks the situation in Maryland illustrates how it's important for data center developers to step back for a strategic discussion regarding depth of planning. "The first thing to know," he points out, "is that Maryland was eager to have the data center industry. They specifically passed incentives that would make them more competitive with Virginia. They saw that Northern Virginia was getting super crowded...and they thought, we've got lots of resources up here in Frederick County, let's see if we can bring some of these folks across the river. And based on that, the Quantum Loophole team found this site." 8:20 - Rich goes on to note how "the key element for a lot of data centers is fiber, and a key component, both strategically and from an investment perspective [in Maryland] is that Quantum Loophole needed to have a connection to the Northern Virginia data center cluster in Ashburn, in Data Center Alley - which is not that far as the crow flies, but to get fiber there, they wound up boring a tunnel underneath the Potomac River, an expensive and time-consuming project that they're in the late stages of now. That's a big investment, and all that was done with the expectation that Maryland wanted data centers." 10:26 - Rich summarizes how the final ruling for Aligned in Maryland "was, effectively, that you can have up to 70 MW but beyond that, you have to follow this other process [where] you're more like a power plant than a data center with backup energy." He adds, "I think one of the issues was [in determining], will all of this capacity ever be turned on all at once? Obviously with diesel generators, that's a lot of emissions. 
So the air quality boards are wrestling with, on the one hand, having a large company that wants to bring in a lot of investment, a lot of jobs; the flip side is, it's a lot of diesel at a time when we're starting to see the growing effects of climate change, and everybody's trying to think about how we deal with fossil fuel generation. The bottom line is, Aligned pulled out and said, this is just not working. The Governor of Maryland, understanding the issues at stake and the amount of investment that has already been brought there, says that he is working with the legislature to try to 'create some regulatory predictability' for the data center industry. Because it used to be that 70 MW was a lot of capacity, but with the way the industry is going right now, that's not so much." 12:06 - In response to David's reiterated question as to whether the data center industry will now increasingly have to rethink its whole approach to permitting prior to starting construction, Rich notes, "There's a lot of factors that go into site selection, you're looking at land, fiber, power. The regulatory environment around it, whether there's going to be local resistance, has also become part of the conversation, and rightfully so. One of the things that's definitely going to happen is that data centers have to think hard about their impact on the communities where they're locating, and try to develop sensible policies about how they, for lack of a better term, can be good neighbors, and fit into the communities where they're operating." 14:20 - Taking the discussion back across state lines, Editor in Chief Matt Vincent asks for an update on Rich's thoughts surrounding contentious plans by QTS and Compass Datacenters for a proposed new campus development, dubbed the Prince William Digital Gateway, near a Civil War historic site in Prince William County, Virginia. "This is one of the most unique proposals in the history of the data center industry," explains Miller. 
"It would be the largest data center project ever proposed. And of course, it's become an enormous political hot potato. It's the first time where we've really seen data centers on the ballot in local elections." 20:41 - After hearing some analysis of the business and political angles in Prince William County, Vincent asks whether Miller thinks the PW Digital Gateway project's future is in doubt, or if it's just that we don't know what's going to happen? 22:50 - Vincent asks Miller for his take on the recent data center outage affecting Flexential and Cloudflare, as written up for DCF by Chernicoff, particularly in the area of incident reports and their usefulness. In the course of responding to a follow-on point by David, Rich says, "I think the question for both levels of providers is, are you delivering on your promises, and what do you need to do to ensure that you can? Let's face it, stuff breaks, stuff happens. The data center industry, I think, is fascinating because people really think about failure modes and what happens, and customers need to do the same." 32:14 - To conclude, Vincent asks for Miller's thoughts on the AI implications of Microsoft's cloud-based supercomputer, running Nvidia H100 GPUs, ranking third on the world's top 500 supercomputers list, as highlighted at the recent SC23 show in Denver. Here are links to some related DCF articles: -- Dominion: Virginia’s Data Center Cluster Could Double in Size -- Dominion Resumes New Connections, But Loudoun Faces Lengthy Power Constraints -- DCF Show: Data Center Diesel Backup Generators In the News -- Cloudflare Outage: There’s Plenty Of Blame To Go Around -- Microsoft Unveils Custom-Designed Data Center AI Chips, Racks and Liquid Cooling
Ten years into the fourth industrial revolution, we now live in a “datacentered” world where data has become the currency of both business and personal value. In fact, the value proposition for every Fortune 500 company involves data. And now, seemingly out of nowhere, artificial intelligence has come along and is looking to be one of the most disruptive changes to digital infrastructure that we’ve ever seen. In this episode of the Data Center Frontier Show podcast, Matt Vincent, Editor-in-Chief of Data Center Frontier, talks to Sean Farney, Vice President for Data Center Strategy for JLL Americas, about how AI will impact data centers.
The Legend Energy Advisors (Legend EA) vision of energy usage is one in which all companies have real-time visibility into related processes and factors such as equipment efficiency, labor intensity, and consumption of power and other energy resources across their operations. During this episode of the Data Center Frontier Show podcast, the company's CEO and founder, Dan Crosby, and his associate, Ralph Rodriguez, RCDD, discussed the Legend Analytics platform, which offers commodity risk assessment, infrastructure services, and real-time metering for energy usage and efficiency. The firm contends that only through such "total transparency" will its clients be able to "radically impact" energy and resource consumption intensity at every stage of their businesses. "My background was in construction and energy brokerage for a number of years before founding Legend," said Crosby. "The basis of it was helping customers understand how they're using energy, and how to use it better so that they can actually interact with markets more proactively and intelligently." "That helps reduce your carbon footprint in the process," he added. "Our mantra is: it doesn't matter whether you're trying to save money or save the environment, you're going to do both of those things through efficiency -- which will also let you navigate markets more efficiently." Legend EA's technology empowers the firm's clients to integrate all interrelated energy components of their businesses, while enabling clear, coherent communication across them. This process drives transparency and accountability on “both sides of the meter,” as reckoned by the company, the better to eliminate physical and financial waste. As stated on the firm's website, "This transparency drives change from the bottom up, enabling legitimate and demonstrable changes in enterprises’ environmental and financial sustainability." 
Legend Analytics is offered as a software as a service (SaaS) platform, with consulting services tailored to the needs of individual customers, who include industrial firms and data center operators, in navigating the power market. Additionally, the Ledge device, a network interface card (NIC), was recently introduced by Legend EA as a way to securely gather energy consumption data from any system in an organization and bring it to the cloud in real-time. Here's a timeline of key points discussed on the podcast: 1:15 - Crosby details the three interconnected parts of his firm's service: commodity risk assessment, infrastructure services, and the Legend Analytics platform for understanding energy usage and efficiency. 2:39 - Crosby explains how the Legend Analytics platform works in the case of data center customers, by providing capabilities such as real-time metering at various levels of a facility, as well as automated carbon reporting. 4:46 - The discussion unpacks how the platform is offered as a SaaS, and includes consulting services tailored to each customer's needs. 7:49 - Notes on how the Legend Analytics platform can gather data from disparate systems and consolidate it into one dashboard, allowing for AI analysis and identification of previously unknown issues. 10:25 - Crosby reviews the importance of accurate and real-time emissions tracking for ESG reporting, and provides examples of how the Legend Analytics platform has helped identify errors and save costs for clients. 12:23 - Crosby explains how the company's new, proprietary NIC device, dubbed the Ledge, can securely gather data from any system and bring it to their cloud in real time, lowering costs and improving efficiency. 23:54 - Crosby touches on issues including challenges with power availability; trends in building fiber to power; utilizing power capacity from industrial plants; and on-site generation for enabling stable voltage. 
For this episode of the Data Center Frontier Show Podcast, we sat down for a chat with Andy Pernsteiner, Field CTO of VAST Data. The VAST Data Platform embodies a revolutionary approach to data-intensive AI computing which the company says serves as "the comprehensive software infrastructure required to capture, catalog, refine, enrich, and preserve data" through real-time deep data analysis and deep learning. In September, VAST Data announced a strategic partnership with CoreWeave, whereby CoreWeave will employ the VAST Data Platform to build a global, NVIDIA-powered accelerated computing cloud for deploying, managing and securing hundreds of petabytes of data for generative AI, high performance computing (HPC) and visual effects (VFX) workloads. That announcement followed news in August that Core42 (formerly G42 Cloud), a leading cloud provider in the UAE, and VAST Data had joined forces in an ambitious strategic partnership to build a central data foundation for a global network of AI supercomputers that will store and learn from hundreds of petabytes of data. This week, VAST Data has announced another strategic partnership with Lambda, an Infrastructure-as-a-Service and compute provider for public and private NVIDIA GPU infrastructure, that will enable a hybrid cloud dedicated to AI and deep learning workloads. The partners will build an NVIDIA GPU-powered accelerated computing platform for Generative AI across both public and private clouds. Lambda selected the VAST Data Platform to power its On-Demand GPU Cloud, providing customer GPU deployments for LLM training and inference workloads. The Lambda, CoreWeave and Core42 announcements represent three burgeoning AI cloud providers within the short space of three months who've chosen to standardize with VAST Data as the scalable data platform behind their respective clouds. 
Such key partnerships position VAST Data to innovate through a new category of data infrastructure that will build the next-generation public cloud, the company contends. As Field CTO at VAST Data, Andy Pernsteiner is helping the company's customers build, deploy, and scale some of the world’s largest and most demanding computing environments. Andy has spent the past 15 years focused on supporting and building large-scale, high-performance data platform solutions. As recounted by his biographical statement, from his humble beginnings as an escalations engineer at pre-IPO Isilon, to leading a team of technical ninjas at MapR, Andy has consistently been on the front lines of solving some of the toughest challenges that customers face when implementing big data analytics and new-generation AI technologies. Here's a timeline of key points discussed on the podcast: 0:00 - 4:12 - Introducing the VAST Data Platform; recapping VAST Data's latest news announcements; and introducing VAST Data's Field CTO, Andy Pernsteiner. 4:45 - History of the VAST Data Platform. Observations on the growing "stratification" of AI computing practices. 5:34 - Notes on implementing the evolving VAST Data managed platform, both now and in the future. 6:32 - Andy Pernsteiner: "It won't be for everybody...but we're trying to build something that the vast majority of customers and enterprises can use for AI/ML and deep learning." 07:13 - Reading the room, when very few inside have heard of "a GPU..." or know what its purpose and role is inside AI/ML infrastructure. 07:56 - Andy Pernsteiner: "The fact that CoreWeave exists at all is proof that the market doesn't yet have a way of solving for this big gap between where we are right now, and where we need to get to, in terms of generative AI and in terms of deep learning." 08:17 - How VAST started as a data storage platform, and was extended to include an ambitious database geared for large-scale AI training and inference. 
09:02 - How another aspect of VAST is consolidation, "considering what you'd have to do to stitch together a generative AI practice in the cloud." 09:57 - On how the biggest customer bottleneck now is partly the necessary infrastructure, but also partly the necessary expertise. 10:25 - "We think that AI shouldn't just be for hyperscalers to deploy" - and how CoreWeave fits that model. 11:15 - Additional classifications of VAST Data customers are reviewed. 12:02 - Andy Pernsteiner: "One of the unique things that CoreWeave does is they make it easy to get started with GPUs, but also have the breadth and scale to achieve a production state - versus deploying at scale in the public cloud." 13:15 - VAST Data sees itself as bridging the gap between on-prem and cloud deployments. 13:35 - Can we talk about NVIDIA for a minute? 14:13 - Notes on NVIDIA's GPUDirect Storage, which VAST Data is one of only a few vendors to enable. 15:10 - More on VAST Data's "strong, fruitful" years-long partnership with NVIDIA. 15:38 - DCF asks about the implications of recent reports that NVIDIA has asked about leasing data center space for its DGX Cloud service. 16:39 - Bottom line: NVIDIA wants to give customers an easy way to use their GPUs. 18:13 - Is VAST Data being positioned as a universally adopted AI computing platform? 19:22 - Andy Pernsteiner: "The goal was always to evolve into a company and into a product line that would allow the customer to do more than just store the data." 20:24 - Andy Pernsteiner: "I think that in the space that we're putting much of our energy into, there isn't really a competitor." 21:12 - How VAST Data is unique in its support of both structured and unstructured data. 22:08 - Andy Pernsteiner: "In many ways, what sets companies like CoreWeave apart from some of the public cloud providers is they focused on saying, we need something extremely high performance for AI and deep learning. 
The public cloud was never optimized for that - they were optimized for general purpose. We're optimized for AI and deep learning, because we started from a place where performance, cost and efficiency were the most important things." 23:03 - Andy Pernsteiner: "We're unique in this aspect: we've developed a platform from scratch that's optimized for massive scale, performance and efficiency, and it marries very well with the deep learning concept." 24:20 - DCF revisits the question of bridging the perceptible gap in industry knowledge surrounding AI infrastructure readiness. 25:01 - Comments on the necessity of VAST partnering with organizations to build out infrastructure. 26:12 - Andy Pernsteiner: "It's very fortunate that Nvidia acquired Mellanox in many ways, because it gives them the ability to be authoritative on the networking space as well. Because something that's often overlooked when building out AI and deep learning architectures is that you have GPUs and you have storage, but in order to feed it, you need a network that's very high speed and very robust, and that hasn't been the design for most data centers in the past." 27:43 - Andy Pernsteiner: "One of the unique things that we do, is we can bridge the gap between the high performance networks and the enterprise networks." 28:07 - Andy Pernsteiner: "No longer do people have to have separate silos for high performance and AI and for enterprise workloads. They can have it in one place, even if they keep the segmentation for their applications, for security and other purposes. We're the only vendor that I'm aware of that can bridge the gaps between those two worlds, and do so in a way that lets customers get the full value out of all their data." 28:58 - DCF asks: Armed with VAST Data, is a company like CoreWeave ready to go toe-to-toe with the big hyperscale clouds -  or is that not what it's about? 
30:38 - Andy Pernsteiner: "We have an engineering organization that's extremely large now that is dedicated to building lots of new applications and services. And our focus on enabling these GPU cloud providers is one of the top priorities for the company right now." 32:26 - DCF asks: Does a platform like VAST Data's address the power availability dilemma that's going to be involved with data centers' widespread uptake of AI computing? Here are links to some recent related DCF articles:
Nvidia is Seeking to Redefine Data Center Acceleration
Summer of AI: Hyperscale, Colocation Data Center Infrastructure Focus Tilts Slightly Away From Cloud
AI and HPC Drive Demand for Higher Density Data Centers, New As-a-Service Offerings
How Intel, AMD and Nvidia are Approaching the AI Arms Race
Nvidia is All-In on Generative AI
For the latest episode of the Data Center Frontier Show Podcast, editors Matt Vincent and David Chernicoff sat down with Mike Jackson, Global Director of Product, Data Center and Distributed IT Software for Eaton. The purpose of the talk was to learn about the company's newly launched BrightLayer Data Centers suite, and how it covers the traditional DCIM use case - and a lot more. According to Eaton, the BrightLayer Data Centers suite's digital toolset enables facilities to efficiently manage an increasingly complex ecosystem of IT and OT assets, while providing full system visibility into data center white space, grey space and/or distributed infrastructure environments. "We're looking at a holistic view of the data center and understanding the concepts of space, power, cooling, network fiber," said Jackson. "It starts with the assets and capacity, and understanding: what do you have, and how is it used?" Here's a timeline of points discussed on the podcast: 0:39 - Inquiring about the BrightLayer platform and its relevance to facets of energy, sustainability, and design in data centers. 7:57 - Explaining the platform's "three legs of the stool":  Data center performance management, electrical power monitoring, and distributed IT performance management. Jackson describes how all three elements are part of one code base. 10:42 - Jackson recounts the BrightLayer Data Center suite's beta launch in June and the product's official, commercial launch in September; whereby, out of the gate, over 30 customers are already actively using the platform across different use cases. 13:02 - Jackson explains how the BrightLayer Data Center suite's focus on performance management and sustainability is meant to differentiate the platform from other DCIM systems, in attracting both existing and new Eaton customers. 
17:16 - Jackson observes that many customers are being regulated or pushed into sustainability goals, and how the first step for facilities in this situation is measuring and tracking data center consumption. He further contends that the BrightLayer tools can help reduce data center cooling challenges while optimizing workload placement for sustainability and cost savings. 20:11 - Jackson talks about the importance of integration with other software and data center processes, and the finer points of open API layers and out-of-the-box integrations. 22:26 - In terms of associated hardware, Jackson reviews the Eaton EnergyAware UPS series' ability to proactively manage a data center's power drop via handling utility and battery sources at the same time. He further notes that many customers are now expressing interest in microgrid technology and use of alternative energy sources. 27:21 - Jackson discusses the potential for multitenant data centers to use smart hardware and software to offset costs and improve efficiency, while offering new services to customers and managed service providers.