Cloud Data Centers: Core Concepts - Part 3
Description
Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!
Lois: Hello and welcome to the Oracle University Podcast! I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Team Lead: Editorial Services.
Nikita: Hi everyone! For the last two weeks, we’ve been talking about different aspects of cloud data centers. In this episode, Orlando Gentil, Principal OCI Instructor at Oracle University, joins us once again to discuss how virtualization, through hypervisors, virtual machines, and containers, has transformed data centers.
Lois: That’s right, Niki. We’ll begin with a quick look at the history of virtualization and why it became so widely adopted. Orlando, what can you tell us about that?
Orlando: To truly grasp the power of virtualization, it's helpful to understand its journey from its humble beginnings with mainframes to its pivotal role in today's cloud computing landscape. It might surprise you, but virtualization isn't a new concept. Its roots go back to the 1960s with mainframes.
In those early days, the primary goal was to isolate workloads on a single powerful mainframe, allowing different applications to run without interfering with each other. As we moved into the 1990s, the challenge shifted to underutilized physical servers.
Organizations often had numerous dedicated servers, each running a single application, leading to significant waste of computing resources. This led to the emergence of virtualization as we know it today, primarily from the 1990s to the 2000s.
The core idea here was to run multiple isolated operating systems on a single physical server. This innovation dramatically improved resource utilization and laid the technical foundation for cloud computing, enabling the scalable and flexible environments we rely on today.
Nikita: Interesting. So, from an economic standpoint, what pushed traditional data centers to change and opened the door to virtualization?
Orlando: In the past, running applications often meant running them on dedicated physical servers. This led to a few significant challenges.
First, more hardware purchases. Every new application, every new project often required its own dedicated server. This meant constantly buying new physical hardware, which quickly escalated capital expenditure.
Secondly, and hand-in-hand with more servers, came higher power and cooling costs. Each physical server consumed power and generated heat, necessitating significant investment in electricity and cooling infrastructure. The more servers, the higher these operational expenses became.
And finally, a major problem was unused capacity. Despite investing heavily in these physical servers, it was common for them to run well below their full capacity. Applications typically didn't need 100% of a server's resources all the time.
This meant valuable compute power, memory, and storage sat idle, effectively wasting resources and diminishing the return on investment from those expensive hardware purchases. These economic pressures became a powerful incentive to find more efficient ways to utilize data center resources, setting the stage for technologies like virtualization.
Lois: I guess we can assume virtualization emerged as a financial game-changer. So, what kind of economic efficiencies did virtualization bring to the table?
Orlando: From a CapEx or capital expenditure perspective, companies spent less on servers and data center expansion. From an OpEx or operational expenditure perspective, fewer machines meant lower electricity, cooling, and maintenance costs.
It also sped up provisioning. Spinning up a new VM took minutes, not days or weeks. That improved agility and reduced the operational workload on IT teams. It also created a more scalable, cost-efficient foundation, which made virtualization not just a technical improvement, but a financial turning point for data centers.
This economic efficiency is exactly what cloud providers like Oracle Cloud Infrastructure are built on, using virtualization to deliver scalable, pay-as-you-go infrastructure.
Nikita: Ok, Orlando. Let’s get into the core components of virtualization. To start, what exactly is a hypervisor?
Orlando: A hypervisor is a piece of software, firmware, or hardware that creates and runs virtual machines, also known as VMs.
Its core function is to allow multiple virtual machines to run concurrently on a single physical host server. It acts as a virtualization layer, abstracting the physical hardware resources like CPU, memory, and storage, and allocating them to each virtual machine as needed, ensuring they can operate independently and securely.
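As a concrete illustration of that abstraction and allocation, here is a minimal sketch, assuming a Linux host running the KVM hypervisor with the libvirt-python bindings installed; the connection URI and the printed fields are illustrative, not something prescribed in the episode.

```python
import libvirt  # libvirt-python bindings; talk to the hypervisor's management API

# Connect to the local KVM/QEMU hypervisor (URI is an assumption for a typical Linux host).
conn = libvirt.open("qemu:///system")

# The physical resources the hypervisor is abstracting: getInfo() returns
# [arch, memory (MB), CPUs, MHz, NUMA nodes, sockets, cores, threads].
info = conn.getInfo()
print(f"Host {conn.getHostname()}: {info[2]} CPUs, {info[1]} MB RAM")

# Each VM (libvirt calls it a domain) and the slice of resources allocated to it.
for dom in conn.listAllDomains():
    state, max_mem_kb, _, vcpus, _ = dom.info()
    running = "running" if dom.isActive() else "stopped"
    print(f"  VM {dom.name()}: {vcpus} vCPUs, {max_mem_kb // 1024} MB, {running}")

conn.close()
```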
Lois: And are there types of hypervisors?
Orlando: There are two primary types of hypervisors. Type 1 hypervisors, often called bare metal hypervisors, run directly on the host server's hardware.
This means they interact directly with the physical resources, offering high performance and security. Examples include VMware ESXi, Oracle VM Server, and KVM on Linux. They are commonly used in enterprise data centers and cloud environments.
In contrast, type 2 hypervisors, also known as hosted hypervisors, run on top of an existing operating system like Windows or macOS. They act as an application within that operating system. Popular examples include VirtualBox, VMware Workstation, and Parallels. These are typically used for personal computing or development purposes, where you might run multiple operating systems on your laptop or desktop.
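For the type 2 case, here is a hedged sketch of driving VirtualBox from Python by shelling out to its VBoxManage command-line tool; the VM name, OS type, and resource sizes are illustrative assumptions, not values from the episode.

```python
import subprocess

def vbox(*args: str) -> str:
    """Run a VBoxManage command (VirtualBox must be installed) and return its output."""
    result = subprocess.run(["VBoxManage", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout

# Register a new VM definition with the hosted hypervisor (name and OS type are illustrative).
vbox("createvm", "--name", "demo-vm", "--ostype", "Ubuntu_64", "--register")

# Carve out a slice of the host operating system's resources: 2 GB RAM and 2 virtual CPUs.
vbox("modifyvm", "demo-vm", "--memory", "2048", "--cpus", "2")

# List every VM this type 2 hypervisor knows about.
print(vbox("list", "vms"))
```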
Nikita: We’ve spoken about the foundation provided by hypervisors. So, can we now talk about the virtual entities they manage: virtual machines? What exactly is a virtual machine and what are its fundamental characteristics?
Orlando: A virtual machine is essentially a software-based virtual computer system that runs on a physical host computer. The magic happens with the hypervisor. The hypervisor's job is to create and manage these virtual environments, abstracting the physical hardware so that multiple VMs can share the same underlying resources without interfering with each other.
Each VM operates like a completely independent computer with its own operating system and applications.
Lois: What are the benefits of this?
Orlando: Each VM is isolated from the others. If one VM crashes or encounters an issue, it doesn't affect the other VMs running on the same physical host. This greatly enhances stability and security.
A powerful feature is the ability to run different operating systems side-by-side on the very same physical host. You could have a Windows VM, a Linux VM, and even other specialized operating systems, all operating simultaneously.
Consolidating workloads directly addresses the unused capacity problem. Instead of one application per physical server, you can now run multiple workloads, each in its own VM, on a single powerful physical server. This dramatically improves hardware utilization, reducing the need for constant new hardware purchases and lowering power and cooling costs.
And by consolidating workloads, virtualization makes it possible for cloud providers to dynamically create and manage vast pools of computing resources. This allows users to quickly provision and scale virtual servers on demand, tapping into these shared pools of CPU, memory, and storage as needed, rather than being tied to a single physical machine.
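To make the consolidation argument concrete, here is a rough back-of-the-envelope sketch in Python; every number in it is an illustrative assumption, not a figure from the episode.

```python
import math

# Illustrative assumptions: one application per dedicated server, mostly idle.
dedicated_servers = 20      # servers in the pre-virtualization data center
avg_utilization = 0.10      # each dedicated server runs at ~10% of capacity
target_utilization = 0.70   # how full we're willing to run each virtualized host

# Total work is only a couple of "servers' worth" once the idle capacity is stripped out.
total_demand = dedicated_servers * avg_utilization           # 2.0 servers' worth of work
hosts_needed = math.ceil(total_demand / target_utilization)  # -> 3 hosts

print(f"The same workloads fit on {hosts_needed} virtualized hosts "
      f"instead of {dedicated_servers} dedicated servers.")
```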
Oracle University’s Race to Certification 2025 is your ticket to free training and certification in today’s hottest technology. Whether you’re starting with Artificial Intelligence, Oracle Cloud Infrastructure, Multicloud, or Oracle Data Platform, this challenge covers it all! Learn more about your chance to win prizes and see your name on the Leaderboard by visiting education.oracle.com/race-to-certification-2025. That’s education.oracle.com/race-to-certification-2025.
Nikita: