Hybrid Columnar Compression & Fast Ingest
Description
Welcome to the Oracle University Podcast, the first stop on your cloud journey. During this series of informative podcasts, we’ll bring you foundational training on the most popular Oracle technologies. Let’s get started!
Lois: Hello and welcome to the Oracle University Podcast. I’m Lois Houston, Director of Innovation Programs with Oracle University, and with me is Nikita Abraham, Principal Technical Editor.
Nikita: Hi everyone! In our last episode, we spoke about the 23ai improvements in date and time handling and data storage with Senior Principal Instructor Serge Moiseev. Today, we’re going to discuss the enhancements that have been made to the performance of Hybrid Columnar Compression. We’ll look at how Hybrid Columnar Compression worked prior to 23ai, learn about the changes that have been made, talk about how to use this compression in 23ai, and look at some performance factors. After that, we’ll move on to Fast Ingest, the improvements in 23ai, and how it is managed.
01:15
Lois: Yeah, this is a packed episode and to take us through all this, we have Bill Millar back on the podcast. Bill is a Senior Principal Database & MySQL Instructor with Oracle University. Hi Bill! Thanks for joining us. So, let’s start with how Hybrid Columnar Compression was prior to 23ai. What can you tell us about it?
Bill: We support all kinds of platforms, from Database Enterprise Edition on up to the high-end engineered systems and even Exadata Cloud@Customer. We have four different levels of compression. Two make up warehouse compression: COLUMN STORE COMPRESS FOR QUERY LOW and COLUMN STORE COMPRESS FOR QUERY HIGH. COLUMN STORE COMPRESS FOR QUERY HIGH is the default, unless another compression level is specified. With archive compression, we have COLUMN STORE COMPRESS FOR ARCHIVE LOW and COLUMN STORE COMPRESS FOR ARCHIVE HIGH.
With the Hybrid Columnar Compression warehouse and archive levels, array inserts are compressed immediately. However, some conditions have to be met: it has to be a locally managed tablespace with automatic segment space management, and the compatibility level has to be at least 12.2.0, which is when these values were introduced. There are different compressors used for the compression, hidden from the customer; which one is used just depends on the level that's selected. Notice that with COLUMN STORE COMPRESS FOR QUERY HIGH and FOR ARCHIVE LOW, the zlib compression method is used, whereas if you select ARCHIVE HIGH, it's bzip2. And in 19c, we added Zstandard, which is available for MEMORY COMPRESS FOR CAPACITY HIGH.
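For listeners who want to see what those levels look like in SQL, here is a minimal sketch; the table and column names are made up for illustration, and Hybrid Columnar Compression itself only works on storage that supports it, such as Exadata.

-- Hypothetical tables illustrating the four Hybrid Columnar Compression levels
CREATE TABLE sales_wh_low (id NUMBER, payload VARCHAR2(200))
  COLUMN STORE COMPRESS FOR QUERY LOW;

CREATE TABLE sales_wh_high (id NUMBER, payload VARCHAR2(200))
  COLUMN STORE COMPRESS FOR QUERY HIGH;   -- the default level if none is specified

CREATE TABLE sales_ar_low (id NUMBER, payload VARCHAR2(200))
  COLUMN STORE COMPRESS FOR ARCHIVE LOW;

CREATE TABLE sales_ar_high (id NUMBER, payload VARCHAR2(200))
  COLUMN STORE COMPRESS FOR ARCHIVE HIGH;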
03:30
Nikita: So, what’s happened in 23ai?
Bill: In 23ai, to take advantage of the changes in compression, the compatibility level has to be set to 23.0.0 or higher.
When a table is created or altered with Hybrid Columnar Compression, Zstandard is automatically selected. So it doesn't matter which one of the four levels you pick; that's the compressor that's used. It's set internally, transparent to the user. There is no new SQL syntax that has to be used for the Zstandard compression to be applied.
And the database compatibility mode has to be at 23.0.0 or higher; only then can the Hybrid Columnar Compression storage format use that Zstandard compression. If we already have compressed data blocks in existing tables, they're going to remain in their original format.
04:31
Lois: And are the objects regenerated?
Bill: They might be regenerated if they're rewritten by some other operation. If you want to completely take advantage of the new compression, all you have to do is an ALTER TABLE MOVE, and that's going to trigger the recompression, whereas any newly created tables will use Zstandard by default.
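As a quick illustration, that recompression might look like the following; the table and index names are hypothetical, and this assumes the database compatibility is already at 23.0.0 or higher.

-- Rebuild an existing HCC table so its blocks are rewritten with Zstandard
ALTER TABLE sales_wh_high MOVE;

-- Indexes go unusable after a move and need to be rebuilt
ALTER INDEX sales_wh_high_idx REBUILD;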
05:00
Nikita: What are the performance factors we need to think about, Bill?
Bill: There are some performance factors that we do need to consider: the compression ratio, that is, the amount of space reduction we're going to achieve in storage; the time spent compressing the data; the CPU cost to compress that data; and also the decompression rate, the time spent decompressing the data when we're running queries against it.
05:24
Lois: And not all tables are equal, are they?
Bill: Not all tables are equal. Some might get better performance from a different compression level than others. So we basically have to test our results. There is a Compression Advisor available that you can use to get a recommendation on which compression to use, but only through testing can we really see the benefits of using that compression for an application.
So for best compression, just as in previous versions, the higher the compression level, the more CPU it's going to use, and the more space savings we're going to achieve as we do those direct path inserts. So there's always that cost.
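The Compression Advisor Bill mentions is exposed through the DBMS_COMPRESSION package. A rough sketch of estimating a ratio follows; the scratch tablespace, schema, and table names are all hypothetical, and the parameter order is assumed to match recent releases, so check the package documentation before relying on it.

SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp   PLS_INTEGER;
  l_blkcnt_uncmp PLS_INTEGER;
  l_row_cmp      PLS_INTEGER;
  l_row_uncmp    PLS_INTEGER;
  l_cmp_ratio    NUMBER;
  l_comptype_str VARCHAR2(100);
BEGIN
  -- Estimate the ratio a hypothetical SALES table would get with QUERY HIGH
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    'USERS', 'SALES_OWNER', 'SALES', NULL,
    DBMS_COMPRESSION.COMP_QUERY_HIGH,
    l_blkcnt_cmp, l_blkcnt_uncmp, l_row_cmp, l_row_uncmp,
    l_cmp_ratio, l_comptype_str);
  DBMS_OUTPUT.PUT_LINE('Estimated compression ratio: ' || l_cmp_ratio ||
                       ' using ' || l_comptype_str);
END;
/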
06:20
Did you know that the Oracle University Learning Community regularly holds live events hosted by Oracle expert instructors? Find out how to prepare for your certification exams. Learn about the latest technology advances and features. Ask questions in real time and learn from an Oracle subject matter expert. From Ask Me Anything about certification to Ask the Instructor coaching sessions, you’ll be able to achieve your learning goals for 2024 in no time. Join a live event today and witness firsthand the transformative power of the Oracle University Learning Community. Visit mylearn.oracle.com to get started.
07:01
Nikita: Welcome back! Let’s now move on to the enhancements that have been made to fast ingest. We’ll begin with an overview of fast ingest, how to use it, and the improvements and benefits. And then we’ll look at some features for managing fast ingest. Bill, why don’t you start by defining fast ingest for us?
Bill: Fast ingest, also referred to as deferred inserts, is faster than the traditional approach of processing a single row at a time. It can support high-volume transactions, like from Internet of Things applications, where you have hundreds of thousands of items coming in trying to write to the database.
They're faster because the inserts don't use the traditional buffer cache. They use a buffer that's allocated out of the large pool, and they're later written to disk by the SMCO, the space management coordinator. So instead of going through the buffer cache, they're written into an area of the large pool.
The space management coordinator has helper processes, however many that happens to be, that do the buffering. And as a buffer fills, based on the sizing algorithm, they write those deferred inserts into the database itself.
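For anyone curious about watching that drain activity, a couple of hedged examples follow; the V$MEMOPTIMIZE_WRITE_AREA view and the DBMS_MEMOPTIMIZE_ADMIN flush procedure are assumed here from recent releases, so verify the names against your version's documentation.

-- Check how much of the large-pool ingest buffer fast ingest is using
SELECT * FROM V$MEMOPTIMIZE_WRITE_AREA;

-- Force all buffered deferred inserts to be written to disk immediately
EXEC DBMS_MEMOPTIMIZE_ADMIN.WRITES_FLUSH;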
08:24
Lois: So, do deferred inserts support constraints?
Bill: Deferred writes do support constraints and indexes, just as regular inserts do. However, performance benchmarks that have been done recommend that you disable constraints if you're going to use fast ingest.
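If you do follow that recommendation, the usual pattern is to disable the constraint before the load and re-enable it afterwards; the table and constraint names below are hypothetical.

-- Disable a constraint before a heavy fast ingest load...
ALTER TABLE sensor_events DISABLE CONSTRAINT sensor_events_ck;

-- ...and re-enable (and validate) it once the load is complete
ALTER TABLE sensor_events ENABLE VALIDATE CONSTRAINT sensor_events_ck;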
08:41
Lois: Can you tell us a bit about the streaming and ingest mechanism?
Bill: We declare a table with MEMOPTIMIZE FOR WRITE. We can do that in the CREATE TABLE statement, or we can ALTER the table for it. The data is written to the large pool, unlike the traditional approach of writing to the buffer cache. It goes to the ingest buffer in the large pool, and it's drained from there, written from that area by those background processes to the actual database itself.
So you get very high throughput, since the drainers issue the deferred writes in large batches, and we're not having to wait on the buffer cache: OK, I need space; OK, I need to write; I need to free up blocks. It's ideal for streaming inserts, sensor readings, alarms, door locks, those types of things.
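To make that concrete, here is a minimal sketch; the table and column names are hypothetical, and the MEMOPTIMIZE_WRITE hint shown on the insert is how recent releases route a statement through fast ingest, so confirm the exact syntax in the documentation for your version.

-- Declare a table for fast ingest (deferred, memoptimized writes)
CREATE TABLE iot_readings (
  device_id   NUMBER,
  reading_ts  TIMESTAMP,
  temperature NUMBER
) MEMOPTIMIZE FOR WRITE;

-- Or switch an existing table over
ALTER TABLE iot_readings MEMOPTIMIZE FOR WRITE;

-- A deferred insert: the row goes to the large-pool ingest buffer instead of
-- the buffer cache, and the background drainers write it to disk later in batches
INSERT /*+ MEMOPTIMIZE_WRITE */ INTO iot_readings
VALUES (42, SYSTIMESTAMP, 71.3);
COMMIT;

Because the writes are deferred, a commit does not by itself guarantee the rows are on disk yet, which is part of why this suits high-volume sensor-style data rather than critical transactions.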
09:33
Nikita: How does performance improve with this?
Bill: With the benchmarks we have done, we have found that the performance can be up to