Hello friends. Before starting this new article I want to thank you for the nice words and support you have shown for the blog. Today I bring you a topic that I particularly find really interesting. Maybe it is a bit long, but it is worth it. So make yourself a coffee and enjoy!
Database CPU Management – The context
Historically, with the traditional non-CDB architecture, there were two schools of thought on how to distribute a server’s CPU among all of its hosted database instances:
- “Limited”: We’d allocate a fixed amount of the server’s CPU to each database (by defining an explicit CPU_COUNT), so that the server’s total CPU is divided among the different database instances. We don’t give out more than what is there.
- “Unlimited”: We’d not assign a fixed CPU value per instance, and allow any instance on the server to access the server’s total CPU. This means allowing overallocation. E.g. the server has 10 CPUs; database “A” can use up to 10 CPUs and database “B” can use up to 10 CPUs, so up to 20 CPUs may be demanded concurrently at any given time. The OS scheduler will then struggle to prioritize the access, as we have only 10 CPUs in the server.
The “unlimited” approach is almost always unacceptable in a production environment for obvious reasons; at any time one database can hijack all the server’s resources, affecting the performance of the rest of the databases. On the other hand, it is an interesting approach in terms of resource utilization, since it reduces the possibility of idle CPU and the probability of having CPU-starved databases at any given time. It makes far better use of the CPU and therefore reduces CapEx.
The “limited” approach is the one that used to be implemented in non-CDB production environments, since the load suffered by one instance does not affect the rest of the instances, achieving less volatile behaviour and more predictable, reliable performance. On the other hand, it makes it almost impossible to get the most out of the server’s CPU, as we are compartmentalising it. The more we compartmentalise, the more we isolate system resources.
The solution
One of the most critical and essential capabilities of multitenant (consolidation of several user PDBs into a single CDB) is its ability to dynamically control and distribute the CPU resource among the different databases. All the PDBs that coexist in the same container operate within the scope of the same software. This implies:
- Any PDB is aware of the existence of the other PDBs, as opposed to a non-CDB scenario, where each database instance is an independent program, unaware of the other databases running on the same server.
- Any PDB is able to access and use the idle CPU – not used by other databases – as a shared CDB resource.
Now, isn’t this the same situation we had in the “unlimited” scenario, where all databases had access to all system resources? Well, it is not, thanks to the first point: the CDB is aware of the load that each PDB carries at any given time. This means that we can now define the criteria by which this shared CPU is distributed among the different PDBs. And this is implemented by the CDB resource manager (defined in CDB$ROOT).
There are two ways in which the CDB resource manager can control how the CPU is allocated among the different PDBs at a given time (they are both compatible and coexist):
- By defining guaranteed minimums: implemented by defining shares. As soon as the CDB has no more free CPU, each of the PDBs will receive its guaranteed minimum CPU. This definition is key for getting the most out of our system resources.
- By defining maximum usage caps: implemented by defining a maximum percentage of utilization of the CDB’s CPU, the utilization limit. No PDB will be able to exceed its allocated CPU limit, regardless of the load on the CDB. This should only be specified in very specific circumstances; don’t define it out of fear that CPU rebalancing could be slow when it occurs. It will not be. It does not work like the OS process scheduler; there is no queue. The minimum CPU shares become effective immediately when necessary, as Oracle processes will immediately receive more CPU quantum time to honor those minimums.
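As a minimal sketch of both mechanisms together (assuming a CDB with two hypothetical PDBs named PDB1 and PDB2, and a made-up plan name), a CDB resource plan combining shares and a utilization limit could look like this:

```sql
-- Hypothetical sketch: a CDB resource plan with shares and a usage cap.
-- Run from CDB$ROOT; 'my_cdb_plan', PDB1 and PDB2 are assumed names.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'my_cdb_plan',
    comment => 'Shares plus an optional utilization cap');

  -- PDB1 gets 3 shares: a guaranteed minimum when CPU is contended.
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'my_cdb_plan',
    pluggable_database => 'PDB1',
    shares             => 3);

  -- PDB2 gets 1 share and is additionally capped at 50% of the CDB CPU.
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'my_cdb_plan',
    pluggable_database => 'PDB2',
    shares             => 1,
    utilization_limit  => 50);

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

-- Activate the plan in the CDB root.
ALTER SYSTEM SET resource_manager_plan = 'my_cdb_plan';
```

Under contention, PDB1 is guaranteed 3/4 of the CDB’s CPU and PDB2 1/4, while the cap keeps PDB2 at or below half of the CDB’s CPU even when the rest is idle.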

This feature resolves the dilemma we discussed at the beginning of the article. We no longer have to decide between getting the most out of our system’s CPU, and achieving stable and predictable behaviour of our databases. We can finally consolidate several databases in a single system without having to make compromise decisions, guaranteeing stable behaviour and allowing us to readjust allocations on the fly if we need to.
Room for improvement
You have probably heard or read that as of a certain “RU” of release 19c, there is a new CPU control mechanism in multitenant CDBs called “CPU Dynamic scaling”…
Wait! But didn’t we just say that the CDB resource manager works extremely well? Is it true there is a new configuration method? Why do we need a new approach?
Yes, it is true since 19.4. This is the reason behind the new approach:
Working with “shares” can be laborious in certain scenarios, especially those where PDBs are being added and/or removed frequently. Using shares, a database’s priority/importance is relative to the importance of the other databases with which it coexists at a given moment. This means that by introducing a new PDB (with its n shares) into the CDB, I am inevitably changing the number of vCPUs that the other PDBs in that same CDB will get (under their same previously assigned shares). In other words, what was a sufficient number of shares before may not be sufficient now, as it translates into a different number of vCPUs. Thus, managing CPU allocation in a CDB where PDBs are created/deleted frequently may require more administration effort, as the DBA will need to re-analyse the share distribution each time a PDB is included or excluded, to revisit whether each PDB still has the number of vCPUs it needs.
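To make that arithmetic concrete (with hypothetical numbers and names): suppose the CDB has 10 vCPUs and two PDBs with 2 shares each, so each is guaranteed 10 × 2/4 = 5 vCPUs under full contention. Simply plugging in a third PDB with 2 shares dilutes those guarantees, without anyone touching the existing directives:

```sql
-- Hypothetical illustration of share dilution.
-- Before: cpu_count = 10 in CDB$ROOT, PDB1 and PDB2 at 2 shares each
--   => each guaranteed 10 * 2 / (2 + 2) = 5 vCPUs under contention.
-- After adding PDB3 below, with the other directives untouched:
--   => each guaranteed 10 * 2 / (2 + 2 + 2) = ~3.33 vCPUs.
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'my_cdb_plan',  -- assumed existing plan name
    pluggable_database => 'PDB3',         -- the newly plugged-in PDB
    shares             => 2);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```

This is exactly the re-analysis burden described above: every plug-in or unplug silently changes what every other PDB’s shares are worth.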
CPU Dynamic scaling
How can we make this CPU resource manager definition and administration easier? By directly specifying vCPUs instead of shares: specific units vs relative units. Thus, adding or removing a PDB in the same container will not change the previous definitions and meanings in the CDB resource manager plan.
And how is it implemented? With two system parameters defined at PDB level (and you will probably know one of them):
- CPU_COUNT: no change in the behaviour of this well-known parameter. It is the hard limit, the maximum number of vCPUs that this PDB can take.
- CPU_MIN_COUNT: the new parameter. It defines the minimum guaranteed CPU assigned to this PDB. For this reason, the sum of the CPU_MIN_COUNT of all the PDBs hosted in the same CDB cannot be greater than the CPU_COUNT of the whole CDB (cpu_count in CDB$ROOT).
As was already the case with the CPU_COUNT parameter, which is only really effective when a resource manager plan is active, the CPU_MIN_COUNT parameter also requires that a plan be specified in the “resource_manager_plan” parameter. That’s all the configuration.
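Putting the two parameters together, a sketch of the whole configuration could look like this (assuming a CDB with cpu_count = 10 in CDB$ROOT and two hypothetical PDBs named PDB1 and PDB2):

```sql
-- Hypothetical sketch: CPU dynamic scaling via per-PDB parameters.
-- Assumes cpu_count = 10 in CDB$ROOT and PDB names PDB1/PDB2.

-- In CDB$ROOT: any active plan is enough; the default one works.
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_CDB_PLAN';

-- In PDB1: guaranteed 4 vCPUs, able to scale up to 8 when CPU is idle.
ALTER SESSION SET CONTAINER = PDB1;
ALTER SYSTEM SET cpu_min_count = 4 SCOPE = BOTH;
ALTER SYSTEM SET cpu_count     = 8 SCOPE = BOTH;

-- In PDB2: guaranteed 6 vCPUs, able to scale up to 10.
ALTER SESSION SET CONTAINER = PDB2;
ALTER SYSTEM SET cpu_min_count = 6 SCOPE = BOTH;
ALTER SYSTEM SET cpu_count     = 10 SCOPE = BOTH;

-- Note: the sum of cpu_min_count (4 + 6 = 10) does not exceed the
-- CDB-level cpu_count, as required.
```

Adding a new PDB later only requires choosing its own cpu_min_count/cpu_count pair (keeping the sum of minimums within the CDB’s cpu_count); the existing PDBs’ definitions keep their exact meaning.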
Some additional answers you may need:
- If you set CPU_MIN_COUNT and still have a resource manager plan with shares directives, those directives will take precedence and CPU_MIN_COUNT will be ignored.
- CPU_COUNT will always be honored as a hard limit as soon as it is defined and there is an active resource manager plan.
- CPU management is performed in both cases by CDB resource manager (aka DBRM). There is no different or alternative internal mechanism implementing CPU dynamic scaling. We are just using a clearer and simpler approach.
Are any of you familiar with this behaviour?… did I hear Autonomous Database Autoscaling?
YES! Just set the PDB’s CPU_COUNT to 3× the value of CPU_MIN_COUNT, and you will be reproducing the current Autonomous Database CPU autoscaling configuration.
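As a tiny sketch of that ratio (illustrative values, run inside the PDB):

```sql
-- Hypothetical sketch: mimic the Autonomous Database autoscaling shape,
-- where a PDB can burst to 3x its baseline allocation.
ALTER SYSTEM SET cpu_min_count = 2 SCOPE = BOTH;  -- guaranteed baseline
ALTER SYSTEM SET cpu_count     = 6 SCOPE = BOTH;  -- 3x burst ceiling
```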
(Picture credits: Chris Liverani)

