This is the second part of the article series on asymmetric physical dataguard with multitenant. You can find the first part at this link.
We begin this second part with a clear understanding of two concepts that we have learned when creating PDBs and performing UNPLUG/PLUG operations in a physical DG primary CDB:
- PDB operations reach the standby through the REDO and are replayed there.
- The standby can perfectly discriminate which REDO information belongs to which PDB.
To these two concepts, we add a third one that we learned in past articles: since release 12.2, PDBs are created with local UNDO. So, having the PDB’s REDO (which we know how to discriminate within the main redo stream and/or the archive logs), together with its own local UNDO tablespace, we have everything needed to recover that PDB without affecting the rest of the PDBs within the same container. We could do a PITR or even a PDB flashback without disturbing the other PDBs.
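As a hedged illustration of what that local UNDO capability enables (the PDB name and restore point are hypothetical, and the syntax should be validated against your release), a PDB-level rewind could look like this:
-- From CDB$ROOT, confirm local UNDO and create a restore point scoped to one PDB
SQL> select property_value from database_properties where property_name='LOCAL_UNDO_ENABLED';
SQL> create restore point before_batch for pluggable database PDBA;
-- ... later, rewind only that PDB; the other PDBs keep running
SQL> alter pluggable database PDBA close immediate;
SQL> flashback pluggable database PDBA to restore point before_batch;
SQL> alter pluggable database PDBA open resetlogs;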
Well, if we put these three concepts together, we can understand that the physical standby recovery process has everything it needs to exclude a specific PDB. Is this going to affect the CDB$ROOT? No: it has its own tablespaces and data dictionary. Is it going to affect the rest of the PDBs? No: they also have their own tablespaces and data dictionary.
Now, considering that – as we said in the post about the limitations of hot cloning a PDB in a primary CDB in 19c – it is not possible to reproduce the hot cloning in the standby CDB, will a hot clone affect the standby? A very relevant question. If the standby’s ongoing recovery could not distinguish between PDBs, then as soon as a hot clone was made in the primary CDB (19c) the dataguard apply services would have to stop, since they would not be able to introduce the newly cloned PDB into the recovery process, and the dataguard must always guarantee that there is no data loss.
So again: will the hot clone affect the standby’s ongoing recovery? No, it will not. The multitenant architecture is designed with this in mind: when we execute a PDB hot clone in the primary CDB (source PDB opened in read-write), the same command is replayed internally in the standby CDB with an additional clause: “standbys=none”. And what does the “standbys=none” clause mean? It means that the clone operation in the standby CDB will register the new PDB metadata, but it will not introduce the PDB into its main recovery process. That way we prevent the recovery process from stopping, and the rest of the PDBs keep in sync. Let’s do a simple test with a synchronized dataguard in “maximum availability” protection mode:
-- Primary database
-- Feedback disabled, encryption clauses removed for clarity, and usual common commands removed
DGMGRL> show configuration verbose
Configuration - CDBA_g2r_fra_CDBA_d7d_fra
Protection Mode: MaxAvailability
Members:
CDBA_g2r_fra - Primary database
CDBA_d7d_fra - Physical standby database
DGMGRL> show database verbose CDBA_d7d_fra
Database - CDBA_d7d_fra
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: 0 seconds (computed 0 seconds ago)
Apply Lag: 0 seconds (computed 0 seconds ago)
Average Apply Rate: 394.00 KByte/s
Active Apply Rate: 0 Byte/s
Maximum Apply Rate: 0 Byte/s
Real Time Query: ON
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
4 PDB1 READ WRITE NO
5 PDBA READ WRITE NO
SQL> create pluggable database HOTCLONE from PDBA;
SQL> alter pluggable database HOTCLONE open;
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 HOTCLONE READ WRITE NO
4 PDB1 READ WRITE NO
5 PDBA READ WRITE NO
Now, let's see how the standby CDB has behaved in the alert log:
Recovery created pluggable database HOTCLONE
HOTCLONE(3):Tablespace-SYSTEM during PDB create skipped since source is in r/w mode or this is a refresh clone
HOTCLONE(3):File #88 added to control file as 'UNNAMED00088'. Originally created as:
HOTCLONE(3):'+DATA/CDBA_G2R_FRA/092AB21A4D58D4ACE0632814010ADA90/DATAFILE/system.334.1151839747'
HOTCLONE(3):because the pluggable database was created with nostandby
HOTCLONE(3):or the tablespace belonging to the pluggable database is offline.
HOTCLONE(3):Tablespace-SYSAUX during PDB create skipped since source is in r/w mode or this is a refresh clone
HOTCLONE(3):File #89 added to control file as 'UNNAMED00089'. Originally created as:
HOTCLONE(3):'+DATA/CDBA_G2R_FRA/092AB21A4D58D4ACE0632814010ADA90/DATAFILE/sysaux.336.1151839747'
HOTCLONE(3):because the pluggable database was created with nostandby or the tablespace belonging to the pluggable database is offline.
HOTCLONE(3):Tablespace-UNDOTBS1 during PDB create skipped since source is in r/w mode or this is a refresh clone
HOTCLONE(3):File #90 added to control file as 'UNNAMED00090'. Originally created as:
HOTCLONE(3):'+DATA/CDBA_G2R_FRA/092AB21A4D58D4ACE0632814010ADA90/DATAFILE/undotbs1.335.1151839747'
HOTCLONE(3):because the pluggable database was created with nostandby or the tablespace belonging to the pluggable database is offline.
HOTCLONE(3):Tablespace-TEMP during PDB create skipped since source is in r/w mode or this is a refresh clone
From the standby CDB “alert.log” file:
- The standby confirms that the clone has been created with “standbys=none” (reported as “nostandby” in the alert log) because the source PDB was open in READ/WRITE.
- It also confirms that, for that reason, it has not copied the datafiles, although it leaves them registered in the standby’s controlfile with the prefix “UNNAMEDnnnnn”.
- It has not automatically created the temporary tablespace for the same reason.
If we connect again to the standby, we can indeed verify that the cloned PDB seems to be created, but that's at the metadata level, not at the data level. Let's check that within the standby DB and from within the ASM diskgroup (under the grid OS user):
-- Standby database
-- irrelevant commands/output removed again
SQL> show pdbs
CON_ID CON_NAME OPEN MODE RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED READ ONLY NO
3 HOTCLONE MOUNTED
4 PDB1 READ ONLY NO
5 PDBA READ ONLY NO
SQL> select guid from v$pdbs where name='HOTCLONE';
GUID
--------------------------------
092AB21A4D58D4ACE0632814010ADA90
-- from grid os account:
ASMCMD> find +DATA 092AB21A4D58D4ACE0632814010ADA90
ASMCMD> find +RECO 092AB21A4D58D4ACE0632814010ADA90
ASMCMD> -- so not found
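As an additional hedged check on the standby (using standard v$pdbs and v$datafile columns; the con_id comes from the “show pdbs” output above), we can confirm that HOTCLONE is excluded from recovery and that its files are only placeholders:
-- Standby database
SQL> select name, recovery_status from v$pdbs where name='HOTCLONE';
SQL> select file#, name from v$datafile where con_id=3 and name like '%UNNAMED%';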
In summary: the RDBMS has a clause that allows a new PDB to be excluded from the main dataguard recovery. In this case we witnessed it because of an inherent limitation of PDB hot cloning in 19c.
WAIT. Is it only Oracle that can use that clause? What if I create, plug in, or cold clone a PDB and I do not want that PDB to be synced in the standby?
… and hey, don't think I forgot about the hot cloning limitation.
Yes, we can also use that clause. In fact, we have two ways to do it, and the second one is even more interesting than the first:
- A) We can manually include “standbys=none” in the PDB command we execute in the primary CDB. We’ve already reviewed how it works.
- B) Take advantage of the “ENABLED_PDBS_ON_STANDBY” system parameter: we define this parameter in the standby CDB (and eventually also in the primary CDB, to keep the parameterization symmetric in case of a possible switchover/failover) and use it to specify which PDBs are part of the dataguard synchronization. That is, it is a proactive way to indicate which PDBs are going to be synchronized, so that if I run a PDB operation in the primary CDB and forget to specify the “standbys=none” clause, nothing bad will happen: no new PDB joins the list unless I add it manually. Having this list is also a very convenient way to swiftly check which PDBs are part of the synchronization, whereas with the “standbys=none” clause we would have to explicitly query the dynamic views to see each PDB’s status. Beyond that, the effect of working with ENABLED_PDBS_ON_STANDBY is similar to that of using “standbys=none”: PDBs not listed in the parameter will not be synchronized. Both options are sketched right after this list.
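A minimal sketch of both options (the PDB names are illustrative, and the exact quoting of the parameter value may vary slightly between releases and tools):
-- Option A: exclude the new PDB explicitly, executed in the primary CDB
SQL> create pluggable database PDBTEST from PDBA standbys=none;
-- Option B: whitelist the PDBs to be recovered, set in the standby CDB
-- (and optionally in the primary, to keep symmetry after a role change)
SQL> alter system set enabled_pdbs_on_standby="PDB1","PDBA" scope=spfile;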
OK. What if the opposite has happened to me? What if I have mistakenly excluded a PDB, but I do want to keep it synchronized in the standby?
Ehh… and regarding the PDB hot clone… How can I make a hot clone of a PDB in 19c and have that PDB be part of the standby?
That is a great question and we will explain it in detail in part 3. We will also explain other alternative, proactive strategies for making a PDB hot clone in a primary CDB in 19c.
Summary so far: working with an asymmetric dataguard
We previously concluded that there are no technical reasons why the recovery process cannot leave a PDB aside, and no reasons why this would impact the other PDBs in any way, when planned and implemented properly.
So, what are the consequences of having a greater number of PDBs in the primary CDB with respect to a potential switchover or failover? Essentially, each PDB will only be available where it exists. That is, if I have PDB1 and PDB2 in the primary “CDBA”, and only PDB1 in the standby “CDBB”, and I execute a switchover, I will only be able to open PDB1 in the new primary “CDBB”. The new primary “CDBB” will keep PDB1 synchronized with the new standby “CDBA”. Meanwhile, PDB2 will remain intact on the new standby “CDBA”, just as it was before the switchover.
So what happens if I do a switchback? “CDBA” becomes the primary again, PDB1 will be opened with the last synchronized changes, and PDB2 can be opened in the same state it was in before the switchover, since its datafiles have remained unopened since then.
In other words, nothing has happened to PDB2. It is still available where it was and in the state it was in. In the case of a failover, as we saw earlier, it is clear that we will not be able to work with PDB2 in the new primary, and therefore there will be no modifications to synchronize; but we can reinstate the old primary so that it takes the standby role and we can switch back to it again.
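To make the role-change sequence concrete, here is a hedged sketch using the hypothetical names from the scenario above (CDBA/CDBB and PDB1/PDB2; in practice the DGMGRL member names are the ones shown in your broker configuration):
-- Hypothetical switchover to the smaller CDB
DGMGRL> switchover to CDBB;
-- On the new primary CDBB:
SQL> alter pluggable database PDB1 open;   -- PDB1 is fully usable and keeps syncing back to CDBA
SQL> alter pluggable database PDB2 open;   -- expected to fail: PDB2's datafiles were never copied to CDBB
-- Switchback later:
DGMGRL> switchover to CDBA;
SQL> alter pluggable database PDB2 open;   -- back on CDBA, PDB2 opens in its pre-switchover state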
There are interesting use cases where we can take advantage of this asymmetry. For example, you can make a duplicate of the production database every week for testing, or to offload certain tasks, without consuming space on the standby site. Or you may want to test a PDB operation in situ before the actual operation.
This concludes Part 2. In part 3 we will talk about how to properly plan a PDB operation in a DG environment, how to fix wrong implementations and other alternatives to PDB hot cloning. And we will have nice diagrams.

