Monday, December 12, 2022

Webinar: Oracle Data pump enhancements in Oracle 21c

This Webinar helps you understand the new Oracle Data Pump features in Oracle 21c and how to take advantage of them.

Date and time: Dec 21st 2022 8:00am-9:00am
Pacific Daylight Time (San Francisco, GMT-07:00)

This Webinar covers the following topics:
  • Introduction to Oracle Data Pump
  • CHECKSUM, CHECKSUM_ALGORITHM
  • VERIFY_ONLY and VERIFY_CHECKSUM
  • INCLUDE and EXCLUDE in the Same Operation
  • Index Compression
  • Transportable Tablespace Enhancements
  • JSON Data Type Support
  • Export/Import from Cloud Object Store
  • Q&A
To register for this Webinar, please send an email to SatishbabuGunukula@gmail.com
Please use Zoom Link to join the Webinar.  

Note that registrations are limited and are on a first-come, first-served basis.
You will receive an email confirmation with the meeting session link.


Thanks & Regards,
http://www.oracleracexpert.com

Tuesday, November 8, 2022

Physical Standby Recovery or Sync Using Recover from Service from 18c

The RECOVER ... FROM SERVICE command is used to recover a Data Guard standby database that is out of sync or lagging. Before 12c, the user had to determine the current SCN, take an incremental backup on the primary and apply it on the standby, and then restart managed recovery. In 12c these steps were automated, but the user still had to perform a few manual steps, such as updating the standby control file.

This feature was improved in Oracle 18c, and it now covers all the steps the DBA used to perform manually.

Primary : DBPRI
Standby: DBSTB

Steps if the database is Oracle 18c or above

1. Check the Data Guard Broker configuration.

DGMGRL> show configuration
Configuration – dg_config
Protection Mode: MaxPerformance

Members:
dbpri - Primary database
Error: ORA-16724: cannot resolve gap for one or more members

  dbstb - Physical standby database
  Warning: ORA-16809: multiple warnings detected for the member

Fast-Start Failover: Disabled

Configuration Status:
ERROR (status updated 5 seconds ago)

We can see that the primary and the physical standby are out of sync.

2. Stop the managed recovery process (MRP) on the standby

DGMGRL> edit database DBSTB set state=APPLY-OFF;

Or

RMAN> alter database recover managed standby database cancel;

3. Run roll forward command

$rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Thu Mar 11 12:12:48 202
Version 19.5.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: DBSTB (DBID=xxxxxxxx)
RMAN> recover standby database from service DBPRI
Starting recover at 11-MAR-22
using target database control file instead of recovery catalog
Oracle instance started…….

media recovery complete, elapsed time: 00:00:00
Finished recover at 05-MAR-22
Executing: alter system set standby_file_management=auto
Finished recover at 05-MAR-22

4. Re-enable the managed recovery process on the standby

DGMGRL> edit database DBSTB set state=APPLY-ON;

Or

RMAN> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;


5. Verify the Data Guard configuration status

DGMGRL> show configuration

Configuration - dg_config
Protection Mode: MaxPerformance

Members:
dbpri - Primary database
  dbstb - Physical standby database

Fast-Start Failover: Disabled

Configuration Status:
SUCCESS (status updated 56 seconds ago)

The standby database has now been recovered and is in sync with the primary.
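
You can also verify synchronization from the standby itself by querying v$dataguard_stats; a minimal check (lag metric names as exposed by the view):

SQL> select name, value, unit from v$dataguard_stats
where name in ('transport lag', 'apply lag');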

Thanks & Regards

Monday, September 19, 2022

BEA-002936 maximum thread constraint or BEA-000337 [STUCK] ExecuteThread in WebLogic

I came across an issue and found the below error messages in the server log file. As per Oracle note Doc ID 1356278.1, this message is informational only and can be safely ignored; however, our Planning application crashed and we had to restart the services.

####<Sep 8, 2022 11:06:22 AM PDT> <Info> <WorkManager> <SCLWPA2188> <Planning0> <[ACTIVE] ExecuteThread: '29' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1662660382282> <BEA-002936> <maximum thread constraint ASYNC_REP_FLUSH_WM is reached>
####<Sep 8, 2022 11:08:39 AM PDT> <Info> <WorkManager> <SCLWPA2188> <Planning0> <[ACTIVE] ExecuteThread: '31' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1662660519981> <BEA-002936> <maximum thread constraint ASYNC_REP_FLUSH_WM is reached>


In many cases the errors may clear after some time. After some research I found that WebLogic Server associates several internal work managers with every JMS server that is created; their parameters are listed below.

weblogic.jms.<JMSServer_NAME>.AsyncPush
Min threads = 1, max threads = unlimited, high fair share

weblogic.jms.<JMSServer_NAME>.Limited
Min threads = 1, max threads = 8, normal fair share.

weblogic.jms.<JMSServer_NAME>.System
Min threads = 1, max threads = unlimited, high fair share.

The work managers expand up to the maximum limit and shrink back when the load returns to normal; however, shrinking does not start until there are a few idle threads.

As you can see, the Limited work manager has a maximum thread count of 8, but this limit is internal and cannot be changed without contacting Oracle Support.

Users may also see STUCK thread issues:

Sep 3, 2022 11:23:31 AM EDT> <Error> <WebLogicServer> <abcsystems.orass.com> <server-cc113sp> <[ACTIVE] ExecuteThread: '33' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <45443435326649> <BEA-000337> <[STUCK] ExecuteThread: 45 for queue: 'weblogic.kernel.Default (self-tuning)' has been busy for "550" seconds working on the request "Workmanager: default, Version: 0, Scheduled=true, Started=true, Started time: 678511 ms

The root cause: a thread gets stuck if it is continually working (not idle) for a set period of time. This time is called the StuckThreadMaxTime and has a default value of 600 seconds. When the stuck thread cannot complete its current work or accept any new work, the server logs this message. Note that if all threads get stuck, the server health state changes to either "Warning" or "Critical".

The stuck thread errors may disappear after some time, but if you see them frequently you should start tuning to avoid issues. Note that a restart will clear these errors.

I would recommend going to the WebLogic configuration and fine-tuning parameters such as "Stuck Thread Max Time" and "Stuck Thread Timer Interval" based upon the current settings.

To fine-tune the parameters, go to WebLogic Console → Domain → Environment → Server → <Managed Server> → Configuration → Tuning.



Make sure you apply the changes and reboot the server for the new changes to take effect. You can also refer to the WLS documentation and Metalink note below for more details.

WLS documentation : https://docs.oracle.com/cd/E23943_01/apirefs.1111/e13952/taskhelp/tuning/TuningExecuteThreads.html
Metalink Note: Note 1302472.1 - Information about STUCK Threads Condition and Tuning in Weblogic Server

Thanks & Regards
http://oracleracexpert.com, Oracle ACE Pro

Thursday, September 8, 2022

RMAN-04009: warning from auxiliary database: ORA-28002: the password will expire within 6 days

I came across this issue when trying to duplicate a database by connecting to the recovery catalog database. You will see this warning when the user's password has exceeded the profile's PASSWORD_LIFE_TIME limit and has entered the PASSWORD_GRACE_TIME period.

RMAN-04009: warning from auxiliary database: ORA-28002: the password will expire within 6 days

First, check the profile the user has been assigned.

SQL> select username, profile from dba_users where username='RMAN';

USERNAME      PROFILE
------------- ---------------
RMAN          DEFAULT

Check the current status of the user:

SQL> select username, account_status from dba_users where username='RMAN';

USERNAME      ACCOUNT_STATUS
------------- --------------------------------
RMAN          EXPIRED(GRACE)

Check the RESOURCE_NAME and LIMIT for the DEFAULT profile.
SQL> select RESOURCE_NAME,LIMIT from dba_profiles where PROFILE='DEFAULT';

RESOURCE_NAME               LIMIT
--------------------------- -------------------------------------------------
COMPOSITE_LIMIT             UNLIMITED
SESSIONS_PER_USER           UNLIMITED
CPU_PER_SESSION             UNLIMITED
CPU_PER_CALL                UNLIMITED
LOGICAL_READS_PER_SESSION   UNLIMITED
LOGICAL_READS_PER_CALL      UNLIMITED
IDLE_TIME                   UNLIMITED
CONNECT_TIME                UNLIMITED
PRIVATE_SGA                 UNLIMITED
FAILED_LOGIN_ATTEMPTS       10
PASSWORD_LIFE_TIME          180
PASSWORD_REUSE_TIME         UNLIMITED
PASSWORD_REUSE_MAX          UNLIMITED
PASSWORD_VERIFY_FUNCTION    NULL
PASSWORD_LOCK_TIME          1
PASSWORD_GRACE_TIME         7
INACTIVE_ACCOUNT_TIME       UNLIMITED

As we can see, PASSWORD_LIFE_TIME is set to 180 days.

First, change the user's password to resolve the issue.
SQL> alter user rman identified by rman_new_password;
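
After the password change, you can confirm that the account status is back to OPEN; EXPIRY_DATE from the same DBA_USERS view shows the next expiry date.

SQL> select username, account_status, expiry_date from dba_users where username='RMAN';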

As the current password lifetime is set to 180 days, you can change PASSWORD_LIFE_TIME to avoid this issue in the future. You can either set it to a specific number of days or to UNLIMITED.

SQL> ALTER PROFILE DEFAULT LIMIT PASSWORD_LIFE_TIME UNLIMITED;
Profile altered.

You can check the modified resource limit for PASSWORD_LIFE_TIME:
SQL> select RESOURCE_NAME,LIMIT from dba_profiles where PROFILE='DEFAULT' and RESOURCE_NAME='PASSWORD_LIFE_TIME';

RESOURCE_NAME   LIMIT
---------------------------  -------------------------------------------------
PASSWORD_LIFE_TIME UNLIMITED

Hope this helps

Thanks & Regards
http://oracleracexpert.com, Oracle ACE Pro

Tuesday, September 6, 2022

Automatic Indexing in Oracle 19c & 21c

DBAs are responsible for index management, which includes monitoring, adding, changing, and removing indexes based upon the workload, often in an ad-hoc manner. It is difficult to decide which type of index to use, when to create, change, or drop it, and to measure the impact, so changes can have both positive and negative effects.

In Oracle 19c, the automatic indexing feature was introduced. It automatically creates, rebuilds, and drops indexes to improve performance based upon the application workload and its changes. This helps optimize the database and improve performance without any user intervention.

The DBMS_AUTO_INDEX package is used for managing automatic indexing, and you can find the current automatic indexing configuration by querying the CDB_AUTO_INDEX_CONFIG view.
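
For example, a quick look at the current settings (a minimal sketch; the view exposes parameter name/value pairs):

SQL> select parameter_name, parameter_value from cdb_auto_index_config order by parameter_name;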

  • CONFIGURE Procedure - Configures automatic indexing.
  • DROP_AUTO_INDEXES Procedure - Drops the automatically created indexes manually by overriding the retention parameter.
  • DROP_SECONDARY_INDEXES Procedure - Deletes all the indexes, except the ones used for constraints.
  • REPORT_ACTIVITY Function - Generates a report of the automatic indexing operations.
  • REPORT_LAST_ACTIVITY Function - Generates a report of the last automatic indexing operation.
AUTO_INDEX_MODE: Automatic indexing is controlled using the AUTO_INDEX_MODE property, which defines the different modes of operation.

  • REPORT ONLY: Turns on automatic indexing, but new indexes remain invisible and are not available for SQL operations.
  • IMPLEMENT: Turns on automatic indexing, and new indexes are visible and available for SQL operations.
  • OFF: Turns off automatic indexing, but does not disable existing auto indexes.
SQL> exec dbms_auto_index.configure('AUTO_INDEX_MODE','REPORT ONLY');
SQL> exec dbms_auto_index.configure('AUTO_INDEX_MODE','IMPLEMENT');
SQL> exec dbms_auto_index.configure('AUTO_INDEX_MODE','OFF');


AUTO_INDEX_SCHEMA: When automatic indexing is enabled, all schemas are used for auto indexes; you can exclude or include specific schemas using the AUTO_INDEX_SCHEMA parameter.

The below example adds the SALES schema to the auto index exclusion list:

begin
dbms_auto_index.configure(
parameter_name => 'AUTO_INDEX_SCHEMA',
parameter_value => 'SALES',
allow => FALSE);
end;
/


The below example removes the SALES schema from the auto index exclusion list:

begin
dbms_auto_index.configure(
parameter_name => 'AUTO_INDEX_SCHEMA',
parameter_value => 'SALES',
allow => NULL);
end;
/


The below example removes all schemas from the auto index exclusion list:

begin
dbms_auto_index.configure(
parameter_name => 'AUTO_INDEX_SCHEMA',
parameter_value => NULL,
allow => TRUE);
end;
/


You can also define how long auto indexes are retained. The below example sets the retention period of auto indexes to 30 days.

begin
dbms_auto_index.configure(
parameter_name => 'AUTO_INDEX_RETENTION_FOR_AUTO',
parameter_value => '30');
end;
/


By default, automatic indexes are created in the default permanent tablespace; you can change this by using AUTO_INDEX_DEFAULT_TABLESPACE.

SQL> exec dbms_auto_index.configure('AUTO_INDEX_DEFAULT_TABLESPACE','AUTO_INDEX_TBS');

You can also allocate the percentage of the tablespace that can be used for auto indexes. In the below example, auto indexes can use 10%.

SQL> exec dbms_auto_index.configure('AUTO_INDEX_SPACE_BUDGET', '10');



DROP_AUTO_INDEXES Procedure: Using this procedure you can drop the automatically created indexes manually by overriding the retention parameter.

The below example drops a single index in the SALES schema and allows it to be recreated. If you do not want it recreated, set allow_recreate to FALSE.

begin
dbms_auto_index.drop_auto_indexes(
owner => 'SALES',
index_name => '"SYS_AI_45rfg54nxvjcty"',
allow_recreate => TRUE);
end;
/

The below example drops all auto indexes owned by SALES and does not allow them to be recreated, as allow_recreate is set to FALSE.

begin
dbms_auto_index.drop_auto_indexes(
owner => 'SALES',
index_name => NULL,
allow_recreate => FALSE);
end;
/

DROP_SECONDARY_INDEXES Procedure - This procedure deletes all the indexes, except the ones used for constraints.

The below example deletes all secondary indexes except the ones used for constraints:

begin
dbms_auto_index.drop_secondary_indexes;
end;
/

The below example deletes all secondary indexes on the REGION table in the SALES schema, except the ones used for constraints:

begin
dbms_auto_index.drop_secondary_indexes('SALES', 'REGION');
end;
/


The below example deletes all secondary indexes in the SALES schema, except the ones used for constraints:

begin
dbms_auto_index.drop_secondary_indexes('SALES');
end;
/

REPORT_ACTIVITY Function - By using this function you can generate a report of the automatic indexing operations.

The below example generates a report of the automatic indexing operations executed in the last 24 hours (the default window):

declare
act_report clob := null;
begin
act_report := dbms_auto_index.report_activity();
end;
/

or

SQL> select dbms_auto_index.report_activity() from dual;


The below example generates an HTML report of the automatic indexing operations executed in the last 6 hours:

SQL> select dbms_auto_index.report_activity(activity_start => systimestamp-0.25, activity_end => systimestamp, type => 'HTML') from dual;

REPORT_LAST_ACTIVITY Function - The below example generates a report of the last automatic indexing operation executed in the database:

declare
act_report clob := null;
begin
act_report := dbms_auto_index.report_last_activity();
end;
/

or

SQL>  select dbms_auto_index.report_last_activity() from dual;


Oracle 21c comes with enhancements to automatic indexing that help reduce the overhead of cursor invalidations during automatic index creation, along with further enhancements to improve query performance.

Thanks & Regards
http://oracleracexpert.com, Oracle ACE

Friday, August 26, 2022

DBMS_CLOUD Package in Oracle 21c

In order to work with data in object stores you can either use the DBMS_CLOUD package or manually define an external table. The DBMS_CLOUD package was introduced in Oracle Autonomous Database to work with object stores; it can also be used on-premises with 19c and 21c, but you need to install it manually. Refer to the Oracle note How To Setup And Use DBMS_CLOUD Package (Doc ID 2748362.1) for setup and other details.

You can run the below commands to create a directory and to grant access on the directory and the DBMS_CLOUD package to the emp_usr user.

CREATE OR REPLACE DIRECTORY EMP_DIR AS '/oracle/emp_dir';
GRANT EXECUTE ON DBMS_CLOUD TO emp_usr;
GRANT READ, WRITE ON DIRECTORY emp_dir TO emp_usr, C##CLOUD$SERVICE;

Note that DBMS_CLOUD is owned by a separate schema, C##CLOUD$SERVICE, which is locked by default. Both the emp_usr and C##CLOUD$SERVICE users need access to the directory.

Credential - You can create a credential for your object store using the CREATE_CREDENTIAL procedure.

GRANT CREATE CREDENTIAL TO emp_usr;

BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
CREDENTIAL_NAME => 'cloud_obj_cred',
USERNAME => '<username>',
PASSWORD => '<user auth token>');
END;
/

You can use the UPDATE_CREDENTIAL procedure to make changes to a credential:

BEGIN
DBMS_CREDENTIAL.UPDATE_CREDENTIAL(
CREDENTIAL_NAME => 'cloud_obj_cred',
ATTRIBUTE => 'username',
VALUE => 'emp_usr');
END;
/

To drop a credential, use the DROP_CREDENTIAL procedure:

BEGIN
DBMS_CLOUD.DROP_CREDENTIAL(CREDENTIAL_NAME => 'cloud_obj_cred');
END;
/

You can disable and enable a credential using DISABLE_CREDENTIAL and ENABLE_CREDENTIAL:

BEGIN
DBMS_CREDENTIAL.DISABLE_CREDENTIAL('cloud_obj_cred');
END;
/

BEGIN
DBMS_CREDENTIAL.ENABLE_CREDENTIAL('cloud_obj_cred');
END;
/

Object store - Use the below commands to manage objects in the cloud object store.

To transfer a file from a directory to the cloud object store, use the PUT_OBJECT procedure:

BEGIN
DBMS_CLOUD.PUT_OBJECT (
CREDENTIAL_NAME => 'cloud_obj_cred',
OBJECT_URI => 'https://xxxxxxxxxxxx/emp_file.txt',
DIRECTORY_NAME => 'emp_dir',
FILE_NAME => 'emp_file.txt');
END;
/

To transfer an object from the cloud object store to a directory, use the GET_OBJECT procedure:

BEGIN
DBMS_CLOUD.GET_OBJECT (
CREDENTIAL_NAME => 'cloud_obj_cred',
OBJECT_URI => 'https://xxxxxxxxxxxx/emp_file.txt',
DIRECTORY_NAME => 'emp_dir',
FILE_NAME => 'emp_file.txt');
END;
/

You can use DBMS_CLOUD.LIST_OBJECTS to list the objects in the object store, and the GET_METADATA function to get the metadata of a specific object.

SELECT * FROM DBMS_CLOUD.LIST_OBJECTS(CREDENTIAL_NAME => 'cloud_obj_cred', LOCATION_URI => 'https://xxxxxxxxxxx');

SELECT DBMS_CLOUD.GET_METADATA(CREDENTIAL_NAME => 'cloud_obj_cred', OBJECT_URI => 'https://xxxxxxxxxxx/emp_file.txt') AS METADATA FROM DUAL;

You can delete objects from the cloud object store using the DELETE_OBJECT procedure:
 
BEGIN
DBMS_CLOUD.DELETE_OBJECT(
CREDENTIAL_NAME => 'cloud_obj_cred',
OBJECT_URI => 'https://xxxxxxxx/emp_file.txt');
END;
/

You can delete files from the directory using the DELETE_FILE procedure:
 
BEGIN
DBMS_CLOUD.DELETE_FILE(
DIRECTORY_NAME => 'emp_dir',
FILE_NAME => 'emp_file.txt');
END;
/

To list the files in a directory, use the LIST_FILES function:

SELECT * FROM DBMS_CLOUD.LIST_FILES(DIRECTORY_NAME => 'emp_dir');

To export data into the cloud object store in the required format, use the EXPORT_DATA procedure:

BEGIN
DBMS_CLOUD.EXPORT_DATA (
CREDENTIAL_NAME => 'cloud_obj_cred',
FILE_URI_LIST => 'https://xxxxxxxxxxx/emp_file.csv',
QUERY => 'select * from emp',
FORMAT => '{"TYPE" : "csv"}');
END;
/
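
As mentioned at the start of this post, the package can also create an external table directly over object store files. A minimal sketch using the CREATE_EXTERNAL_TABLE procedure (the EMP_EXT table name and column list are examples):

BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
TABLE_NAME => 'EMP_EXT',
CREDENTIAL_NAME => 'cloud_obj_cred',
FILE_URI_LIST => 'https://xxxxxxxxxxx/emp_file.csv',
COLUMN_LIST => 'EMPNO NUMBER, ENAME VARCHAR2(30)',
FORMAT => '{"type" : "csv"}');
END;
/

SELECT COUNT(*) FROM emp_ext;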

To delete all operations, use the DELETE_ALL_OPERATIONS procedure:

BEGIN
DBMS_CLOUD.DELETE_ALL_OPERATIONS;
END;
/

To delete a specific operation, use the DELETE_OPERATION procedure; you need to provide the operation ID.

BEGIN
DBMS_CLOUD.DELETE_OPERATION(<id>);
END;
/

If you run into problems with DBMS_CLOUD for a user or role, you can test the configuration using the same sample code that was used for the DBMS_CLOUD setup.

Thanks & Regards,
http://oracleracexpert.com, Oracle ACE  

Wednesday, May 25, 2022

Multitenant: DBCA PDB Remote Clone or Relocate in Oracle 19c

In Oracle 19c, you can clone or relocate a pluggable database (PDB) from one CDB (multitenant container database) to another using the DBCA (Database Configuration Assistant).

The following prerequisites must be met:

• The local and remote CDBs must be in archivelog mode and local undo mode.
• The user must have the CREATE PLUGGABLE DATABASE privilege in the local CDB root container.
• The database user in the remote CDB that the database link connects to must have the CREATE PLUGGABLE DATABASE, SYSOPER, and CREATE SESSION privileges.
• The same database options must be installed on the local and remote CDBs.

You can use the below queries to verify that the database is in local undo mode and archivelog mode:

SQL> select property_name, property_value from database_properties
where property_name = 'LOCAL_UNDO_ENABLED';
SQL> select log_mode from v$database;

First, create a user that will be used by the database link to connect automatically during the cloning operation. When using DBCA we only need to supply the credentials; there is no need to create the database link manually.

SQL> CREATE USER c##remote_user1 IDENTIFIED BY password CONTAINER=ALL;
SQL> GRANT create session, create pluggable database TO c##remote_user1 CONTAINER=ALL;


You can launch DBCA in silent mode to clone PDB1 from CDB1 as PDB11 in CDB11

$dbca -silent -createPluggableDatabase -createFromRemotePDB -remotePDBName PDB1 -remoteDBConnString CDB1 -remoteDBSYSDBAUserName SYS -remoteDBSYSDBAUserPassword xxxxxxxx -sysDBAUserName sys -sysDBAPassword xxxxxxxx -dbLinkUsername c##remote_user1 -dbLinkUserPassword xxxxxxxx -sourceDB CDB11 -pdbName PDB11

Prepare for db operation
50% complete
Create pluggable database using remote clone operation
100% complete
Pluggable database "PDB11" plugged successfully.
Look at the log file "/oracle/cfgtoollogs/dbca/CDB11/PDB11/CDB11.log" for further details.

You can connect to CDB11 and check the status.

$sqlplus sys@CDB11 as sysdba

SQL> SHOW PDBS
CON_ID CON_NAME   OPEN MODE  RESTRICTED
------ ---------- ---------- ----------
     2 PDB$SEED   READ ONLY  NO
     3 PDB11      READ WRITE NO

If you cloned PDB11 as part of testing and need to clean up, use the below commands, connecting to CDB11 as SYS:

SQL> alter pluggable database PDB11 close;
SQL> drop pluggable database PDB11 including datafiles;



You can also use DBCA to delete the pluggable database that was cloned.

$dbca -silent -deletePluggableDatabase -sourceDB CDB11 -pdbName PDB11

Prepare for db operation
25% complete
Deleting Pluggable Database
40% complete
82% complete
94% complete
100% complete
Pluggable database "PDB11" deleted successfully.
Look at the log file "/oracle/cfgtoollogs/dbca/CDB11/PDB11/CDB11.log" for further details.

You can also use DBCA to delete the database using the below command in silent mode:

$dbca -silent -deleteDatabase -sourceDB CDB11 -sysDBAUserName sys -sysDBAPassword xxxxxxxxx

The relocatePDB command relocates a PDB from a remote CDB to a local CDB.

$dbca -relocatePDB
-pdbName name_of_the_local_pdb_to_create
-sourceDB database_name_of_the_local_pdb
-remotePDBName name_of_the_remote_pdb_to_relocate
-remoteDBConnString db_connection_string_of_the_remote_pdb
-sysDBAUserName name_of_the_sysdba_user
-sysDBAPassword password_of_the_sysdba_user
-dbLinkUsername name_of_the_dblink_user_of_the_remote_pdb
-dbLinkUserPassword password_of_the_dblink_user_of_the_remote_pdb

Example:
$ dbca -silent -relocatePDB -pdbName PDB11 -sourceDB CDB11  -remotePDBName PDB1 -remoteDBConnString TESTDB -remoteDBSYSDBAUserName sys  -remoteDBSYSDBAUserPassword xxxxxxx  -dbLinkUsername c##remote_user1 -dbLinkUserPassword xxxxxxx

Prepare for db operation
50% complete
Create pluggable database using relocate PDB operation
100% complete
Pluggable database "PDB11" plugged successfully.
Look at the log file "/oracle/cfgtoollogs/dbca/CDB11/PDB11/CDB11.log" for further details.


Refer to the Oracle documentation for more details and syntax.

When running DBCA in silent mode, the outcome is reported as an exit code. These exit codes help identify whether the command succeeded or failed.

Exit Code Description
 0 : Command execution successful
 6 : Command execution successful but with warnings
-1 : Command execution failed
-2 : Invalid input from user
-4 : Command canceled by user

Thanks & Regards,
http://oracleracexpert.com, Oracle ACE

Wednesday, April 27, 2022

Physical Standby SWITCHOVER_STATUS shows UNRESOLVABLE GAP

On the Data Guard standby site, I can see that archive logs are being shipped but not applied to the physical standby database. When I query v$database, SWITCHOVER_STATUS shows "UNRESOLVABLE GAP".

SQL> select OPEN_MODE,LOG_MODE,DATABASE_ROLE, switchover_status from v$database;

OPEN_MODE LOG_MODE DATABASE_ROLE SWITCHOVER_STATUS
-------------------- -------------------- ------------------------------ ---------------------------------------------
MOUNTED ARCHIVELOG PHYSICAL STANDBY UNRESOLVABLE GAP


I didn't see any errors when querying v$archive_dest_status:
SQL> select DEST_NAME, ERROR from v$archive_Dest_status;

But when querying V$ARCHIVE_GAP, we can see an archive log gap:

SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;
THREAD# LOW_SEQUENCE# HIGH_SEQUENCE#
---------- ------------- --------------
1 47402 47409

I don't see the missing archive logs in the archive log destination, and when I query v$managed_standby I can see that the MRP0 process is waiting for archive log sequence 47402.

SQL> select PROCESS,STATUS,THREAD#,SEQUENCE# from v$managed_standby where PROCESS='MRP0';
PROCESS STATUS THREAD# SEQUENCE#
------------------------------------ ------------------------------------------------ ---------- ----------
MRP0 WAIT_FOR_GAP 1 47402

If the archive logs on the physical standby site were removed by mistake, you can restore them using RMAN:
RMAN> restore archivelog from logseq 47402 until logseq 47409;

If you want to restore the archive logs to a destination other than the default, use the below command:
RMAN> set archivelog destination to '/tmp/archive_restore';

If you need to restore a specific archive log, use the below command:
RMAN> restore archivelog from logseq=47402;

Once the archive logs are restored, they will be applied on the standby site. If you restored them to a non-default destination, you need to copy the archive logs to the default destination.
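
If a copied archive log is not picked up automatically, you can register it with the standby so the MRP process sees it; a minimal sketch (the file name below is only an example):

SQL> alter database register physical logfile '/oracle/arch/1_47402_1038401.arc';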

If the archive logs are not being applied on the physical standby site, shut down the standby and start it in recovery mode again. If archive logs are missing and cannot be restored from a backup, you might get the below error message.

SQL> STARTUP NOMOUNT;
ORACLE instance started.
Total System Global Area 2.0737E+10 bytes
Fixed Size 9923356 bytes
Variable Size 8680473632 bytes
Database Buffers 2040487392 bytes
Redo Buffers 6859032 bytes
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
Database altered.

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
*
ERROR at line 1:
ORA-01153: an incompatible media recovery is active


Note that with missing archive logs you will not be able to recover the database across the gap.

Thanks & Regards
http://oracleracexpert.com, Oracle ACE

Sunday, April 17, 2022

Webinar: Oracle Data Guard New Features in Oracle 18c & 19c

This Webinar helps you to understand the benefits of Oracle Data Guard, Standby Database types, protection modes and new features in Oracle 18c and 19c.

Date and time: May 5th 2022 8:00am-9:00am
Pacific Daylight Time (San Francisco, GMT-07:00)


This Webinar covers the following topics:

• Introduction to Oracle Data Guard
• Oracle Data Guard Benefits
• Standby Database types
• Data Guard Protection Modes
• What's new in Oracle 18c
• What's new in Oracle 19c
• References
• Contact Info
• Q&A

To register for this Webinar, please send an email to SatishbabuGunukula@gmail.com

Note that registrations are limited and are on a first-come, first-served basis.
You will receive an email confirmation with the meeting session link.


Thanks & Regards,
http://www.oracleracexpert.com

Wednesday, April 6, 2022

How to roll forward image copies using RMAN

Recovery Manager (RMAN) is a utility that can be used to back up and recover your database; it simplifies backing up, restoring, and recovering the database or individual database files.

Oracle RMAN can roll an older image copy forward to the point in time of the most recent level 1 incremental backup by applying level 1 incremental backups to it. This helps reduce recovery time. During the roll forward, all changes between the SCN of the original image copy and the SCN of the incremental backup are applied to the image copy.

When using a rolled-forward image copy there is no need to restore the database; you just switch to the updated image copy. This reduces the restore time, and the remaining recovery is minimal.

run {
allocate channel c1 device type disk;
recover copy of database with tag 'rman_incr_backup';
backup incremental level 1 for recover of copy with tag 'rman_incr_backup' database;
}


The TAG is used to identify which incremental backups are applied to which image copies.

Note that the first time the backup runs, no level 0 copy exists to apply incremental backups to; therefore, it creates a level 0 image copy of the database. On subsequent runs it creates a level 1 incremental backup and updates the previous image copy.
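
You can confirm that the image copies are rolling forward by checking their checkpoint SCNs; a minimal sketch (RMAN stores the tag in uppercase):

SQL> select file#, tag, checkpoint_change#, completion_time
from v$datafile_copy
where tag = 'RMAN_INCR_BACKUP' and status = 'A';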

You can also use compression while rolling forward image copies, as in the below example:

run {
allocate channel c1 device type disk;
recover copy of database with tag 'rman_incr_backup';
backup as compressed backupset incremental level 1 for recover of copy with tag 'rman_incr_backup' database;
}


If you need to switch the database to the image copy, shut down the database and start it in mount mode:

SQL> shutdown immediate;
SQL> startup mount;

Using the below RMAN commands, you can switch the database to the image copy:

run {
allocate channel c1 device type disk;
switch database to copy;
recover database;
alter database open;
}


When using the FRA (Fast Recovery Area), make sure it has enough space to hold a full image copy of the database.

Block change tracking helps improve the performance of incremental backups; you can check its status and enable it using the below commands.

SQL> select status from v$block_change_tracking;
SQL> alter database enable block change tracking using file '/oracle/oradata/TESTDB/block_change_tracking_file.dbf' reuse;


You can use the below command to disable block change tracking:

SQL> alter database disable block change tracking;

Thanks & Regards
http://oracleracexpert.com, Oracle ACE

Thursday, March 17, 2022

Automatic SQL Tuning Set in Oracle 21c

SQL tuning sets were introduced in Oracle 10g, and the DBMS_SQLTUNE package is used to manage them. You can use SQL tuning sets to group statements into a single object and use it as input to tuning tools.

The below commands can be used to create a SQL tuning set:

EXEC DBMS_SQLTUNE.create_sqlset(sqlset_name => 'SQL_Tuning_Set1');

BEGIN
DBMS_SQLTUNE.CREATE_SQLSET (
sqlset_name => 'SQL_Tuning_Set1',
description => 'SQL Tuning Set 1');
END;
/


You can use the UPDATE_SQLSET procedure to update the attributes of the SQL statements in a SQL tuning set.

You can use the below query to find the SQL tuning sets owned by the current user:

SQL> SELECT NAME, STATEMENT_COUNT, DESCRIPTION FROM USER_SQLSET;

You can use the DELETE_SQLSET procedure to delete all the statements in a SQL tuning set:

BEGIN
DBMS_SQLTUNE.DELETE_SQLSET ( sqlset_name => 'SQL_Tuning_Set1');
END;
/


You can drop a SQL tuning set using the DROP_SQLSET procedure:

BEGIN
DBMS_SQLTUNE.DROP_SQLSET( sqlset_name => 'SQL_Tuning_Set1');
END;
/

You can also transfer SQL tuning sets between databases by following the steps create, pack, transfer, and unpack, using the procedures below; a minimal packing sketch follows the list. You can use Data Pump or export/import to move the staging table from the source to the destination database.

DBMS_SQLTUNE.create_stgtab_sqlset - creates a staging table
DBMS_SQLTUNE.pack_stgtab_sqlset - packs SQL tuning sets into the staging table
DBMS_SQLTUNE.unpack_stgtab_sqlset - unpacks SQL tuning sets from the staging table
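
A minimal sketch of the pack step on the source database (the STS_STAGE table name and TESTUSER schema are examples):

BEGIN
DBMS_SQLTUNE.CREATE_STGTAB_SQLSET(table_name => 'STS_STAGE', schema_name => 'TESTUSER');
DBMS_SQLTUNE.PACK_STGTAB_SQLSET(sqlset_name => 'SQL_Tuning_Set1', staging_table_name => 'STS_STAGE', staging_schema_owner => 'TESTUSER');
END;
/

After moving the staging table to the destination database, unpack it there:

BEGIN
DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(sqlset_name => '%', replace => TRUE, staging_table_name => 'STS_STAGE', staging_schema_owner => 'TESTUSER');
END;
/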

In Oracle 11g, further enhancements were added to use SQL tuning sets with SQL Performance Analyzer. The DBMS_SQLPA package helps build and compare the performance of two different versions of a workload.

You can use CREATE_ANALYSIS_TASK to create an analysis task for a SQL tuning set, for a single statement, or for a single statement from the workload repository within a range of snapshots.

You can use the below examples:

variable v_task VARCHAR2(64);
variable v_tset_task VARCHAR2(64);

-- SQL Text
EXEC :v_task := DBMS_SQLPA.CREATE_ANALYSIS_TASK(sql_text => 'select dname, count(*) from dept, emp where dept.deptno = emp.deptno');

-- SQL ID
EXEC :v_task := DBMS_SQLPA.CREATE_ANALYSIS_TASK( SQL_ID => 'cv1d34ds5kdd4');

--Workload repository
exec :v_task := DBMS_SQLPA.CREATE_ANALYSIS_TASK(BEGIN_SNAP => 1, END_SNAP => 2, SQL_ID => 'cv1d34ds5kdd4');

-- SQL Tuning Set
EXEC :v_tset_task := DBMS_SQLPA.CREATE_ANALYSIS_TASK( SQLSET_NAME => 'SQL_Tuning_Set1', order_by => 'BUFFER_GETS' );

You can use the CANCEL_ANALYSIS_TASK procedure to cancel a task:

EXEC DBMS_SQLPA.CANCEL_ANALYSIS_TASK(:v_task);
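
Once the task exists, you typically execute it before and after a change and then compare the two runs; a minimal sketch using the tuning set task created above (the execution names are examples):

EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :v_tset_task, execution_type => 'TEST EXECUTE', execution_name => 'before_change');
-- make the change being assessed (for example, gather stats or alter a parameter), then:
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :v_tset_task, execution_type => 'TEST EXECUTE', execution_name => 'after_change');
EXEC DBMS_SQLPA.EXECUTE_ANALYSIS_TASK(task_name => :v_tset_task, execution_type => 'COMPARE PERFORMANCE');

SQL> select dbms_sqlpa.report_analysis_task(:v_tset_task, 'TEXT', 'TYPICAL', 'SUMMARY') from dual;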

In Oracle 21c, automatic SQL tuning sets automate the capture side of this process. The automatic SQL tuning set (ASTS) is a system-maintained collection of execution plans and performance metrics, and it is useful for repairing SQL performance regressions when using SQL plan management.

This feature was introduced in 19c from RU 19.7 onwards and is available with AWR. You can use ASTS with SQL plan management to implement the entire workflow without manual intervention.

You can run the below query against DBA_SQLSET_STATEMENTS to view the statements in the ASTS:

SQL> SELECT SQL_TEXT FROM DBA_SQLSET_STATEMENTS WHERE SQLSET_NAME = 'SYS_AUTO_STS';

You can use the below commands to enable or disable the ASTS capture task:

BEGIN
DBMS_AUTO_TASK_ADMIN.ENABLE(
CLIENT_NAME => 'ASTS CAPTURE TASK',
OPERATION => NULL,
WINDOW_NAME => NULL);
END;
/

BEGIN
DBMS_AUTO_TASK_ADMIN.DISABLE(
CLIENT_NAME => 'ASTS CAPTURE TASK',
OPERATION => NULL,
WINDOW_NAME => NULL);
END;
/

Thanks & Regards
http://oracleracexpert.com

Friday, March 11, 2022

PGA Memory are not eligible to receive ORA-4036 interrupts

Users may receive ORA-04036 errors when PGA_AGGREGATE_LIMIT has been exceeded but some of the processes using the most PGA memory are not eligible to be interrupted. You will see the errors written to the alert log and trace files as well.

ARC1 (PID:42467): Archived Log entry 10265 added for T-1.S-10440 ID 0xb264618a LAD:1
2022-05-15T02:07:49.670543-07:00
PGA_AGGREGATE_LIMIT has been exceeded but some processes using the most PGA
memory are not eligible to receive ORA-4036 interrupts. Further occurrences
of this condition will be written to the trace file of the DBRM process.

When you encounter this issue, the sessions consuming the most PGA will be terminated until the bottleneck is cleared. Note that without PGA_AGGREGATE_LIMIT, Oracle can exceed the amount of available RAM, which may lead to RAM buffer paging and RAC node eviction errors.
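
To see which sessions are consuming the most PGA, you can join v$process and v$session; a minimal sketch (FETCH FIRST requires 12c or later):

SQL> select s.sid, s.username, s.program,
round(p.pga_used_mem/1024/1024) used_mb,
round(p.pga_alloc_mem/1024/1024) alloc_mb
from v$process p, v$session s
where s.paddr = p.addr
order by p.pga_used_mem desc
fetch first 10 rows only;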

The V$PGA_TARGET_ADVICE view helps predict how the cache hit percentage and over-allocation count statistics displayed by the V$PGASTAT performance view will change for different PGA_AGGREGATE_TARGET values.

SQL> select pga_target_for_estimate, pga_target_factor, estd_time  from v$pga_target_advice;

The below query can help get PGA target advice from v$pga_target_advice_histogram:

SQL> SELECT LOW_OPTIMAL_SIZE/1024 "LOW VALUE IN KB", (HIGH_OPTIMAL_SIZE+1)/1024 "HIGH VALUE IN KB", ESTD_OPTIMAL_EXECUTIONS "OPTIMAL EXECUTIONS", ESTD_ONEPASS_EXECUTIONS "ONE-PASS EXECUTIONS", ESTD_MULTIPASSES_EXECUTIONS "MULTI-PASS EXECUTIONS"
FROM V$PGA_TARGET_ADVICE_HISTOGRAM
WHERE PGA_TARGET_FACTOR = 2 AND ESTD_TOTAL_EXECUTIONS != 0
ORDER BY 1;

By default, PGA_AGGREGATE_LIMIT is set to 2 GB (or higher, derived from PGA_AGGREGATE_TARGET and the PROCESSES parameter). When you receive these errors, I would suggest doubling the value or setting a value appropriate for your environment.

You can run the below commands to check PGA_AGGREGATE_LIMIT and increase its value:

SQL> show parameter PGA_AGGREGATE_LIMIT
SQL> alter system set PGA_AGGREGATE_LIMIT=<xGB> scope=spfile;

Sometimes users may receive the below errors when PGA_AGGREGATE_LIMIT is set to zero. Make sure you set a non-zero, appropriate value.

ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
REP-0069: Internal error
REP-57054: In-process job terminated:Terminated with error:
REP-300: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT

Users may experience a database crash in Oracle 19c versions when using DBMS_STATS.GATHER_TABLE_STATS. This is due to an Oracle product defect, Bug 30846782, which is fixed in 21.1.

As a workaround, you may try to reduce the memory usage by setting the hidden parameter "_fix_control"='20424684:OFF'.

At session level:
alter session set "_fix_control"='20424684:OFF';
At Instance level:
alter system set "_fix_control"='20424684:OFF';

Thanks & Regards,
http://oracleracexpert.com, Oracle ACE.

Monday, February 21, 2022

DBMS_DST Package updating Database Time Zone File

Users normally see the below message in the alert.log file when upgrading the database. You can use the DBMS_DST package to upgrade the time zone file.

Database is using a timezone file older than version xx

There are two types of time zone files, a large file and a small file, located under oracore/zoneinfo in the ORACLE_HOME directory.

• The large versions are named timezlrg_version_number.dat; this file contains all the time zones defined in the database.

• The small versions are named timezone_version_number.dat; this file contains only the most commonly used time zones.

You can query V$TIMEZONE_FILE to identify the time zone file version used by the database.

SQL> select filename, version from v$timezone_file;
FILENAME VERSION
-------------------- ----------
timezlrg_26.dat 26

You can get the primary and secondary time zone version details from DATABASE_PROPERTIES:

SQL> column property_name format a40
SQL> column property_value format a30
SQL> select property_name, property_value from database_properties where property_name like 'DST_%' order by property_name;

PROPERTY_NAME PROPERTY_VALUE
---------------------------------------- ------------------------------
DST_PRIMARY_TT_VERSION 26
DST_SECONDARY_TT_VERSION 0
DST_UPGRADE_STATE NONE

After an upgrade you can check upg_summary.log to identify the database time zone version and the current release time zone version.

Oracle Database Release 19 Post-Upgrade Status Tool 04-30-2021 11:07:0
Database Name: TDB1
Component Current Full Elapsed Time
Name Status Version HH:MM:SS
Oracle Server UPGRADED 19.10.0.0.0 00:09:54
JServer JAVA Virtual Machine VALID 19.10.0.0.0 00:01:12
Oracle XDK UPGRADED 19.10.0.0.0 00:00:38
Oracle Database Java Packages UPGRADED 19.10.0.0.0 00:00:05
Oracle Text UPGRADED 19.10.0.0.0 00:00:23
Oracle Workspace Manager UPGRADED 19.10.0.0.0 00:00:22
Oracle Real Application Clusters OPTION OFF 19.10.0.0.0 00:00:00
Oracle XML Database UPGRADED 19.10.0.0.0 00:00:51
Oracle Multimedia UPGRADED 19.10.0.0.0 00:01:47
Datapatch 00:01:41
Final Actions 00:01:44
Post Upgrade 00:00:16

Total Upgrade Time: 00:17:31

Database time zone version is 26. It is older than current release time zone version 32. Time zone upgrade is needed using the DBMS_DST package.

Grand Total Upgrade Time: [0d:0h:20m:20s]

You can also get the latest time zone version using the below query:

SQL> SELECT DBMS_DST.get_latest_timezone_version FROM dual;
GET_LATEST_TIMEZONE_VERSION
---------------------------
32

To upgrade the time zone file, follow the below steps.

1. Shut down the database and start it in upgrade mode

SQL> Shutdown immediate
SQL> Startup upgrade

2. Prepare the time zone version upgrade using the BEGIN_PREPARE procedure

SQL> set serveroutput on
SQL> declare
l_tz_version pls_integer;
begin
l_tz_version := dbms_dst.get_latest_timezone_version;
dbms_output.put_line('l_tz_version=' || l_tz_version);
DBMS_DST.BEGIN_PREPARE(l_tz_version);
end;
/
PL/SQL procedure successfully completed.

The below query shows the upgrade version we are attempting to upgrade to:

SQL> column property_name format a40
SQL> column property_value format a30
SQL> select property_name, property_value from database_properties where property_name like 'DST_%' order by property_name;

PROPERTY_NAME PROPERTY_VALUE
---------------------------------------- ------------------------------
DST_PRIMARY_TT_VERSION 26
DST_SECONDARY_TT_VERSION 32
DST_UPGRADE_STATE PREPARE

3. You can find the affected tables by running the below queries

SQL> exec DBMS_DST.find_affected_tables;
PL/SQL procedure successfully completed.

SQL> select count(*) from sys.dst$affected_tables;
COUNT(*)
----------
0
SQL> select * from sys.dst$error_table;
no rows selected

4. Use the END_PREPARE procedure to end the prepare stage, then begin the upgrade using the BEGIN_UPGRADE procedure

SQL> EXEC DBMS_DST.END_PREPARE;
PL/SQL procedure successfully completed.

SQL> set serveroutput on
SQL> declare
l_tz_version pls_integer;
begin
l_tz_version := dbms_dst.get_latest_timezone_version;
dbms_output.put_line('l_tz_version=' || l_tz_version);
DBMS_DST.BEGIN_UPGRADE(l_tz_version);
end;
/
l_tz_version=32
An upgrade window has been successfully started.

PL/SQL procedure successfully completed.

5. Open the database in normal mode and upgrade the database time zone file using DBMS_DST.UPGRADE_DATABASE

SQL> shutdown immediate
SQL> startup

SQL> set serveroutput on
SQL> declare
l_failures pls_integer;
begin
DBMS_DST.UPGRADE_DATABASE(l_failures);
dbms_output.put_line('dbms_dst.upgrade_database : l_failures=' || l_failures);
DBMS_DST.END_UPGRADE(l_failures);
dbms_output.put_line('dbms_dst.end_upgrade : l_failures=' || l_failures);
end;
/
Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_S"
Number of failures: 0
Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_L"
Number of failures: 0
Table list: "MDSYS"."SDO_DIAG_MESSAGES_TABLE"
Number of failures: 0
Table list: "DVSYS"."SIMULATION_LOG$"
Number of failures: 0
Table list: "DVSYS"."AUDIT_TRAIL$"
Number of failures: 0
DBMS_DST.upgrade_database : l_failures=0
An upgrade window has been successfully ended.
DBMS_DST.end_upgrade : l_failures=0

PL/SQL procedure successfully completed.

Verify the time zone upgrade by running the below query:

SQL> column property_name format a40
SQL> column property_value format a30
SQL> select property_name, property_value from database_properties where property_name like 'DST_%' order by property_name;

PROPERTY_NAME PROPERTY_VALUE
---------------------------------------- ------------------------------
DST_PRIMARY_TT_VERSION 32
DST_SECONDARY_TT_VERSION 0
DST_UPGRADE_STATE NONE

Thanks & Regards,
https://oracleracexpert.com, Oracle ACE

Wednesday, February 9, 2022

Oracle Data pump import stuck Processing object type DATABASE_EXPORT/SCHEMA

You might come across issues when importing data using Data Pump. I have faced this issue several times while importing domain indexes and other objects; you can encounter the DOMAIN INDEX issue related to the CTXSYS schema. Note that the data import may complete quickly, but the import can get stuck when creating DOMAIN INDEXES or INDEX objects, and it may still be running even after a few days.

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/PACKAGE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/CODE_BASE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/ALTER_FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/VIEW
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/DOMAIN_INDEX/INDEX

Please note that a domain or normal index rebuild may take time during the import, based upon the index size, and other factors may also affect the operation. Here are a few things you need to check:

1. Did you gather stats before the import? If yes, exclude statistics from the import and gather stats manually after the import completes.

2. Make sure all tablespaces have enough space - imports sometimes get stuck due to lack of space.

3. Make sure you have a large enough STREAMS_POOL_SIZE - the value should be at least 512M or more; a quick check-and-set sketch follows.
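
A minimal check-and-set sketch for the streams pool (scope=both assumes an spfile is in use):

SQL> select component, current_size/1024/1024 size_mb
from v$sga_dynamic_components
where component = 'streams pool';
SQL> alter system set streams_pool_size = 512M scope = both;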

If you are still facing the issue, run the below commands to identify the SQL statement the Data Pump process is executing.

a) Find out whether the Data Pump import job is running using the below SQL:

SQL> select owner_name, job_name, operation, job_mode from dba_datapump_jobs where state='EXECUTING';

You can run the below command to identify the sessions used by the Data Pump job:

SQL> select owner_name, job_name, session_type from dba_datapump_sessions;

b) If the job is still stuck at the same step, identify the current SQL statement the Data Pump process is executing (refer to Oracle Support Doc ID 1528301.1) and identify the object.

If, after trying all of the options, the index creation is still taking too long, the last option is to EXCLUDE the object in question and create it manually after the import completes successfully.

You can get the object's DDL using the below queries:

SQL> select dbms_metadata.get_ddl('INDEX','<Index_Name>','<Schema_Name>') from dual;
or
SQL> select dbms_metadata.get_ddl('TABLE','<Table_Name>','<Schema_Name>') from dual;

You can exclude the index by adding the below clause to the import command:

EXCLUDE=INDEX:"LIKE '<Index_name_that_got_stuck>%'"

After the import is successful, you can create the object manually; in my case it was an index.

To monitor Data Pump jobs, query the DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS views. You can also query V$SESSION_LONGOPS to see the progress of a Data Pump job.
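
A minimal progress sketch against V$SESSION_LONGOPS (Data Pump entries usually carry the job name in OPNAME; the filter below simply hides completed operations):

SQL> select sid, serial#, opname, sofar, totalwork,
round(sofar/totalwork*100, 2) pct_done
from v$session_longops
where totalwork > 0 and sofar <> totalwork;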

The below script is very useful to identify the database role, version, registry status, patch level, and invalid objects. I would highly suggest running this script for any maintenance activity you perform on a database.

SET PAGESIZE 2000
SET LINESIZE 500
COL OBJECT_NAME FORMAT A30
COL OBJECT_TYPE FORMAT A30
COL COMP_ID FORMAT A10
COL COMP_NAME FORMAT A45  
COL OWNER FORMAT A15
COL STATUS FORMAT A10
COL VERSION FORMAT A10  
/* Database Role and Version */
select NAME, PLATFORM_ID, DATABASE_ROLE from v$database;
select * from V$version where banner like 'Oracle Database%';
/* Database Component Registry status */
select comp_id, comp_name, status, version from dba_registry;
/* Database patch Level*/
select * from dba_registry_history;
/* INVALID objects in the DB count, by type & in detail */
select count(*) "INVALID Objects Count" from dba_objects where status !='VALID';
select owner, object_type, count(*) from dba_objects where status !='VALID' group by owner, object_type order by owner, object_type;
select owner, object_type, object_name, status from dba_objects where status !='VALID' order by owner, object_type, object_name;

Hope this helps

Regards,
http://oracleracexpert.com, Oracle ACE

Wednesday, February 2, 2022

Automatic Temporary Tablespace and Undo Tablespace Shrink in Oracle 21c

Automatic Temporary Tablespace Shrink

Temporary tablespaces are used for storing temporary data. Users will notice high temporary tablespace usage during sorts, hash joins, and query transformations, and the DBA traditionally needs to size the temporary tablespace manually.

The automatic temporary tablespace feature takes care of the below operations automatically:
  • Shrink temporary tablespace to reclaim unused space
  • Grow temporary tablespace based upon high temp usage
You can run the below queries to identify the free space:

SQL> SELECT * FROM dba_temp_free_space;

Or

SQL> SELECT TABLESPACE_NAME,TABLESPACE_SIZE/1024/1024 "TABLESPACE_SIZE", FREE_SPACE/1024/1024 "FREE_SPACE" from dba_temp_free_space;

Automatic Undo  Tablespace Shrink

Oracle Database creates and manages the information needed to roll back, or undo, changes before they are committed. These undo records are stored in undo segments in an undo tablespace.

An undo tablespace is used for undo management; it stores the data needed to undo or roll back any transaction. Automatic undo management, the default in modern releases, manages the undo segments in the database. Setting the UNDO_MANAGEMENT parameter to AUTO (or leaving it null) enables automatic undo management, and UNDO_TABLESPACE specifies the name of the undo tablespace.

Use the UNDO_RETENTION initialization parameter to specify the minimum undo retention period in seconds; you will see better results with a fixed-size undo tablespace when using automatic tuning of undo retention. The Undo Advisor can help you estimate the required capacity, and you can access it through Oracle Enterprise Manager or the DBMS_ADVISOR or DBMS_UNDO_ADV packages.
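
For a quick estimate, a minimal sketch with DBMS_UNDO_ADV, assuming the function returns the required undo size in MB for the given retention in seconds:

SQL> select dbms_undo_adv.required_undo_size(900) as required_mb from dual;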

The undo tablespace can grow large, and an easy way to reclaim the space is to create a new undo tablespace, switch the database to it, and drop the old undo tablespace.

SQL> CREATE UNDO TABLESPACE UNDO2 DATAFILE '/u01/oradata/ORCL/undo02.dbf' SIZE 5G AUTOEXTEND ON ;
SQL> ALTER SYSTEM SET UNDO_TABLESPACE=UNDO2;
SQL> DROP TABLESPACE UNDO1 INCLUDING CONTENTS AND DATAFILES;

Oracle 21c introduces an automated way to recover this space. The feature shrinks the undo tablespace by dropping expired segments and extents, and it also shrinks the data file where possible. Note that the data file shrink is based upon the allocated extents.

This feature takes care of the below operations automatically:
  • Recovers space from transactions that are no longer active
  • Allows large transactions to run successfully by recovering space from expired undo
Please see the below example of using DBMS_ADVISOR for the Undo Advisor.

DECLARE
tid NUMBER;
tname VARCHAR2(30);
oid NUMBER;
BEGIN
DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo Advisor Task');
DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', null, null, null, 'null', oid);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS', oid);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'INSTANCE', 1);
DBMS_ADVISOR.EXECUTE_TASK(tname);
END;
/

Where:
TARGET_OBJECTS is the undo tablespace of the system
START_SNAPSHOT is the starting snapshot in the AWR for the analysis
END_SNAPSHOT is the ending snapshot in the AWR for the analysis
BEGIN_TIME_SEC is the number of seconds between the beginning of the analysis period and now
END_TIME_SEC is the number of seconds between the end of the analysis period and now

Thanks & Regards,
http://oracleracexpert.com, Oracle ACE

Thursday, January 6, 2022

Oracle Data pump enhancements in Oracle 21c

Oracle 21c offers new enhancements for Oracle Data Pump; you can take advantage of the below features:

  • CHECKSUM, CHECKSUM_ALGORITHM
  • VERIFY_ONLY and VERIFY_CHECKSUM
  • INCLUDE and EXCLUDE in the Same Operation
  • Index Compression
  • Transportable Tablespace Enhancements
  • JSON Data Type Support

CHECKSUM, CHECKSUM_ALGORITHM

These parameters enable the export to generate a checksum for each of the dump files; you can enable this using CHECKSUM or CHECKSUM_ALGORITHM. In order to use them, the COMPATIBLE initialization parameter must be set to 21.0 or higher.

$ expdp \'/ AS SYSDBA\' dumpfile=test.dmp schemas=testuser DIRECTORY=test_dir LOGFILE=test.log CHECKSUM=YES checksum_algorithm=SHA256

VERIFY_ONLY and VERIFY_CHECKSUM
The VERIFY_ONLY and VERIFY_CHECKSUM parameters use the checksum to validate dump files during the import. The VERIFY_CHECKSUM and VERIFY_ONLY parameters are mutually exclusive.

$ impdp \'/ AS SYSDBA\' dumpfile=test.dmp DIRECTORY=test_dir LOGFILE=test.log verify_checksum=yes

INCLUDE and EXCLUDE in the Same Operation
From Oracle 21c, you can include and exclude objects within the same export or import job:

$ expdp \'/ AS SYSDBA\' dumpfile=test.dmp schemas=testuser DIRECTORY=test_dir LOGFILE=test.log include=table exclude=statistics

Index Compression
In Oracle Database 21c, we can compress indexes while importing by using the INDEX_COMPRESSION_CLAUSE with the TRANSFORM parameter:

$ impdp \'/ AS SYSDBA\' dumpfile=test.dmp FULL=Y DIRECTORY=test_dir TRANSFORM=TABLE_COMPRESSION_CLAUSE:\"COMPRESS BASIC\" TRANSFORM=INDEX_COMPRESSION_CLAUSE:\"COMPRESS ADVANCED LOW\" EXCLUDE=CONSTRAINT

Master table "SYS"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01": "/******** AS SYSDBA" FULL=Y DIRECTORY=test_dir TRANSFORM=TABLE_COMPRESSION_CLAUSE:\"COMPRESS BASIC\" TRANSFORM=INDEX_COMPRESSION_CLAUSE:\"COMPRESS ADVANCED LOW\" EXCLUDE=CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported "TEST"."EMP" 188.08 KB 107 rows
. . imported "TEST"."DEPT" 90.50 KB 90 rows
. . imported "TEST"."SAL" 40.50 KB 38 rows
Processing object type TABLE_EXPORT/TABLE/COMMENT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/TRIGGER
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job "SYS"."SYS_IMPORT_FULL_01" successfully completed at Fri Dec 10 09:22:27 2021 elapsed 0 00:00:32

Transportable Tablespace Enhancements
Before Oracle 21c you could not resume failed transportable tablespace jobs, but now Oracle Data Pump resumes transportable tablespace jobs and also parallelizes metadata operations using the PARALLEL parameter.

$ expdp \'/ AS SYSDBA\' dumpfile=test.dmp TRANSPORT_TABLESPACES=testusr DIRECTORY=test_dir TRANSPORT_FULL_CHECK=YES LOGFILE=test.log REUSE_DUMPFILES=YES PARALLEL=2

Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01": "/******** AS SYSDBA" dumpfile=test.dmp TRANSPORT_TABLESPACES= testusr TRANSPORT_FULL_CHECK=YES LOGFILE=test.log REUSE_DUMPFILES=YES PARALLEL=2
ORA-39396: Warning: exporting encrypted data using transportable option without password
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/INDEX/STATISTICS/BITMAP_INDEX/INDEX_STATISTICS Processing object type TRANSPORTABLE_EXPORT/STATISTICS/TABLE_STATISTICS Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/STATISTICS/MARKER
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/INDEX
Processing object type TRANSPORTABLE_EXPORT/COMMENT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT Processing object type TRANSPORTABLE_EXPORT/TRIGGER
Processing object type TRANSPORTABLE_EXPORT/INDEX/BITMAP_INDEX/INDEX Processing object type TRANSPORTABLE_EXPORT/INDEX/DOMAIN_INDEX/SECONDARY_TABLE/INDEX/INDEX Processing object type TRANSPORTABLE_EXPORT/INDEX/DOMAIN_INDEX/SECONDARY_TABLE/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX/DOMAIN_INDEX/SECONDARY_TABLE/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX/DOMAIN_INDEX/INDEX Processing object type TRANSPORTABLE_EXPORT/MATERIALIZED_VIEW Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded

JSON Data Type Support
Oracle Data Pump enables export and import of Oracle Database native JSON objects. We can also export and import tables containing the JSON data type in full, tablespace, and table modes, including via transportable tablespace.
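
A minimal sketch of a table using the 21c native JSON type that such an export would carry (the emp_json table is an example; a VARCHAR2 literal converts implicitly):

SQL> create table emp_json (id number, doc json);
SQL> insert into emp_json values (1, '{"name":"Scott","dept":"SALES"}');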

Oracle 21c also offers export and import from Oracle Autonomous Database and cloud object stores.

Thanks & Regards,