Sunday, April 17, 2022

Webinar: Oracle Data Guard New Features in Oracle 18c & 19c

This webinar helps you understand the benefits of Oracle Data Guard, standby database types, protection modes, and the new features in Oracle 18c and 19c.

Date and time: May 5th 2022 8:00am-9:00am
Pacific Daylight Time (San Francisco, GMT-07:00)


This webinar covers the following topics:

• Introduction to Oracle Data Guard
• Oracle Data Guard Benefits
• Standby Database types
• Data Guard Protection Modes
• What's new in Oracle 18c
• What's new in Oracle 19c
• References
• Contact Info
• Q&A

To register for this Webinar, please send an email to SatishbabuGunukula@gmail.com

Note that registrations are limited and are on a first-come, first-served basis.
You will receive an email confirmation with the meeting session link.

For the presentation link, click here.

Thanks & Regards,
http://www.oracleracexpert.com

Friday, March 11, 2022

Processes using the most PGA memory are not eligible to receive ORA-4036 interrupts

Users may receive ORA-04036 errors when PGA_AGGREGATE_LIMIT has been exceeded. In some cases the processes using the most PGA are not eligible to receive the interrupt, and you will see messages like the following written to the alert log and trace files:

ARC1 (PID:42467): Archived Log entry 10265 added for T-1.S-10440 ID 0xb264618a LAD:1
2022-05-15T02:07:49.670543-07:00
PGA_AGGREGATE_LIMIT has been exceeded but some processes using the most PGA
memory are not eligible to receive ORA-4036 interrupts. Further occurrences
of this condition will be written to the trace file of the DBRM process.
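
To see which sessions and background processes are holding the most PGA, you can join V$PROCESS to V$SESSION. A quick sketch (PGA_USED_MEM and PGA_ALLOC_MEM are reported in bytes):

SQL> SELECT p.spid, s.sid, s.username, s.program,
            ROUND(p.pga_used_mem/1024/1024) pga_used_mb,
            ROUND(p.pga_alloc_mem/1024/1024) pga_alloc_mb
     FROM v$process p, v$session s
     WHERE p.addr = s.paddr
     ORDER BY p.pga_alloc_mem DESC;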

When you encounter this issue, the sessions consuming the most PGA will be interrupted or terminated until usage drops back below the limit. Note that without PGA_AGGREGATE_LIMIT, Oracle can exceed the amount of available RAM, which may lead to paging and RAC node evictions.

The V$PGA_TARGET_ADVICE view helps predict how the cache hit percentage and over-allocation count statistics displayed in the V$PGASTAT performance view would change for different values of PGA_AGGREGATE_TARGET.

SQL> select pga_target_for_estimate, pga_target_factor, estd_time, estd_pga_cache_hit_percentage, estd_overalloc_count from v$pga_target_advice;

The query below retrieves PGA target advice from v$pga_target_advice_histogram:

SQL> SELECT LOW_OPTIMAL_SIZE/1024 "LOW VALUE IN KB",
            (HIGH_OPTIMAL_SIZE+1)/1024 "HIGH VALUE IN KB",
            ESTD_OPTIMAL_EXECUTIONS "OPTIMAL EXECUTIONS",
            ESTD_ONEPASS_EXECUTIONS "ONE-PASS EXECUTIONS",
            ESTD_MULTIPASSES_EXECUTIONS "MULTI-PASS EXECUTIONS"
FROM V$PGA_TARGET_ADVICE_HISTOGRAM
WHERE PGA_TARGET_FACTOR = 2 AND ESTD_TOTAL_EXECUTIONS != 0
ORDER BY 1;

By default, PGA_AGGREGATE_LIMIT is set to the greater of 2 GB, 200% of PGA_AGGREGATE_TARGET, and 3 MB times the PROCESSES parameter. When you receive these errors, I would suggest doubling the value or setting a value appropriate for your environment.

You can run the commands below to check PGA_AGGREGATE_LIMIT and increase its value:

SQL> show parameter PGA_AGGREGATE_LIMIT
SQL> alter system set PGA_AGGREGATE_LIMIT=<xGB> scope=spfile;
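
Before settling on a new value, it can help to compare the limit against actual usage. A small sketch using V$PGASTAT (these rows are reported in bytes):

SQL> SELECT name, ROUND(value/1024/1024) value_mb
     FROM v$pgastat
     WHERE name IN ('aggregate PGA target parameter',
                    'total PGA allocated',
                    'maximum PGA allocated');

If 'maximum PGA allocated' is regularly close to the limit, the limit is probably undersized for the workload.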

Sometimes users may receive the errors below when PGA_AGGREGATE_LIMIT is set to zero. Make sure you set a non-zero, appropriate value.

ORA-04036: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT
REP-0069: Internal error
REP-57054: In-process job terminated:Terminated with error:
REP-300: PGA memory used by the instance exceeds PGA_AGGREGATE_LIMIT

Users may experience a database crash in Oracle 19c when using DBMS_STATS.GATHER_TABLE_STATS. This is due to Oracle product defect Bug 30846782, which is fixed in 21.1.

As a workaround, you can try to reduce memory usage by setting the hidden parameter "_fix_control"='20424684:OFF'.

At session level:
SQL> alter session set "_fix_control"='20424684:OFF';
At instance level:
SQL> alter system set "_fix_control"='20424684:OFF';
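
To verify that the fix control actually took effect at the instance level, you can query V$SYSTEM_FIX_CONTROL (a VALUE of 0 means the fix is OFF):

SQL> SELECT bugno, value, description
     FROM v$system_fix_control
     WHERE bugno = 20424684;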

Thanks & Regards,
http://oracleracexpert.com, Oracle ACE.

Wednesday, February 9, 2022

Oracle Data Pump import stuck at Processing object type DATABASE_EXPORT/SCHEMA

You might come across issues when importing data using Data Pump. I have faced this issue several times while importing domain indexes and other objects; in particular, you can encounter a DOMAIN INDEX issue related to the CTXSYS schema. Note that the data import may complete quickly, but the import can get stuck while creating DOMAIN INDEX or INDEX objects and may still be running even after a few days.

Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/COMMENT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/PACKAGE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/CODE_BASE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/ALTER_FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/VIEW
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/DOMAIN_INDEX/INDEX

Please note that a domain or normal index rebuild during import may take time depending on the index size, and other factors can also affect the operation. Here are a few things to check:

1. Did you gather stats before the import? If yes, exclude statistics (EXCLUDE=STATISTICS) and gather stats manually after the import completes.

2. Make sure all tablespaces have enough space; imports sometimes get stuck due to insufficient space.

3. Make sure you have enough STREAMS_POOL_SIZE; the value should be at least 512M, as shown in the sketch below.
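
For example, you can check and raise the streams pool as follows (512M here is the suggested minimum, not a universal value):

SQL> show parameter streams_pool_size
SQL> alter system set streams_pool_size=512M scope=both;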

If you are still facing the issue, you need to run the commands below to identify the SQL statement a Data Pump process is executing.

a) Find out whether the Data Pump import job is running using the SQL below:

SQL> select owner_name, job_name, operation, job_mode from dba_datapump_jobs where state='EXECUTING';

You can run the command below to identify the sessions used by the Data Pump job.

SQL> select owner_name, job_name, session_type from dba_datapump_sessions;

b) If the job is still running, identify the current SQL statement the Data Pump process is executing (refer to Oracle Support Doc ID 1528301.1) and identify the object it is working on.
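
A sketch along the lines of that note, mapping the Data Pump sessions to their currently executing SQL (it may return multiple rows per session if a cursor has several children):

SQL> SELECT s.sid, s.serial#, s.status, q.sql_text
     FROM v$session s, v$sql q
     WHERE s.sql_id = q.sql_id
     AND s.saddr IN (SELECT saddr FROM dba_datapump_sessions);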

After you have tried all these options, if the index creation is still taking too long, the last option is to EXCLUDE the object and create it manually after the import operation completes.

You can get the object's DDL using the queries below:

SQL> select dbms_metadata.get_ddl('INDEX','<Index_Name>','<Schema_Name>') from dual;
or 
SQL> select dbms_metadata.get_ddl('TABLE','<Table_Name>','<Schema_Name>') from dual;

You can exclude the index by adding the clause below to the import command:

EXCLUDE=INDEX:"LIKE '<Index_Name>%'"
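
Because the quotes in EXCLUDE usually need escaping when typed directly on the command line, it is often easier to put the clause in a parameter file. A hypothetical sketch (the dump file, directory, and index name pattern are placeholders):

$ cat imp.par
directory=DATA_PUMP_DIR
dumpfile=expdp_full.dmp
logfile=impdp_full.log
full=y
exclude=INDEX:"LIKE '<Index_Name>%'"

$ impdp system parfile=imp.par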

After the import is successful, you can create the excluded object manually; in my case it was an index.

To monitor Data Pump jobs, query the views DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS. You can also query V$SESSION_LONGOPS to see the progress of a Data Pump job, as shown below.
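
For example, a sketch against V$SESSION_LONGOPS (for Data Pump, OPNAME typically matches the job name, such as SYS_IMPORT_FULL_01):

SQL> SELECT sid, serial#, opname, sofar, totalwork,
            ROUND(sofar/totalwork*100, 2) pct_done
     FROM v$session_longops
     WHERE totalwork > 0
     AND sofar <> totalwork
     ORDER BY pct_done;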

The script below is very useful to identify the database role, version, registry status, patch level, and invalid objects. I would highly suggest running it as part of any maintenance activity you perform on a database.

SET PAGESIZE 2000
SET LINESIZE 500
COL OBJECT_NAME FORMAT A30
COL OBJECT_TYPE FORMAT A30
COL COMP_ID FORMAT A10
COL COMP_NAME FORMAT A45  
COL OWNER FORMAT A15
COL STATUS FORMAT A10
COL VERSION FORMAT A10  
/* Database Role and Version */
select NAME, PLATFORM_ID, DATABASE_ROLE from v$database;
select * from V$version where banner like 'Oracle Database%';
/* Database Component Registry status */
select comp_id, comp_name, status, version from dba_registry;
/* Database patch Level*/
select * from dba_registry_history;
/* INVALID objects in the DB count, by type & in detail */
select count(*) "INVALID Objects Count" from dba_objects where status !='VALID';
select owner, object_type, count(*) from dba_objects where status !='VALID' group by owner, object_type order by owner, object_type;
select owner, object_type, object_name, status from dba_objects where status !='VALID' order by owner, object_type, object_name;

Hope this helps

Regards,
http://oracleracexpert.com, Oracle ACE

Wednesday, February 2, 2022

Automatic Temporary Tablespace and Undo Tablespace Shrink in Oracle 21c

Automatic Temporary Tablespace Shrink

Temporary tablespaces are used for storing temporary data. Users will notice high temporary tablespace usage with sorts, hash joins, and query transformations, and traditionally the DBA needs to size the temporary tablespace manually.

Automatic temporary tablespace shrink takes care of the operations below automatically:
  • Shrink temporary tablespace to reclaim unused space
  • Grow temporary tablespace based upon high temp usage
You can run the query below to identify the free space:

SQL> SELECT * FROM dba_temp_free_space;

Or

SQL> SELECT TABLESPACE_NAME,TABLESPACE_SIZE/1024/1024 "TABLESPACE_SIZE", FREE_SPACE/1024/1024 "FREE_SPACE" from dba_temp_free_space;
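
To see which sessions are actually driving the temporary space consumption, you can query V$TEMPSEG_USAGE. A sketch (BLOCKS is multiplied by the tablespace block size to convert to MB):

SQL> SELECT u.username, u.sql_id, u.tablespace, u.segtype,
            ROUND(u.blocks * t.block_size/1024/1024) used_mb
     FROM v$tempseg_usage u, dba_tablespaces t
     WHERE u.tablespace = t.tablespace_name
     ORDER BY u.blocks DESC;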

Automatic Undo Tablespace Shrink

Oracle Database creates and manages the information needed to roll back, or undo, changes before they are committed. These undo records are stored in the undo segments of an undo tablespace.

An undo tablespace is used for undo management; it stores the data needed to undo, or roll back, any transaction. Automatic undo management, the default since Oracle 11g, manages the undo segments in the database. Setting the UNDO_MANAGEMENT parameter to AUTO (or leaving it null) enables automatic undo management, and UNDO_TABLESPACE specifies the name of the undo tablespace.

Use the UNDO_RETENTION initialization parameter to specify the minimum undo retention period in seconds; automatic tuning of undo retention gives better results with a fixed-size undo tablespace. The Undo Advisor can help you estimate the required capacity, and you can access it through Oracle Enterprise Manager or the DBMS_ADVISOR or DBMS_UNDO_ADV packages.
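
Before running the advisor, V$UNDOSTAT is worth a look; it records undo activity in ten-minute buckets, including the auto-tuned retention. A quick sketch:

SQL> SELECT begin_time, end_time, undoblks, txncount,
            maxquerylen, tuned_undoretention
     FROM v$undostat
     ORDER BY begin_time;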

The undo tablespace can grow large. Before 21c, the easy way to reclaim the space was to create a new undo tablespace, switch the database to it, and drop the old one:

SQL> CREATE UNDO TABLESPACE UNDO2 DATAFILE '/u01/oradata/ORCL/undo02.dbf' SIZE 5G AUTOEXTEND ON;
SQL> ALTER SYSTEM SET UNDO_TABLESPACE=UNDO2;
SQL> DROP TABLESPACE UNDO1 INCLUDING CONTENTS AND DATAFILES;

Oracle 21c introduces an automated way to recover this space. The feature shrinks the undo tablespace by dropping expired segments and extents, and it also shrinks the data file where possible. Note that the data file shrink is based on the allocated extents.

This feature takes care of the operations below automatically:
  • Recover space from transactions that are no longer active
  • Allow large transactions to run successfully by recovering space from expired undo
Please see the example below of using DBMS_ADVISOR for undo analysis.

DECLARE
  tid   NUMBER;
  tname VARCHAR2(30);
  oid   NUMBER;
BEGIN
  DBMS_ADVISOR.CREATE_TASK('Undo Advisor', tid, tname, 'Undo Advisor Task');
  DBMS_ADVISOR.CREATE_OBJECT(tname, 'UNDO_TBS', null, null, null, 'null', oid);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'TARGET_OBJECTS', oid);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'START_SNAPSHOT', 1);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'END_SNAPSHOT', 2);
  DBMS_ADVISOR.SET_TASK_PARAMETER(tname, 'INSTANCE', 1);
  DBMS_ADVISOR.EXECUTE_TASK(tname);
END;
/

Where:
  TARGET_OBJECTS - the undo tablespace of the system
  START_SNAPSHOT - starting snapshot in the AWR for the analysis period
  END_SNAPSHOT   - ending snapshot in the AWR for the analysis period
  BEGIN_TIME_SEC - the number of seconds between the beginning of the period and now
  END_TIME_SEC   - the number of seconds between the end of the period and now

Thanks & Regards,
http://oracleracexpert.com, Oracle ACE

Wednesday, January 19, 2022

DBMS_CLOUD Package in Oracle 21c

To work with data in object stores, you can either use the DBMS_CLOUD package or manually define external tables. The DBMS_CLOUD package was introduced in Oracle Autonomous Database to work with object stores; it can also be used on-premises with 19c and 21c, but you need to install it manually. You can refer to the Oracle note "How To Setup And Use DBMS_CLOUD Package (Doc ID 2748362.1)" for setup and other details.

You can run the commands below to create a directory and grant access on the directory and the DBMS_CLOUD package to the emp_usr user.

CREATE OR REPLACE DIRECTORY EMP_DIR AS '/oracle/emp_dir';
GRANT EXECUTE ON DBMS_CLOUD TO emp_usr;
GRANT READ, WRITE ON DIRECTORY emp_dir TO emp_usr, C##CLOUD$SERVICE;

Note that DBMS_CLOUD is owned by a separate schema, C##CLOUD$SERVICE, which is locked by default. Both the emp_usr and C##CLOUD$SERVICE users need access to the directory.

Credential - You can create a credential for your object store using the CREATE_CREDENTIAL procedure.

GRANT CREATE CREDENTIAL TO emp_usr;

BEGIN
DBMS_CLOUD.CREATE_CREDENTIAL (
CREDENTIAL_NAME => 'cloud_obj_cred',
USERNAME => '<username>',
PASSWORD => '<user auth token>'
) ;
END;
/

You can use the UPDATE_CREDENTIAL procedure to make changes to a credential:

BEGIN
DBMS_CREDENTIAL.UPDATE_CREDENTIAL(
CREDENTIAL_NAME => 'cloud_obj_cred',
ATTRIBUTE => 'username',
VALUE => 'emp_usr');
END;
/

To drop a credential, use the DROP_CREDENTIAL procedure:

BEGIN
DBMS_CLOUD.DROP_CREDENTIAL(CREDENTIAL_NAME => 'cloud_obj_cred');
END;
/

You can disable and enable a credential using the DISABLE_CREDENTIAL and ENABLE_CREDENTIAL procedures:

BEGIN
DBMS_CREDENTIAL.DISABLE_CREDENTIAL('cloud_obj_cred');
END;
/

BEGIN
DBMS_CREDENTIAL.ENABLE_CREDENTIAL('cloud_obj_cred');
END;
/

Object store - Use the commands below for managing objects in the cloud object store.

To transfer a file from a directory to the cloud object store, use the PUT_OBJECT procedure:

BEGIN
DBMS_CLOUD.PUT_OBJECT (
CREDENTIAL_NAME => 'cloud_obj_cred',
OBJECT_URI => 'https://xxxxxxxxxxxx/emp_file.txt',
DIRECTORY_NAME => 'emp_dir',
FILE_NAME => 'emp_file.txt');
END;
/

To transfer an object from the cloud object store to a directory, use the GET_OBJECT procedure:

BEGIN
DBMS_CLOUD.GET_OBJECT (
CREDENTIAL_NAME => 'cloud_obj_cred',
OBJECT_URI => 'https://xxxxxxxxxxxx/emp_file.txt',
DIRECTORY_NAME => 'emp_dir',
FILE_NAME => 'emp_file.txt');
END;
/

You can use DBMS_CLOUD.LIST_OBJECTS to list the objects in the object store, and the GET_METADATA function to get the metadata of a specific object:

SELECT * FROM DBMS_CLOUD.LIST_OBJECTS(CREDENTIAL_NAME => 'cloud_obj_cred', LOCATION_URI => 'https://xxxxxxxxxxx');

SELECT DBMS_CLOUD.GET_METADATA(CREDENTIAL_NAME => 'cloud_obj_cred', OBJECT_URI => 'https://xxxxxxxxxxx/emp_file.txt') AS METADATA FROM DUAL;

You can delete objects from the cloud object store using the DELETE_OBJECT procedure:
 
BEGIN
DBMS_CLOUD.DELETE_OBJECT(
CREDENTIAL_NAME => 'cloud_obj_cred',
OBJECT_URI => 'https://xxxxxxxx/emp_file.txt');
END;
/

You can delete files from a directory using the DELETE_FILE procedure:
 
BEGIN
DBMS_CLOUD.DELETE_FILE(
DIRECTORY_NAME => 'emp_dir',
FILE_NAME => 'emp_file.txt');
END;
/

To list the files in a directory, use the LIST_FILES function:

SELECT * FROM DBMS_CLOUD.LIST_FILES(DIRECTORY_NAME => 'emp_dir');

To export data into the cloud object store in the required format, use the EXPORT_DATA procedure:

BEGIN
DBMS_CLOUD.EXPORT_DATA (
CREDENTIAL_NAME => 'cloud_obj_cred',
FILE_URI_LIST => 'https://xxxxxxxxxxx/emp_file.csv',
QUERY => 'select * from emp',
FORMAT => '{"type" : "csv"}');
END;
/
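
As mentioned in the introduction, DBMS_CLOUD can also define an external table directly over an object store file, so you can query the data in place. A minimal sketch using the CREATE_EXTERNAL_TABLE procedure (the table name and column list here are assumptions for illustration):

BEGIN
DBMS_CLOUD.CREATE_EXTERNAL_TABLE (
TABLE_NAME => 'EMP_EXT', -- hypothetical external table name
CREDENTIAL_NAME => 'cloud_obj_cred',
FILE_URI_LIST => 'https://xxxxxxxxxxx/emp_file.csv',
FORMAT => '{"type" : "csv"}',
COLUMN_LIST => 'EMPNO NUMBER, ENAME VARCHAR2(50)'); -- assumed columns
END;
/

SQL> SELECT * FROM emp_ext;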

To delete all operations, use the DELETE_ALL_OPERATIONS procedure:

BEGIN
DBMS_CLOUD.DELETE_ALL_OPERATIONS;
END;
/

To delete a specific operation, use the DELETE_OPERATION procedure; you need to provide the operation ID (see the query below to find it).

BEGIN
DBMS_CLOUD.DELETE_OPERATION(<id>);
END;
/
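
To find the operation ID, you can query the load operation views (assuming they were created as part of the DBMS_CLOUD setup per the note above):

SQL> SELECT id, type, status, start_time FROM user_load_operations;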

If you encounter problems with DBMS_CLOUD for a user or role, you can test the configuration using the same sample code that was used for the DBMS_CLOUD setup.

Thanks & Regards,
http://oracleracexpert.com, Oracle ACE