Showing posts with label alert.log errors.

Thursday, February 5, 2026

Resolving ORA-19502, ORA-16038 and ORA-27072 Errors in Oracle Database

We recently encountered the errors below; there are several common causes.

 ORA-19502: write error on file "/oraarch/TESTDB/1_432678_12436018.dbf", block number 182272 (block size=512)
 ORA-16038: log 2 sequence# 432678 cannot be archived
 ORA-19502: write error on file "", block number (block size=)
 ORA-00312: online log 2 thread 1: '/redo2/TESTDB/TESTDB_1B.rdo'
 ORA-27072: File I/O error


The “ORA-27072: File I/O error” can occur for the following common reasons:

  • Disk issue – the disk or storage is inaccessible, possibly due to a hardware problem
  • File corruption – the file system where the database resides may be corrupted
  • Permission issue – the database user does not have sufficient permissions on the file
  • Mount failure – the file system is not mounted properly

The “ORA-16038” error occurs when a redo log file cannot be archived. If the database cannot reuse its online redo log files, log switches stall and the database may hang.

The “ORA-19502” error is mainly caused by insufficient disk space or a full file system.

In our case, the issue was caused by a full archive log filesystem.

When the archive log file system filled up, redo log archiving failed, triggering the ORA-16038 and ORA-19502 errors. This eventually resulted in ORA-27072 due to the failed write attempts.

Recommended steps

1. Check the archive log and db_recovery_file_dest destinations

SHOW PARAMETER log_archive_dest;
SHOW PARAMETER db_recovery_file_dest;

If using FRA:

SHOW PARAMETER db_recovery_file_dest_size;
 
2. Use “df -h” to check the disk space

Pay special attention to:
  • Archive destination mount point
  • FRA mount point
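The check above can be scripted; the sketch below (the threshold and mount points are assumptions, and on a real host the values would come from “df -h” output) flags any filesystem above a usage threshold:

```shell
#!/bin/sh
# Hypothetical sketch: flag filesystems above a usage threshold.
# check_usage takes a mount point and its use% (as reported by
# "df -h" column 5, without the % sign) and warns when crossed.
THRESHOLD=90

check_usage() {
  mount=$1; pct=$2
  if [ "$pct" -ge "$THRESHOLD" ]; then
    echo "WARNING: $mount is ${pct}% full"
  fi
}

# On a real host you would drive it from df, e.g.:
#   df -P /oraarch | awk 'NR==2 {gsub("%","",$5); print $6, $5}'
check_usage /oraarch 97
check_usage /redo2 45
```

The same function can be called from cron for each archive and FRA mount point.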

3. If the file system is full, increase its size by extending the LUN or increasing the FRA size.

4. Make sure you run the backup and delete old archive logs

rman target /
DELETE ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-x';

Here x is the number of days of archive logs to keep.
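As a sketch, the backup-then-purge step can be wrapped in a small script. The retention, the script path, and the NOPROMPT clause are assumptions; the rman invocation is left commented so the generation step stands alone:

```shell
#!/bin/sh
# Hypothetical sketch: build an RMAN script that deletes archive logs
# older than $DAYS days. The rman call is commented out so the
# generation step can be checked without a database.
DAYS=7
RMAN_SCRIPT=/tmp/purge_arch.rman

cat > "$RMAN_SCRIPT" <<EOF
DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-$DAYS';
EOF

# On a real host:
#   rman target / cmdfile="$RMAN_SCRIPT"
cat "$RMAN_SCRIPT"
```

Run the RMAN backup first so the deleted archive logs are already backed up.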

5. If the FRA is full, increase db_recovery_file_dest_size as in the example below

Check FRA usage using the query below

SELECT name, space_limit/1024/1024 MB_LIMIT,
               space_used/1024/1024 MB_USED,
               space_reclaimable/1024/1024 MB_RECLAIMABLE
FROM   v$recovery_file_dest;  

ALTER SYSTEM SET db_recovery_file_dest_size = 200G;
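To decide how much to grow the FRA, it helps to subtract reclaimable space from used space; the arithmetic can be sketched as below (the figures are made-up placeholders standing in for the query output above):

```shell
#!/bin/sh
# Hypothetical sketch: compute effective FRA usage from the three
# numbers v$recovery_file_dest reports (values below are made up).
LIMIT_MB=102400
USED_MB=98304
RECLAIMABLE_MB=20480

# Space that is actually pinned: used minus what RMAN could reclaim.
EFFECTIVE_MB=$(( USED_MB - RECLAIMABLE_MB ))
PCT=$(( EFFECTIVE_MB * 100 / LIMIT_MB ))
echo "effective usage: ${EFFECTIVE_MB} MB (${PCT}% of limit)"
```

If the effective usage is still near the limit after reclaimable space is discounted, growing db_recovery_file_dest_size (or deleting backups) is the only option.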

6. Always check the alert.log to review errors and find the root cause.

Look for:
  • ARCn errors
  • Log switch failures
  • Repeated I/O messages

To avoid this issue in the future, you can take the following measures:
  • Monitor FRA usage regularly
  • Set up alerting when disk usage exceeds 80%
  • Configure proper RMAN retention policy
  • Automate archive log deletion after backup
  • Separate archive logs from other mount points
  • Monitor log switch frequency

In our environment, the archive log filesystem became completely full.

This caused:
ORA-19502 (write failure)
ORA-16038 (cannot archive log)
ORA-27072 (I/O error)

Once disk space was cleared, archiving resumed automatically and the database returned to normal operation.

Thanks & Regards,

Friday, December 18, 2020

ORA-39095: Dump file space has been exhausted: Unable to allocate 8192 bytes

Users can receive the error below during export if you did not provide enough dump file names for the specified FILESIZE

$expdp xxx/xxxx directory=EXP_DIR DUMPFILE=exp_user.dmp LOGFILE=exp_user.log SCHEMAS=EMPUSER FILESIZE=20m
…..

ORA-39095: dump file space has been exhausted: unable to allocate 4096 bytes
Job "EMPUSER"."sys_export_table_01" stopped due to fatal error at 10:05:03

You will also receive this error if you specify a single dump file, or fewer dump files than the PARALLEL value. A worker process keeps its lock on the file even after it finishes, while the other processes wait for their turn to write to it.

$expdp xxx/xxxx directory= EXP_DIR DUMPFILE=exp_user.dmp LOGFILE=exp_user.log SCHEMAS=EMPUSER PARALLEL=5

ORA-39095: dump file space has been exhausted: unable to allocate 8192 bytes

Do not use the PARALLEL clause, or make sure you provide enough dump files for the requested parallelism (for example, by using the %U substitution variable in DUMPFILE).

If you have a space issue on one file system, you can generate multiple dump files on multiple file systems or disks.
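One common way to satisfy a PARALLEL degree without hitting ORA-39095 is to let Data Pump create as many files as it needs via the %U substitution variable in DUMPFILE. The sketch below only builds the command string (credentials, directory, and schema names are the placeholders from the examples above):

```shell
#!/bin/sh
# Hypothetical sketch: build an expdp command whose DUMPFILE uses the
# %U substitution variable so the number of dump files can match the
# PARALLEL degree. The command is printed, not executed.
PARALLEL=5
CMD="expdp xxx/xxxx directory=EXP_DIR DUMPFILE=exp_user_%U.dmp LOGFILE=exp_user.log SCHEMAS=EMPUSER FILESIZE=20m PARALLEL=$PARALLEL"
echo "$CMD"
```

With %U, Data Pump numbers the files (exp_user_01.dmp, exp_user_02.dmp, …) as each worker needs a new one, so FILESIZE and PARALLEL no longer conflict.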


Thanks,
http://oracleracexpert.com
Oracle ACE


Friday, August 7, 2020

ORA-03291: Invalid truncate option - missing STORAGE keyword

Users creating objects for development activity may come across the error message below when truncating a table

ORA-03291: Invalid truncate option - missing STORAGE keyword

The TRUNCATE statement expects the STORAGE keyword; specify either DROP STORAGE or REUSE STORAGE

For ex:- SQL> truncate table dept drop storage ;

I have created this blog post to share with all developers.

Thanks & Regards,
Satishbabu G, Oracle ACE


Monday, August 3, 2020

ORA-16198: Received timed out error from KSR & LAD:2 network reconnect

I have set up Data Guard and log synchronization is happening without any issue, but recently I found the errors below, which occur rarely, in the alert.log file

LGWR (PID:13557): ORA-16198: Received timed out error from KSR
LGWR (PID:13557): Attempting LAD:2 network reconnect (16198)
LGWR (PID:13557): LAD:2 network reconnect abandoned
2020-07-13T15:32:25.865034-07:00
Errors in file /oracle/diag/rdbms/orcl/ORCL/trace/orcl_lgwr_13557.trc:
ORA-16198: Timeout incurred on internal channel during remote archival
LGWR (PID:13557): Error 16198 for LNO:2 to 'orcl_stdby'
2020-07-13T15:32:25.865253-07:00
LGWR (PID:13557): LAD:2 is UNSYNCHRONIZED
LGWR (PID:13557): Failed to archive LNO:2 T-1.S-5003, error=16198
LGWR (PID:13557): Error 16198 disconnecting from LAD:2 standby host 'orcl_stdby'


Redo Transport Services failed with an ORA-16198 error from the primary database to the standby database using LGWR SYNC mode.

The Data Guard Broker log shows the error messages below…

Data Guard Broker Status Summary:
  Type                        Name                             Severity  Status
  Configuration               orcl_dg_fsfconf                   Warning  ORA-16608: one or more members have warnings
  Primary Database            orcl                              Success  ORA-0: normal, successful completion
  Physical Standby Database   orcl_stdby                        Warning  ORA-16853: apply lag has exceeded specified threshold

When I verify the configuration, everything looks good and there are no issues.

DGMGRL for Linux: Release 19.0.0.0.0 - Production on Mon Aug 3 15:19:29 2020
Version 19.5.0.0.0
Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.
Welcome to DGMGRL, type "help" for information.
Connected to "ORCL"
Connected as SYSDBA.
DGMGRL> show configuration;
Configuration - orcl_dg_fsfconf
Protection Mode: MaxAvailability
Members:
orcl - Primary database
orcl_stdby - Physical standby database
Fast-Start Failover: Disabled
Configuration Status:
SUCCESS (status updated 34 seconds ago)

If this is happening frequently, it is critical and you must take action immediately. In my case it happens rarely, which points toward a timeout setting. I see that my current database has net_timeout set to 30.
After research, I increased net_timeout to 60 and the issue was resolved.

If you are not using Data Guard broker, you can run the command below from SQL*Plus
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=orcl_stdby SYNC AFFIRM delay=0 optional compression=disable max_failure=0 reopen=300 db_unique_name=orcl_stdby net_timeout=60 valid_for=(online_logfile,all_roles)' SCOPE=BOTH;

If you are using the DGMGRL command-line interface, you can run the command below

DGMGRL> EDIT DATABASE 'ORCL' SET PROPERTY NetTimeout = 60;

Thanks,
Satishbabu G, Oracle ACE
http://oracleracexpert.com

ORA-00700: soft internal error, arguments

I came across the ORA-00700 error recently and found the error below in the logs.

ORA-00700: soft internal error, arguments: [dbgrfafr_1], [60], [60], [1], [0x7FB5DD11BC70], [], [], [], [], [], [], []
Incident details in: /oracle/diag/rdbms/orcl/ORCL/incident/incdir_75765/ORCL_ora_8274_i75765.trc


The incident details in the trace will have a more detailed error. In my case the connections were coming from the Foglight monitoring tool. After research I found that this error can occur due to a failing statement called from monitoring tools such as TOAD, Foglight, etc.

Dump file /oracle/diag/rdbms/orcl/ORCL/incident/incdir_75765/ORCL_ora_8274_i75765.trc
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.5.0.0.0
Build label: RDBMS_19.3.0.0.0DBRU_LINUX.X64_190417
ORACLE_HOME: /oracle/product/19.0.0.0/dbhome_1
System name: Linux
Node name: server1
Release: 3.10.0-1062.12.1.el7.x86_64
Version: #1 SMP Thu Dec 12 06:44:49 EST 2019
Machine: x86_64
Instance name: ORCL
Redo thread mounted by this instance: 1
Oracle process number: 130
Unix process pid: 29807, image: oracle@server1

*** 2020-05-20T00:45:56.870884-07:00
*** SESSION ID:(204.25936) 2020-05-20T00:45:56.870895-07:00
*** CLIENT ID:() 2020-05-20T00:45:56.870900-07:00
*** SERVICE NAME:(ORCL) 2020-05-20T00:45:56.870904-07:00
*** MODULE NAME:(Foglight for Oracle <5.9.5.20>) 2020-05-20T00:45:56.870909-07:00
*** ACTION NAME:(isrgorcl-db-ORCL-DBO_Instance_General) 2020-05-20T00:45:56.870914-07:00
*** CLIENT DRIVER:(jdbcthin : 12.2.0.1.0) 2020-05-20T00:45:56.870919-07:00

[TOC00000]
Jump to table of contents
Dump continued from file: /oracle/diag/rdbms/orcl/ORCL/trace/ORCL_ora_29807.trc
[TOC00001]
ORA-00700: soft internal error, arguments: [dbgrfafr_1], [60], [60], [0], [0x7FFC2A3E6488], [], [], [], [], [], [], []

[TOC00001-END]
[TOC00002]
========= Dump for incident 201097 (ORA 700 [dbgrfafr_1]) ========

*** 2020-05-20T00:45:56.871792-07:00
dbkedDefDump(): Starting incident default dumps (flags=0x2, level=3, mask=0x0)
[TOC00003]

----- Current SQL Statement for this session (sql_id=gxpn94tgbwz4d) -----
select /*+ RULE */ version "db_version",
decode(log_mode,'NOARCHIVELOG','NO','YES') "is_archived",
decode(a.asm_files,0,'NO','YES') "is_asm",
case when dg.dataguard > 0 THEN 'YES'
when controlfile_type = 'STANDBY' THEN 'YES'
else 'NO'
end as "is_dataguard",
database_role,
decode(r.rman_conf,0,'NO','YES') "is_rman",
flashback_on "is_flashback",
instance_name,
instance_number,
open_mode,
decode(c.cell_count,0,'NO','YES') "is_exadata",
decode(al.al_count,0,'NO','YES') "is_alertlog",
'NO' "is_rds",
decode(cdb,'YES','YES','NO') is_pluggable
from sys.v$instance, sys.v$database,
(select count(*) asm_files from v$datafile where name like '+%' and rownum<2) a,
(select count(*) dataguard from sys.v$archive_dest where target = 'STANDBY' and rownum<2) dg,
(select count(*) rman_conf from sys.v$RMAN_STATUS where rownum<2) r,
(select count(*) cell_count from sys.v$cell where rownum<2) c,
(select count(*) al_count from sys.x$DBGALERTEXT where rownum<2) al
[TOC00003-END]

A few other users also encountered a HIGH CPU consumption issue. This can happen because X$DBGALERTEXT is populated from the XML alert log file located in the ADR. Note that this is an undocumented fixed table; when the XML alert log is very large, it takes a long time to access and can also cause the ORA-700 error.

A query on X$DBGALERTEXT can consume high CPU and take a long time to complete, for example:

SELECT count(*)
FROM X$DBGALERTEXT
WHERE to_date(to_char(originating_timestamp, 'dd-mon-yyyy hh24:mi'), 'dd-mon-yyyy hh24:mi') > to_date(to_char(systimestamp - .00694, 'dd-mon-yyyy hh24:mi'), 'dd-mon-yyyy hh24:mi') /* last 10 minutes */
AND (message_text = 'ORA-00600'
OR message_text LIKE '%fatal%'
OR message_text LIKE '%error%'
OR message_text LIKE '%ORA-%'
OR message_text LIKE '%terminating the instance%');

It’s a good practice to purge this data regularly, and it can be purged using the ADRCI utility.

--First make sure you check the count
SQL> select count(*) from X$DBGALERTEXT;

--Connect to ADRCI utility

$adrci> show home
ADR Homes:
diag/rdbms/orcl/ORCL
diag/tnslsnr/server1/listener_orcl
$adrci> SET HOMEPATH /oracle/diag/rdbms/orcl/ORCL

--Run the command below to purge alert entries older than 1 day (1440 minutes).
adrci> purge -age 1440 -type alert
adrci> exit
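Since “purge -age” takes minutes, a small wrapper can convert a day-based retention into the right value; the sketch below (the ADR home path is an assumption) prints the purge command instead of executing it:

```shell
#!/bin/sh
# Hypothetical sketch: convert a retention in days into the minutes
# value that "adrci purge -age" expects, then print the command.
days_to_minutes() {
  echo $(( $1 * 1440 ))
}

RETENTION_DAYS=1
AGE_MINUTES=$(days_to_minutes "$RETENTION_DAYS")

# On a real host (the home path is an assumption):
#   adrci exec="set homepath diag/rdbms/orcl/ORCL; purge -age $AGE_MINUTES -type alert"
echo "purge -age $AGE_MINUTES -type alert"
```

Scheduling this from cron keeps X$DBGALERTEXT small so the monitoring queries stay cheap.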

If you still see the error from monitoring tools like TOAD or Foglight, contact the vendor. If the issue is not from a monitoring tool, then contact Oracle Support.

Refer to My Oracle Support note 2056666.1 for more info.

Regards
Satishbabu G, Oracle ACE

Tuesday, July 28, 2020

RENAME Table or Column or Index in Oracle

RENAME Table
You can rename an existing table in your own schema; you cannot rename objects owned by SYS. To rename a table you must be the table owner or have the required privileges.

Use the RENAME statement (or ALTER TABLE ... RENAME TO) to rename a table.

Syntax: RENAME table_name TO new_table_name;
Ex:- RENAME EMPLOYEE TO EMPLOYEE_ACT;

Integrity constraints, indexes, and grants on the table are automatically transferred to the new name. However, dependent objects such as views, synonyms, triggers, and stored procedures that reference the old name are invalidated and must be corrected and recompiled.

RENAME Column: Use ALTER TABLE ... RENAME COLUMN to rename a column in a table.
ex: ALTER TABLE EMPLOYEE RENAME COLUMN Employee_ID TO EMP_ID;

To change a column's data type, you can add a new column of the desired type, copy the data, drop the old column, and rename (NEWTYPE below is a placeholder for the target data type):
ALTER TABLE EMPLOYEE ADD (EMP_ID_NEW NEWTYPE);
UPDATE EMPLOYEE SET EMP_ID_NEW = Employee_ID;
ALTER TABLE EMPLOYEE DROP COLUMN Employee_ID;
ALTER TABLE EMPLOYEE RENAME COLUMN EMP_ID_NEW TO EMP_ID;

Indexes and constraints defined on the column are updated automatically to reference the new name, but views, triggers, and PL/SQL units that reference the old column name become invalid and must be fixed.

RENAME Index: Use ALTER INDEX ... RENAME TO to rename an index; you cannot rename indexes in the SYS schema.

Ex:- ALTER INDEX EMPLOYEE_ID_INDEX RENAME TO EMP_ID_INDEX;

Thanks
Satishbabu G, Oracle ACE

Tuesday, June 16, 2020

Duplicating On-premise Database to Oracle Cloud in Oracle 18c

From Oracle 18c, you can use the DUPLICATE command to duplicate an on-premises database to Oracle Cloud. Oracle databases on Oracle Cloud are always encrypted, even if no encryption clause is specified during duplication.

Using Oracle RMAN you can perform two main types of database duplication:
  • Backup-based duplication – In this method we will use pre-existing RMAN backups or copies of the source database. 
  • Active database duplication - The database will be duplicated by copying the live source database over the network to the auxiliary instance. 
Follow the steps to migrate on-prem database to Cloud:
1. Ensure the prerequisites for the duplication technique are met, as described in "Prerequisites for Duplicating a Database".
2. Configure Recovery Manager to use Oracle Database Backup Cloud Service as the backup destination, using the CONFIGURE command. Please refer to Oracle Cloud: Using Oracle Database Backup Service for more details.

Syntax:-
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt
PARMS='SBT_LIBRARY=SBT-library-location-for-backup-module,
SBT_PARMS=(OPC_PFILE=location-of-the-configuration-file)';

Ex:-
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt
PARMS='SBT_LIBRARY=/oracle/18c/lib/libopc.so,
SBT_PARMS=(OPC_PFILE=/oracle/18c/dbs/emp.ora)';

3. Complete the planning tasks, as described in "Planning to Duplicate a Database".

4. Prepare the auxiliary instance, as described in "Preparing the Auxiliary Instance"

• You must create the auxiliary instance as a CDB and start the instance with enable_pluggable_database=TRUE in the initialization parameter file
• When instructed to create an initialization parameter file for the auxiliary instance, copy the file from the source database. This ensures that the auxiliary instance is also a CDB. After you copy the file, perform the following steps:
   – Modify the DB_NAME parameter
   – Modify the various destination/location parameters
• Start the auxiliary instance in NOMOUNT mode.

5. Start RMAN and connect to the root as a common user with the SYSBACKUP privilege or SYSDBA.

6. If the source CDB uses encryption, then open the Oracle keystore that contains the master key on the source CDB.

7. Configure RMAN channels, if necessary, as described in "Configuring RMAN Channels for Use in Duplication".

8. On the destination CDB, open the Oracle keystore from the source CDB. If the destination CDB uses a password-based software keystore, then you must specify the password used to open this keystore

SET DECRYPTION WALLET OPEN IDENTIFIED BY 'password';

9. Use the DUPLICATE command to duplicate the source CDB.

Use one of the following options of the DUPLICATE command:
  • DUPLICATE DATABASE or DUPLICATE...ACTIVE DATABASE - Use this command for duplicating non-CDBs and CDBs.
  • DUPLICATE DATABASE ... FOR STANDBY - Use this command to create a standby database by duplicating the source.
  • DUPLICATE DATABASE ... FOR FARSYNC – Use this command to create an Oracle Data Guard far sync instance using duplication.
  • DUPLICATE PLUGGABLE DATABASE – Use this command to duplicate one or more PDBs while connected to the root.
You can also use the SET NEWNAME command to specify alternate names for the duplicate database files.

Note that using duplication to create a standby database on Oracle Cloud is not supported.


Regards
Satishbabu Gunukula, Oracle ACE

Monday, May 11, 2020

Relocated PDBs in Oracle Database 18c

Oracle 18c allows RMAN backups created before a non-CDB or PDB was migrated into a different target CDB to be used for recovery operations. The COMPATIBLE parameter of both the source and the destination CDB must be set to 18.0.0 or higher.

The RMAN commands used to back up and recover CDBs and PDBs are the same as those used for non-CDBs, with a few variations in syntax.

The backup and recovery operations performed on non-CDBs can also be performed on CDBs and PDBs. This includes the following:
– Full and incremental backups
– Complete and point-in-time recovery (PITR)
– Reporting operations (such as listing and cross-checking backups)
– Flashback Database

We need to make the metadata for the existing backups available to the destination CDB. To export the metadata, run the DBMS_PDB.EXPORTRMANBACKUP procedure on the source database.

EXECUTE DBMS_PDB.exportrmanbackup();
Or
EXECUTE DBMS_PDB.exportrmanbackup('EMP_PDB');

If you unplug the PDB, you do not need to run this command, as the unplug operation already includes the metadata.

Convert non-CDB to PDB: As we have the metadata, we can now convert the non-CDB instance to a PDB.

Step 1: Open the non-CDB instance in read-only mode, describe it, and shut it down

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP OPEN READ ONLY;
SQL> BEGIN
  DBMS_PDB.DESCRIBE(pdb_descr_file => '/oracle/empdb.xml');
END;
/
SQL> SHUTDOWN IMMEDIATE;

Step 2: Create the new pluggable database using the non-CDB description file generated in the step above

SQL> CREATE PLUGGABLE DATABASE empdb_pdb USING '/oracle/empdb.xml' COPY;
SQL> ALTER SESSION SET CONTAINER= empdb_pdb;
SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
SQL> ALTER PLUGGABLE DATABASE OPEN;
SQL> ALTER PLUGGABLE DATABASE SAVE STATE;

Step 3: Restore and recover using the pre-plugin backup

SQL> ALTER PLUGGABLE DATABASE empdb_pdb CLOSE IMMEDIATE;

RMAN> SET PREPLUGIN CONTAINER=db18cpdb;
RMAN> RESTORE PLUGGABLE DATABASE empdb_pdb FROM PREPLUGIN;
RMAN> RECOVER PLUGGABLE DATABASE empdb_pdb FROM PREPLUGIN;

Sometimes users may come across the RMAN-06054 error; in that case you need to CATALOG the missing archive log and start the recovery again.

RMAN-06054: media recovery requesting unknown archived log for thread 3 sequence 94983
RMAN>SET PREPLUGIN CONTAINER= empdb_pdb;
RMAN>CATALOG PREPLUGIN ARCHIVELOG '/oracle/archivelog/arc_empdb_pdb_3_94983.arc';
RMAN>RECOVER PLUGGABLE DATABASE empdb_pdb FROM PREPLUGIN;

Perform normal recovery and open the database

RMAN>RECOVER PLUGGABLE DATABASE empdb_pdb;
RMAN>ALTER PLUGGABLE DATABASE empdb_pdb OPEN;


Reference:
Check the pre-plugin backups available to the CDB instance
RMAN> LIST PREPLUGIN BACKUP OF PLUGGABLE DATABASE empdb_pdb;

Preplugin backups are usable only on the destination CDB into which you plug in the source non-CDB or PDB

Regards
Satishbabu Gunukula, Oracle ACE
http://www.oracleracexpert.com

Sunday, May 3, 2020

How to Register or Catalog Backups, Archive logs, data file copy, FRA...etc

When you back up using RMAN, all backups are registered in the catalog. If you have taken a backup without the catalog, you can also register it manually.

To catalog a backup piece that was copied manually to a different location, run the command below.
RMAN> CATALOG BACKUPPIECE '/oracle/backup/DB1_1978087_6446005710.bkp';

If you have taken a copy using an O/S command, catalog the datafile copy using the command below.
RMAN> CATALOG DATAFILECOPY '/oracle/backup/employee01.bak' LEVEL 0;

If the archive log location filled up and you moved archive logs to a different location using O/S commands, you can register them using the command below.

RMAN> CATALOG ARCHIVELOG '/oracle/backup2/db1_arch_99089_0876589.arc';

If you are using the fast recovery area (FRA) and want to catalog all files in it, use the command below.
RMAN> CATALOG recovery area;

This command will prompt you to confirm that you want the found files to be cataloged. If you do not want the prompt, use the “noprompt” clause.
RMAN> CATALOG recovery area noprompt;

If you want to catalog only files that start with a specific name or belong to a specific SID (db1), use the command below
RMAN> CATALOG START WITH '/oracle/backup/db1';

If you run CROSSCHECK, it will go through the files that match the criteria; files that are found will be marked AVAILABLE, and files not found will be marked EXPIRED.

RMAN> CROSSCHECK BACKUP ;
If you want to delete EXPIRED backups, you can use the command below; note that EXPIRED copies will no longer appear in RMAN. Be extra cautious when deleting backups.

RMAN> DELETE EXPIRED BACKUP ;

If you find extra copies and want to delete those extra copies use DELETE BACKUPPIECE command:
RMAN> DELETE BACKUPPIECE 4555;
Hope this helps in managing backups taken outside of the RMAN recovery catalog.

Thanks & Regards 
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com



Wednesday, April 29, 2020

ORA-19815: WARNING: db_recovery_file_dest_size of 128849010 bytes is 99.03% used, and has 14136528 remaining bytes available.

You will receive ORA-19815 when the fast recovery area (FRA) is reaching its maximum size; note that this limit is managed by the db_recovery_file_dest_size parameter.

If you got this error only once, you might be running an ad hoc load or a batch job. You will also get the message below in the alert.log file. Please follow one of the choices to fix the issue.

ORA-19815: WARNING: db_recovery_file_dest_size of 128849010 bytes is 99.03% used, and has 14136528 remaining bytes available.
2020-02-21T11:31:19.892420-07:00

************************************************************************
You have following choices to free up space from recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
system command was used to delete files, then use RMAN CROSSCHECK and
DELETE EXPIRED commands.

If your archive logs are going to the FRA, you should run RMAN backups and delete the unnecessary files to avoid a database hang or crash situation.

RMAN> backup archivelog all delete input;

If the fast recovery area is 100% full, you might receive the error below when starting up the database

Database mounted.
ORA-03113: end-of-file on communication channel
Process ID: 9789
Session ID: 321 Serial number: 3


Please make sure you follow the steps below to add more space to the fast recovery area (FRA)

SQL> startup mount;
SQL> alter system set db_recovery_file_dest_size=<size>G scope=both;
Replace <size> with a number larger than the current value and ensure there is sufficient space at the destination
SQL> alter database open;


To see what the FRA holds, run the command below

SQL> select * from v$flash_recovery_area_usage;
FILE_TYPE            PERCENT_SPACE_USED PERCENT_SPACE_RECLAIMABLE NUMBER_OF_FILES     CON_ID
-------------------- ------------------ ------------------------- --------------- ----------
CONTROL FILE                          0                         0               0          0
REDO LOG                              0                         0               0          0
ARCHIVED LOG                          0                         0               0          0
BACKUP PIECE                          0                         0               0          0
IMAGE COPY                            0                         0               0          0
FLASHBACK LOG                     99.76                         0             192          0
FOREIGN ARCHIVED LOG                  0                         0               0          0
AUXILIARY DATAFILE C                  0                         0               0          0

In my case the space is being used by FLASHBACK LOGs; these logs are different from redo logs or undo. They are a separate, optional recording of database activity.

These logs are automatically deleted when the FRA is out of space and the log is not needed for recovery within the window specified by the DB_FLASHBACK_RETENTION_TARGET parameter.

To determine the disk quota and current disk usage in the fast recovery area, query the V$RECOVERY_FILE_DEST view.

Thanks & Regards,
Satishbabu Gunukula, Oracle ACE
http://www.oracleracexpert.com

Wednesday, April 8, 2020

ORA-00230: operation disallowed: snapshot control file enqueue unavailable

Users might encounter the error below when an RMAN backup is already running and another backup is triggered. The second backup must wait for the control file enqueue in order to create a snapshot of the control file.

RMAN-03002: failure of backup command at 04/10/2020 16:40:33
ORA-00230: operation disallowed: snapshot control file enqueue unavailable

You can run the query below to find the backup session that is holding the enqueue and causing the wait

SELECT S.SID, USERNAME , PROGRAM, MODULE,
ACTION, LOGON_TIME FROM V$SESSION s, V$ENQUEUE_LOCK l
WHERE l.SID = s.SID AND l.TYPE = 'CF' AND l.ID1 = 0 AND l.ID2 = 2;

To avoid this error, make sure that backups do not overlap each other.
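One simple way to keep scheduled backups from overlapping is a mutex around the RMAN call; the sketch below uses a lock directory (the path and script name are assumptions), relying on mkdir being atomic:

```shell
#!/bin/sh
# Hypothetical sketch: a lock directory as a mutex so two backup jobs
# cannot run at once. mkdir is atomic, so only one caller wins.
LOCK=/tmp/rman_backup.lock

run_backup() {
  if mkdir "$LOCK" 2>/dev/null; then
    # On a real host: rman target / cmdfile=/path/backup.rman
    echo "backup started"
    rmdir "$LOCK"
  else
    echo "backup already running, skipping"
  fi
}

run_backup
```

If both jobs call run_backup, the second simply skips instead of queuing on the snapshot control file enqueue.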

Regards,
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com

Sunday, March 29, 2020

ORA-01665: control file is not a standby control file

Users may see this error when trying to mount a standby database after a restore or during a switchover

SQL> ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE MOUNT STANDBY DATABASE
*
ERROR at line 1:
ORA-01665: control file is not a standby control file


SQL> SELECT database_role FROM v$database;
DATABASE_ROLE
----------------------------------------------------------------
PRIMARY

In the current scenario, we see the database role as PRIMARY and we are trying to convert the database to a physical standby. The instance must be started first; otherwise you will see the error below.

SQL> Alter database convert to physical standby;
Alter database convert to physical standby
*
ERROR at line 1:
ORA-01507: database not mounted

SQL> shutdown immediate
SQL> startup nomount;

SQL> Alter database convert to physical standby;
Database altered.
SQL> SELECT database_role FROM v$database;
DATABASE_ROLE
----------------------------------------------------------------
PHYSICAL STANDBY
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE disconnect from session;
Database altered.

Make sure you start the recovery using the command below

sql> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE disconnect from session;

To avoid any user mistakes, I would advise using the Data Guard broker

Refer below presentation related to Data Guard.

1. “Why Oracle DataGuard? New Features in Oracle 18c,19c”
https://www.oracleracexpert.com/2019/12/why-oracle-dataguard-new-features-in.html

2. Data Guard Physical Standby Setup in Oracle?
https://www.oracleracexpert.com/2019/02/webinar-data-guard-physical-standby.html

Regards
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com


Wednesday, March 11, 2020

ORA-16416: No viable Physical Standby switchover targets available

You will see this error when you try to switch over from primary to standby; it occurs when log_archive_dest_n is not ready at the standby or there is an issue with the redo log destination.

Also note that you will see this error if there is an archive gap, which prevents switchover and causes the operation to fail

SQL> ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY;
ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY
*
ERROR at line 1:
ORA-16416: No viable Physical Standby switchover targets available

Error from standby alert.log file

.... (PID:24072): No viable Physical Standby switchover targets available
ORA-16416 signalled during: ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY...
2020-02-27T19:36:42.425842-08:00
rfs (PID:36824): Database mount ID mismatch [0x5a5dee6f:0x5a5d7a7b] (1516105327:1516075643)
rfs (PID:36824): Not using real application clusters
2020-02-27T19:38:17.144104-08:00
.... (PID:24072): Using STANDBY_ARCHIVE_DEST parameter default value as /oraarch/ORCLP krsd.c:18141]
2020-02-27T19:38:17.154204-08:00
ALTER SYSTEM SET log_archive_dest_2='SERVICE=ORCLP AFFIRM SYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORCLP' SCOPE=BOTH;
2020-02-27T19:38:21.552799-08:00
ALTER DATABASE COMMIT TO SWITCHOVER TO STANDBY
2020-02-27T19:38:21.552940-08:00
TMI: adbdrv kcv_primary_switchover BEGIN 2020-02-27 19:38:21.552864
2020-02-27T19:38:21.553038-08:00
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY [Process Id: 24072] (ORCLP)
.... (PID:24072): Waiting for target standby to receive all redo
.... (PID:24072): Waiting for target standby to receive all redo
2020-02-27T19:38:21.553572-08:00
.... (PID:24072): Waiting for all non-current ORLs to be archived
2020-02-27T19:38:21.553661-08:00
.... (PID:24072): All non-current ORLs have been archived
2020-02-27T19:38:21.553741-08:00
.... (PID:24072): Waiting for all FAL entries to be archived
2020-02-27T19:38:21.553820-08:00
.... (PID:24072): All FAL entries have been archived
2020-02-27T19:38:21.553924-08:00
.... (PID:24072): Waiting for potential Physical Standby switchover target to become synchronized
2020-02-27T19:38:21.598598-08:00
Errors in file /home/oracle/diag/rdbms/ORCLP_stdby/ORCLP/trace/ORCLP_tt00_16934.trc:
ORA-16009: invalid redo transport destination
2020-02-27T19:38:21.598892-08:00
TT00 (PID:16934): krsg_check_connection: Error 16009 connecting to standby 'ORCLP'
2020-02-27T19:38:22.554301-08:00


Run the commands below to find the archive destination status and errors..

SQL> select DEST_ID, STATUS, DESTINATION from V$ARCHIVE_DEST_STATUS;
SQL> select DEST_ID,DEST_NAME,STATUS,TARGET,DESTINATION,ERROR,ALTERNATE from v$archive_dest;

SQL> select GROUP#,THREAD#,BYTES/1024/1024/1024,ARCHIVED,STATUS from v$standby_log;

Also run the commands below to identify any archive gap

SQL> SELECT THREAD#, SEQUENCE# FROM V$THREAD;
SQL> SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;

After running the above commands and analyzing the logs, I saw that the error was due to an invalid redo log destination. I created the same directories on the standby site as on the primary.

I also came across an issue with LOG_ARCHIVE_DEST_2; after fixing the parameter it worked fine. Also make sure LOG_ARCHIVE_DEST_STATE_2 is enabled.

ORA-16474: target_db_name not found in the LOG_ARCHIVE_DEST_n parameter

I would strongly advise using the Data Guard broker; it minimizes errors and makes switchover very easy.

Refer below Data Guard presentation
Webinar: Data Guard Physical Standby Setup in Oracle?

Regards
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com

Tuesday, February 11, 2020

ORA-27090: Unable to reserve kernel resources for asynchronous disk I/O

Users may receive the error below when connecting to the database. I have encountered this issue in various Oracle versions, including 10g, 11g, 12c and 19c.

ERROR:
ORA-01017: invalid username/password; logon denied
ORA-27090: Unable to reserve kernel resources for asynchronous disk I/O
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
Additional information: 128
Additional information: 2022083592


You may see the messages below in the alert.log:

ORA-27090: Message 27090 not found; product=RDBMS; facility=ORA
Additional information: 3
Additional information: 128
Additional information: 65536

OR

ORA-27090: UNABLE TO RESERVE KERNEL RESOURCES FOR ASYNCHRONOUS DISK I/O
Additional information: 3
Additional information: 128
Additional information: 65536

OR

ORA-27090: Message 27090 not found; product=RDBMS; facility=ORA
Linux-x86_64 Error: 4: Interrupted system call
Additional information: 3
Additional information: 128
Additional information: 65536


DBWR trace
io_setup(4096, 0x66e1588) = -1 EAGAIN (Resource temporarily unavailable),

You will see the above error when the "aio-max-nr" kernel limit is too low.

Increase the "aio-max-nr" kernel limit as per the Oracle recommendation. I set the value
fs.aio-max-nr = 3145728 in my environment.
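A minimal sketch of checking and raising the limit (3145728 is the value from my environment; confirm the recommended value for your Oracle release):

```shell
# Current kernel-wide cap on simultaneous async I/O requests
cat /proc/sys/fs/aio-max-nr

# To raise it on the running kernel (run as root):
#   sysctl -w fs.aio-max-nr=3145728
# To persist the change across reboots:
#   echo "fs.aio-max-nr = 3145728" >> /etc/sysctl.conf
#   sysctl -p
```

After raising the limit, restart the database so DBWR re-initializes its async I/O context.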

Thanks
Satishbabu Gunukula, Oracle ACE

Monday, February 10, 2020

ORA-00600, ORA-07445 and ORA-00020 errors and related bugs in 12c (12.2.0.1)


I recently came across the errors below in 12c (12.2.0.1). It started with ORA-00600, followed by ORA-07445 and ORA-00020. The ORA-00600 and ORA-07445 look like the real culprits, and they led to the ORA-00020 error.

ORA-00600: internal error code, arguments: [17126], [0x0B0D29528], [], [], [], [], [], [], [], [], [], []
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2020-01-13T23:20:30.493308-08:00
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x32000000170] [PC:0x10F9D3FD, kghfnd_in_free_lists()+717] [flags: 0x0, count: 1]
Errors in file /home/oracle/diag/rdbms/ORADB/ORADB/trace/ORADB_ora_62437.trc (incident=41538):
ORA-07445: exception encountered: core dump [kghfnd_in_free_lists()+717] [SIGSEGV] [ADDR:0x32000000170] [PC:0x10F9D3FD] [Address not mapped to object] []
Incident details in: /home/oracle/diag/rdbms/ORADB/ORADB/incident/incdir_41538/ORADB_ora_62437_i41538.trc
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2020-01-13T23:20:31.182120-08:00
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2020-01-13T23:20:31.182223-08:00
opidcl aborting process unknown ospid (48392) as a result of ORA-600
……

2020-01-14T01:29:37.222083-08:00
ORA-00020: maximum number of processes (500) exceeded
ORA-20 errors will not be written to the alert log for
the next minute. Please look at trace files to see all
the ORA-20 errors.

After investigation and troubleshooting, I found the bugs below.

1. There is an unpublished bug related to Unified Memory KGH crashes/errors. After discussing with support, I came to know from an internal note that this particular fix is already included in Oracle 12.2.0.1.

Bug 30053838
 - LNX-20-ATP: HIT ORA-7445 [KGHFND_IN_FREE_LISTS] AND ORA-600 [KSM_PGA_UM_EXT_FREE:INVALID_EXT_MAGIC], INST CRASH

2. There is a published bug 24596874, and the fix for it is first included in 20.1.0. Interim patches may also be available for earlier versions.

For bug 24596874, users notice memory corruption when a CDB environment is set up with a star schema on one PDB, redaction policies are enabled on tables/views of the schema, and a query workload is executed with 50 concurrent sessions. Multiple ORA-600 [kghfrh:ds] errors were seen in the alert log.

So both of the above bugs are ruled out: the unpublished bug fix is already part of 12.2.0.1, and the published bug is specific to a star schema on a PDB.

After further investigation with Oracle support, I found another unpublished bug, Bug 30448845: DATABASE HANG WITH SIGNATURE: 'PMON TIMER'<='PRIOR SPAWNER CLEAN UP'<='LATCH FREE'<='RELIABLE MESSAGE'

As per Oracle support, this is fixed in 18.1, but the fix is not easy to backport. Oracle suggests setting the parameter _pmon_slaves_arr_size=0 as a workaround in Oracle 12.2.

Users cannot update internal Oracle parameters directly in the SPFILE; please follow the steps mentioned in the blog post below:
https://www.oracleracexpert.com/2020/01/ora-00800-soft-external-error-arguments.html

Note that every environment, error, load, and use case is different. I would highly recommend opening a ticket with Oracle support and getting confirmation before making any changes to your databases.
Regards
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com 

Wednesday, January 29, 2020

ORA-00800: soft external error, arguments: [Set Priority Failed], [VKTM], [Check traces and OS configuration], [Check Oracle document and MOS notes]

While working with Oracle DB version 19.5, I recently noticed the errors below in the alert.log file:

Starting background process VKTM
2020-02-04T17:05:58.711181-08:00
Errors in file /home/oracle/diag/rdbms/testdb/TESTDB/trace/TESTDB_vktm_43832.trc (incident=42521):
ORA-00800: soft external error, arguments: [Set Priority Failed], [VKTM], [Check traces and OS configuration], [Check Oracle document and MOS notes], []
Incident details in: /home/oracle/diag/rdbms/testdb/TESTDB/incident/incdir_42521/TESTDB_vktm_43832_i42521.trc
2020-02-04T17:05:58.713567-08:00
Error attempting to elevate VKTM's priority: no further priority changes will be attempted for this process
VKTM started with pid=5, OS id=43832


After research, I found that the parameter _high_priority_processes="VKTM" may help, but it is an internal parameter that you cannot apply directly in the SPFILE. I always raise a ticket with Oracle support to get confirmation before making changes.

This issue has been reported in internal Bug 30664554: ORA-00800: SOFT EXTERNAL ERROR, ARGUMENTS: [SET PRIORITY FAILED], [VKTM]
In order to update internal parameters on a DB using an SPFILE, you need to follow the steps below.

1. Take a backup of the SPFILE and PFILE so you can revert the changes in case of any issues.
2. Create a PFILE from the SPFILE:
SQL> create pfile='$ORACLE_HOME/dbs/initTESTDB.ora' from spfile;
3. Add/update the parameter in the PFILE.
4. Rename the SPFILE and stop/start the DB so it uses the PFILE that was created.
5. Create an SPFILE from the PFILE:
SQL> create spfile='$ORACLE_HOME/dbs/spfileTESTDB.ora' from pfile;
6. Stop/start the DB to use the SPFILE that was created.
Now the changes are applied to the SPFILE.
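The steps above can be sketched in SQL*Plus (the paths are examples, and any underscore-parameter change should be confirmed with Oracle support first):

```sql
-- 2. Dump the current SPFILE to an editable text PFILE
CREATE PFILE='$ORACLE_HOME/dbs/initTESTDB.ora' FROM SPFILE;

-- 3. Edit the PFILE with an OS editor and add the support-approved line, e.g.
--    _high_priority_processes='VKTM'

-- 4. SHUTDOWN IMMEDIATE, rename the old SPFILE aside, then:
--    STARTUP PFILE='$ORACLE_HOME/dbs/initTESTDB.ora'

-- 5. Rebuild the SPFILE from the edited PFILE
CREATE SPFILE='$ORACLE_HOME/dbs/spfileTESTDB.ora' FROM PFILE;

-- 6. Restart once more so the instance picks up the new SPFILE
```

Check V$SPPARAMETER after the final restart to confirm the parameter took effect.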

Note that every environment, error, load, and use case is different. I would highly recommend opening a ticket with Oracle support and getting confirmation before making any changes to your databases.
Thanks
Satishbabu Gunukula,ORACLE ACE
https://oracleracexpert.com

Friday, September 20, 2019

weblogic.jdbc.extensions.PoolDisabledSQLException

We came across a content pool suspended issue in WebLogic when the pool got exhausted. All new connections were failing and existing connections were running very slowly, which in turn created issues in Agile PLM, as it uses WebLogic.

weblogic.jdbc.extensions.PoolDisabledSQLException: weblogic.common.resourcepool.ResourceDisabledException: Pool AgileContentPool is Suspended, cannot allocate resources to applications..
        at weblogic.jdbc.common.internal.JDBCUtil.wrapAndThrowResourceException(JDBCUtil.java:265)
        at weblogic.jdbc.jts.Driver.newConnection(Driver.java:900)
        at weblogic.jdbc.jts.Driver.createLocalConnection(Driver.java:220)
        at weblogic.jdbc.jts.Driver.connect(Driver.java:170)
        at weblogic.jdbc.common.internal.RmiDataSource.getConnectionInternal(RmiDataSource.java:523)
        at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:516)
        at weblogic.jdbc.common.internal.RmiDataSource.getConnection(RmiDataSource.java:512)
<2019-09-10 10:38:56,366> <PCMHelperSessionBean_9xz6y2_Impl:ERROR> Session terminated...

To find the root cause, examine thread dumps while the error is occurring and pay attention to stuck threads.

If there are long-running queries, find out what they are running; users may also see ORA-00020 (maximum number of processes exceeded).
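When ORA-00020 is in play, a quick way to see how close the instance is to its limits is V$RESOURCE_LIMIT:

```sql
-- How close the instance is to its process/session ceilings
SELECT resource_name, current_utilization, max_utilization, limit_value
FROM   v$resource_limit
WHERE  resource_name IN ('processes', 'sessions');
```

If MAX_UTILIZATION is at or near LIMIT_VALUE, either connections are leaking or the PROCESSES parameter needs to grow with the load.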

Possible reasons:

1. The database is running custom or un-optimized long-running queries.
2. The application is not releasing connections back to the pool due to recent config changes, which can result in a connection leak.
3. The inactive connection timeout value may need to be changed to support system growth/load.

We verified at the DB level and everything was working fine, except that more connections were hanging around and not being released. After investigation, we found that most of the connections were going through one server because a recent patch did not update Pcclient.jnlp, Jndiurl.properties, etc.

After updating the config, we saw user connections going through all servers, connection load balancing happening as expected, and the issue was resolved.

Hope this post helps you resolve the Agile content pool suspended issue.

Thanks,
Satishbabu Gunukula, Oracle ACE

Wednesday, August 14, 2019

How to find Oracle Database and Oracle WebLogic Server Version?

There are several ways to find the Oracle Database and Oracle WebLogic Server versions.

Oracle Database
1. Query V$INSTANCE for just the database version:

SQL> select version from V$INSTANCE;
VERSION
-----------------
11.2.0.4.0

2. Query the V$VERSION view in the database:

SQL> Select * from v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

Identifying Your Oracle Database Software Release

For example, in release number 10.1.0.1.0:

- 10 is the major database release number
- 1 is the database maintenance release number
- 0 is the Application Server release number
- 1 is the component-specific release number
- 0 is the platform-specific release number

3. Search for "Enterprise Edition" in the alert_SID.log file:
$cat alert_ORCL.log |grep "Enterprise Edition"
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production


Oracle Weblogic:

1. Find the WebLogic version from weblogic.jar using the command below:

For example:
$ cd /u01/home/wls1211/wlserver_12.1/server/lib/
$ java -cp weblogic.jar weblogic.version

2. Check registry.xml under $MW_HOME and search for 'component name="WebLogic Server"':

For example: cat /u01/home/wls1211/registry.xml | grep 'component name="WebLogic Server"'
<component name="WebLogic Server" version="12.1.1.0" InstallDir="/u01/home/oracle/Agile/wls1211/wlserver_12.1">

3. You can also find the version by searching for "WebLogic Server" in the WebLogic Admin server log file:
$DOMAIN_HOME/servers/AdminServer/admin/AdminServer.log

Regards,
Satishbabu Gunukula, Oracle ACE
http://www.oracleracexpert.com

Monday, May 6, 2019

Oracle Database Upgrade to 12c and Available Methods

Oracle 12c offers many new features, and upgrading the database can bring many advantages to an organization. There are various upgrade and migration methods available, and the best method for your upgrade/migration scenario depends on the source database version, the source and destination operating systems, your downtime requirements, and the personal preference of the DBA. Based on these factors, there is a method available to best fit your organization's needs.

We have several upgrade methods available

• DBUA
• Command line
• Full Transportable Export/Import
• Transportable Tablespaces
• Data Pump Export/Import
• Original Export/Import

For the complete article, please check the link below:

https://www.linkedin.com/pulse/oracle-database-upgrade-12c-available-methods-gunukula-oracle-ace

Regards
Satishbabu Gunukula, Oracle ACE
http://www.oracleracexpert.com

Sunday, May 5, 2019

ORA-39126: Worker unexpected fatal error in KUPW$WORKER.LOAD_METADATA

I recently came across the error below when importing data using Data Pump.

ORA-39126: Worker unexpected fatal error in KUPW$WORKER.LOAD_METADATA [SELECT..
......................................
ORA-39183: internal error -19 ocurred during decompression phase 2


ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 9715

----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name

0x64f77ff8     21979  package body SYS.KUPW$WORKER


Cause: Normally users see the ORA-39183 error when the dump file got corrupted during file transfer, for example when FTP was not used in BINARY mode.

I would recommend taking the export once more and copying it again; it should work.
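One way to rule out transfer corruption before re-importing is to compare checksums on both ends. The block below demonstrates with a stand-in file; in practice, run md5sum against your actual .dmp on the source and target hosts:

```shell
# Stand-in for a Data Pump dump file; use your real .dmp in practice
printf 'dump-bytes' > /tmp/expdp_demo.dmp

# Run on the source host, then again on the target after copying;
# the two checksums must be identical before running impdp
md5sum /tmp/expdp_demo.dmp
```

scp and sftp are binary-safe, so they avoid the ASCII-mode corruption that classic FTP can introduce.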

Users might also come across the known issue below:

ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_DDLS [PASSWORD_HISTORY:"<USER>"]
ORA-06502: PL/SQL: numeric or value error
LPX-00216: invalid character 0 (0x0)
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 8358

Cause: In this case hashes of the old passwords stored in user_history$ are corrupted.

Workaround: Please follow the steps below; if your error doesn't match, please contact Oracle support.

1. Use EXCLUDE=PASSWORD_HISTORY on the impdp statement
or
2. Delete the user_history$ information pertaining to that user.
    SQL> delete from user_history$ where user# in (select user# from user$ where name='USER1');
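A sketch of option 1 (the directory, dump file, and schema names below are placeholders for illustration):

```shell
# Placeholder names; EXCLUDE=PASSWORD_HISTORY skips the corrupted history rows
impdp system directory=DATA_PUMP_DIR dumpfile=expdp_user1.dmp \
      logfile=impdp_user1.log schemas=USER1 exclude=PASSWORD_HISTORY
```

Users imported this way simply start with an empty password history on the target.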

Please refer to the MOS notes below before making any change:

ORA-39126: "Worker unexpected fatal error in KUPW$WORKER.PUT_DDLS [PASSWORD_HISTORY: ...]" (Doc ID 2114832.1)

DataPump Import Errors ORA-39183 Internal Error -19 Ocurred During Decompression Phase 2 ORA-6512 [ID 1524577.1]

Regards
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com