Tuesday, May 14, 2019

Recovering Tables and Table Partitions using RMAN Backups

Oracle 12c offers a new feature to recover tables and table partitions to a point in time using RMAN, without affecting the other objects in the database. Use the RECOVER TABLE command to recover tables or table partitions from an RMAN backup.

This feature is useful in the following scenarios:
  • The object is logically corrupted or has been dropped
  • There is insufficient undo available to perform a Flashback Table operation
  • A DDL operation modified the structure and you want to recover the data (Flashback Table cannot rewind structural changes)
  • You need to recover a small number of tables to a point in time
Verify the prerequisites before you prepare to recover the tables and table partitions. RMAN enables you to specify the point in time using an SCN, a time, or a log sequence number.

The following conditions must be met before you prepare to recover tables or table partitions:

  • The database must be in ARCHIVELOG mode and open read-write
  • At least one full backup is available, along with the archived redo logs
  • Enough disk space is available on the database server for the auxiliary instance
  • Any dependent objects are identified, if they are to be included in the recovery
  • At least 1 GB of extra memory is available for the auxiliary instance
  • Tables and table partitions on standby databases cannot be recovered
  • Tables with named NOT NULL constraints cannot be recovered with the REMAP option
  • The COMPATIBLE parameter must be set to 11.1.0 or higher to recover table partitions
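As a quick sanity check for the first conditions above (and the COMPATIBLE setting), a short SQL*Plus sketch can be used; the column names come from the standard V$DATABASE view:

```sql
-- Sketch: verify prerequisites before a RECOVER TABLE attempt.
-- LOG_MODE should report ARCHIVELOG, OPEN_MODE should report READ WRITE.
SELECT log_mode, open_mode FROM v$database;

-- COMPATIBLE must be 11.1.0 or higher to recover table partitions.
SHOW PARAMETER compatible
```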

RMAN enables recovery of selected tables without affecting the remaining database objects. During the recovery process, RMAN creates an auxiliary instance, which is used to recover the tables or table partitions to the specified point in time. Specify the auxiliary instance location using the AUXILIARY DESTINATION clause of the RECOVER command or the SET NEWNAME command.

RMAN performs the following steps during the recovery process:

1. Determines the backup that contains the tables or table partitions to be recovered to the specified point in time
2. Creates an auxiliary instance and recovers the tables or table partitions in it until the specified point in time
3. Creates a Data Pump export dump containing the recovered tables or table partitions
4. Imports the dump into the target database
5. Optionally renames the recovered tables or table partitions in the target database

The following example recovers the TESTUSER.TABLE1 table:

RECOVER TABLE TESTUSER.TABLE1
UNTIL SCN 384840289
AUXILIARY DESTINATION '/TEMP/ORCLDB/recover'
DATAPUMP DESTINATION '/TEMP/ORCLDB/dumpfiles'
DUMP FILE 'testdump.dat';

If the source table still exists, you can specify NOTABLEIMPORT or REMAP TABLE. You can also use the UNTIL TIME or UNTIL SEQUENCE clause to specify the point in time for the recovery.
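For example, a sketch recovering up to a log sequence number and skipping the import into the target database (the sequence number and dump file name are illustrative):

```sql
RECOVER TABLE TESTUSER.TABLE1
UNTIL SEQUENCE 5481
AUXILIARY DESTINATION '/TEMP/ORCLDB/recover'
DATAPUMP DESTINATION '/TEMP/ORCLDB/dumpfiles'
DUMP FILE 'testdump_seq.dat'
NOTABLEIMPORT;
```

With NOTABLEIMPORT, the recovered data stays in the Data Pump dump file, which you can then import manually when needed.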

The following example recovers the TABLE1 table as TABLE1_REC, to the state it was in two days before the current date:

RECOVER TABLE TESTUSER.TABLE1
UNTIL TIME 'SYSDATE-2'
AUXILIARY DESTINATION '/TEMP/ORCLDB/recover'
REMAP TABLE 'TESTUSER'.'TABLE1':'TABLE1_REC';

There are some limitations when recovering tables and table partitions:
  • You cannot recover tables or table partitions belonging to the SYS schema
  • You cannot recover tables or table partitions from the SYSTEM or SYSAUX tablespaces
  • You cannot recover tables with named NOT NULL constraints using the REMAP option
  • You cannot recover tables or table partitions on a standby database
  • You cannot recover table partitions if the database version is prior to Oracle Database 11g R1
Note that there are other methods available to recover tables to a point-in-time such as Oracle Flashback and Tablespace Point-in-Time Recovery.

Refer to the link below for Flashback and Database Point-in-Time Recovery.
Satishbabu Gunukula, Oracle ACE
http://www.oracleracexpert.com


Monday, May 6, 2019

Oracle Database Upgrade to 12c and Available Methods

Oracle 12c offers many new features, and upgrading your database can bring many advantages to an organization. Various upgrade and migration methods are available, and the best method for your scenario depends on the source database version, the source and destination operating systems, your downtime requirements, and the personal preference of the DBA. Based on these factors, you can choose the method that best fits your organization's needs.

Several upgrade methods are available:

• Database Upgrade Assistant (DBUA)
• Manual command-line upgrade
• Full Transportable Export/Import
• Transportable Tablespaces
• Data Pump Export/Import
• Original Export/Import
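As an illustration of the Data Pump method, a minimal full export/import round trip might look like the following sketch; the directory object, dump file, and log file names are assumptions for illustration:

```shell
# On the source database: full export via Data Pump (expdp).
expdp system FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db.dmp LOGFILE=full_exp.log

# On the target 12c database: full import of the same dump.
impdp system FULL=Y DIRECTORY=dpump_dir DUMPFILE=full_db.dmp LOGFILE=full_imp.log
```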

For the complete article, please check the link below:

https://www.linkedin.com/pulse/oracle-database-upgrade-12c-available-methods-gunukula-oracle-ace

Regards
Satishbabu Gunukula, Oracle ACE
http://www.oracleracexpert.com

Sunday, May 5, 2019

Restoring and Recovering Files over Network in Oracle

Oracle 12c offers many new features, and restoring and recovering files over the network is one of them. Using RMAN, you can restore and recover a database, datafile, controlfile, tablespace, or spfile over the network from a physical standby database. To restore the database over the network, use the RESTORE ... FROM SERVICE command, and use the RECOVER ... FROM SERVICE command to recover the database over the network. The FROM SERVICE clause specifies the service name of the physical standby.

You can also use multisection, compression and encryption to improve backup and restore performance.

  • Use SECTION SIZE with the RESTORE command to perform a multisection restore
  • Use the SET ENCRYPTION clause before the RESTORE command to specify encryption
  • Use the USING COMPRESSED BACKUPSET clause to compress the transferred backup sets
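As an illustration, a restore over the network combining these options might be sketched as follows; the service name standby_db, the password, and the section size are all assumptions for illustration:

```sql
RUN
{
  -- Password-based encryption of the backup sets sent over the network
  -- (the password here is illustrative).
  SET ENCRYPTION ON IDENTIFIED BY "MyPassword123" ONLY;
  RESTORE DATABASE FROM SERVICE standby_db
    SECTION SIZE 512M
    USING COMPRESSED BACKUPSET;
}
```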
This feature is useful for synchronizing the primary and standby databases. Here are a few scenarios:
  • Roll forward a physical standby database to synchronize it with the primary database
  • Restore the primary database using the physical standby database
  • Restore the physical standby database using the primary database
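For the first scenario, a standby that has fallen behind can be rolled forward directly from the primary; a minimal sketch, run while connected to the standby as target, with managed recovery stopped first (the service name primary_db is an assumption):

```sql
-- On the standby: fetch and apply changed blocks directly from the primary.
RECOVER DATABASE FROM SERVICE primary_db NOREDO USING COMPRESSED BACKUPSET;
```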

The following example restores a datafile over the network from a physical standby to the primary database.

Connect to the primary database (implicit operating-system authentication):
RMAN> CONNECT TARGET /

Restore the datafile on the primary using the corresponding datafile on the physical standby, reached through the service "standby_db":
RMAN> RESTORE DATAFILE '/oradata1/orcl/users.dbf' FROM SERVICE standby_db SECTION SIZE 1024M;
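After the restore completes, the datafile would typically still need media recovery and to be brought back online; a follow-up sketch (assuming the datafile was taken offline before the restore):

```sql
RMAN> RECOVER DATAFILE '/oradata1/orcl/users.dbf';
RMAN> SQL "ALTER DATABASE DATAFILE ''/oradata1/orcl/users.dbf'' ONLINE";
```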

Prior to 12c, to restore and recover files over the network you had to use the RMAN BACKUP INCREMENTAL FROM SCN command to create a backup on the primary database starting at the current SCN of the standby, and then use that backup to roll the standby database forward in time. That manual, multi-step method is no longer necessary as of Oracle Database 12c.

Regards
Satishbabu Gunukula, Oracle ACE

ORA-39126: Worker unexpected fatal error in KUPW$WORKER.LOAD_METADATA

I have recently come across the below error when importing the data using DATA PUMP.

ORA-39126: Worker unexpected fatal error in KUPW$WORKER.LOAD_METADATA [SELECT..
......................................
ORA-39183: internal error -19 ocurred during decompression phase 2


ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 9715

----- PL/SQL Call Stack -----
  object      line  object
  handle    number  name

0x64f77ff8     21979  package body SYS.KUPW$WORKER


Cause: Users normally see the ORA-39183 error when the dump file was corrupted during file transfer, for example when it was not transferred in BINARY mode over FTP.

I would recommend taking the export once more and copying the dump file again (in BINARY mode); the import should then work.

Users might also come across the known issue below:

ORA-39126: Worker unexpected fatal error in KUPW$WORKER.PUT_DDLS [PASSWORD_HISTORY:"<USER>"]
ORA-06502: PL/SQL: numeric or value error
LPX-00216: invalid character 0 (0x0)
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPW$WORKER", line 8358

Cause: In this case, the hashes of the old passwords stored in user_history$ are corrupted.

Workaround: Please follow the steps below. If your error doesn't match, please contact Oracle Support.

1. Specify EXCLUDE=PASSWORD_HISTORY on the impdp statement
or
2. Delete the user_history$ rows pertaining to that user:
    SQL> delete from user_history$ where user# in (select user# from user$ where name='USER1');
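For the first workaround, the exclusion can be added directly to the impdp command line; a sketch in which the directory object, dump file, and schema name are illustrative:

```shell
impdp system DIRECTORY=dpump_dir DUMPFILE=expdat.dmp \
      SCHEMAS=USER1 EXCLUDE=PASSWORD_HISTORY
```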

Please refer to the Metalink notes below before making any change:

ORA-39126: "Worker unexpected fatal error in KUPW$WORKER.PUT_DDLS [PASSWORD_HISTORY: ...]" (Doc ID 2114832.1)

DataPump Import Errors ORA-39183 Internal Error -19 Ocurred During Decompression Phase 2 ORA-6512 [ID 1524577.1]

Regards
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com

Saturday, May 4, 2019

ORA-04030: out of process memory when trying to allocate 4024 bytes (kxs-heap-b,kghsseg_1 : kokeibib)

Users normally see the ORA-04030 error due to a shortage of RAM, an undersized PGA, or kernel parameter configuration limits.

 ORA-04030: out of process memory when trying to allocate 4024 bytes (kxs-heap-b,kghsseg_1 : kokeibib)
 Non critical error ORA-48913 caught while writing to trace file "/u01/home/oracle/diag/rdbms/ORCL/ORCL/incident/incdir_8937/ORCL_ora_11258_i8937.trc"
 Error message: ORA-48913: Writing into trace file failed, file size limit [5242880] reached
 ORA-04030: out of process memory when trying to allocate 169040 bytes (pga heap,kgh stack)
 ORA-04030: out of process memory when trying to allocate 4024 bytes (kxs-heap-b,kghsseg_1 : kokeibib)
 Non critical error ORA-48913 caught while writing to trace file "/u01/home/oracle/diag/rdbms/ORCL/ORCL/incident/incdir_8938/ORCL_ora_11258_i8938.trc"
 Error message: ORA-48913: Writing into trace file failed, file size limit [5242880] reached

Users can refer to Metalink Note 233869.1, "Diagnosing and Resolving ORA-4030 errors".

The error makes it clear that the process is "out of process memory", meaning Oracle cannot get the RAM it requires. Note, however, that you need to analyze the trace file to find the root cause.

In my case, I found that an unoptimized PL/SQL job was consuming the resources and causing the issue.

Current SQL Statement for this session (sql_id=gjn38xnrxcfbj) -----
BEGIN procedure_1(:1 , :2 , :3 , :4 , :5 , :6 , :7 , :8 , :9 , :10 , :11 , :12 , :13 , :14 , :15 , :16 , :17 , :18 , :19 , :20 , :21 ) ; END;

57% pid 67: 4089 MB used of 4092 MB allocated <= CURRENT PROC
Dump of Real-Free Memory Allocator Heap [0x7f2c640b4000]
mag=0xfefe0001 flg=0x5000003 fds=0x6 blksz=65536
blkdstbl=0x7f2c640b4010, iniblk=66560 maxblk=524288 numsegs=115
In-use num=65424 siz=4290117632, Freeable num=0 siz=0, Free num=1 siz=20512768

In our case, the max_map_count value was not high enough, and we needed to investigate why the PL/SQL process required such a large amount of memory. Note that adjusting PGA_AGGREGATE_TARGET or MEMORY_TARGET will not limit the size to which a process can grow.

To find the current value of max_map_count, run the command below:

$cat /proc/sys/vm/max_map_count
65530
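As a small illustration, the check can be scripted with a warning threshold; a sketch assuming a Linux system, where 65530 is the common shipped default shown above:

```shell
#!/bin/sh
# Read the current vm.max_map_count and flag the common default value,
# which can be too low for processes making very many mmap allocations.
current=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count = $current"
if [ "$current" -le 65530 ]; then
  echo "value is at or below the common default; consider raising it (Doc ID 2195159.1)"
fi
```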

To change the value, run the command below:
sysctl -w vm.max_map_count=<value greater than 65530>

Increase the value by following the Metalink note below:
Modification of "vm.max_map_count" on Oracle Linux 6 (Doc ID 2195159.1)

Regards,
Satishbabu Gunukula, Oracle ACE
http://www.oracleracexpert.com