Wednesday, December 30, 2009

Time difference between the RAC nodes is out of sync

If the time difference between the RAC nodes is out of sync (time difference > 30 sec), it can result in one of the following issues:

1. CRS installation failure on remote node
2. RAC node reboots periodically
3. CRS application status UNKNOWN or OFFLINE

To avoid these issues, configure NTP (Network Time Protocol) on both nodes using any one of the following methods:

1. system-config-time or system-config-date or dateconfig

Type the command system-config-time, system-config-date or dateconfig at a terminal --> click "Network Time Protocol" --> check "Enable Network Time Protocol" and select an NTP server --> click OK

2. date MMDDhhmm[[CC]YY]
Type the date command with the current date and time to set the system clock
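
For example, to set the clock to 30 December 2009 10:30 (a hypothetical value; run as root and use your actual current time):
$ date 123010302009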

3. /etc/ntp.conf
Update the /etc/ntp.conf file with the time servers' IP addresses and start or restart the NTP daemon
$ /etc/init.d/ntp start
or
$ /etc/rc.d/init.d/ntp start
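
A minimal /etc/ntp.conf sketch, assuming publicly reachable pool servers (replace with your own time servers):
server 0.pool.ntp.org
server 1.pool.ntp.org
driftfile /var/lib/ntp/drift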

Once the RAC nodes are in time sync, you might need to shut down and start up the CRS applications manually.
$ crs_stop ora.testrac1.ons
$ crs_start ora.testrac1.ons

If you encounter CRS-0223: Resource 'ora.testrac1.ons' has placement error, then stop and start all CRS applications to resolve the issue.

$ crs_stop -all
$ crs_start -all

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Set DISPLAY variable & Enable access control

Before starting the Oracle Universal Installer, the DISPLAY environment variable must be set correctly for display terminal support. To set the DISPLAY environment variable, run the following command, where hostname is the hostname or IP address of the system where the X server is running:

$ export DISPLAY=hostname:0.0

It may be necessary to enable access control on the system where the X server is running, so that the clients can connect. The "xhost" command is used to set access controls. The xhost command must be run on the console of the system where the X server is running. It cannot be done remotely.

To disable access control so that clients from any host can connect to the X server, type the following command:
$ xhost +
access control disabled, clients can connect from any host

To grant access only to specific clients, run the following command
$ xhost +hostname
For ex:- $ xhost +server1
server1 being added to access control list
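
To verify that the DISPLAY setting and access control work before launching the installer, you can run any simple X client from the remote session (xclock here is just an illustration):
$ export DISPLAY=server1:0.0
$ xclock
If the clock window appears on the X server's screen, the installer GUI will display correctly as well.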

Common Errors:-
1. Error: Can't open display
   Error: Couldn't find per display information

   Solution: 1. Run xhost + on the X server system so that clients can connect
             2. Set your DISPLAY variable correctly

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Wednesday, December 23, 2009

Duplicate RAC Database Using RMAN

Oracle provides the following methods to convert a single-instance database to RAC. You can choose any method based on your convenience.
1. Manual (Using RMAN)
2. Enterprise Manager
3. DBCA
4. RCONFIG (from 10gR2)

Duplicating a RAC database is fairly simple: first duplicate the RAC database to a single-instance database using RMAN, then convert the single instance into a RAC database. Please note that a direct RAC-to-RAC duplicate is not possible.

Follow these simple steps to duplicate a RAC database using ASM or another filesystem.



Step 1: Create a parameter file for the duplicate (auxiliary) database
The easiest way is to copy the init.ora parameter file from the target database, replace the target database name with the auxiliary database name, and comment out all RAC-related parameters, for example cluster_database, cluster_database_instances, thread, etc.

Set CONTROL_FILES to two copies of the control file in +DISKGRP or on the file system.

Step 2: Set DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT to convert the datafile and redo log file names from +DISKGRP1 to +DISKGRP2, or from /dbs1 to /dbs2 (for non-ASM file systems)

Also set DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n to '+DISKGRP2' or /dbs2.

Step 3: Set the _no_recovery_through_resetlogs=TRUE parameter to avoid internal bug 4355382 (ORA-38856: FAILED TO OPEN DATABASE WITH RESETLOGS WHEN USING RAC BACKUP)
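
Putting steps 1 through 3 together, a minimal auxiliary init.ora might look like the sketch below (the database name dupDB and the disk group names are taken from this example; adjust them for your environment):
db_name=dupDB
control_files='+DISKGRP2','+DISKGRP2'
db_file_name_convert=('+DISKGRP1','+DISKGRP2')
log_file_name_convert=('+DISKGRP1','+DISKGRP2')
db_create_file_dest='+DISKGRP2'
db_create_online_log_dest_1='+DISKGRP2'
_no_recovery_through_resetlogs=TRUE
# RAC parameters stay commented out until the conversion step
# cluster_database=true
# cluster_database_instances=2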

Step 4: Create a password file for auxiliary database using below command.
$ orapwd file=orapwdupDB password=xxxxxxxx

Step 5: Create a static listener entry for the auxiliary database and reload the listener, because the auxiliary database will not register itself with the listener.

(SID_DESC =
(GLOBAL_DBNAME =dupDB.oracleracexpert.com)
(ORACLE_HOME = /oracle/product/db/10202)
(SID_NAME = dupDB)
)
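
For reference, a minimal listener.ora entry wrapping this SID_DESC might look like the sketch below (the listener name LISTENER is an assumption; keep your existing ADDRESS section as is), followed by a reload:

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = dupDB.oracleracexpert.com)
      (ORACLE_HOME = /oracle/product/db/10202)
      (SID_NAME = dupDB)
    )
  )

$ lsnrctl reload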

Step 6: Take Full database backup of Target database
RMAN > backup database plus archivelog;

Copy the backup pieces from the target to the auxiliary host. If the backup directory structure is different, then update the RMAN configuration of the target database to reflect the new backup location.

RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/oracle/rman/dupDB';

Step 7: Set ORACLE_SID and start the auxiliary database in NOMOUNT state
$ export ORACLE_SID=dupDB
SQL> startup nomount

Step 8: Duplicate Target Database using RMAN Duplicate command
RMAN> CONNECT TARGET /;
RMAN> CONNECT CATALOG rman/*****@catadb;
RMAN> CONNECT AUXILIARY sys/*****@dupDB;
RMAN> DUPLICATE TARGET DATABASE TO dupDB;

Step 9: Add second thread of online redo logs and enable that thread:
SQL> alter database add logfile thread 2
group 3 ('+DISKGRP1','+DISKGRP2') size 50m reuse;
SQL> alter database add logfile thread 2
group 4 ('+DISKGRP1','+DISKGRP2') size 50m reuse;
SQL> alter database enable public thread 2;

For non-ASM file systems, replace '+DISKGRP1' and '+DISKGRP2' with the actual file system paths.

Step 10: Uncomment or add all RAC-related parameters, shut down the instance, and start up both instances.

Step 11: Create an SPFILE on shared storage, because all instances must use the same server parameter file. See the link "Create spfile from pfile".

Step 12: Register the RAC instances with CRS
$ srvctl add database -d dupDB -o /oracle/product/db/10202
$ srvctl add instance -d dupDB -i dupDB1 -n testrac01
$ srvctl add instance -d dupDB -i dupDB2 -n testrac02

Step 13: Shut down and start up the instances using srvctl
$ srvctl start database -d dupDB

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Tuesday, November 24, 2009

Oracle SPFILE and PFILE

Oracle provides two different parameter files, PFILE and SPFILE.

PFILE is a text-based file; to add or modify any database parameters you need to edit the init.ora file, using "vi" on UNIX or Notepad on Windows. A database restart is required for new or modified parameters to take effect.

SPFILE is a binary file introduced in Oracle 9i. The SPFILE simplifies administration and keeps parameter settings consistent, and as a server parameter file it lets you make persistent changes to individual parameters. Use the CREATE SPFILE statement (with the SYSDBA/SYSOPER privilege) to create a server parameter file from a PFILE.

By default, the PFILE or SPFILE location is $ORACLE_HOME/dbs on UNIX and Linux and %ORACLE_HOME%\database on Windows. In a RAC environment, the SPFILE is located on shared storage.

You can change SPFILE parameters using Enterprise Manager or the ALTER SYSTEM SET statement with the SCOPE clause. The SCOPE clause has three values: MEMORY, SPFILE and BOTH.

SPFILE: The change is applied in the server parameter file only and is effective at the next startup.
For Ex: - SQL> ALTER SYSTEM SET SGA_MAX_SIZE=1024m SCOPE=spfile;

MEMORY: The change is applied in memory only and the effect is immediate.
For Ex:- SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES=50 SCOPE=MEMORY;

BOTH: The change is applied in both the server parameter file and memory. For dynamic parameters the effect is immediate.
Ex: - SQL> ALTER SYSTEM SET SGA_TARGET=1024m SCOPE=BOTH;

For static parameters the MEMORY and BOTH specifications are not allowed; only SPFILE is allowed.

If you do not specify the SCOPE clause, the default is BOTH. For dynamic parameters you can specify the DEFERRED keyword, in which case the change is effective only for future sessions.

During database startup Oracle searches for an initialization parameter file under $ORACLE_HOME/dbs on UNIX and %ORACLE_HOME%\database on Windows, in the following order:

spfileSID.ora
spfile.ora
initSID.ora
init.ora

Backup/Restore parameter files (SPFILE and PFILE):
Using Recovery Manager (RMAN), the SPFILE can be backed up along with the database control file by setting:
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
SPFILE can be restored using following RMAN command.
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;

A PFILE cannot be backed up using RMAN; backup and restore can be done using OS copy commands.
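
For example, a PFILE can be backed up with a simple OS copy (the file name and backup location below are assumptions):
$ cp $ORACLE_HOME/dbs/initTESTDB.ora /backup/initTESTDB.ora.bak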

Common Errors and solutions in modifying SPFILE:
1. SQL>alter system SET LOG_ARCHIVE_DEST='/oradata/TESTDB' scope=both;
alter system SET LOG_ARCHIVE_DEST='/oradata/TESTDB' scope=both
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-16018: cannot use LOG_ARCHIVE_DEST with LOG_ARCHIVE_DEST_n or
DB_RECOVERY_FILE_DEST

Solution: The LOCATION keyword is required.
SQL> alter system set log_archive_dest_1='location=/oradata/TESTDB' scope=both;
System altered.

2. SQL> alter system set log_archive_dest_1='location=/oradta/TESTDB' scope=both;
alter system set log_archive_dest_1='location=/oradta/TESTDB' scope=both
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-16032: parameter LOG_ARCHIVE_DEST_1 destination string cannot be translated
ORA-07286: sksagdi: cannot obtain device information.
Linux-x86_64 Error: 2: No such file or directory

Solution: The LOG_ARCHIVE_DEST_1 path is incorrect or does not exist. Provide the correct path.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com


Thursday, November 12, 2009

How to restore a database to Point in Time using RMAN

You can recover the whole database to a specific SCN, time or log sequence number using RMAN; this is called incomplete recovery or database point-in-time recovery (DBPITR).

You must restore all data files from backups created prior to the time to which you want to recover, and you must open the database with the RESETLOGS option when recovery completes. Please note that the RESETLOGS operation creates a new incarnation of the database.

Usually incomplete recovery is performed in the following situations:
1. Some or all online redo logs are corrupted or destroyed due to media failure
2. User error causes data loss, for example an accidental drop of a table
3. An archived log is missing, so complete recovery is not possible
4. The current control file is lost and a backup control file must be used to open the database.



Follow these steps to recover the database until a specified SCN, time or log sequence.

Step 1: Shutdown and startup mount
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

Step 2: Determine the SCN, time or log sequence that you want to recover to
SCN – you can get the SCN from the alert.log file
Sequence – query v$log_history

Step 3: Perform the incomplete recovery
If specifying a time, set the NLS_LANG and NLS_DATE_FORMAT environment variables first.
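
For example, the following settings match the date format used in the RUN block below (the language/territory/character set value is an assumption; adjust for your environment):
$ export NLS_LANG=american_america.we8iso8859p1
$ export NLS_DATE_FORMAT='Mon DD YYYY HH24:MI:SS'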

RUN
{
SET UNTIL TIME 'Aug 10 2009 11:00:00';
# SET UNTIL SCN 100; # alternatively, specify SCN
# SET UNTIL SEQUENCE 123; # alternatively, specify log seq
RESTORE DATABASE;
RECOVER DATABASE;
}

Step 4: Open the database with RESETLOGS
SQL> ALTER DATABASE OPEN RESETLOGS;

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Oracle ODBC, OCI, OCCI Drivers, Downloads and Documentation

Oracle ODBC(Open Database Connectivity):
The Oracle ODBC Driver provides access to Oracle databases for applications written using the ODBC interface. With ODBC you can access the Oracle database from applications written in many languages, including any .NET programming language.

Download Oracle ODBC Driver
http://www.oracle.com/technology/software/tech/windows/odbc/index.html

Oracle ODBC Documentation
http://www.oracle.com/technology/docs/tech/windows/odbc/index.html

Please see the Oracle ODBC Driver discussion forum for any questions/issues.
http://forums.oracle.com/forums/forum.jsp?forum=145

Oracle Call Interface(OCI):
The Oracle Call Interface (OCI) is a set of low-level APIs (application programming interface calls) used to interact with the Oracle database. It allows you to perform operations such as logon, parse, execute and fetch.

Download Oracle Call Interface (OCI)
http://www.oracle.com/technology/tech/oci/instantclient/index.html

Please see the below link for OCI documentation
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14250/toc.htm

Please see the OCI (Oracle Call Interface) Discussion forum for any questions/issues
http://forums.oracle.com/forums/forum.jspa?forumID=67

Oracle C++ Call Interface:
Oracle C++ Call Interface (OCCI) is a high-performance and comprehensive API to access the Oracle database, used for client-server, middle-tier, and complex object modeling applications.

Download OCCI for Linux/Windows
http://www.oracle.com/technology/tech/oci/occi/occidownloads.html

Please see the below link for documentation
http://download.oracle.com/docs/cd/B28359_01/appdev.111/b28390/toc.htm

Please see the C++ Call Interface (OCCI) Discussion forum for any questions/issues.
http://forums.oracle.com/forums/forum.jspa?forumID=168

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Monday, October 19, 2009

Upgrade Oracle database from 10.2.0.x to 10.2.0.4

Follow the simple steps to upgrade your database from 10.2.0.x to 10.2.0.4

Here are the Upgrade paths:-
Upgrade 10.2.0.1 to 10.2.0.4
Upgrade 10.2.0.2 to 10.2.0.4
Upgrade 10.2.0.3 to 10.2.0.4

If you have upgraded your Oracle software version, then you must upgrade your database using any one of the below methods; otherwise you will get the following error during database startup.

ORA-01092: ORACLE instance terminated.
ORA-39700: database must be opened with UPGRADE option

1. Database Upgrade Assistant
2. Database Upgrade Assistant in noninteractive mode
3. Manual upgrade method

1. Upgrade the database using Database Upgrade Assistant:
Step 1: Set the ORACLE_HOME and ORACLE_SID and start DBUA
$ ORACLE_SID=testdb; export ORACLE_SID
$ ORACLE_HOME=/oracle/v10204; export ORACLE_HOME
$ dbua

Step 2: Follow the upgrade steps
(i) Select the database that you want to upgrade
(ii) On the Recompile Invalid Objects screen, select "Recompile invalid objects", then click Next.
(iii) On the backup screen, select "I would like this tool to back up the database" if you have not already taken a database backup.
(iv) Click Finish

2. Upgrade the database using Database Upgrade Assistant in noninteractive mode:
Run the following command to upgrade Oracle Database in silent mode:
$ dbua -silent -dbname $ORACLE_SID -oracleHome $ORACLE_HOME
-sysDBAUserName UserName -sysDBAPassword SYS_password
-recompile_invalid_objects true

3. Upgrade the database using the manual upgrade method:

Step 1: Backup the database.
Back up the database using RMAN or a command-line file-level backup.

Step 2: Set the ORACLE_HOME and ORACLE_SID and run Pre-upgrade checks
$ ORACLE_SID=testdb; export ORACLE_SID
$ ORACLE_HOME=/oracle/v10204; export ORACLE_HOME

SQL> SPOOL Pre_checks.log
SQL> @?/rdbms/admin/utlu102i.sql
SQL> SPOOL OFF

Review the spool file and make the necessary database/parameter changes based on the recommendations.

If you are upgrading a RAC database, then set CLUSTER_DATABASE to “FALSE”.
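
For example, assuming the database uses an SPFILE (with a PFILE, edit the parameter directly), the change can be made like this and takes effect when the instance is restarted for the upgrade:
SQL> ALTER SYSTEM SET CLUSTER_DATABASE=FALSE SCOPE=SPFILE;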

Step 3: Start the database in UPGRADE mode and run catupgrd.sql
SQL> STARTUP UPGRADE
SQL> SPOOL upgrade_db.log
SQL> @?/rdbms/admin/catupgrd.sql
SQL> SPOOL OFF

Review the log for errors.

Step 4: Restart database and recompile invalid objects
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP
SQL> @?/rdbms/admin/utlrp.sql

For a RAC database, set CLUSTER_DATABASE back to TRUE after the upgrade.

Step 5: Check components status after upgrade
Run the following command to check the status of all the components
SQL> SELECT COMP_NAME, VERSION, STATUS FROM SYS.DBA_REGISTRY;

The status of all the components should be VALID for a successful upgrade

Step 6: Upgrade the RMAN catalog
If you are using the Oracle Recovery Manager catalog, enter the following command:
$ rman catalog username/password@alias
RMAN> UPGRADE CATALOG;

Step 7: Startup all other processes and services.

Click here for steps to upgrade oracle version to 10.2.0.4

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Oracle version upgrade from 10.2.0.x to 10.2.0.4

Follow these steps to upgrade the Oracle software version from 10.2.0.x to 10.2.0.4.

Step 1: Download patch 6810189 from Metalink and extract the file.
$ unzip p6810189_10204_LINUX-x86-64.zip (ex:- for linux 64 bit)

Step 2: Set the ORACLE_HOME and ORACLE_SID and back up the database
$ ORACLE_SID=testdb; export ORACLE_SID
$ ORACLE_HOME=/oracle/v10201; export ORACLE_HOME

Back up the database using RMAN or a command-line file-level backup.

Step 3: Shut down the database and all other processes/services.
Shut down the listener, if it is running from the Oracle home being patched
$ lsnrctl stop

Stop the dbconsole, if it is running
$ emctl stop dbconsole

Shutdown database
SQL> SHUTDOWN IMMEDIATE

Step 4: Back up the Oracle home and inventory.
Oracle recommends that you take a backup of the Oracle inventory and the Oracle home.
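
A minimal sketch of such a backup using tar (the backup location is an assumption; on Linux the inventory location is recorded in /etc/oraInst.loc):
$ tar -cvf /backup/oracle_home_10201.tar /oracle/v10201
$ tar -cvf /backup/oraInventory.tar /oracle/oraInventory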

Step 5: Start the OUI and install 10.2.0.4
For UNIX:- $ ./runInstaller
Start Oracle Universal Installer from the patch set, select the ORACLE_HOME that you want to patch from the list, and click Next through the screens to complete the installation. Run the $ORACLE_HOME/root.sh script as the root user when prompted.

For Windows:- Start the OUI by running SETUP.EXE from the patch set, select the ORACLE_HOME that you want to patch from the list, and click Next through the screens to complete the installation.

Step 6: Start the listener.
$ lsnrctl start

Click here for steps to Upgrade Oracle Database to 10.2.0.4

Regards
Satishbabu Gunukula
http://www.oracleracexpert.com/

Monday, October 5, 2009

Create, Drop and Alter ASM disk groups

Create Disk Group:
Create disk groups using the CREATE DISKGROUP statement and specify the level of redundancy.

Disk group redundancy types:-
NORMAL REDUNDANCY - Two-way mirroring, requiring two failure groups.
HIGH REDUNDANCY - Three-way mirroring, requiring three failure groups.
EXTERNAL REDUNDANCY - No mirroring for disks that are already protected using hardware RAID or mirroring.

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
FAILGROUP failure_group_1 DISK '/dev/sda1' NAME dataa1,'/dev/sda2' NAME dataa2,
FAILGROUP failure_group_2 DISK '/dev/sdb1' NAME datab1,'/dev/sdb2' NAME datab2;

Drop Disk Group:
Drop disk group using DROP DISKGROUP statement.
SQL> DROP DISKGROUP data INCLUDING CONTENTS;

Alter Disk Group:
Add or remove disks from disk groups using the ALTER DISKGROUP statement. You can also use the wildcard "*" to reference multiple disks.

Add a disk.
SQL> ALTER DISKGROUP data ADD DISK '/dev/datac1', '/dev/datac2';

Add all disks matching a pattern.
SQL> ALTER DISKGROUP data ADD DISK '/dev/datad*';

Drop/remove a disk.
SQL> ALTER DISKGROUP data DROP DISK datab2;

The UNDROP command undoes only pending disk drops. Once a drop has completed, it cannot be reverted.
SQL> ALTER DISKGROUP data UNDROP DISKS;

Diskgroup Rebalance:
Disk groups can be rebalanced manually using the REBALANCE clause, and you can override the default value with the POWER clause.
SQL> ALTER DISKGROUP disk_group_1 REBALANCE POWER 5;
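
You can monitor the progress of a rebalance through the V$ASM_OPERATION view, for example:
SQL> SELECT group_number, operation, state, power, est_minutes FROM v$asm_operation;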

MOUNT and DISMOUNT DiskGroups:
Normally disk groups are mounted at ASM instance startup and dismounted at shutdown. Using the MOUNT and DISMOUNT options you can make one or more disk groups available or unavailable.
SQL> ALTER DISKGROUP data MOUNT;
SQL> ALTER DISKGROUP data DISMOUNT;
SQL> ALTER DISKGROUP ALL MOUNT;
SQL> ALTER DISKGROUP ALL DISMOUNT;

DiskGroup Check:
Use CHECK ALL to verify the internal consistency of the disk group metadata and repair it in case of any error.
SQL> ALTER DISKGROUP data CHECK ALL;

DiskGroup resize:
Resize one or all disks in the disk group.

Resize all disks in a failure group.
SQL> ALTER DISKGROUP data RESIZE DISKS IN FAILGROUP failure_group_1 SIZE 1024G;

Resize a specific disk.
SQL> ALTER DISKGROUP data RESIZE DISK dataa1 SIZE 1024G;

Resize all disks in a disk group.
SQL> ALTER DISKGROUP data RESIZE ALL SIZE 1024G;

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Migrate Database to ASM Using RMAN

You can use RMAN to relocate non-ASM datafiles into ASM. Because ASM files cannot be accessed through normal OS interfaces, RMAN is used for the migration.

Step 1: Query V$CONTROLFILE and V$LOGFILE to get the file names.
SQL> select * from V$CONTROLFILE;
SQL> select * from V$LOGFILE;

Step 2: Shutdown the database.
SQL> SHUTDOWN IMMEDIATE;

Step 3: Modify the target database parameter file:
(i) Remove the CONTROL_FILES parameter from the spfile, so that the control files will be created automatically in the ASM disk group specified by DB_CREATE_FILE_DEST.
If you are using a pfile, set the CONTROL_FILES parameter to the appropriate ASM files or aliases.
(ii) Set the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n parameters to the relevant ASM disk groups.
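
A minimal sketch of these parameter changes, assuming an SPFILE and disk groups named +DATA and +FRA (substitute your own disk group names):
SQL> ALTER SYSTEM SET DB_CREATE_FILE_DEST='+DATA' SCOPE=SPFILE;
SQL> ALTER SYSTEM SET DB_CREATE_ONLINE_LOG_DEST_1='+FRA' SCOPE=SPFILE;
SQL> ALTER SYSTEM RESET CONTROL_FILES SCOPE=SPFILE SID='*';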

Step 4: Start up in NOMOUNT mode and restore the control file
RMAN> STARTUP NOMOUNT;

Restore the control file into the new location.
RMAN> RESTORE CONTROLFILE FROM 'old_controlfile_name';

Step 5: Mount the database and back it up into ASM.
RMAN> ALTER DATABASE MOUNT;

Backup database into ASM disk group.
RMAN> BACKUP AS COPY DATABASE FORMAT '+diskgroup1';

Step 6: Switch the database to the copy and create or rename the redo log members
Switch all data files to the new ASM disk group location.
RMAN> SWITCH DATABASE TO COPY;
RMAN> SQL "ALTER DATABASE RENAME FILE ''old_redolog_member'' TO ''+diskgroup2''";
or
Create new redo logs in ASM Disk group and delete the old redo log files.

Step 7: Open Database and create temporary tablespace.
Open database using resetlogs
SQL> ALTER DATABASE OPEN RESETLOGS;

Create a temporary tablespace in the ASM disk group.
SQL> CREATE TEMPORARY TABLESPACE temp1 TEMPFILE '+diskgroup1';

Step 8: Drop old database files.
1. SQL> DROP TABLESPACE old_temporary_tablespace INCLUDING CONTENTS AND DATAFILES;
2. Remove all remaining non-ASM database files using OS commands.

Regards
Satishbabu Gunukula
http://www.oracleracexpert.com/

Thursday, October 1, 2009

Cross Platform Transportable Tablespace using RMAN

When transporting tablespaces between databases where the endian format differs between the source and destination platforms, the endian format of the datafiles must be converted to match the destination platform.

This conversion can be performed in two ways:

1. Using the RMAN CONVERT TABLESPACE command (converting on the source host)
2. Using the RMAN CONVERT DATAFILE command (converting on the destination host)

A transportable tablespace export is manageable across platforms and exports only metadata. With cross-platform transportable tablespaces the data movement is simpler and faster.

This mode requires that you have the EXP_FULL_DATABASE role.

Please note that:
1. The source and target database must use the same character set and national character set.
2. You cannot transport a tablespace to a target database where a tablespace with the same name already exists.
3. Transportable tablespace exports cannot be restarted once stopped.
4. The target database must be at the same or a higher release level than the source database.

For example, suppose you want to transport a tablespace "test_user_tbs" from a Linux 64-bit (little endian) machine TESTLINUX to a Solaris 64-bit (big endian) machine TESTSOLARIS. The source and target platforms are of different endian types. The data file for the tablespace "test_user_tbs" is "test_user_tbs01.dbf".

Follow the below steps

Step 1: Find the Operating system byte order on Source and Target Database
SQL > select * from v$transportable_platform order by platform_id;

PLATFORM_ID PLATFORM_NAME                      ENDIAN_FORMAT
----------- ---------------------------------- -------------
          1 Solaris[tm] OE (32-bit)            Big
          2 Solaris[tm] OE (64-bit)            Big
          3 HP-UX (64-bit)                     Big
          4 HP-UX IA (64-bit)                  Big
          5 HP Tru64 UNIX                      Little
          6 AIX-Based Systems (64-bit)         Big
          7 Microsoft Windows IA (32-bit)      Little
          8 Microsoft Windows IA (64-bit)      Little
          9 IBM zSeries Based Linux            Big
         10 Linux IA (32-bit)                  Little
         11 Linux IA (64-bit)                  Little
         12 Microsoft Windows 64-bit for AMD   Little

Step 2: Make the tablespace “READ ONLY”
SQL> alter tablespace test_user_tbs read only;

Step 3: Export metadata
(i) Using export utility
$ exp testuser/test123 tablespaces=test_user_tbs transport_tablespace=y file=exp_test_user_tbs.dmp log=exp_test_user_tbs.log

(ii) Using the Data Pump export utility
$ expdp system/password TRANSPORT_TABLESPACES=test_user_tbs TRANSPORT_FULL_CHECK=Y DIRECTORY=export_dir DUMPFILE=expdp_test_user_tbs.dmp LOGFILE=expdp_test_user_tbs.log

TRANSPORT_FULL_CHECK=Y specifies that a check is made for dependencies between objects inside the transportable tablespace and objects outside it.

The file "exp_test_user_tbs.dmp" or "expdp_test_user_tbs.dmp" contains only metadata.

Step 4: Convert the datafiles using RMAN (the endian formats are different)
Because the endian formats are different, a conversion is necessary before transporting the tablespace.

RMAN Cross platform transportable tablespace.

Run the below command to convert the tablespace from the source (Linux 64-bit) to the target (Solaris 64-bit) platform.

Method 1: Using CONVERT TABLESPACE ... TO PLATFORM on the source host
RMAN> convert tablespace test_user_tbs
2> to platform 'Solaris[tm] OE (64-bit)'
3> format='/oradata/rman_backups/%N_%f';

The data file "test_user_tbs01.dbf" is not touched; a new file is created for the Solaris platform under "/oradata/rman_backups". Copy this converted file to the target platform.

Method 2: Using CONVERT DATAFILE ... FROM PLATFORM on the destination host
RMAN> convert datafile 'test_user_tbs01.dbf'
2> from platform 'Linux IA (64-bit)'
3> db_file_name_convert '/linux/oradata/','/solaris/oradata/';

Use RMAN's CONVERT command to convert the datafiles being transported to the destination host's format. The converted datafiles are stored in "/solaris/oradata".

Step 5: Initiate import command to plug the tablespace into the database.
(i) Using import utility
$ imp test_user/test123 tablespaces=test_user_tbs transport_tablespace=y file=exp_test_user_tbs.dmp datafiles='test_user_tbs01.dbf' log=imp_test_user_tbs.log

(ii) Using the impdp utility
$ impdp test_user/test123 TRANSPORT_DATAFILES='test_user_tbs01.dbf' DIRECTORY=import_dir DUMPFILE=expdp_test_user_tbs.dmp LOGFILE=impdp_test_user_tbs.log

You can use REMAP_SCHEMA=(source:target), if you want to import into another schema.

Step 6: Put the tablespace in read/write mode.
SQL> ALTER TABLESPACE TEST_USER_TBS READ WRITE;

Now the database has a tablespace named "test_user_tbs" and the objects in the tablespace are available.
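
As a quick sanity check, you can confirm the plug-in by querying DBA_TABLESPACES (the tablespace name is the one used in this example):
SQL> SELECT tablespace_name, status, plugged_in FROM dba_tablespaces WHERE tablespace_name = 'TEST_USER_TBS';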

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Tuesday, September 29, 2009

Createdisk, Deletedisk and Querydisk in ASM

The /etc/init.d/oracleasm script is used to create, delete and query ASM disks and make disks available.

Create and Delete ASM disk:
Run below command to Create ASM disks.
# /etc/init.d/oracleasm createdisk DATA1 /dev/sdc
Creating Oracle ASM disk "DATA1" [ OK ]

Run below command to Delete ASM disks.
# /etc/init.d/oracleasm deletedisk DATA1
Deleting Oracle ASM disk "DATA1" [ OK ]

Query/List/Scan ASM disk:
Run the below querydisk command to see if the Disk/Device is used by ASM:
# /etc/init.d/oracleasm querydisk /dev/sdc
Checking if device "/dev/sdc" is an Oracle ASM disk [ OK ]

# /etc/init.d/oracleasm querydisk DATA1
Checking for ASM disk "DATA1" [ OK ]

Run below command to list Existing disks
# /etc/init.d/oracleasm listdisks
DATA1

Run the below command to scan the disks created on another node.
# /etc/init.d/oracleasm scandisks
Scanning system for ASM disks [ OK ]

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Friday, September 18, 2009

How to move or replace and repair OCR (Oracle Cluster Registry)

Moving or Replacing Oracle Cluster Registry:
If you are replacing or moving an OCR or OCR mirror, make sure the other OCR file is ONLINE and that Oracle Clusterware is running on the node where you perform the replace operation.

Please note that the OCR that you are replacing or moving can be either online or offline.

Run the below command to move or replace the OCR
# ocrconfig -replace ocr new_ocr_location
For ex:- # ocrconfig -replace ocr /dev/raw/raw5

Run the below command to move or replace the OCR mirror.

# ocrconfig -replace ocrmirror new_ocrmirror_location
For ex:- # ocrconfig -replace ocrmirror /dev/raw/raw6

Repairing the Oracle Cluster Registry (OCR) Configuration:
You may need to repair an OCR configuration if OCR configuration changes (adding/removing/replacing the OCR or OCR mirror) were made while a node was shut down.

Make sure you run the repair command on the node that was shut down; you cannot perform this operation while Oracle Clusterware is running on that node.

Run the below command to repair the configuration
# ocrconfig -repair ocrmirror device_name
For ex:- # ocrconfig -repair ocrmirror /dev/raw/raw1
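
After a replace or repair operation, you can verify the OCR integrity and locations with the ocrcheck utility (run as root):
# ocrcheck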

Click here to learn How to Add and Remove Oracle Cluster Registry

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Wednesday, September 16, 2009

How to Change Static parameters in SPFILE and PFILE

Change Static parameters in SPFILE:
Changing a static parameter requires the SPFILE option of the SCOPE clause in the ALTER SYSTEM statement, and the change applies to the SPFILE only. The change cannot be applied to the active instance; the database must be restarted for the modified parameter to take effect.

You can identify the static parameters using below query
SQL> select name, value, issys_modifiable from v$parameter
where name='processes';
NAME                     VALUE          ISSYS_MODIFIABLE
------------------- -------------- ---------------------------
Processes               150                 FALSE

All parameters that have the value FALSE in the ISSYS_MODIFIABLE column of the V$PARAMETER view are STATIC parameters; the remaining parameters are DYNAMIC.

The ALTER SYSTEM command with SCOPE=SPFILE will not update the value in the V$PARAMETER view, but the new value will show in the V$SPPARAMETER view because the SPFILE has been updated.

For ex:-
SQL> alter system set processes=200 scope=both;
alter system set processes=200 scope=both
               *
ERROR at line 1:
ORA-02095: specified initialization parameter cannot be modified

The above statement failed because "processes" is a static parameter and can only be changed with SCOPE=SPFILE.

SQL> alter system set processes=200 scope=spfile;
System altered.

You can view the new value in V$SPPARAMETER view as the SPFILE is updated.

SQL> select name, value, isspecified from v$spparameter
where name ='processes';
NAME                     VALUE         ISSPECIFIED
------------------- -------------- ---------------------------
processes               200                 TRUE

Changing Static parameters in PFILE:
Edit the "$ORACLE_HOME/dbs/initSID.ora" file with the new value and bounce the database for the modified parameters to take effect.

Identify whether a PFILE or SPFILE is in use:
To check whether the database instance is using an SPFILE or a PFILE, run the below commands.

You can see the Oracle SPFILE location.

SQL > show parameter spfile;
NAME                    TYPE        VALUE
------------------ ----------- -----------------------------------------
spfile                     string          /oracle/v10201/dbs/spfileOradb.ora

SQL > select name, value from v$parameter where name = 'spfile';
NAME          VALUE
------------ ------------------------------------------------------------
spfile             /oracle/v10201/dbs/spfileOradb.ora

SQL> select count(1) from v$spparameter where isspecified = 'TRUE';

This query returns a non-zero value if an SPFILE is in use.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

ORA-19755: could not open change tracking file

You will receive ORA-19755 under the following circumstances:

1. When moving the Change tracking file
2. During Database Recovery
3. During Database startup

The cause is a missing or corrupted block change tracking file.

Errors during database startup:
ORA-19755: could not open change tracking file
ORA-19750: change tracking file: '+DATA/testdb/changetracking/ctf.411.5567291'
ORA-17503: ksfdopn:2 Failed to open file +DATA/testdb/changetracking/ctf.411.5567291
ORA-15012: ASM file '+DATA/testdb/changetracking/ctf.411.5567291' does not exist

Errors during Recovery:-
RMAN-03002: failure of recover command at 05/05/2009 08:13:29
ORA-00283: recovery session canceled due to errors
RMAN-11003: failure during parse/execution of SQL statement: alter database recover logfile ‘/oradata/testdb/flash_recovery_area/archivelog/arch_1_532_432353.arc‘
ORA-00283: recovery session canceled due to errors
ORA-19755: could not open change tracking file
ORA-19750: change tracking file: ‘/u02/oradata/test/ofsap_blk_change.dbf’
ORA-27037: unable to obtain file status

Solution:-
Disable block change tracking, perform the recovery, and then re-enable change tracking.
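
For example (the file location shown is the one used in the block change tracking post on this blog; substitute your own path):
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
Perform the recovery, then re-enable tracking:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/oradata/block_change_track.ora';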

Click here to learn how to enable/disable and move the block change tracking file.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Friday, September 11, 2009

How to Add and Remove OCR (Oracle Cluster Registry)

Adding an OCR (Oracle Cluster Registry):-

You can also add an OCR mirror after completing the Oracle RAC installation if you did not mirror the OCR during installation. Please note that you must run all the commands as root.

Run the below command to add an OCR mirror.
# ocrconfig -replace ocrmirror new_ocrmirror_location

Removing an Oracle Cluster Registry (OCR)
To remove the OCR or OCR mirror, the other OCR must be online.

Run the below command on any node to remove OCR.
# ocrconfig -replace ocr

Run the below command on any node to remove OCR Mirror.
# ocrconfig -replace ocrmirror

Please note that these commands update the OCR configuration on all of the nodes where Oracle Clusterware is running.

Click here to learn How to backup and recover Oracle Cluster Registry.
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Thursday, September 3, 2009

ORA-29740: evicted by member 0, group incarnation

I have commonly seen this RAC node eviction error on RAC databases running Oracle 10g R2 (10.2.0.1). Here is a detailed explanation and solution.

A node eviction in RAC causes the instance to hang or restart.
Check for possible error messages in the database/ASM alert.log and trace files.

Alert.log:
IPC Send timeout detected.Sender: ospid 25102
Receiver: inst 2 binc 860622349 ospid 12543
IPC Send timeout to 1.2 inc 10 for msg type 36 from opid 44
Communications reconfiguration: instance_number 2
Trace dumping is performing id=[cdmp_20090819220537]
Waiting for clusterware split-brain resolution
Errors in file /oracle/10201/admin/testdb/bdump/testdb_lmon_3433.trc:
ORA-29740: evicted by member 0, group incarnation 12
LMON: terminating instance due to error 29740

Trace file generated for ospid 25102 (1st line in alert.log):
(13190 <- 30904)SKGXPDOAINVALCON: connection 0x2a9754b730 scoono 0x15694aec acconn 0x399413e2 getting closed. inactive: threshold: 0x4bff6
(13190 <- 30904)SKGXPDOAINVALCON: WARN: potential problem in keep alive connection protocol

LMON trace file:
GES IPC: Receivers 3 Senders 3
GES IPC: Buffers Receive 1000 Send (i:1050 b:1050) Reserve 301
kjxgmrcfg: Reconfiguration started, reason 1
kjxgmcs: Setting state to 0 0.
kjxgrrcfgchk: Initiating reconfig, reason 3
kjxgmrcfg: Reconfiguration started, reason 3
kjxgrrecp2: Waiting for split-brain resolution, upd 0, seq 12

If you see these symptoms, then you are hitting bug 4631662.

Cause: Due to bug 4631662, you will see instance evictions caused by network timeouts. The bug is caused by a failure in "ach reaping", which reduces the packet size being sent to a receiver.
Solution:
1. Upgrade the database to 10.2.0.2; the feature is disabled in 10.2.0.2.
2. Or set the below parameter to disable the feature.

DATABASE:
Run the below command if you are using spfile.
SQL> alter system set "_skgxp_udp_ach_reaping_time"=0 sid='*';

If you are using a pfile, add the below line to the init.ora parameter file:
*._skgxp_udp_ach_reaping_time = 0

ASM: Add the following lines to the ASM instance's init.ora parameter file:
*._disable_instance_params_check = TRUE
*._skgxp_udp_ach_reaping_time = 0

Click here to see the instructions to create pfile or spfile and apply the changes.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Transportable tablespace export and Import

A transportable tablespace export and import is manageable across platforms and exports only metadata. With cross-platform transportable tablespaces the data movement is simpler and faster.

This mode requires that you have the EXP_FULL_DATABASE role.

Please note that:
1. The source and target database must use the same character set and national character set.
2. You cannot transport a tablespace to a target database where a tablespace with the same name already exists.
3. Transportable tablespace exports cannot be restarted once stopped.
4. The target database must be at the same or a higher release level than the source database.

Transportable tablespace export and import on same endian platforms
Step 1: Find the Operating system byte order on Source and Target Database
SQL > select * from v$transportable_platform order by platform_id;
PLATFORM_ID PLATFORM_NAME                      ENDIAN_FORMAT
----------- ---------------------------------- -------------
          1 Solaris[tm] OE (32-bit)            Big
          2 Solaris[tm] OE (64-bit)            Big
          3 HP-UX (64-bit)                     Big
          4 HP-UX IA (64-bit)                  Big
          5 HP Tru64 UNIX                      Little
          6 AIX-Based Systems (64-bit)         Big
          7 Microsoft Windows IA (32-bit)      Little
          8 Microsoft Windows IA (64-bit)      Little
          9 IBM zSeries Based Linux            Big
         10 Linux IA (32-bit)                  Little
         11 Linux IA (64-bit)                  Little
         12 Microsoft Windows 64-bit for AMD   Little

For example, suppose you want to transport a tablespace "test_user_tbs" from a Linux 64-bit (little endian) machine TESTLINUX to a Microsoft Windows 64-bit (little endian) machine TESTWIN. Both the source and target platforms are of LITTLE endian type. The data file for the tablespace "test_user_tbs" is "test_user_tbs01.dbf".

Step 2: Make the tablespace READ ONLY
SQL> alter tablespace test_user_tbs read only;

Step 3: Export metadata
(i) Using the export utility
$ exp testuser/test123 tablespaces=test_user_tbs transport_tablespace=y file=exp_test_user_tbs.dmp log=exp_test_user_tbs.log

(ii) Using the Data Pump export utility
$ expdp system/password TRANSPORT_TABLESPACES=test_user_tbs TRANSPORT_FULL_CHECK=Y DIRECTORY=export_dir DUMPFILE=expdp_test_user_tbs.dmp LOGFILE=expdp_test_user_tbs.log

TRANSPORT_FULL_CHECK=Y specifies that a check is made for dependencies between objects inside the transportable tablespace and objects outside it.

The file "exp_test_user_tbs.dmp" or "expdp_test_user_tbs.dmp" contains only metadata.

Step 4: Copy the files to Target system
If you are using FTP use binary option.

Step 5: Initiate Import to plug the tablespace into the database.
(i) Using the import utility
$ imp test_user/test123 tablespaces=test_user_tbs transport_tablespace=y file=exp_test_user_tbs.dmp datafiles='test_user_tbs01.dbf' log=imp_test_user_tbs.log

(ii) Using impdp utility
Click here to see the Instructions to create Directory and grant privileges.

$ impdp test_user/test123 TRANSPORT_DATAFILES='test_user_tbs01.dbf' DIRECTORY=import_dir DUMPFILE=expdp_test_user_tbs.dmp LOGFILE=impdp_test_user_tbs.log

You can use REMAP_SCHEMA=(source:target), if you want to import into another schema.

Step 6: Put the tablespace in read/write mode.
SQL> ALTER TABLESPACE TEST_USER_TBS READ WRITE;

Now the database has a tablespace named "test_user_tbs" and the objects in the tablespace are available.

Transport Tablespace Import Common Errors:-
1. Oracle Error : EXP-00044: must be connected "AS SYSDBA" to do Point-in-time Recovery or Transportable Tablespace import
Cause: The user must log in "as SYSDBA" to perform transportable tablespace imports or Point-In-Time Recovery imports.
Action: Ask your database administrator to perform the Transportable Tablespace import or
the Tablespace Point-in-time Recovery import.

2. IMP-00017: following statement failed with ORACLE error 19721:
IMP-00003: ORACLE error 19721 encountered
ORA-06512: at "SYS.DBMS_PLUGTS", line 2065
ORA-06512: at line 1
Cause: A duplicated data file name in the import parameter file was causing the issue
Action: Modify the import parameter file with the correct datafile name

Please see the Data pump Export/Import related documents:

Click here for Data Pump Export modes and Interfaces.
Click here for Data Pump Export/Import with Examples.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Wednesday, September 2, 2009

Create spfile from pfile

Create spfile from pfile in Oracle:
Use the CREATE SPFILE statement (with the SYSDBA/SYSOPER privilege) to create a server parameter file from a PFILE. You can specify the file name and path in the CREATE command; see the SPFILE example.

SQL> create spfile from pfile;
SQL> Create spfile='/tmp/spfileOradb.ora' from pfile;

Common Errors:
1. If you edit the SPFILE manually and corrupt it, you will receive the below error (do not edit the SPFILE directly).
SQL> alter system set db_files=200 scope=spfile;
alter system set db_files=200 scope=spfile
*
ERROR at line 1:
ORA-27046: file size is not a multiple of logical block size
Additional information: 1

2. If the SPFILE is deleted accidentally, you will get the below error during database startup.
SQL> startup
ORA-01078: failure in processing system parameters

Solution: Recreate a binary SPFILE from PFILE.

Remove a parameter from the SPFILE:
You can remove an SPFILE parameter by using the RESET command.

SQL> ALTER SYSTEM RESET session_cached_cursors SCOPE=spfile SID='*';

Create pfile from spfile:
Use the CREATE PFILE statement (with the SYSDBA/SYSOPER privilege) to create a parameter file from an SPFILE. You can specify the file name and path in the CREATE command; see the PFILE example.

SQL> create pfile from spfile;
SQL> create pfile = '/tmp/initDB1.ora' from spfile;

Common Errors:
You may encounter the below error while creating a PFILE or SPFILE.

ERROR at line 1:
ORA-27037: unable to obtain file status
Linux Error: 2: No such file or directory Additional information: 3

Reason:
1. The database is down and you are trying to create a PFILE from the SPFILE.
2. Wrong file names are provided.

Change Oracle SPFILE location:
If you want to change the SPFILE location, create a PFILE in the default location and, in that PFILE, set the SPFILE parameter to the non-default location.

For ex:- the PFILE contents would look like below
spfile='/new_path/spfileOradb.ora'

Create spfile from pfile in RAC:
In order to use the same SPFILE at startup, each RAC instance uses its own PFILE that points to one shared SPFILE on shared storage. If you are using ASM, the SPFILE is located in a shared ASM disk group.

SQL > show parameter spfile;
NAME TYPE VALUE
-------- ------- -----------------------------------------
spfile string +DATA/testdb/spfiletestdb.ora
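
With the SPFILE above, each instance's local init<SID>.ora (for example inittestdb1.ora and inittestdb2.ora; the file names are assumptions) would contain only a pointer to the shared SPFILE:
SPFILE='+DATA/testdb/spfiletestdb.ora'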

Use above commands to create spfile from pfile and pfile from spfile.

Common Errors in RAC:
You will receive the below error when modifying a parameter with SCOPE=MEMORY using SID='*'

SQL> alter system set sga_target=500m scope=memory SID=’*’;
alter system set sga_target=500m scope=memory
*
ERROR at line 1:
ORA-32018: parameter cannot be modified in memory on another instance

Solution:- You need to modify the parameter individually on each instance using SID='Instance_Name'.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Tuesday, September 1, 2009

Block Change Tracking in Oracle 10g

Oracle block change tracking improves incremental backup performance by recording the changed blocks for each datafile in a change tracking file. If change tracking is enabled, RMAN uses the block change tracking file to identify the changed blocks for incremental backups and avoids full datafile scans during the backup.

The block change tracking feature was introduced in Oracle 10g R1. By default, the block change tracking file is created as an Oracle-managed file in DB_CREATE_FILE_DEST.

By default, Oracle does not record block change information. You can enable or disable change tracking when the database is open or mounted.

Enable and Disable Block Change Tracking:
Run the below command to enable block change tracking
SQL>ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

Run the below command to create change tracking file in a specified location.
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
USING FILE '/oradata/block_change_track.ora' REUSE;

Run the below command to disable block change tracking
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

Run the below Query to monitor the status of block change tracking
SQL> SELECT FILENAME, STATUS, BYTES FROM V$BLOCK_CHANGE_TRACKING;

Moving Block Change tracking file without Database shutdown:
If your database is a 24x7 critical production system and you cannot shut it down, then follow the steps below. Please note that you must disable change tracking, and you will lose the old contents of the change tracking file if you choose this method.

Step1: Disable the change tracking
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

Step2: Re-enable it at the new location
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE 'new_location';

Moving Block Change tracking file with Database shutdown:
Step1: Determine the change tracking file:
SQL> SELECT filename FROM V$BLOCK_CHANGE_TRACKING;

Step2: Shut down the database and move or copy the tracking file.
SQL> SHUTDOWN IMMEDIATE
$ cp '/old_location/block_change_tracking.ora' '/new_location/block_change_tracking.ora'

Step3: Mount the database and rename change tracking file to new location.
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RENAME FILE ‘old_location’ TO ‘new_location’;

Step4: Open the database
SQL> ALTER DATABASE OPEN;

In a RAC (Real Application Clusters) environment, the block change tracking file must be located on shared storage so that it is accessible to all the nodes.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Monday, August 31, 2009

Delete Archivelog files without using RMAN

Here I am explaining two methods to delete archive logs and other database files from an ASM disk group without using RMAN, in Oracle 10g/11g.

Method 1: asmcmd - ASM command-line utility
ASMCMD> rm file_name
For ex:-ASMCMD> rm ‘+dgroup2/testdb/archivelogs/thread_1_seq_363.510.1’
Or
ASMCMD> rm ‘+dgroup2/testdb/datafile/USERS.250.5334166963’

If you use a wildcard, rm deletes all matches except non-empty directories (unless the -r flag is used). The rm command can delete the file or alias only if the file is not currently in use by a client database.

Method 2: SQLPLUS utility
SQL> ALTER DISKGROUP diskgroup_name DROP FILE 'file_name';
For ex:- SQL> ALTER DISKGROUP FLASH DROP FILE '+FLASH/testdb/archivelog/2009_08_11/thread_1_seq_363.510.1';

The "asmcmd" and "sqlplus" methods will not update the database views (V$ARCHIVED_LOG, V$FLASH_RECOVERY_AREA_USAGE), the controlfile, or the recovery catalog to reflect that the files have been removed.

To update the database views, control file or RMAN catalog about the deleted files, you need to run the below commands from RMAN.

RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE EXPIRED ARCHIVELOG ALL;

Click here to learn How to delete archive logs from ASM.
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Sunday, August 30, 2009

Download Oracle Grid Control and ASMLib

Oracle Enterprise Manager Grid Control is a centralized administration tool; using it you can monitor and manage your databases.

Download Oracle Enterprise Manager 10g Grid Control
http://www.oracle.com/technology/software/products/oem/index.html


Click here for steps to Install Oracle 10g Grid Control

Automatic Storage Management (ASM) is a feature introduced in Oracle Database 10g that simplifies the storage of Oracle data files, control files, log files, etc. It provides integration of the file system and the volume manager.

Download Oracle ASMLib using the below link; follow the link for your platform.
http://www.oracle.com/technology/tech/linux/asmlib/index.html

Click here for steps to Install Oracle ASM on Linux in 10g/11g.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Download oracle 8i, 9i, 10g and 11g Database

Please find the direct links to download oracle 8i, 9i, 10g and 11g Database

Download Oracle 8i Database
http://www.oracle.com/technology/software/products/oracle8i/index.html

Download Oracle 9i Database
Includes
- Client
- Enterprise Edition/Standard Edition
http://www.oracle.com/technology/software/products/oracle9i/index.html

Download Oracle 10g Database
Includes
- Client
- Clusterware
- Examples
- Gateways
http://www.oracle.com/technology/software/products/database/oracle10g/index.html

Download Oracle 11g Database
Includes
- Client
- Clusterware
- Examples
- Gateways
http://www.oracle.com/technology/products/database/oracle11g/index.html

Download Oracle Enterprise Manager Cloud Control 12c
Includes
- Agent
- Plug-in's
- Pre-upgrade Console
- Connectors for Heterogeneous Mgmt
- Installation Guide

Download Oracle GoldenGate
Includes
- Management Pack
- Veridata

Click here for all Oracle software downloads
http://www.oracle.com/technology/software/index.html

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Download SQL Developer, Migration Workbench

Oracle SQL Developer:- Use Oracle SQL Developer Migrations to migrate Microsoft Access, MySQL, Sybase and Microsoft SQL Server databases to Oracle Database.

Download Oracle SQL Developer Tool: http://www.oracle.com/technology/tech/migration/workbench/index_sqldev_omwb.html

Oracle Migration Workbench:- The Oracle Migration Workbench tool simplifies the migration from non-Oracle databases to Oracle. This tool has now entered a desupport cycle; its existing functionality has been incorporated into Oracle SQL Developer.

Download Oracle Migration Workbench Tool: http://www.oracle.com/technology/tech/migration/workbench/index.html

See the Oracle Migration Technology Center for help in making decisions and migrating your databases and applications to the Oracle platform. http://www.oracle.com/technology/tech/migration/index.html

Please see the SQL Developer Forum for any question/issues
http://forums.oracle.com/forums/forum.jspa?forumID=260

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Download Oracle Client, Instant Client

Oracle Client:
The Oracle client is required software if you wish to access an Oracle database.

To download the Oracle client, choose the Oracle version (8i/9i/10g/11g) and select the Oracle release for the operating system that you want; this will take you to the next page, where you will find the Oracle client download.

Oracle Instant Client:
Instant Client allows you to run your applications without installing the standard Oracle client or having an ORACLE_HOME

Click here to download Oracle Instant Client for all O/S platforms.
http://www.oracle.com/technology/software/tech/oci/instantclient/index.html

For more information on Instant Client, see the official Instant Client site

Please see the Oracle Instant Client discussion forum for any questions/issues.
http://forums.oracle.com/forums/forum.jspa?forumID=190

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Thursday, August 27, 2009

Transportable tablespace export/Import on same endian platforms

A transportable tablespace export is manageable across platforms and exports only metadata. With cross-platform transportable tablespaces the data movement is simpler and faster.

This mode requires that you have the EXP_FULL_DATABASE role.

Please note that:
1. The source and target database must use the same character set and national character set.
2. You cannot transport a tablespace to a target database where a tablespace with the same name already exists.
3. Transportable tablespace exports cannot be restarted once stopped.
4. The target database must be at the same or a higher release level than the source database.

For example, suppose you want to transport a tablespace "test_user_tbs" from a Linux 64-bit (little endian) machine TESTLINUX to a Microsoft Windows 64-bit (little endian) machine TESTWIN. Both the source and target platforms are of LITTLE endian type. The data file for the tablespace "test_user_tbs" is "test_user_tbs01.dbf".

Follow the below steps

Step1: Find the Operating system byte order on Source and Target Database
SQL > select * from v$transportable_platform order by platform_id;
PLATFORM_ID PLATFORM_NAME                      ENDIAN_FORMAT
----------- ---------------------------------- -------------
          1 Solaris[tm] OE (32-bit)            Big
          2 Solaris[tm] OE (64-bit)            Big
          3 HP-UX (64-bit)                     Big
          4 HP-UX IA (64-bit)                  Big
          5 HP Tru64 UNIX                      Little
          6 AIX-Based Systems (64-bit)         Big
          7 Microsoft Windows IA (32-bit)      Little
          8 Microsoft Windows IA (64-bit)      Little
          9 IBM zSeries Based Linux            Big
         10 Linux IA (32-bit)                  Little
         11 Linux IA (64-bit)                  Little
         12 Microsoft Windows 64-bit for AMD   Little

Step 2: Make the tablespace “READ ONLY”
SQL> alter tablespace test_user_tbs read only;

Step3: Export metadata
(i) Using export utility
$ exp testuser/test123 tablespaces=test_user_tbs transport_tablespace=y file=exp_test_user_tbs.dmp log=exp_test_user_tbs.log

(ii) Using data pump export utility
$ expdp system/password transport_tablespaces=test_user_tbs transport_full_check=y directory=export_dir dumpfile=expdp_test_user_tbs.dmp logfile=expdp_test_user_tbs.log

TRANSPORT_FULL_CHECK=Y specifies that a check is made for dependencies between objects inside the transportable tablespace and objects outside it.

The file "exp_test_user_tbs.dmp" or "expdp_test_user_tbs.dmp" contains only metadata.

Step 4: Copy the files to Target system
If you are using FTP use binary option.

Step 5: Run the below import command to plug the tablespace into the database.

Click here to see the Instructions to create Directory and grant privileges.

(i) Using import utility
$ imp test_user/test123 tablespaces=test_user_tbs transport_tablespace=y file=exp_test_user_tbs.dmp datafiles='test_user_tbs01.dbf' log=imp_test_user_tbs.log

(ii) Using data pump import utility
$ impdp test_user/test123 transport_datafiles='test_user_tbs01.dbf' directory=import_dir dumpfile=expdp_test_user_tbs.dmp logfile=impdp_test_user_tbs.log

You can use REMAP_SCHEMA=(source:target), if you want to import into another schema.

Step 6: Put the tablespace in read/write mode.
SQL> ALTER TABLESPACE TEST_USER_TBS READ WRITE;

Now the database has a tablespace named "test_user_tbs" and the objects in the tablespace are available.

Please see the Data pump Export/Import related documents:
Click here for Data Pump Export modes and Interfaces.
Click here for Data Pump Export/Import with Examples.
Click here for Transportable Tablespace across different endian Platforms.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Wednesday, August 26, 2009

Oracle Data Pump Export/Import

The Oracle Data Pump utility is used for exporting data and metadata into a set of operating system files, and it is a newer, faster and more flexible alternative to the original export/import utilities.

The Oracle Data Pump utility was introduced in Oracle 10g Release 1 and can be invoked using the expdp (export) and impdp (import) commands. You need to specify the export/import parameters to determine the operation, and you can specify the parameters on the command line or in a parameter file, as shown below.
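
For example, the same parameters can be placed in a parameter file and passed via PARFILE (the file name and values here are illustrative; they mirror the full-database export example later in this post):
$ cat expdp_fulldb.par
full=y
directory=export_dir
dumpfile=expdp_fulldb.dmp
logfile=expdp_fulldb.log

$ expdp test_user/test123 parfile=expdp_fulldb.par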

The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP package to execute the commands, and the DBMS_METADATA package is used to move the metadata.

Please note that in Oracle RAC it is not possible to start or restart Data Pump jobs on one instance if there are jobs currently running on other instances.

Oracle Data Pump Export :-

1. Create directory object as SYS user.
SQL> create or replace directory export_dir as '/oradata/export';

2. Grant read/write privileges on the directory to the user who invokes the Data Pump export.
SQL> grant read,write on directory export_dir to test_user;

3. Take Data Pump Export

Click here to see Roles/privileges required for Export modes.

Oracle data pump export examples for all 5 modes.

(i) Full Database Export
$ expdp test_user/test123 full=y directory=export_dir dumpfile=expdp_fulldb.dmp logfile=expdp_fulldb.log

(ii) Schema Export
$ expdp test_user/test123 schemas=test_user directory=export_dir dumpfile=expdp_test_user.dmp logfile=expdp_test_user.log

If you want to export more than one schema, specify the schema names separated by commas.

(iii)Table Export
$ expdp test_user/test123 tables=emp,dept directory=export_dir dumpfile=expdp_tables.dmp logfile=expdp_tables.log

You can specify more than one table.

(iv) Tablespace Export
$ expdp test_user/test123 tablespaces=test_user_tbs directory=export_dir dumpfile=expdp_tbs.dmp logfile=expdp_tbs.log

You can specify more than one tablespace.

(v) Transportable tablespace
$ expdp test_user/test123 transport_tablespaces=test_user_tbs transport_full_check=y directory=export_dir dumpfile=expdp_trans_tbs.dmp logfile=expdp_trans_tbs.log

Click here to learn more on Transportable Tablespace with examples.

Oracle Data Pump Import :-
The Data Pump Import utility is used for loading export dump files into a target system, and one or more dump files can be loaded.

Copy the dump file(s) to the target system where you want to import.

1. Create directory object as SYS user.
SQL> create directory import_dir as '/oradata/import';

2. Grant Read/Write privilege on the Directory to the user, who invokes the Data Pump import.
SQL> grant read,write on directory import_dir to test_user;

3. Import the data using Data Pump Import.

Below are Oracle Data Pump import examples for all five modes.

(i) Full Database Import
$ impdp test_user/test123 full=y directory=import_dir dumpfile=expdp_fulldb.dmp logfile=imp_fulldb.log

(ii) Schema Import
$ impdp test_user/test123 schemas=test_user directory=import_dir dumpfile=expdp_test_user.dmp logfile=impdp_test_user.log

(iii) Table Import
$ impdp test_user/test123 tables=emp,dept directory=import_dir dumpfile=expdp_tables.dmp logfile=impdp_tables.log

From 11g, you can rename a table during the import:
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename
$ impdp test_user/test123 remap_table=test_user.emp:emp1 directory=import_dir dumpfile=expdp_tables.dmp logfile=impdp_tables.log

Tables will not be remapped if they already exist, even if TABLE_EXISTS_ACTION is set to TRUNCATE or APPEND.
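For illustration, a hedged example that appends into an existing table instead of failing (file names follow the examples above; the log file name is a placeholder):

$ impdp test_user/test123 tables=emp directory=import_dir dumpfile=expdp_tables.dmp table_exists_action=append logfile=impdp_emp_append.log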

(iv) Tablespace Import
$ impdp test_user/test123 tablespaces=test_user_tbs directory=import_dir dumpfile=expdp_tbs.dmp logfile=impdp_tbs.log

The above example imports all tables that have data in tablespace test_user_tbs, and it assumes that the tablespace already exists.

(v) Transportable Tablespace
Click here to import data using the Transportable Tablespace method.

Common Errors with Data pump import (impdp) utility:-

1. ORA-31631: privileges are required
   ORA-39122: Unprivileged users may not perform REMAP_SCHEMA remapping
Cause: A user attempted to remap objects during an import but lacked the IMPORT_FULL_DATABASE privilege.
Action: Retry the job from a schema that owns the IMPORT_FULL_DATABASE privilege.

2. ORA-31631: privileges are required
    ORA-39161: Full database jobs require privileges
Cause: Either an attempt to perform a full database export without the EXP_FULL_DATABASE role or an attempt to perform a full database import over a network link without the IMP_FULL_DATABASE role.
Action: Retry the operation in a schema that has the required roles.

3. ORA-01950: no privileges on tablespace "string"
    Cause: User does not have privileges to allocate an extent in the specified tablespace.
   Action: Grant the user the appropriate system privileges or grant the user space resource on the tablespace.
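A hedged example of the second action, using the user and tablespace names from this post:

SQL> alter user test_user quota unlimited on test_user_tbs;

A specific quota (for example 500m instead of unlimited) can be granted the same way.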

Click here to learn Roles/ privileges required for Data pump Export and Import.

4. import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
    IMP-00017: following statement failed with ORACLE error 3113:
    "BEGIN SYS.DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE(SYS.DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT_OBJ, 'SYS', TRUE);"
Cause: Import fails while executing the above statement.
Action: Log in as SYS and run the following scripts:
$ORACLE_HOME/rdbms/admin/dbmsread.sql
$ORACLE_HOME/rdbms/admin/prvtread.plb

5. Import failed with below errors
IMP-00003: ORACLE error 6550 encountered
ORA-06550: line 2, column 1:
PLS-00201: identifier 'CTXSYS.DRIIMP' must be declared
Cause: The user that you are importing does not have privileges on the CTXSYS.DRIIMP package, or the CTXSYS user does not exist.
Action: Create the CTXSYS user or grant the required permissions.

Please see the Data pump Export and Import related documents:
Click here for Data Pump Export modes and Interfaces.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Tuesday, August 25, 2009

Data Pump Export modes and Interfaces

Oracle Data Pump provides 5 Export modes:-

1. Full Export Mode: A full export is specified using the FULL parameter, and the entire database is exported in this mode. The EXP_FULL_DATABASE role is required for this mode.
2. Schema Mode: A schema export is specified using the SCHEMAS parameter, and you can specify more than one schema separated by commas. The EXP_FULL_DATABASE role is required if you want to export other users' schemas.
3. Table Mode: A table export is specified using the TABLES parameter, and you can export a specified set of tables and their dependent objects. The EXP_FULL_DATABASE role is required to export objects that are not in your own schema.
4. Tablespace Mode: A tablespace export is specified using the TABLESPACES parameter, and you can export all or part of the tables contained in a specified set of tablespaces, including dependent objects. Privileged users get all tables; non-privileged users get only the tables in their own schemas.
5. Transportable Tablespace Mode: A transportable tablespace export is specified using the TRANSPORT_TABLESPACES parameter. The EXP_FULL_DATABASE role is required for this mode.

You can interact with Data Pump Export using the below interfaces:
1. Command line – specify all parameters directly on the command line.
2. Parameter file – specify all command-line parameters in a file, i.e. a PARFILE.
3. Interactive – this mode is entered by pressing Ctrl+C during an export, after which you can run various job commands (see the example below).
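As a brief sketch of interactive mode (the job name shown is the typical default, but yours may differ), pressing Ctrl+C drops you to an Export> prompt:

Export> status
Export> stop_job=immediate

You can later reattach to the stopped job from a new session and resume it:

$ expdp test_user/test123 attach=SYS_EXPORT_SCHEMA_01
Export> start_job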

Oracle Data Pump uses four methods for moving data in and out.
1. Datafile copy – Copying the datafiles is the fastest method of moving data; this method uses the TRANSPORT_TABLESPACES and TRANSPORTABLE parameters.
2. Direct path – This is the second fastest method; data is moved with minimal interpretation and the SQL layer of the database is bypassed.
3. External tables – This method creates an external table mapping to a dump file.
4. Network link import – This method is used to retrieve data from a remote database over a network link, and no dump files are involved (see the example below).
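A hedged sketch of the network link method (source_db_link is a hypothetical database link pointing at the source database); note that no dump file is written, and the directory object is only needed for the log file:

$ impdp test_user/test123 schemas=test_user network_link=source_db_link directory=import_dir logfile=impdp_netlink.log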

Please see the Data pump Export/Import related documents:
Click here to learn Oracle Data Pump Export/Import with examples.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Friday, August 21, 2009

Delete archive logs using RMAN.

You can delete archive logs using RMAN with or without a recovery catalog.
Use the below commands to delete archive logs from an ASM disk group or other file systems.

Method 1: Delete archive logs using RMAN with Recovery Catalog

RMAN> connect catalog userid/password@connection_string
RMAN> connect target
RMAN> delete force archivelog like 'archive_log_name_or_pattern';
For ex: -
RMAN> delete force archivelog like '+FLASH/testdb/archivelog/2009_08_11/thread_1_seq_366.663.1';

To delete multiple files, place a pattern after the LIKE clause and use the % wildcard for file names.
RMAN> delete force archivelog like '+FLASH/asm/archivelog/2009_08_11/thread_1_%';

Method 2: Delete archive logs using RMAN with out Recovery Catalog.

RMAN> connect target
RMAN> delete force archivelog like '+FLASH/asm/archivelog/2009_08_11/thread_1_seq_366.663.1';

RMAN will update the controlfile/recovery catalog to reflect that the files have been removed.
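As an alternative sketch (the seven-day retention below is just an example), you can also delete archive logs by age rather than by name, and NOPROMPT skips the confirmation:

RMAN> delete archivelog all completed before 'SYSDATE-7';
RMAN> delete noprompt archivelog all completed before 'SYSDATE-7';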

Click here to learn How to delete the database files and archive logs without using RMAN.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Monday, August 17, 2009

Install and Configure ASMLib in 10g (Automatic Storage Management)

The Oracle ASM feature was introduced in Oracle 10g Release 1.

There are two methods to configure ASM on Linux.

1. Configure ASM with ASMLib I/O: This method creates all Oracle database files on raw block devices, which are managed by ASM using ASMLib calls. ASMLib works with block devices and raw devices are not required with this method.

2. Configure ASM with Standard Linux I/O: This method creates Oracle database files on raw character devices, which are managed by ASM using standard Linux I/O system calls. It requires creating RAW devices for all disk partitions used by the ASM.

Here we will use the “Configure ASM with ASMLib I/O” method.

Step 1: Download and Install ASMLib

Download Oracle “ASMLib” software from below link and follow the link for your platform.
http://www.oracle.com/technology/tech/linux/asmlib/index.html

You must install all three packages for the kernel you are running. Use the “uname -r” command to determine the version of your kernel.

oracleasm-support-version.cpu_type.rpm
oracleasm-kernel-version.cpu_type.rpm
oracleasmlib-version.cpu_type.rpm

See the below example to install the packages and run the command as root.

# rpm -ivh oracleasm-support-2.0.3-1.x86_64.rpm \
> oracleasm-2.6.9-67.ELsmp-2.0.3-1.x86_64.rpm \
> oracleasmlib-2.0.2-1.x86_64.rpm

Step 2: Configure and Enable Oracle ASM

Run the below command to configure Oracle ASM; it will ask for the user and group that will own the ASM driver access point by default.

# /etc/init.d/oracleasm configure
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration [ OK ]
Creating /dev/oracleasm mount point [ OK ]
Loading module "oracleasm" [ OK ]
Mounting ASMlib driver filesystem [ OK ]
Scanning system for ASM disks [ OK ]

This command will load the ASM driver and mount the ASM driver filesystem. By selecting “y” during the configuration, the system will always load the module and mount the file system on system boot.

Run the below command to enable automatic start
#/etc/init.d/oracleasm enable
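Once the driver is configured, disks can be labeled for ASM. A hedged example follows (the device name /dev/sdb1 and the label VOL1 are placeholders for your own partitions):

# /etc/init.d/oracleasm createdisk VOL1 /dev/sdb1
# /etc/init.d/oracleasm scandisks
# /etc/init.d/oracleasm listdisks
VOL1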

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Friday, August 14, 2009

ORA-07445 Error

The ORA-07445 is a very common error across many versions, but it can have many different causes.

Here are some reasons for this error:-
1. Software Bugs
2. High RAM Usage.
3. OS Limits
4. Shared pool corruptions due to I/O slave errors (Refer Metalink note: 1404681)
5. SGA corruption (Refer Metalink note: 5202899/5736850)
6. Library cache lock by query coordinator (Refer Metalink note: 3271112)
7. Hardware Errors
8. Oracle Block Corruptions
9. Program Errors

Oracle Metalink provides a lookup tool to search ORA-00600/ORA-07445 errors and check for workarounds or known bugs.

Error: ORA-07445: exception encountered: core dump [%s] [%s] [%s] [%s] [%s] [%s]
Cause: An OS exception occurred which should result in the creation of a core file. This is an internal error.
Action: Contact your customer support representative.

The ORA-07445 error has a multitude of causes, and contacting Oracle technical support through Metalink is always required.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Wednesday, August 12, 2009

Voting disk Backup and Recovery

The voting disk manages node membership information and is used by the Cluster Synchronization Services daemon (CSSD).

Backing up Voting Disks:-
Run the below command to back up the voting disk.
$ dd if=voting_disk_name of=backup_file_name
or
$ dd if=voting_disk_name of=backup_file_name bs=4k

Recovering Voting Disks:-
Run the below command to recover a voting disk
$ dd if=backup_file_name of=voting_disk_name

You can change the Voting Disk Configuration dynamically after the installation.
Please note that you need to run the command as “root”.

Run the below command to add a voting disk:
# crsctl add css votedisk_path

You can have up to 32 voting disks.

Run the following command to remove a voting disk:
# crsctl delete css votedisk_path
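To verify the current configuration before or after adding or removing a disk, you can list the configured voting disks (output format varies by version):

# crsctl query css votedisk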

Click here to learn Backup and Recovery of OCR

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Restore and Recover OCR from backup (Oracle Cluster Registry)

Step 1: Locate the physical OCR backups using the -showbackup command.
# ocrconfig -showbackup

Step 2: Review the contents
# ocrdump -backupfile backup_file_name

Step 3: Stop the Oracle clusterware on all the nodes.
#crsctl stop crs

Step 4: Restore the OCR backup
# ocrconfig -restore $CRS_HOME/cdata/crs/day.ocr
OR
Restore the OCR from an export/logical backup.
# ocrconfig -import export_file_name
For ex: - # ocrconfig -import /backup/oracle/exp_ocrbackup.dmp

Step 5: Restart the Clusterware on all nodes.
#crsctl start crs

Step 6: Check the OCR integrity
# cluvfy comp ocr -n all

Click here to learn How to backup and recover OCR.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/

Backup and Recovery of OCR - Oracle Cluster Registry

The Oracle Cluster Registry (OCR) file is a key component of the cluster. It maintains information about the cluster node list, instance-to-node mappings, and application resource profiles such as VIP addresses, services, etc.

There are two methods for OCR Backup (Oracle Cluster Registry)

1. Automatically generated OCR files under $CRS_HOME/cdata/crs
2. OCR export/logical backup

The Oracle Clusterware automatically creates OCR backups:
- Every four hours: the last three copies
- At the end of the day: the last two copies
- At the end of the week: the last two copies

To back up the OCR file, copy the automatically generated file from $CRS_HOME/cdata/crs to your backup directory (for example /backup/oracle), as shown in the sketch below.

You must run the backup as “root”.
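A hedged example of copying the latest automatic backup (backup00.ocr is the typical name of the most recent copy, but confirm it with -showbackup on your system):

# ocrconfig -showbackup
# cp $CRS_HOME/cdata/crs/backup00.ocr /backup/oracle/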

Run the below command to take OCR export backup.
# ocrconfig -export export_file_name

Click here to learn Restore and Recovery OCR from Backup.

Click here to learn Voting disk Backup and Recovery.

Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/