Friday, December 14, 2018

ORA-16664 with ORA-01031 password file issue

Data Guard Broker switchover fails with an ORA-16664 error

DGMGRL> show configuration;
Configuration - ORCL_DG
Protection Mode: MaxPerformance
Databases:
orcl - Primary database
orcl_stdby - Physical standby database
Error: ORA-16664: unable to receive the result from a database

Fast-Start Failover: DISABLED
Configuration Status:
ERROR

Errors from Primary DRC log:

2012-01-02 13:46:23.675 NSV2: database actually reached ORCL_STDBY
2012-01-02 13:46:23.677 NSV2: Failed to send message to site ORCL_STDBY. Error code is ORA-16642.
2012-01-02 13:46:23.678 03000000 973032160 DMON: Database ORCL_STDBY returned ORA-16642

Errors from Standby DRC log:

2012-01-03 13:51:24.902 Connection to database ORCL returns ORA-01031.
2012-01-03 13:51:24.903 Please check database ORCL is using a remote password file,
2012-01-03 13:51:24.904 its remote_login_passwordfile is set to SHARED or EXCLUSIVE,
2012-01-03 13:51:24.905 and the SYS password is the same as this database.
2012-01-03 13:51:24.905 NSV0: Failed to connect to remote database ORCL. Error is ORA-01031
2012-01-03 13:51:24.906 NSV0: Failed to send message to site ORCL. Error code is ORA-01031.

Solution: A password file mismatch between the primary and the standby caused the issue. Make sure the same password file is copied to all nodes.

Also verify that the connect identifier for the standby database is correct and that you are able to tnsping it and connect using SQL*Plus as SYSDBA.
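The checks suggested in the standby's DRC log can be verified directly from SQL*Plus on each site. A minimal sketch using the standard dictionary views (output will differ per environment):

```sql
-- Should return EXCLUSIVE (or SHARED) on both primary and standby
SHOW PARAMETER remote_login_passwordfile;

-- Users recorded in the password file
SELECT username, sysdba, sysoper FROM v$pwfile_users;
```

If SYS is missing from V$PWFILE_USERS on either side, the password file is not being used and should be recreated and copied.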

Refer : Troubleshooting ORA-16191 and ORA-1017/ORA-1031 in Data Guard Log Transport Services or Data Guard Broker (Doc ID 1368170.1)

Regards,
Satishbabu Gunukula, Oracle ACE
http://www.oracleracexpert.com



ORA-16664 error when using ASM filesystem

We received the error ORA-16664: unable to receive the result from a database when enabling the Data Guard Broker configuration.

DGMGRL> show configuration verbose;

Configuration - ORCL_DG

Protection Mode: MaxAvailability
Databases:
orcl - Primary database
orcl_stdby - Physical standby database
Error: ORA-16664: unable to receive the result from a database

1. In one scenario the problem was with the primary's REDO_TRANSPORT_USER init parameter.
The value of this parameter contained stray characters, and removing them helped.

SQL> show parameter redo_transport

NAME                  TYPE    VALUE
--------------------- ------- --------------
redo_transport_user   string  ???DG_USER???


2. The database was running on ASM, but the Data Guard Broker configuration files had been created locally on the filesystem.

In this case, dropping the existing broker configuration files from the primary and standby local filesystems and recreating the broker configuration on ASM resolved the error.
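The relocation can be sketched as below, run on each database; the +DATA disk group and file names are assumptions, so substitute your own:

```sql
-- Stop the broker before changing the config file locations
ALTER SYSTEM SET dg_broker_start=FALSE;

-- Point both broker metadata files at ASM (+DATA path is an example)
ALTER SYSTEM SET dg_broker_config_file1='+DATA/ORCL/dr1ORCL.dat' SCOPE=BOTH;
ALTER SYSTEM SET dg_broker_config_file2='+DATA/ORCL/dr2ORCL.dat' SCOPE=BOTH;

-- Restart the broker so it recreates the files in the new location
ALTER SYSTEM SET dg_broker_start=TRUE;
```

The broker recreates the configuration files on the next start, so the old local copies can then be removed.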

Regards
Satishbabu Gunukula
http://www.oracleracexpert.com



Wednesday, December 12, 2018

ORA-16664: unable to receive the result from a database with ORA-12514

We received the below error when trying to enable the Data Guard Broker configuration

Error: ORA-16664: unable to receive the result from a database
Dataguard broker log has below errors

Failed to connect to remote database sat. Error is ORA-12514
Failed to send message to member sat. Error code is ORA-12514.


To identify the issue, review the configuration:

DGMGRL> SHOW CONFIGURATION;
Configuration - orcl_dgbroker_fsfconf
Protection Mode: MaxAvailability
Members:
orcl - Primary database
orcl_stdby - Physical standby database
Error: ORA-16664: unable to receive the result from a member

Fast-Start Failover: DISABLED

DGMGRL> show database verbose 'ORCL';
Database - ORCL
Role: PRIMARY
Intended State: TRANSPORT-ON
Instance(s):
ORCL
Properties:
DGConnectIdentifier = 'ORCL'
ObserverConnectIdentifier = ''
LogXptMode = 'SYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'optional'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '30'
TransportLagThreshold = '30'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
ApplyInstances = '0'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DataGuardSyncLatency = '0'
DbFileNameConvert = ''
LogFileNameConvert = 'dummy, dummy'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
PreferredObserverHosts = ''
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=node1.oracleracexpert.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL_DGMGRL)(INSTANCE_NAME=ORCL)(SERVER=DEDICATED)))'
StandbyArchiveLocation = '/oraarch/ORCL'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'arch_ORCL_%r_%s_%t.arc'
TopWaitEvents = '(monitor)'

Log file locations:
Alert log : /oracle/diag/rdbms/ORCL/ORCL/trace/alert_ORCL.log
Data Guard Broker log : /oracle/diag/rdbms/ORCL/ORCL/trace/drcORCL.log

Database Status:
SUCCESS

DGMGRL> show database verbose 'ORCL_stdby';
Database - ORCL_stdby
Role: PHYSICAL STANDBY
Intended State: APPLY-ON
Transport Lag: (unknown)
Apply Lag: (unknown)
Average Apply Rate: (unknown)
Active Apply Rate: (unknown)
Maximum Apply Rate: (unknown)
Real Time Query: OFF
Instance(s):
ORCL
Properties:
DGConnectIdentifier = 'ORCL_stdby'
ObserverConnectIdentifier = ''
LogXptMode = 'SYNC'
RedoRoutes = ''
DelayMins = '0'
Binding = 'OPTIONAL'
MaxFailure = '0'
MaxConnections = '1'
ReopenSecs = '300'
NetTimeout = '30'
RedoCompression = 'DISABLE'
LogShipping = 'ON'
PreferredApplyInstance = ''
ApplyInstanceTimeout = '0'
ApplyLagThreshold = '30'
TransportLagThreshold = '30'
TransportDisconnectedThreshold = '30'
ApplyParallel = 'AUTO'
ApplyInstances = '0'
StandbyFileManagement = 'AUTO'
ArchiveLagTarget = '0'
LogArchiveMaxProcesses = '4'
LogArchiveMinSucceedDest = '1'
DataGuardSyncLatency = '0'
DbFileNameConvert = ''
LogFileNameConvert = 'dummy, dummy'
FastStartFailoverTarget = ''
InconsistentProperties = '(monitor)'
InconsistentLogXptProps = '(monitor)'
SendQEntries = '(monitor)'
LogXptStatus = '(monitor)'
RecvQEntries = '(monitor)'
PreferredObserverHosts = ''
StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=node2.oracleracexpert.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=ORCL_STDBY_DGMGRL)(INSTANCE_NAME=ORCL)(SERVER=DEDICATED)))'
StandbyArchiveLocation = '/oraarch/ORCL'
AlternateLocation = ''
LogArchiveTrace = '0'
LogArchiveFormat = 'arch_ORCL_%r_%s_%t.arc'
TopWaitEvents = '(monitor)'

Log file locations:
(Failed to retrieve log file locations.)
Database Status:
DGM-17016: failed to retrieve status for database "ORCL_stdby"
ORA-16664: unable to receive the result from a member


Solution:

1. Make sure you have a static listener entry with the <db_unique_name>_DGMGRL service name

cat $ORACLE_HOME/network/admin/listener.ora

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = ORCL_DGMGRL) <----------------add _DGMGRL
      (ORACLE_HOME = /<oracle_home>)
      (SID_NAME = ORCL)
    )
  )

2. In the same way, change the static entry of the current primary as well, for future role transitions. For a non-default port, set LOCAL_LISTENER on the database where ORA-12514 was thrown.

SQL> alter system set local_listener='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=<hostIP>)(PORT=<port>)))';
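After changing LOCAL_LISTENER, the instance can be told to re-register its services with the listener immediately, rather than waiting for the next registration cycle:

```sql
-- Force immediate service registration with the listener
ALTER SYSTEM REGISTER;
```

A subsequent lsnrctl status on the listener host should then show the dynamically registered service alongside the static _DGMGRL entry.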

Refer to the below Oracle note for more scenarios
1. Causes and Solutions for DGMGRL ORA-16664 (Data Guard Broker) (Doc ID 2494260.1) 

Regards
Satishbabu Gunukula

Saturday, December 8, 2018

RMAN-05001: auxiliary file name conflicts with a file used by the target database and RMAN-05501

The RMAN duplicate for standby failed with the below error

RMAN-05501: aborting duplication of target database
RMAN-05001: auxiliary file name /oradata1/ORCL/system01.dbf conflicts with a file used by the target database

The duplicate script will look like below…

RMAN> run {
set until time "to_date('2018/12/08 14:00:00','yyyy/mm/dd HH24:MI:SS')";
allocate auxiliary channel aux1 type 'sbttape';
allocate auxiliary channel aux2 type 'sbttape';
duplicate target database for standby dorecover;
}

If the primary and standby had different directory structures you would not see this issue. It happens because the primary and standby datafile directory structures are the same.

To skip this check, use the NOFILENAMECHECK clause. This instructs RMAN not to check whether the target database file names are the same as the auxiliary file names.

Now the script will look like below and you should not see this error.

RMAN> run {
set until time "to_date('2018/12/08 14:00:00','yyyy/mm/dd HH24:MI:SS')";
allocate auxiliary channel aux1 type 'sbttape';
allocate auxiliary channel aux2 type 'sbttape';
duplicate target database for standby dorecover nofilenamecheck;
}

Thanks
Satishbabu Gunukula, Oracle ACE

Friday, December 7, 2018

ORA-16053: DB_UNIQUE_NAME ORCL_STBY is not in the Data Guard Configuration with ORA-02097 error

The ORA-02097 is a very generic error; you need to check the accompanying error message to understand the issue.

ORA-02097: parameter cannot be modified because specified value is invalid
ORA-16053: DB_UNIQUE_NAME ORCL_STBY is not in the Data Guard Configuration

In my case I was trying to update DB_UNIQUE_NAME but it failed, because the specified DB_UNIQUE_NAME was not in the Data Guard configuration.

Before you update DB_UNIQUE_NAME parameter make sure LOG_ARCHIVE_CONFIG is enabled and it has valid DB_UNIQUE_NAME.

ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(ORCL,ORCL_STDBY)';

You can find the list of valid DB_UNIQUE_NAMEs in the V$DATAGUARD_CONFIG view.
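A quick way to list them before changing the parameter:

```sql
-- DB_UNIQUE_NAMEs currently known to this database
SELECT db_unique_name FROM v$dataguard_config;
```

Any name you set must appear in (or be added to) the DG_CONFIG list of LOG_ARCHIVE_CONFIG first.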

Thanks,
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com

Wednesday, November 14, 2018

ORA-00060: deadlock resolved & ORA-12012: error on auto execute of job

We recently upgraded a database to 12c and started receiving the below errors in the alert.log

ORA-00060: deadlock resolved; details in file /home/oracle/diag/rdbms/ORCL/ORCL/trace/ORCL_j001_200521.trc
ORA-12012: error on auto execute of job "SYS"."ORA$AT_SA_SPC_SY_5723"
ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "SYS.DBMS_SPACE", line 2741
ORA-06512: at "SYS.DBMS_HEAT_MAP_INTERNAL", line 716
ORA-06512: at "SYS.DBMS_HEAT_MAP_INTERNAL", line 1164
ORA-06512: at "SYS.DBMS_HEAT_MAP", line 228
ORA-06512: at "SYS.DBMS_SPACE", line 2747


After investigation we found that this is a bug in the 12.2 release and Oracle is working on it. You can refer to the below Oracle Support notes on this.

Bug 24687075 - SPACE ADVISOR TASKS/JOBS HITTING DEADLOCKS WITH GATHER DB STATS JOBS
In 12.2 Auto Space Advisor Job Fails With ORA-60 (Doc ID 2321020.1)


If you have set up alerts for ORA- errors, then either add exclusions to your script or run the below command to disable the job until Bug 24687075 is fixed.

SQL> EXEC DBMS_AUTO_TASK_ADMIN.DISABLE('AUTO SPACE ADVISOR',NULL, NULL);
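The change can be verified afterwards from the standard autotask dictionary view:

```sql
-- Confirm the auto space advisor task is disabled
SELECT client_name, status
FROM   dba_autotask_client
WHERE  client_name = 'auto space advisor';
```

The STATUS column should now show DISABLED; re-enable with DBMS_AUTO_TASK_ADMIN.ENABLE once the fix is applied.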

Regards
Satishbabu Gunukula, Oracle ACE

Monday, November 12, 2018

ORA-04030: out of process memory when trying to allocate 4024 bytes (kxs-heap-b,kghsseg_1 : kokeibib)

Normally users see the ORA-04030 error due to a shortage of RAM, a small PGA, or kernel parameter configuration.

ORA-04030: out of process memory when trying to allocate 4024 bytes (kxs-heap-b,kghsseg_1 : kokeibib)
Non critical error ORA-48913 caught while writing to trace file "/u01/home/oracle/diag/rdbms/ORCL/ORCL/incident/incdir_8937/ORCL_ora_11258_i8937.trc"
Error message: ORA-48913: Writing into trace file failed, file size limit [5242880] reached
ORA-04030: out of process memory when trying to allocate 169040 bytes (pga heap,kgh stack)
ORA-04030: out of process memory when trying to allocate 4024 bytes (kxs-heap-b,kghsseg_1 : kokeibib)
Non critical error ORA-48913 caught while writing to trace file "/u01/home/oracle/diag/rdbms/ORCL/ORCL/incident/incdir_8938/ORCL_ora_11258_i8938.trc"
Error message: ORA-48913: Writing into trace file failed, file size limit [5242880] reached

Users can refer to Metalink Note 233869.1, "Diagnosing and Resolving ORA-4030 errors".

From the error it is clear that the process ran "out of process memory", where Oracle cannot get the required RAM. But note that you need to analyze the trace file to find the root cause.

In my case I found that an un-optimized PL/SQL job was consuming the resources and caused the issue.

Current SQL Statement for this session (sql_id=gjn38xnrxcfbj):
BEGIN orcl_server_bom_script.all_level(:1 , :2 , :3 , :4 , :5 , :6 , :7 , :8 , :9 , :10 , :11 , :12 , :13 , :14 , :15 , :16 , :17 , :18 , :19 , :20 , :21 ) ; END;
57% pid 67: 4089 MB used of 4092 MB allocated <= CURRENT PROC
Dump of Real-Free Memory Allocator Heap [0x7f2c640b4000]
mag=0xfefe0001 flg=0x5000003 fds=0x6 blksz=65536
blkdstbl=0x7f2c640b4010, iniblk=66560 maxblk=524288 numsegs=115
In-use num=65424 siz=4290117632, Freeable num=0 siz=0, Free num=1 siz=20512768


In our case the max_map_count value was not enough, and we needed to investigate why the PL/SQL processes required such a large amount of memory. In this case adjusting PGA_AGGREGATE_TARGET or MEMORY_TARGET will not limit the size to which processes can grow and will not help.

To fix the issue, increase the value of max_map_count. To find the current value of max_map_count, run the below command

$cat /proc/sys/vm/max_map_count
65530

To change the value, run the below command
sysctl -w vm.max_map_count=<value greater than 65530>
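sysctl -w only changes the running kernel. To make the change persistent across reboots, the same setting can be added to /etc/sysctl.conf (the value below is only an example) and reloaded with sysctl -p:

```
# /etc/sysctl.conf
vm.max_map_count = 262144
```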

Follow below metalink note to change the value of vm.max_map_count.
Modification of "vm.max_map_count" on Oracle Linux 6 (Doc ID 2195159.1) 

Regards
Satishbabu Gunukula, Oracle ACE

Wednesday, October 17, 2018

opidcl aborting process unknown ospid (xxxx) as a result of ORA-1000


By looking at the error message we can easily find the issue

01000, 00000, "maximum open cursors exceeded"

Cause: A host language program attempted to open too many cursors. The initialization parameter OPEN_CURSORS determines the maximum number of cursors per user.

Action: Modify the program to use fewer cursors. If this error occurs often, shut down Oracle, increase the value of OPEN_CURSORS, and then restart Oracle.

First you need to work with application team and find out why the cursors are not closing.
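To help narrow down the offending sessions, the top cursor consumers can be listed from the standard v$ views; a sketch along these lines:

```sql
-- Sessions with the most currently open cursors
SELECT s.sid, s.username, st.value AS open_cursors
FROM   v$sesstat  st
JOIN   v$statname n ON n.statistic# = st.statistic#
JOIN   v$session  s ON s.sid = st.sid
WHERE  n.name = 'opened cursors current'
ORDER  BY st.value DESC;
```

Sessions whose count keeps climbing toward the OPEN_CURSORS limit usually point at the code path leaking cursors.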

If required you can increase the OPEN_CURSORS value by running below command.
ALTER SYSTEM SET open_cursors = 400 SCOPE=BOTH;

Note that no downtime is required to increase the value of OPEN_CURSORS, but storing more cursors requires more memory.


Regards
Satishbabu Gunukula
http://oracleracexpert.com

Thursday, September 6, 2018

Oracle patch failed with “CheckActiveFilesAndExecutables” failed error

We were applying a patch on Oracle 12c when we got the Prerequisite check "CheckActiveFilesAndExecutables" failed error. From the error we can see that some files are active and in use by running processes.


$opatch apply
.....
[Sep 6, 2018 6:56:10 PM] [INFO] Finish fuser command /sbin/fuser /oracle/product/12.1.0.2/dbhome_1/lib/libsrvm12.so at Thu Sep 06 18:56:10 PDT 2018
[Sep 6, 2018 6:56:10 PM] [SEVERE] OUI-67073:UtilSession failed: Prerequisite check "CheckActiveFilesAndExecutables" failed.
[Sep 6, 2018 6:56:10 PM] [INFO] Finishing UtilSession at Thu Sep 06 18:56:10 PDT 2018
[Sep 6, 2018 6:56:10 PM] [INFO] Log file location: /oracle/product/12.1.0.2/dbhome_1/cfgtoollogs/opatch/opatch2018-09-06_18-55-32PM_1.log
[Sep 6, 2018 6:56:10 PM] [INFO] Stack Description: java.lang.RuntimeException: Prerequisite check "CheckActiveFilesAndExecutables" failed.
at oracle.opatch.OPatchSessionHelper.runApplyPrereqs(OPatchSessionHelper.java:6670)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:988)
at oracle.opatch.opatchutil.NApply.legacy_process(NApply.java:363)
at oracle.opatch.opatchutil.NApply.process(NApply.java:343)
at oracle.opatch.opatchutil.OUSession.napply(OUSession.java:1105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at oracle.opatch.UtilSession.process(UtilSession.java:359)
at oracle.opatch.OPatchSession.process(OPatchSession.java:2670)
at oracle.opatch.OPatch.process(OPatch.java:811)
at oracle.opatch.OPatch.main(OPatch.java:861)
Caused by: java.lang.RuntimeException: Prerequisite check "CheckActiveFilesAndExecutables" failed.
... 13 more
Caused by: oracle.opatch.PrereqFailedException: Prerequisite check "CheckActiveFilesAndExecutables" failed.
... 13 more

In the log file I can see that the below library is active and in use by some processes

[Sep 6, 2018 6:55:49 PM] [INFO] Following executables are active :
/oracle/product/12.1.0.2/dbhome_1/lib/libclntsh.so.12.1
[Sep 6, 2018 6:55:49 PM] [INFO] Prerequisite check "CheckActiveFilesAndExecutables" failed.
The details are:
Following executables are active :
/oracle/product/12.1.0.2/dbhome_1/lib/libclntsh.so.12.1
[Sep 6, 2018 6:55:49 PM] [INFO] Start fuser command /sbin/fuser /oracle/product/12.1.0.2/dbhome_1/bin/adrci at Thu Sep 06 18:55:49 PDT 2018

To find which process is using the library, run the below command and you will get the process id

$ /sbin/fuser /oracle/product/12.1.0.2/dbhome_1/lib/libclntsh.so.12.1
/oracle/product/12.1.0.2/dbhome_1/lib/libclntsh.so.12.1: 8631m

Find out the active process using the below command

$ ps -ef | grep 8631
oracle     8631      1  0 Aug23 ?        00:02:22 /oracle/product/12.1.0.2/dbhome_1/bin/tnslsnr LISTENER_ORCL-inherit

Either stop or kill the process. Once the process is stopped, you can run opatch again and it should complete successfully.

Note that after the patch completes successfully, check the log file for any "SEVERE" errors.

Thanks
Satishbabu Gunukula
http://www.oracleracexpert.com


Wednesday, May 30, 2018

ORA-03206: maximum file size of (6553600) blocks in AUTOEXTEND clause is out of range

I received the below error while extending a datafile. You might see this error when creating a tablespace too.

SQL> alter database datafile '/oradata1/ORCL/orcl101.dbf' autoextend ON maxsize 50g;
alter database datafile '/oradata1/ORCL/orcl101.dbf' autoextend ON maxsize 50g;
*
ERROR at line 1:
ORA-03206: maximum file size of (6553600) blocks in AUTOEXTEND clause is out of range


The reason is that the maximum file size for an autoextendable file has exceeded the maximum number of blocks allowed. Note that Oracle allows only up to a 32GB datafile with 8k blocks. If you want to create a tablespace or datafile larger than 32GB then you must use the BIGFILE keyword.

To calculate the max datafile size:
max datafile size = db_block_size * maximum number of blocks
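The cutoff can be sanity-checked in the shell: a smallfile datafile is limited to 2^22 - 1 = 4194303 blocks, while the 50g request above works out to 6553600 blocks of 8K, well past the limit.

```shell
block_size=8192      # db_block_size in bytes
max_blocks=4194303   # 2^22 - 1 blocks per smallfile datafile
echo "$(( block_size * max_blocks / 1024 / 1024 )) MB"   # prints "32767 MB"
```

That 32767M figure is the largest MAXSIZE an 8K-block smallfile datafile will accept.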

Solution: create a file less than 32 GB in size, or if you need a datafile larger than 32GB then use the BIGFILE keyword.

SQL> alter database datafile '/oradata1/ORCL/orcl101.dbf' autoextend ON maxsize 32767M ;
Database altered.

Hope this helps,

Regards
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com

ERROR 1827 (HY000): The password hash doesn't have the expected format

When creating a user with Grant option we have received below error message

ERROR 1827 (HY000): The password hash doesn't have the expected format
That means you have to use a hashed password. Here is a simple solution that you can use.

Enter the password that you want to set and you will get the hash password.
mysql> SELECT PASSWORD('enter password you want');
+-------------------------------------------+
| PASSWORD('enter password you want')       |
+-------------------------------------------+
| *B535BN128KK03E74BE2AC0EE23D07ABX6AD8165E |
+-------------------------------------------+
1 row in set (0.00 sec)

Use the hash password while creating the user

mysql> GRANT ALL ON *.* TO 'admin'@'%' IDENTIFIED BY PASSWORD '*B535BN128KK03E74BE2AC0EE23D07ABX6AD8165E' WITH GRANT OPTION;

Query OK, 0 rows affected (0.01 sec)

Thanks
Satishbabu Gunukula, Oracle ACE

Friday, May 25, 2018

ORA-01555: snapshot too old: rollback segment number

There are many reasons for this error. In this post I am providing all the possible scenarios and related Oracle Notes/Links.

Concepts/Definitions

The ORA-1555 errors can happen when a query is unable to access enough undo to build
a copy of the data at the time the query started. Committed "versions" of blocks are
maintained along with newer uncommitted "versions" of those blocks so that queries can
access data as it existed in the database at the time of the query. These are referred to as
"consistent read" blocks and are maintained using Oracle undo management.

See Document 40689.1 - ORA-1555 "Snapshot too old" - Detailed Explanation for more about
these errors.


Diagnosing
Due to space limitations, it is not always feasible to keep undo blocks on hand for the life of the instance. Oracle Automatic Undo Management (AUM) helps to manage the time frame that undo blocks are stored. The time frame is the "retention" time for those blocks.

There are several ways to investigate the ORA-1555 error. In most cases, the error is a legitimate problem with getting to an undo block that has been overwritten due to the undo "retention" period having passed.

AUM will automatically tune the "retention" period up and down, but often space limitations or the configuration of the undo tablespace will throttle back continuous increases to the "retention" period.
The error message is reported in the user session and often is not captured in the alert log. The user could see a message like

Using rollback segment functionality:

ORA-1555: snapshot too old (rollback segment too small)

or

Using AUM:

ORA-01555: snapshot too old: rollback segment number 9 with name "_SYSSMU9$" too small

If the error is captured in the alert.log, you would see something like

Tue May 26 16:16:57 2009

ORA-01555 caused by SQL statement below (SQL ID: 54yn3n36w24ft, Query Duration=922 sec, SCN: 0x0007.8a55f4e3)

Initial Investigation

Rollback Segments:

With Oracle 10g and later versions of Oracle, you can still use a Rollback Segments configuration. ORA-1555 errors in that environment still follow older guidelines as described in


Document 10579.1 - How many Rollback Segments to Have
Document 107085.1 - Tuning Rollback Segments
Document 69464.1 - Rollback Segment Configuration & Tips
Automatic Undo Management:

The database will be self tuning for undo when using Automatic Undo Management. This does not eliminate ORA-1555 completely, but does minimize ORA-1555 as long as there is adequate space in the undo tablespace and workloads tend to follow repeatable patterns. In some cases with periodic changes to workload (large data updates particularly with LOB data) the self tuning of undo can become aggressive and lead to undo issues.

Document 461480.1 - FAQ Automatic Undo Management (AUM) / System Managed Undo (SMU)
Document 135053.1 - How to Create a Database with Automatic Undo Management
Document 268870.1 - How to Shrink the datafile of Undo Tablespace
Document 231776.1 - How to switch a Database from Automatic Undo Management (AUM) back to using Rollback Segments
Document 396863.1 - How to Keep All UNDO Segments from Being Offlined in Oracle 10g - Fast Ramp-Up

LOB Issues:

Out-of-row LOB undo is maintained in the LOB segment. So the UNDO tablespace and undo retention is not associated with most LOB ORA-1555 issues. Instead the LOB column is created using either PCT_VERSION or RETENTION to manage how much space within blocks or time transpires before the LOB undo is overwritten. In environments with high updates, deletes on rows including LOBs, the chances of ORA-1555 on LOB undo is very high.

PCT_VERSION and RETENTION are not auto-tuned. To "tune" those configuration settings, you must change the values for PCT_VERSION or RETENTION. Changes to UNDO_RETENTION does not change LOB retention time frames.


Document 162345.1 - LOBS - Storage, Read-consistency and Rollback
Document 386341.1 - How to determine the actual size of the LOB segments and how to free the deleted/unused space above/below the HWM
Document 563470.1 - Lob Retention Not Changing When Undo_retention is Changed
Document 422826.1 - How to Identify LOB Segment Use PCTVERSION or RETENTION from Data Dictionary

Error Tracing

Undo error tracing can be done for normal undo operations using the following events:

NOTE: Normal undo operations will be indicated in the error message in that the error message includes a segment name like

... name "_SYSSMU1$" too small

If the error doesn't show a segment name

... name "" too small

the problem is often related to LOB undo

If using pfile:
event="10442 trace name context forever, level 10"

If using spfile:
Alter system set events '10442 trace name context forever, level 10';

Reproduce the ORA-1555 error and upload the trace file to Oracle Support.

LOB undo error tracing is more difficult. Set additional tracing events as follows:

Start Session 1
Alter session set events '10046 trace name context forever, level 12';
Reproduce the error
Exit Session 1

Start Session 2
Alter session set events '10051 trace name context forever, level 1';
Reproduce the error
Exit Session 2

Start Session
Alter session set events '1555 trace name errorstack forever, level 3';
Reproduce the error
Exit Session 3

Additional resources to review:

Document 846079.1 - LOBs and ORA-1555 troubleshooting
Document 253131.1 - Concurrent Writes May Corrupt LOB Segment When Using Auto Segment Space Management
Document 467872.1 - TROUBLESHOOTING GUIDE (TSG) - ORA-1555
V$UNDOSTAT Analysis

The V$UNDOSTAT view holds undo statistics for 10 minute intervals. This view
represents statistics across instances, thus each begin time, end time, and
statistics value will be a unique interval per instance.

This does not track undo related to LOB
Document 262066.1 - How To Size UNDO Tablespace For Automatic Undo Management

Document 1112363.1 - When Does Undo Used Space Become Available?
Document 240746.1 - 10g NEW FEATURE on AUTOMATIC UNDO RETENTION
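A starting point for this analysis is to compare the tuned retention against the longest-running query in each 10-minute interval; the SSOLDERRCNT column flags intervals where ORA-1555 actually fired:

```sql
-- Tuned undo retention vs. longest query per interval
SELECT begin_time, tuned_undoretention, maxquerylen, ssolderrcnt
FROM   v$undostat
ORDER  BY begin_time;
```

Intervals where MAXQUERYLEN exceeds TUNED_UNDORETENTION are the ones at risk of the error.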

Diagnostics Scripts
Refer to Document 746173.1 : Common Diagnostic Scripts for AUM problems
and Document 877613.1 : AUM Common Analysis/Diagnostic Scripts
Common Causes/Solutions
Document 1555.1 - Known defects for ora-1555 error
Using Rollback Segments functionality:

* Problem happening on SYSTEM tablespace that still uses old Rollback Segment functionality even when configured for Automatic Undo Management (AUM).

* There are not enough rollback segments to manage the undo needed for long running queries.

* Rollback Segments are too small and undo is overwritten before long running queries complete.
Reference:

Document 69464.1 - Rollback Segment Configuration & Tips
Document 10630.1 - ORA-1555: 'Snapshot too old' - Overview
Document 862469.1 - ORA-604 & ORA-1555 Rollback Segment 0 with Name "System" Too Small

Using Automatic Undo Management (AUM):

* TUNED_UNDORETENTION in V$UNDOSTAT around the time of the error is lower than the QUERY DURATION indicated in the error message. This is a legitimate ORA-1555 and if queries are going to run for very long time frames, UNDO_RETENTION may need to be larger. Auto-tuned retention may not be able to keep up with the undo workload and staying within space limitations on the UNDO tablespace.

* LOB updates and/or deletes are frequent and a higher PCT_VERSION is required to provide enough space in the LOB Segment to accommodate the LOB undo. RETENTION on LOBs that are updated or deleted frequently can run into problems holding UNDO long enough for queries.

* QUERY DURATION shown in the error message is 30+ years and therefore, no amount of undo will satisfy the consistent read blocks.

Document 750195.1 - ORA-1555 Shows Unrealistic Query Duration (billions of seconds)

* QUERY DURATION shown in the error message is 0. NOTE: This has been filed as a bug on many release levels and has been very difficult to narrow down to a specific problem.

Document 761128.1 - ORA-1555 Error when Query Duration as 0 Seconds

* QUERY DURATION is lower than TUNED_UNDORETENTION. Undo header information can sometimes get overwritten, or you could be seeing a bug.

* TUNED_UNDORETENTION stays very high and the UNDO tablespace continues to grow, or you are getting space errors.

Document 1112431.1 - Undo Remains Unexpired When Using Non-autoextensible Datafiles for Undo Tablespace.

* How to find the complete SQL statement that caused the ORA-1555:
If the database was not restarted after the ORA-1555 error, the statement can be obtained from:

select SQL_TEXT from V$SQLAREA where SQL_ID='<sql id from the error message>';

If the database was restarted after the ORA-1555 error and an AWR snapshot was gathered before the restart, the statement can be obtained from:

select SQL_TEXT from DBA_HIST_SQLTEXT where SQL_ID='<sql id from the error message>';

Thanks
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com

Monday, April 23, 2018

Webinar: How to Convert Single Instance to RAC?

This webinar helps you understand the benefits of Oracle RAC and the available methods to convert a single instance to RAC.


Date and time: May 4th 2018 8:00am-9:00am
Pacific Daylight Time (San Francisco, GMT-07:00)

This Webinar covers following Topics.
  • Oracle RAC and its Benefits 
  • Overview of different RAC conversion methods 
  • Overview of different Storage options 
  • Convert Single instance to Oracle RAC Using RMAN 
    • Overview 
    • Prerequisites 
    • Demonstration 
  • Check the logs 
  • Post Conversion Steps 
  • References 
  • Contact Info 
  • Q&A 
To register for this Webinar, please send an email to SatishbabuGunukula@gmail.com.

Note that registrations are limited and on a first come, first served basis.
You will receive an email confirmation with meeting session link.

For presentation link Click here

Regards,
Satishbabu Gunukula, Oracle ACE
http://www.oracleracexpert.com

Tuesday, April 10, 2018

Oracle Flashback Technology and features

Oracle Flashback Technology is a group of database features that help you view past states of database objects, or return database objects to a previous state, without using point-in-time media recovery.

With flashback features, you can do the following:
· Queries that return past data
· Recover rows or tables to a previous point in time
· Track and archive transactional data changes
· Roll back a transaction and its dependent transactions while the database remains online

Oracle flashback uses automatic undo management (AUM) for all flashback transactions.

Here are few Oracle flashback features.
· Flashback Database
· Flashback Table
· Flashback drop
· Flashback query
· Flashback Version/Transaction Query
· Flashback Data Archive (From Oracle 11g)
· Flash Recovery Area

Flashback Database:- The FLASHBACK DATABASE is a fast alternative to performing an incomplete recovery. The database must be running in ARCHIVELOG mode, because archived logs are used in the Flashback Database operation. You must have a flash recovery area enabled, because flashback logs can only be stored in the flash recovery area.

To enable logging for Flashback Database, set the DB_FLASHBACK_RETENTION_TARGET initialization parameter and issue the ALTER DATABASE FLASHBACK ON statement.
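Once enabled, the state can be confirmed from V$DATABASE:

```sql
-- Confirm that flashback logging is enabled
SELECT flashback_on FROM v$database;
```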

DB_FLASHBACK_RETENTION_TARGET - the length of the desired flashback window in minutes (default 1440 min, i.e. one day)

SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=4320; # 3 days

For example:
FLASHBACK DATABASE TO TIMESTAMP my_date;
FLASHBACK DATABASE TO SCN my_scn;

Limitations:
· It cannot be used to repair media failures, or to recover from accidental deletion of data files.
· You cannot use Flashback Database to undo a shrink data file operation.
· If the database control file is restored from backup or re-created, all accumulated flashback log information is discarded
· Flashback database cannot be used against block corruptions.

Flashback Table: The FLASHBACK TABLE statement is used to restore an earlier state of a table in the event of human or application error. The time in the past to which the table can be flashed back depends on the amount of undo data in the system, and a table cannot be restored to an earlier state across any DDL operation.

SQL> FLASHBACK TABLE test TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' minute);
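Note that FLASHBACK TABLE requires row movement to be enabled on the table; otherwise the statement fails with ORA-08189:

SQL> ALTER TABLE test ENABLE ROW MOVEMENT;
SQL> FLASHBACK TABLE test TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' minute);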

Flashback Drop:

Oracle 10g provides the ability to reinstate an accidentally dropped table from the recycle bin.
SQL> FLASHBACK TABLE test TO BEFORE DROP;
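You can first check what is available for reinstatement in the recycle bin:

SQL> SHOW RECYCLEBIN;
SQL> SELECT object_name, original_name, droptime FROM user_recyclebin;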

Flashback Query:- This feature allows the DBA to see the value of a column as of a specific time, as long as the before-image copy of the block is available in the undo segment.

For example:
SQL> SELECT comments FROM employee AS OF TIMESTAMP TO_TIMESTAMP ('2004-03-29 13:34:12', 'YYYY-MM-DD HH24:MI:SS');

SQL> SELECT comments FROM employee AS OF SCN 722452;

Flashback Version/Transaction Query:- Flashback Query provides only a fixed snapshot of the data as of one point in time. Use the Flashback Version Query feature to see how data changed between two points in time.

Ex: The following query shows the changes made to the table:

SQL> SELECT versions_starttime, versions_endtime, versions_xid, versions_operation, rate
FROM rates VERSIONS BETWEEN TIMESTAMP MINVALUE AND MAXVALUE
ORDER BY versions_starttime;

Flashback Transaction Query can be used to get extra information about the transactions listed by Flashback Version Query, using the FLASHBACK_TRANSACTION_QUERY view. The UNDO_SQL column shows the SQL statement that will undo the change.

For example:

SQL> SELECT xid, operation, start_scn, commit_scn, logon_user, undo_sql
FROM flashback_transaction_query;

Oracle 11g has more enhancements in Flashback Transaction and LogMiner. The LogMiner Viewer has been incorporated into Enterprise Manager and integrated with the new Flashback Transaction feature, making it simple to recover transactions. Flashback Transaction allows the changes made by a transaction to be undone.

Flashback Data Archive (from Oracle 11g):- Flashback Data Archive allows long-term retention (for example, years) of changed data for a table or set of tables for a user-defined period of time. A Flashback Data Archive (which can logically span one or more tablespaces) specifies a space quota and retention period for the archive, and Oracle then creates the required history tables in the archive.

The user needs the FLASHBACK ARCHIVE ADMINISTER system privilege to create an archive, and the FLASHBACK ARCHIVE object privilege to use one.

SQL> CREATE FLASHBACK ARCHIVE flash_back_archive

TABLESPACE flash_back_archive QUOTA 10G RETENTION 5 YEARS;
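Creating the archive by itself does not track anything; you then associate tables with it (the table name here is just an example):

SQL> ALTER TABLE employee FLASHBACK ARCHIVE flash_back_archive;

To stop tracking a table, use ALTER TABLE employee NO FLASHBACK ARCHIVE;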

Flash Recovery Area:-
The flash recovery area is a disk location in which the database can store and manage files related to backup and recovery. To set up a flash recovery area, you must choose a directory location or an Automatic Storage Management disk group to hold the files.

The flash recovery area simplifies the administration of your database by automatically naming recovery-related files, retaining them as long as they are needed for restore and recovery activities, and deleting them when they are no longer needed and their space is required for another backup or recovery-related purpose.

To set up the Flash Recovery Area (FRA), you just need to specify the below two parameters:
1. DB_RECOVERY_FILE_DEST_SIZE (specifies the maximum space to use for the FRA)

2. DB_RECOVERY_FILE_DEST (location of the FRA)
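For example (the size and path below are placeholders; note that DB_RECOVERY_FILE_DEST_SIZE must be set before DB_RECOVERY_FILE_DEST):

SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=50G SCOPE=BOTH;
SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='/u01/app/oracle/fra' SCOPE=BOTH;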


Performance Guidelines for Oracle Flashback Technology
· For Oracle Flashback Version Query, use index structures.

· Avoid scanning entire tables; use indexes to query a small set of past data. If you need to scan a full table, add a parallel hint to the query.

· Keep statistics current, as the cost-based optimizer relies on them; use the DBMS_STATS package to generate statistics for tables involved in an Oracle Flashback Query.

· In an Oracle Flashback Transaction Query, the XID column is of type RAW(8). Use the HEXTORAW conversion function, HEXTORAW(xid), to take advantage of the built-in function.

· The I/O performance cost is mainly paging the data and undo blocks into the buffer cache. The CPU performance cost is applying the undo.

· An Oracle Flashback Query against a materialized view does not take advantage of query rewrite optimization.
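As an example of the HEXTORAW tip above, filtering FLASHBACK_TRANSACTION_QUERY on a specific transaction (the XID literal here is just a placeholder):

SQL> SELECT operation, logon_user, undo_sql
FROM flashback_transaction_query
WHERE xid = HEXTORAW('000200030000002D');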

Regards
Satishbabu Gunukula, Oracle ACE

Thursday, March 15, 2018

Oracle Application Express is INVALID and Remove Oracle Apex

We were trying to upgrade a database and noticed that the “Oracle Application Express” component was “INVALID”.

SQL> select COMP_ID,VERSION,STATUS from dba_registry;
COMP_ID VERSION STATUS
----------------------------------- --------------- ------------------------------
APEX 3.2.1.00.12 INVALID
CATALOG 11.2.0.4.0 VALID
CATPROC 11.2.0.4.0 VALID

SQL> Select COMP_NAME,STATUS,VERSION from dba_registry where COMP_ID='APEX';
COMP_NAME STATUS VERSION
----------------------------------- --------------- ------------------------------
Oracle Application Express INVALID 3.2.1.00.10

To avoid issues during the upgrade, we decided to remove Oracle APEX as it was not in use. We followed the below steps to remove Oracle Application Express.

$ sqlplus / as sysdba
SQL> @$ORACLE_HOME/apex/apxremov.sql


After removing Application Express, we started the upgrade but received a warning that the “Oracle Server” component is INVALID.

SQL> select COMP_ID,VERSION,STATUS from dba_registry;
COMP_ID VERSION STATUS
----------------------------------- --------------- ------------------------------
CATALOG 11.2.0.4.0 INVALID
CATPROC 11.2.0.4.0 INVALID

We noticed that the object SYS.HTMLDB_SYSTEM showed as invalid after removing Oracle APEX. After investigation, I noticed that the apxremov.sql script removes APEX but leaves the HTMLDB_SYSTEM package and its public synonym behind.

I ran the below commands to drop the invalid package and its synonym:
SQL> drop package htmldb_system;
SQL> drop public synonym htmldb_system;

Now I no longer see any INVALID objects, but CATALOG and CATPROC are still INVALID.

SQL> select object_name, object_type from dba_objects
2 where owner = 'SYS' and status = 'INVALID';
no rows selected

SQL> select COMP_ID,VERSION,STATUS from dba_registry;
COMP_ID VERSION STATUS
----------------------------------- --------------- ------------------------------
CATALOG 11.2.0.4.0 INVALID
CATPROC 11.2.0.4.0 INVALID

I ran the below script to validate CATPROC. Note that to run the same check for CATALOG you need to change the two CATPROC entries in the script (i.e., replace CATPROC with CATALOG before running).

sqlplus / as sysdba
set serveroutput on;
declare
start_time date;
end_time date;
object_name varchar(100);
object_id char(10);
begin
SELECT date_loading, date_loaded into start_time, end_time FROM registry$
WHERE cid = 'CATPROC';
SELECT obj#,name into object_id,object_name
FROM obj$
WHERE status > 1 AND
(ctime BETWEEN start_time AND end_time OR
mtime BETWEEN start_time AND end_time OR
stime BETWEEN start_time AND end_time) AND
ROWNUM <=1;
dbms_output.put_line('Please compile Invalid object '||object_name||'
Object_id '||object_id );
EXCEPTION
WHEN NO_DATA_FOUND THEN
dbms_output.put_line('CATPROC can be validated now' );
end;
/

I then ran the utlrp.sql script, and now CATALOG and CATPROC show as VALID.

SQL> @$ORACLE_HOME/rdbms/admin/utlrp.sql

SQL> select COMP_ID,VERSION,STATUS from dba_registry;
COMP_ID VERSION STATUS
----------------------------------- --------------- ------------------------------
CATALOG 11.2.0.4.0 VALID
CATPROC 11.2.0.4.0 VALID

After this, we were able to upgrade the database successfully.

Thanks,
Satishbabu Gunukula, Oracle ACE

Webinar: Oracle Database high availability Solution – Oracle RAC

Oracle offers many high availability features, and Oracle RAC is one of the best HA solutions. Join the webinar to learn about best practices and cost-effective solutions to meet your business needs.

I am scheduling the session again as many users didn’t get a chance to join due to limited registration, and a few users had issues with WebEx.

This is an opportunity for users who were not able to make it to the last WebEx session.
WebEx: “Why Oracle RAC is the best High Availability (HA) Solution?”

Date and time: Apr 6th 2018 8:00am-9:00am
Pacific Daylight Time (San Francisco, GMT-07:00)

To register for this Webinar, please send an email to SatishbabuGunukula@gmail.com.
Registrations are accepted on a first come, first served basis.

You will receive an email confirmation with meeting session link.

Don’t miss the second chance…!

Thanks,
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com

Webinar: Oracle Database Upgrade to 12c and Available Methods

Oracle Database 12c has many exciting new features, and in order to take advantage of them you need to upgrade your databases from older versions to Oracle 12c.

In this webinar, I will explain best practices and available methods for upgrading/migrating your databases to 12c.

Date and time: Mar 30th 2018 8:00am-9:00am
Pacific Daylight Time (San Francisco, GMT-07:00)


To register for this Webinar, please send an email to SatishbabuGunukula@gmail.com.

Note that registrations are limited, first come, first served only. You will receive an email confirmation with the meeting session link.

For Presentation link "Click here". Don’t miss the chance…!

Thanks,
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com

Thursday, March 8, 2018

ORA-31613: Master process DM00 failed during startup

While taking an export backup using Data Pump, a user received the following errors:

ORA-31637: cannot create job SYS_EXPORT_SCHEMA_01 for user
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95
ORA-06512: at "SYS.KUPV$FT", line 1587
ORA-39062: error creating master process DM00
ORA-31613: Master process DM00 failed during startup.

Normally a user receives this error when the maximum number of processes is exceeded. You will see the below error in the trace files:

ORA-00020: maximum number of processes (150) exceeded

Solution: To fix the issue, increase the PROCESSES initialization parameter and restart the database.
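For example (300 is only an illustrative value; size PROCESSES for your workload). PROCESSES is a static parameter, so it must be changed in the SPFILE and the instance restarted:

SQL> ALTER SYSTEM SET PROCESSES=300 SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;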

If you don’t see the ORA-00020 error in the trace file, look for other error messages, which will help you fix the issue.

I faced a similar issue but did not see the ORA-00020 error. After looking at the other error messages, I saw that child processes were being terminated; in my case, restarting the database resolved the issue.

Regards,
Satishbabu Gunukula, Oracle ACE

Thursday, March 1, 2018

One Database Solution for your Enterprise Business – Oracle 12c

Oracle 12c offers a new architecture and features that let you manage many databases as ONE, making efficient use of resources and reducing IT costs for your enterprise business. It also offers one solution for both your traditional and cloud environments.

Join the Webinar to learn how this one database solution works for your enterprise business.

Date and time: Mar 16th 2018 8:00am-9:00am
Pacific Daylight Time (San Francisco, GMT-07:00)

To register for this Webinar, please send an email to SatishbabuGunukula@gmail.com.

Note that registrations are limited and accepted on a first come, first served basis.
You will receive an email confirmation with meeting session link.

For Presentation link "Click here"

Regards,
Satishbabu Gunukula, Oracle ACE
http://www.oracleracexpert.com

Wednesday, February 28, 2018

Oracle Database Release Lifetime Support

Oracle Support info:

Oracle Premier Support provides a standard five-year support policy for Oracle Technology products. You can extend support for an additional three years with Extended Support for specific releases or receive indefinite technical support with Sustaining Support.


Oracle release Extended Support info is summarized in the Oracle Lifetime Support Policy document below.
Please review below link for more details about Oracle Lifetime Support Policy.
http://www.oracle.com/us/support/library/lifetime-support-technology-069183.pdf

Regards
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com


Thursday, February 22, 2018

CATPROC and CATALOG are INVALID in the Registry

We are working on pre-upgrade steps and noticed that the “Oracle Database Packages and Types” and “Oracle Database Catalog Views” components are showing as INVALID.

SQL> select comp_id, comp_name, version, status from dba_registry;
COMP_ID    COMP_NAME                            VERSION      STATUS
---------- ------------------------------------ ------------ --------
ORDIM      Oracle Multimedia                    11.2.0.4.0   VALID
XDB        Oracle XML Database                  11.2.0.4.0   VALID
EXF        Oracle Expression Filter             11.2.0.4.0   VALID
RUL        Oracle Rules Manager                 11.2.0.4.0   VALID
OWM        Oracle Workspace Manager             11.2.0.4.0   VALID
CATALOG    Oracle Database Catalog Views        11.2.0.4.0   INVALID
CATPROC    Oracle Database Packages and Types   11.2.0.4.0   INVALID
JAVAVM     JServer JAVA Virtual Machine         11.2.0.4.0   VALID
XML        Oracle XDK                           11.2.0.4.0   VALID
CATJAVA    Oracle Database Java Packages        11.2.0.4.0   VALID

I saw that CATPROC and CATALOG were INVALID in the registry and followed the below steps to validate them.

Step 1: Validate packages and types by running the catalog and catproc scripts.

$ sqlplus "/as sysdba"
SQL> spool cata_logfile.txt
SQL> startup upgrade
SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql
SQL> @?/rdbms/admin/utlrp.sql
SQL> spool off

Note that you need to use 'startup migrate' instead of 'startup upgrade' if the database version is lower than 10g.

This should resolve the issue. If CATPROC or CATALOG is still INVALID in the registry, follow the next step.
Step 2: Run the below script as SYSDBA until it returns 'CATPROC can be validated now'. The script reports objects left invalid by the component load; compile each reported object between runs.
Note that you need to replace CATPROC with CATALOG in the below script to check the objects related to CATALOG.
REM ***************
REM CHECKVALID.SQL
REM ***************
set serveroutput on;
declare
start_time date;
end_time date;
object_name varchar(100);
object_id char(10);
begin
SELECT date_loading, date_loaded into start_time, end_time FROM registry$ WHERE
cid = 'CATPROC';
SELECT obj#,name into object_id,object_name
FROM obj$
WHERE status > 1 AND
(ctime BETWEEN start_time AND end_time OR
mtime BETWEEN start_time AND end_time OR
stime BETWEEN start_time AND end_time) AND
ROWNUM <=1;
dbms_output.put_line('Please compile Invalid object '||object_name||'
Object_id '||object_id );
EXCEPTION
WHEN NO_DATA_FOUND THEN
dbms_output.put_line('CATPROC can be validated now' );
end;
/
Validate CATPROC and CATALOG by executing the following commands:
SQL> execute DBMS_REGISTRY_SYS.VALIDATE_CATPROC;
SQL> execute DBMS_REGISTRY_SYS.VALIDATE_CATALOG;

Now I can see that all database components and objects are VALID.

SQL> select comp_id, comp_name, version, status from dba_registry;
COMP_ID    COMP_NAME                            VERSION      STATUS
---------- ------------------------------------ ------------ --------
ORDIM      Oracle Multimedia                    11.2.0.4.0   VALID
XDB        Oracle XML Database                  11.2.0.4.0   VALID
EXF        Oracle Expression Filter             11.2.0.4.0   VALID
RUL        Oracle Rules Manager                 11.2.0.4.0   VALID
OWM        Oracle Workspace Manager             11.2.0.4.0   VALID
CATALOG    Oracle Database Catalog Views        11.2.0.4.0   VALID
CATPROC    Oracle Database Packages and Types   11.2.0.4.0   VALID
JAVAVM     JServer JAVA Virtual Machine         11.2.0.4.0   VALID
XML        Oracle XDK                           11.2.0.4.0   VALID
CATJAVA    Oracle Database Java Packages        11.2.0.4.0   VALID

Thanks,
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com

Wednesday, February 21, 2018

Why Oracle RAC is the best High Availability (HA) Solution?

Join the Webinar to learn about the best practices, cost effective solutions to meet your business needs.

Date and time: Mar 5th 2018 8:00am-9:00am
Pacific Daylight Time (San Francisco, GMT-07:00)

To register for this Webinar, please send an email to SatishbabuGunukula@gmail.com.

Note that registrations are limited and accepted on a first come, first served basis.
You will receive an email confirmation with meeting session link.

For Presentation link "Click here"

Regards,
Satishbabu Gunukula, Oracle ACE
http://www.oracleracexpert.com

Wednesday, January 10, 2018

MySQL: Too many connections error

When a user tried to connect to a MySQL database, they got the below error:

ERROR 1040 (hy000): Too many connections
Looking at the error, we can see that max_connections has been exhausted. You can check the connection info by running the below commands.

-- To find max_connections value run below command
SHOW VARIABLES LIKE '%max_connections%';

-- To see the all processes and connections run below command
SHOW FULL PROCESSLIST;

Note that when you get this error you can still log in as root, because MySQL by default allows max_connections + 1 connections, reserving the extra connection for the superuser.

But if you have used root to connect from any other application, you will not be able to connect to run the above commands. That is why you should use root for administration purposes only.

You can run the below command to change max_connections at runtime:

SET GLOBAL max_connections = 200;

But note that you also need to find out whether an application issue is creating too many connections and causing this problem, and fix that as well.
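Also note that SET GLOBAL does not survive a server restart; to make the new limit permanent, set it in the MySQL option file as well (the path shown is a typical example):

# /etc/my.cnf
[mysqld]
max_connections = 200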

Regards
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com

Best practices for running Tableau on a VM (Virtual Machine)

Tableau Server is resource-intensive and latency-sensitive; that’s why, if you are running Tableau on a VM, dedicated resources are required.

1.       Dedicated vCPU
100% dedicated vCPU allocation – should not be pooled or shared.
Core count is based on "physical" cores. Physical cores can represent actual server hardware or cores on a virtual machine (VM). Hyper-threading is ignored.
2.       Dedicated RAM
Fully (100%) pre-allocated, no pooling or sharing, no dynamic RAM.
Contiguous on the host server.
Suggested 128GB RAM per 8 core for Enterprise Deployments.
3.       Disk
Write speed is often the bottleneck; the faster, the better!
150MB/s or less = BAD
250MB/s+ WRITE = GOOD
400MB/s to 1GB/s+ = GREAT
250MB/s Write & 1GB/s Read = Good performance
Tiered SAN: Tableau should be on a higher tier with better IO than the typical storage tier.
No underlying network attached storage (NAS) for disk.

Always note that poorly tuned VMs result in poor performance and wasted money on core purchases.

VM tuning tips directly from VMware:

· Performance best practices for vSphere 5.5 (http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf) - see page 17 for recommended BIOS settings
· Deploying Extremely Latency-Sensitive Applications in vSphere 5.5 (http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf) - see page 15 for best practices for latency-sensitive applications
· Upgrade the network adapter driver in the VM cluster

Other things to consider:

· Virus scans can impact performance.
· Network latency between worker machines should be less than 10ms.
· Use a single network hop between servers (contiguous network access).

Regards
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com