Monday, January 29, 2018

Oracle Database Release Lifetime Support

Oracle Support info:

Oracle Premier Support provides a standard five-year support policy for Oracle Technology products. You can extend support for an additional three years with Extended Support for specific releases or receive indefinite technical support with Sustaining Support.


Oracle release Extended Support info:

Please review the link below for more details about the Oracle Lifetime Support Policy.
http://www.oracle.com/us/support/library/lifetime-support-technology-069183.pdf

Regards
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com


Wednesday, January 10, 2018

MySQL: Too many connections error

When a user tried to connect to the MySQL database, they received the error below:

ERROR 1040 (hy000): Too many connections
The error indicates that max_connections has been exhausted. You can check the connection info by running the commands below.

-- To find the max_connections value
SHOW VARIABLES LIKE '%max_connections%';

-- To see all processes and connections
SHOW FULL PROCESSLIST;

Note that even when you get this error, you will still be able to log in as root, because MySQL by default reserves one extra connection beyond max_connections for a SUPER user (effectively max_connections + 1).

However, if root is also used by applications, that reserved connection may already be taken and you will not be able to connect to run the commands above. For this reason, use root for administration purposes only.

You can run the command below to change max_connections. Note that SET GLOBAL does not persist across a server restart; to make the change permanent, also set max_connections in my.cnf under [mysqld].

SET GLOBAL max_connections = 200;
Note that raising the limit is only a stopgap; you still need to find and fix the application issue that is creating excessive connections.
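To pinpoint which application is holding the connections, the processlist can be tallied per user. A minimal Python sketch (the sample rows are hypothetical; the field order follows the standard processlist columns Id, User, Host, db, Command, Time, State, Info):

```python
from collections import Counter

def connection_counts(processlist_rows):
    """Tally connections per user from SHOW FULL PROCESSLIST rows.

    Each row follows the processlist column order:
    (Id, User, Host, db, Command, Time, State, Info)."""
    return Counter(row[1] for row in processlist_rows)

# Hypothetical rows, as a client library might return them:
rows = [
    (1, "app_user", "10.0.0.5:5100", "sales", "Sleep", 120, "", None),
    (2, "app_user", "10.0.0.5:5101", "sales", "Query", 0, "executing", "SELECT 1"),
    (3, "report",   "10.0.0.9:6000", "dw",    "Sleep", 300, "", None),
]
for user, n in connection_counts(rows).most_common():
    print(user, n)
```

The heaviest consumers print first, which usually points straight at the misbehaving application account.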

Regards
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com

Best practices for running Tableau on a VM (Virtual Machine)

Tableau Server is resource-intensive and latency-sensitive; if you run Tableau on a VM, it requires dedicated resources.

1.       Dedicated vCPU
100% dedicated vCPU allocation – should not be pooled or shared.
Core count is based on "physical" cores. Physical cores can represent actual server hardware or cores on a virtual machine (VM). Hyper-threading is ignored.
2.       Dedicated RAM
Fully (100%) pre-allocated, no pooling or sharing, no dynamic RAM.
Contiguous on the host server.
128 GB RAM per 8 cores is suggested for Enterprise deployments.
3.       Disk
Write speed is often the bottleneck; the faster, the better.
150 MB/s or less = bad
250 MB/s+ write = good
400 MB/s to 1 GB/s+ = great
250 MB/s write & 1 GB/s read = good performance
Tiered SAN: Tableau should be on a higher tier with better I/O than the typical storage tier.
No underlying network-attached storage (NAS) for disk.
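To sanity-check whether a VM's disk meets the write-speed targets above, a quick sequential-write measurement can help. This is only a rough Python sketch (use a dedicated benchmarking tool such as fio for real numbers; the file path is a scratch location):

```python
import os
import tempfile
import time

def write_throughput_mb_s(path, total_mb=64, block_kb=1024):
    """Sequentially write total_mb of zeros and return MB/s.
    A rough sanity check only, not a substitute for a real benchmark."""
    block = b"\0" * (block_kb * 1024)
    n_blocks = (total_mb * 1024) // block_kb
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        start = time.perf_counter()
        for _ in range(n_blocks):
            os.write(fd, block)
        os.fsync(fd)  # include the flush, not just page-cache writes
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
        os.remove(path)
    return total_mb / elapsed

tmp = os.path.join(tempfile.gettempdir(), "tableau_write_test.bin")
mbps = write_throughput_mb_s(tmp)
print(f"~{mbps:.0f} MB/s sequential write")
```

Compare the reported figure against the bad/good/great thresholds above; anything near 150 MB/s deserves a closer look at the storage tier.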

Always remember: poorly tuned VMs result in poor performance and wasted money on core purchases.

VM Tuning Tips Directly From VMware

- Performance Best Practices for VMware vSphere 5.5 (http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.5.pdf)
  See page 17 for recommended BIOS settings.
- Deploying Extremely Latency-Sensitive Applications in vSphere 5.5 (http://www.vmware.com/files/pdf/techpaper/latency-sensitive-perf-vsphere55.pdf)
  See page 15 for best practices for latency-sensitive applications.
- Upgrade the network adapter driver in the VM cluster.

Other things to consider

- Virus scanning can impact performance.
- Network latency between worker machines should be less than 10 ms.
- Use a single network hop between servers (contiguous network access).
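The sub-10 ms latency requirement between worker machines can be spot-checked by timing TCP connects. A Python sketch (the demo measures against a local listener; in practice, point host/port at another worker machine):

```python
import socket
import time

def tcp_latency_ms(host, port, samples=5):
    """Median TCP connect time in milliseconds, a rough proxy for
    round-trip network latency between two machines."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        times.append((time.perf_counter() - start) * 1000.0)
    times.sort()
    return times[len(times) // 2]

# Demo against a local listener; replace with a real worker host/port.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(16)
latency = tcp_latency_ms("127.0.0.1", srv.getsockname()[1])
print(f"{latency:.2f} ms")
srv.close()
```

Connect timing includes one round trip of the TCP handshake, so a median well under 10 ms between workers satisfies the guideline above.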

Regards
Satishbabu Gunukula, Oracle ACE
http://oracleracexpert.com

Monday, December 11, 2017

Oracle Database Upgrade from 10.2.0.4 to 11.2.0.4

Before you upgrade, make sure you have taken a backup of the database.

1. Run Pre-Upgrade Information Tool on 10.2.0.4 database

The script is located in the 11g home under rdbms/admin/utlu112i.sql; copy it to a staging directory and run it against the 10.2.0.4 database:
sqlplus '/ as sysdba'
spool pre_upg_11201_info.log
@/oracle/upgrade/utlu112i.sql
spool off
Make the changes suggested by the pre-upgrade tool.

Sample:
SQL> @/oracle/upgrade/utlu112i.sql
Oracle Database 11.2 Pre-Upgrade Information Tool    05-22-2013 19:23:20
**********************************************************************
Database:
**********************************************************************
--> name:          ORCLDB
--> version:       10.2.0.4.0
--> compatible:    10.2.0.4.0
--> blocksize:     8192
--> platform:      Linux x86 64-bit
--> timezone file: V4
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 624 MB
.... AUTOEXTEND additional space required: 144 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 16 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 547 MB
.... AUTOEXTEND additional space required: 97 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 61 MB
**********************************************************************
Flashback: OFF
**********************************************************************
**********************************************************************
Update Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
WARNING: --> "sga_target" needs to be increased to at least 672 MB
WARNING: --> "java_pool_size" needs to be increased to at least 128 MB
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.2 init.ora or spfile]
**********************************************************************
--> background_dump_dest         11.1       DEPRECATED   replaced by
"diagnostic_dest"
--> user_dump_dest               11.1       DEPRECATED   replaced by
"diagnostic_dest"
--> core_dump_dest               11.1       DEPRECATED   replaced by
"diagnostic_dest"
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views         [upgrade]  VALID
--> Oracle Packages and Types    [upgrade]  VALID
--> JServer JAVA Virtual Machine [upgrade]  VALID
--> Oracle XDK for Java          [upgrade]  VALID
--> Oracle Workspace Manager     [upgrade]  VALID
--> Oracle Text                  [upgrade]  VALID
--> Oracle XML Database          [upgrade]  VALID
--> Oracle Java Packages         [upgrade]  VALID
--> Oracle interMedia            [upgrade]  VALID
--> Expression Filter            [upgrade]  VALID
--> Rule Manager                 [upgrade]  VALID
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: --> Database is using a timezone file older than version 11.
.... After the release migration, it is recommended that DBMS_DST package
.... be used to upgrade the 10.2.0.4.0 database timezone version
.... to the latest version which comes with the new release.
WARNING: --> Database contains schemas with stale optimizer statistics.
.... Refer to the Upgrade Guide for instructions to update
.... schema statistics prior to upgrading the database.
.... Component Schemas with stale statistics:
....   SYS
....   CTXSYS
WARNING: --> Database contains INVALID objects prior to upgrade.
.... The list of invalid SYS/SYSTEM objects was written to
.... registry$sys_inv_objs.
.... The list of non-SYS/SYSTEM objects was written to
.... registry$nonsys_inv_objs.
.... Use utluiobj.sql after the upgrade to identify any new invalid
.... objects due to the upgrade.
.... USER PUBLIC has 4 INVALID objects.
.... USER JEFFC has 3 INVALID objects.
.... USER SYS has 2 INVALID objects.
WARNING: --> Database contains schemas with objects dependent on network
packages.
.... Refer to the Upgrade Guide for instructions to configure Network ACLs.
WARNING:--> recycle bin in use.
.... Your recycle bin turned on.
.... It is REQUIRED
.... that the recycle bin is empty prior to upgrading
.... your database.
.... The command:  PURGE DBA_RECYCLEBIN
.... must be executed immediately prior to executing your upgrade.
PL/SQL procedure successfully completed.
SQL> spool off;

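Spool output like the sample above can get long; a small script to pull out just the WARNING items makes the fix-up list easier to work through. A Python sketch (the sample excerpt is taken from the spool output above):

```python
import re

def preupgrade_warnings(spool_text):
    """Collect WARNING lines from a utlu112i.sql spool file.
    Matches both 'WARNING: -->' and 'WARNING:-->' forms seen in the output."""
    return [line.strip() for line in spool_text.splitlines()
            if re.match(r"\s*WARNING:?\s*-->", line)]

# Excerpt from the spool output shown above:
sample = '''
WARNING: --> "sga_target" needs to be increased to at least 672 MB
WARNING: --> "java_pool_size" needs to be increased to at least 128 MB
-- No renamed parameters found. No changes are required.
WARNING:--> recycle bin in use.
'''
for w in preupgrade_warnings(sample):
    print(w)
```

Each printed line is one item to address before starting the upgrade.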


2. Make the init parameter changes and copy the init.ora file to the 11g Oracle home.
Keep compatible = 10.2.0.4.0 to avoid unnecessary errors.

3. Run utlrp.sql twice to compile invalid objects
cd $ORACLE_HOME/rdbms/admin/
SQL> @utlrp.sql

4. Check the timezone file version; it will need to be upgraded after the database upgrade
SQL> SELECT version FROM v$timezone_file;
   VERSION
         4

5. Gather dictionary stats
SQL> EXECUTE dbms_stats.gather_dictionary_stats;
PL/SQL procedure successfully completed.

6. Purge the recycle bin
SQL> PURGE DBA_RECYCLEBIN;
DBA Recyclebin purged.

7. Put the database in noarchivelog mode
shut immediate
startup mount
alter database noarchivelog;

8. Shut down the database and listener
shut immediate
lsnrctl stop LISTENER_ORCLDB

9. Prepare the listener file, or add this database entry, in the 11gR2 home

10. Set the environment (we are using csh)

setenv ORACLE_BASE /oracle
setenv ORACLE_HOME /oracle/product/11.2.0/dbhome_1
setenv PATH $ORACLE_HOME/bin:$ORACLE_HOME/perl/bin:$PATH
setenv LD_LIBRARY_PATH $ORACLE_HOME/lib
setenv ORACLE_SID ORCLDB


11. Run the upgrade
Start the database in upgrade mode from the new 11g home, then run catupgrd.sql:
cd $ORACLE_HOME/rdbms/admin
sqlplus '/ as sysdba'
SQL> startup upgrade
SQL> spool /oracle/upgrade/upgrade_ORCLDB_11201.log
SQL> @catupgrd.sql

12. Post-upgrade
Restart the database and run the scripts below:
SQL> @utlu112s.sql
SQL> @catuppst.sql
SQL> @utlrp.sql
The upgrade is done.

13. Verify that all components are VALID and upgraded to 11.2.0.4, and capture the output for your records.
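When verifying components, the spooled output of a dba_registry query can be scanned for anything that is not VALID. A Python sketch (the spool format for a `SELECT comp_name, version, status FROM dba_registry;` listing is assumed; the sample rows are hypothetical):

```python
import re

def invalid_components(registry_text):
    """Flag components that are not VALID in a spooled dba_registry listing.
    Expects three columns (comp_name, version, status) separated by
    runs of two or more spaces, as SQL*Plus typically formats them."""
    bad = []
    for line in registry_text.splitlines():
        parts = re.split(r"\s{2,}", line.strip())
        if len(parts) == 3 and parts[2] not in ("VALID", "STATUS"):
            bad.append(tuple(parts))
    return bad

# Hypothetical spool excerpt:
sample = """\
COMP_NAME                      VERSION      STATUS
Oracle XML Database            11.2.0.4.0   VALID
Oracle Text                    11.2.0.4.0   INVALID
"""
for comp in invalid_components(sample):
    print(comp)
```

An empty result means every component reported VALID; anything printed needs utlrp.sql or component-specific follow-up.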

14. Upgrade the timezone
Check the existing timezone version using the commands below:
SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME LIKE 'DST_%' ORDER BY PROPERTY_NAME;
SQL> SELECT version FROM v$timezone_file;

conn / as sysdba
shutdown immediate;
startup upgrade;
set serveroutput on
SQL> purge dba_recyclebin;
DBA Recyclebin purged.
TRUNCATE TABLE SYS.DST$TRIGGER_TABLE;
TRUNCATE TABLE sys.dst$affected_tables;
TRUNCATE TABLE sys.dst$error_table;

SQL> EXEC DBMS_DST.BEGIN_UPGRADE(11);
An upgrade window has been successfully started.
PL/SQL procedure successfully completed.

-- check the upgrade state with this select
SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;

PROPERTY_NAME                  VALUE
------------------------------ --------------
DST_PRIMARY_TT_VERSION         11
DST_SECONDARY_TT_VERSION       1
DST_UPGRADE_STATE              UPGRADE

-- you can check which tables need to be updated using
SQL> SELECT OWNER, TABLE_NAME, UPGRADE_IN_PROGRESS FROM ALL_TSTZ_TABLES where UPGRADE_IN_PROGRESS='YES';
no rows selected

-- restart the database
shutdown immediate
startup

--Timezone should be upgraded to V11 now, run the following command to verify
SQL> select * from v$timezone_file;
FILENAME                VERSION
-------------------- ----------
timezlrg_11.dat              11

-- now upgrade the tables that need action
set serveroutput on
VAR numfail number
BEGIN
DBMS_DST.UPGRADE_DATABASE(:numfail,
parallel => TRUE,
log_errors => TRUE,
log_errors_table => 'SYS.DST$ERROR_TABLE',
log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
error_on_overlap_time => FALSE,
error_on_nonexisting_time => FALSE);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
END;
/

Failures:0
PL/SQL procedure successfully completed.

-- If there are no failures then end the upgrade
VAR fail number
BEGIN
DBMS_DST.END_UPGRADE(:fail);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
END;
/
An upgrade window has been successfully ended.
Failures:0
PL/SQL procedure successfully completed.

15. After the upgrade, take a database backup.

Refer to: Complete Checklist for Manual Upgrades to 11gR2 (Doc ID 837570.1)


HANA Index Server is not coming up

We saw very high memory usage; user queries were taking longer and memory was not being released. To release memory, we restarted HANA using the commands below:

sapcontrol -nr 00 -function StopSystem
sapcontrol -nr 00 -function StartSystem

The HANA database did not come online; the index server stayed in Initializing state even after waiting 25 minutes.

$ sapcontrol -nr 00 -function GetProcessList

11.12.2017 08:29:31
GetProcessList
OK
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GREEN, Running, 2017 12 04 21:58:59, 154:30:32, 5259
hdbcompileserver, HDB Compileserver, GREEN, Running, 2017 12 04 21:59:31, 154:30:00, 5490
hdbindexserver, HDB Indexserver, YELLOW, Initializing, 2017 12 04 21:59:33, 154:29:58, 5528
hdbnameserver, HDB Nameserver, GREEN, Running, 2017 12 04 21:59:03, 154:30:28, 5283
hdbpreprocessor, HDB Preprocessor, YELLOW, Initializing, 2017 12 04 21:59:31, 154:30:00, 5492
hdbwebdispatcher, HDB Web Dispatcher, YELLOW, Running, 2017 12 04 22:37:54, 153:51:37, 11186
hdbxsengine, HDB XSEngine, YELLOW, Initializing, 2017 12 04 21:59:33, 154:29:58, 5530
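A quick way to spot the stuck processes in output like the above is to filter GetProcessList lines whose dispstatus is not GREEN. A Python sketch, assuming the comma-separated field order shown above (name, description, dispstatus, textstatus, starttime, elapsedtime, pid):

```python
def stuck_processes(getprocesslist_output):
    """Return (name, dispstatus, textstatus) for every process whose
    dispstatus is YELLOW or RED in sapcontrol GetProcessList output."""
    stuck = []
    for line in getprocesslist_output.splitlines():
        fields = [f.strip() for f in line.split(",")]
        if len(fields) == 7 and fields[2] in ("YELLOW", "RED"):
            stuck.append((fields[0], fields[2], fields[3]))
    return stuck

# Lines taken from the GetProcessList output above:
sample = """\
name, description, dispstatus, textstatus, starttime, elapsedtime, pid
hdbdaemon, HDB Daemon, GREEN, Running, 2017 12 04 21:58:59, 154:30:32, 5259
hdbindexserver, HDB Indexserver, YELLOW, Initializing, 2017 12 04 21:59:33, 154:29:58, 5528
hdbpreprocessor, HDB Preprocessor, YELLOW, Initializing, 2017 12 04 21:59:31, 154:30:00, 5492
"""
for name, disp, text in stuck_processes(sample):
    print(name, disp, text)
```

Feeding the full output through this immediately highlights hdbindexserver as the service to investigate.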
Meanwhile, looking into the logs, I saw the errors below.

1. HANA out of memory, which was expected:

indexserver_scllpa0000012.30003.rtedump.20171204-173900.014195.compositelimit_oom.trc
Composite limit violation (OUT OF MEMORY) occurred.

2. The error below was new and had not been seen in the HANA logs before:

Unknown pending exception
Throw location of the exception unknown
[4216]{-1}[-1/-1] 2017-12-04 19:58:44.751308 d Crypto           RootKeyStoreAccessorBase.cpp(00093) : Caught RootKeyStoreRecordNotFoundException reading SSFS version /usr/sap/HANADBX/SYS/global/hdb/security/ssfs:HDB_SERVER/PERSISTENCE/ROOTKEY/VERSION
[4216]{-1}[-1/-1] 2017-12-04 19:58:44.751320 a Crypto           RootKeyStoreAccessorBase.cpp(00072) : ExcExit time_t Crypto::SecureStore::RootKeyAccess::RootKeyStoreAccessorBase::readValueForContentKey(const string&, Crypto::Buffer&)(1596usec)
exception  1: no.301103  (Crypto/Ssfs/RootKeyAccess/RawAccess/RawRootKeyStoreReaderWriter.cpp:52)
    RawRootKeyStoreReader::read: SSFS-4218: Record with key "HDB_SERVER/PERSISTENCE/ROOTKEY/VERSION" not found in secure storage <-- SSFS-4215: Data file "/usr/sap/HANADBX/SYS/global/hdb/security/ssfs/SSFS_HANADBX.DAT" exists, but does not contain the requested entry
exception throw location:
 1: 0x00007fd869fe4721 in Crypto::SecureStore::RootKeyAccess::RawAccess::RawRootKeyStoreReader::read(Crypto::Buffer&)+0x40 at RawRootKeyStoreReaderWriter.cpp:86 (libhdbbasement.so)
 2: 0x00007fd869fe2f06 in Crypto::SecureStore::RootKeyAccess::RawAccess::RawRootKeyStore<false, false>::read(ltt::basic_string<char, ltt::char_traits<char> > const&, ltt::basic_string<char,

3. We see HANA trying to recover, but it does not come online:

[4216]{-1}[-1/-1] 2017-12-04 19:59:59.100341 i Logger           PersistenceManagerImpl.cpp(06361) : Starting log replay at position 0x2b6f7dd9d02
[4216]{-1}[-1/-1] 2017-12-04 19:59:59.100362 i Logger           RecoveryHandlerImpl.cpp(01681) : Triggering recovery of remaining log
[4216]{-1}[-1/-1] 2017-12-04 19:59:59.100399 i Logger           RecoveryHandlerImpl.cpp(01741) : Finishing log recovery, waiting for in-process and in-load segments
[4607]{-1}[-1/-1] 2017-12-04 19:59:59.100685 i Logger           PersistenceManagerImpl.cpp(05084) : Found savepoint 253169 log record
[4607]{-1}[-1/-1] 2017-12-04 19:59:59.143111 i Logger           RecoveryHandlerImpl.cpp(02271) : Log recovery cannot expect any more data at position 0x2b6f7decec0, load states:
  - LogPartition[/hana/log/HANADBX/mnt00001/hdb00004/:0]: #segments 0/1 processed, 1/1 loaded, 1 total, last loaded LogSegment[0:0x2b6f7dd6d80/Writing,ts=2017-12-04 19:55:56.333631][GUID=5575D4BD-004F-20171119-045408-3E00001BA3/PrevGUID=5575D4BD-004F-20171119-045408-3E00001BA2/PersGUID=47A64522-004F-20150528-232756-3C50000001/RestoreGUID=74736948-6469-7547-0000-002000000000]
[4607]{-1}[-1/-1] 2017-12-04 19:59:59.143164 i Logger           RecoveryHandlerImpl.cpp(02553) : Recovery finished at log position 0x2b6f7decec0
[4597]{-1}[-1/-1] 2017-12-04 19:59:59.582532 i Logger           RecoveryHandlerImpl.cpp(03921) : Signaling finish wait barrier
[4216]{-1}[-1/-1] 2017-12-04 19:59:59.586365 i Logger           LogSegment.cpp(00672) : Closing open log segments after log recovery completed at position 0x2b6f7decec0
[4216]{-1}[-1/-1] 2017-12-04 19:59:59.586411 i Logger           LogSegment.cpp(00760) : Closing open on-disk segment LogSegment[0/18:0x2b6f7dd6d80-0x2b6f7decec0(0x585000B)/GUID=5575D4BD-004F-20171119-045408-3E00001BA3/PrevGUID=5575D4BD-004F-20171119-045408-3E00001BA2,TS=2017-12-04 19:55:56.333631,Hole=0x2b6f7dd6d80/Writing/0x0]@0x00007f43f1a385c0 after log recovery completed at position 0x2b6f7decec0
[4216]{-1}[-1/-1] 2017-12-04 19:59:59.587204 i Logger           LoggerImpl.cpp(01468) : Replayed 5009280B (4MB) of log in 1.522 seconds; 3.13877MB/s; max known TID=379629377
[4216]{-1}[-1/-1] 2017-12-04 19:59:59.587299 i Logger           LoggerImpl.cpp(01064) : Starting logger with 8 log buffers per partition, 1024KB each, segment size 1024MB, log mode normal
[4511]{-1}[-1/-1] 2017-12-04 20:00:39.628427 i Logger           RecoveryHandlerImpl.cpp(00701) : Termination of indoubt transactions. (raw page size in bytes to be still rolled back 136717467648 or garbage collected 0)
[4512]{-1}[-1/-1] 2017-12-04 20:01:29.664585 i Logger           RecoveryHandlerImpl.cpp(00701) : Termination of indoubt transactions. (raw page size in bytes to be still rolled back 133028642816 or garbage collected 0)

We had seen this type of behavior before, though not with the same errors, and restarting HANA had resolved it. We restarted HANA again to see whether it would help, but the index server remained in "Initializing" state and the log stopped at the entries below with no further updates.

[4216]{-1}[-1/-1] 2017-12-04 19:59:05.121382 i Logger           LoggerImpl.cpp(00525) : Setting log segment backup timeout to 900s, original value was 0s
[4216]{-1}[-1/-1] 2017-12-04 19:59:05.190851 i PersistenceManag PersistenceManagerImpl.cpp(05984) : BACKUP DATA seems to have been executed, database should be recoverable from backup
[4216]{-1}[-1/-1] 2017-12-04 19:59:05.190879 i Logger           PersistenceManagerImpl.cpp(06226) : Deactivating REDO logging for log replay
[4216]{-1}[-1/-1] 2017-12-04 19:59:05.190881 i Logger           PersistenceManagerImpl.cpp(06237) : Start termination of rollback(s) open in restart/backup savepoint
[4216]{-1}[-1/-1] 2017-12-04 19:59:32.523954 i Logger           PersistenceManagerImpl.cpp(06245) : Termination of rollback(s) open in restart/backup savepoint finished in 27.2676 seconds;

[4607]{-1}[-1/-1] 2017-12-04 19:59:59.143164 i Logger           RecoveryHandlerImpl.cpp(02553) : Recovery finished at log position 0x2b6f7decec0
[4597]{-1}[-1/-1] 2017-12-04 19:59:59.582532 i Logger           RecoveryHandlerImpl.cpp(03921) : Signaling finish wait barrier
[4216]{-1}[-1/-1] 2017-12-04 19:59:59.586365 i Logger           LogSegment.cpp(00672) : Closing open log segments after log recovery completed at position 0x2b6f7decec0
[4216]{-1}[-1/-1] 2017-12-04 19:59:59.586411 i Logger           LogSegment.cpp(00760) : Closing open on-disk segment LogSegment[0/18:0x2b6f7dd6d80-0x2b6f7decec0(0x585000B)/GUID=5575D4BD-004F-20171119-045408-3E00001BA3/PrevGUID=5575D4BD-004F-20171119-045408-3E00001BA2,TS=2017-12-04 19:55:56.333631,Hole=0x2b6f7dd6d80/Writing/0x0]@0x00007f43f1a385c0 after log recovery completed at position 0x2b6f7decec0
[4216]{-1}[-1/-1] 2017-12-04 19:59:59.587204 i Logger           LoggerImpl.cpp(01468) : Replayed 5009280B (4MB) of log in 1.522 seconds; 3.13877MB/s; max known TID=379629377
[4216]{-1}[-1/-1] 2017-12-04 19:59:59.587299 i Logger           LoggerImpl.cpp(01064) : Starting logger with 8 log buffers per partition, 1024KB each, segment size 1024MB, log mode normal
[4511]{-1}[-1/-1] 2017-12-04 20:00:39.628427 i Logger           RecoveryHandlerImpl.cpp(00701) : Termination of indoubt transactions. (raw page size in bytes to be still rolled back 136717467648 or garbage collected 0)


After reviewing the logs, I found that many of the errors were distracting from the real problem; it looked like an issue with the log segments. Normally you see this behavior when the log file system or the log backup file system is full.

I verified all locations and everything looked good except the log backup location. When I tried to list or view the log backups, the command simply hung, indicating an issue with the backup location.

We unmounted and remounted the file system, renamed the old backup location, and created a new one. Looking into the logs afterwards, we saw HANA coming online.

As you can see in the log below, HANA was simply hanging, and after the issue was fixed it came online immediately.

[5699]{-1}[-1/-1] 2017-12-04 22:07:28.699548 i Basis            Helper.cpp(00514) : Using 'x64_64 ABI unwind' for stack tracing
[5576]{-1}[-1/-1] 2017-12-04 22:37:20.216535 i Logger           LogSegment.cpp(00672) : Closing open log segments after log recovery completed at position 0x2b6f7decf00
[5576]{-1}[-1/-1] 2017-12-04 22:37:20.216830 i Logger           LogSegment.cpp(00760) : Closing open on-disk segment LogSegment[0/32:0x2b6f7decec0-

Note that during index server initialization, HANA needs access to all required resources for rollback, recovery, backup, etc. The log backup mount point was timing out and was not accessible from the HANA side, which caused the issue.
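A hung backup mount like this can be detected before a restart attempt by listing the directory with a timeout, since a plain listing on a dead NFS mount blocks forever. A Python sketch (the path shown is a placeholder; point it at your log backup location):

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def mount_responds(path, timeout_s=5):
    """Return True if listing `path` completes within timeout_s seconds.

    A hung NFS/backup mount blocks os.listdir indefinitely, so the
    listing runs in a worker thread that is abandoned on timeout."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        pool.submit(os.listdir, path).result(timeout=timeout_s)
        return True
    except (FutureTimeout, OSError):
        return False
    finally:
        pool.shutdown(wait=False)  # do not block on a hung worker thread

# Placeholder path: check the actual log backup mount in practice.
print(mount_responds(tempfile.gettempdir()))
```

Running this against the log backup mount before restarting HANA would have shown the mount was unresponsive and saved a restart cycle.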