Friday, February 15, 2019

oratop utility for oracle database performance tuning




[oracle@pri ~]$ ./oratop -h
oratop: Release 14.1.2
Usage:
         oratop [ [Options] [Logon] ]

         Logon:
                {username[/password][@connect_identifier] | / }
                [AS {SYSDBA|SYSOPER}]

                connect_identifier:
                     o Net Service Name, (TNS) or
                     o Easy Connect (host[:port]/[service_name])
         Options:
             -d : real-time (RT) wait events, section 3 (default is Cumulative)
             -k : FILE#:BLOCK#, section 4 (default is EVENT/LATCH)
             -m : MODULE/ACTION, section 4 (default is USERNAME/PROGRAM)
             -s : SQL mode, section 4 (default is process mode)
             -c : database service mode (default is connect string)
             -f : detailed format, 132 columns (default: standard, 80 columns)
             -b : batch mode (default is text-based user interface)
             -n : maximum number of iterations (requires number)
             -i : interval delay, requires value in seconds (default: 5s)
             -v : oratop release version number
             -h : this help

[oracle@pri ~]$
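Combining the options above, a single snapshot can also be captured non-interactively, which is handy for scripted collection (a minimal sketch using only the documented flags; the output file name here is arbitrary):

[oracle@pri ~]$ ./oratop -f -b -n 1 -i 10 / as sysdba > /tmp/oratop_snapshot.txt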


oratop is linked against the 12.1 client libraries, so running it from an 18c home initially fails to load them:

[oracle@pri ~]$ ./oratop / as sysdba
./oratop: error while loading shared libraries: libclntsh.so.12.1: cannot open shared object file: No such file or directory
[oracle@pri ~]$

Creating 12.1-named symlinks to the 18c client libraries in the database home resolves the error:

ln -s /u01/app/oracle/product/18.0.0/db/lib/libclntshcore.so.18.1 /u01/app/oracle/product/18.0.0/db/lib/libclntshcore.so.12.1
ln -s /u01/app/oracle/product/18.0.0/db/lib/libclntsh.so.18.1 /u01/app/oracle/product/18.0.0/db/lib/libclntsh.so.12.1
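As a quick sanity check (assuming LD_LIBRARY_PATH already includes the 18c lib directory), ldd should now resolve the 12.1-named libraries through the new symlinks:

export LD_LIBRARY_PATH=/u01/app/oracle/product/18.0.0/db/lib:$LD_LIBRARY_PATH
ldd ./oratop.RDBMS_11.2_LINUX_X64 | grep libclntsh
# both libclntsh.so.12.1 and libclntshcore.so.12.1 should point into the 18.0.0/db/lib directory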



[oracle@pri ~]$ chmod 777 oratop.RDBMS_11.2_LINUX_X64
[oracle@pri ~]$ ./oratop.RDBMS_11.2_LINUX_X64 / as sysdba

oratop: Release 14.1.2 Production on Fri Feb 15 18:32:24 2019
Copyright (c) 2011, Oracle.  All rights reserved.

Connecting ...
Oracle 18c - IND 18:32:27 up: 0.4h,   1 ins,    0 sn,   0 us, 1.7G mt, 10.0% db
ID %CPU LOAD %DCU   AAS  ASC  ASI  ASW  AST IOPS %FR   PGA UTPS UCPS SSRT  %DBT
 1    0    0    0     9    0    0    0    0    3  15  254M    0    0    0   100

EVENT (C)                        TOT WAITS   TIME(s)  AVG_MS  PCT    WAIT_CLASS
db file sequential read               6389        96    15.1   56      User I/O
DB CPU                                            36           21
db file scattered read                 360        15    43.3    9      User I/O
external table read                      1        15 15072.8    9      User I/O
control file parallel write            577         9    17.2    6    System I/O

ID   SID     SPID USR PROG S  PGA SQLID/BLOCKER OPN  E/T STA STE EVENT/*LA  W/T
 1    60     3836 SYS orat D 5.1M 7qj5jsdnpsn1a SEL    0 ACT CPU cpu runqu   8u


oratop - Utility for Near Real-time Monitoring of Databases, RAC and Single Instance (Doc ID 1500864.1)

Saturday, February 2, 2019


How to identify the Master Node in RAC
============================================

In RAC, only the master node is responsible for taking backups of the OCR.

[grid@rac1 bin]$ ./oclumon manage -get MASTER

Master = rac1
[grid@rac1 bin]$


[root@rac1 bin]# ./ocrconfig -manualbackup

rac2     2019/02/02 22:49:23     +DATA:/rac/OCRBACKUP/backup_20190202_224923.ocr.327.999211765     70732493
[root@rac1 bin]#

[root@rac1 bin]# ./ocrconfig -showbackup

rac2     2018/12/12 20:09:04     +DATA:/rac/OCRBACKUP/backup00.ocr.283.994709331     70732493

rac2     2018/12/12 20:09:04     +DATA:/rac/OCRBACKUP/day.ocr.284.994709345     70732493

rac2     2018/12/12 20:09:04     +DATA:/rac/OCRBACKUP/week.ocr.285.994709347     70732493

rac2     2019/02/02 22:49:23     +DATA:/rac/OCRBACKUP/backup_20190202_224923.ocr.327.999211765     70732493
[root@rac1 bin]#
[root@rac1 bin]# ./ocrconfig -showbackup auto

rac2     2018/12/12 20:09:04     +DATA:/rac/OCRBACKUP/backup00.ocr.283.994709331     70732493

rac2     2018/12/12 20:09:04     +DATA:/rac/OCRBACKUP/day.ocr.284.994709345     70732493

rac2     2018/12/12 20:09:04     +DATA:/rac/OCRBACKUP/week.ocr.285.994709347     70732493
[root@rac1 bin]#
[root@rac1 bin]# ./ocrconfig -showbackup manual

rac2     2019/02/02 22:49:23     +DATA:/rac/OCRBACKUP/backup_20190202_224923.ocr.327.999211765     70732493
[root@rac1 bin]#
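If the default location is not suitable, the automatic backup destination can be changed with ocrconfig -backuploc (run as root; shown here pointing at the same diskgroup used above):

[root@rac1 bin]# ./ocrconfig -backuploc +DATA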



[grid@rac1 trace]$ cat ocssd.trc |grep 'master node'
2019-01-21 12:39:42.772 :    CSSD:2787034880: clssgmCMReconfig: GM master node for incarnation 443540379 is node rac1, number 1, with birth incarnation 443540379, the old master is 65535 and new master is 1
2019-01-21 12:39:42.976 :    CSSD:2787034880: clssgmCMReconfig: reconfiguration successful, incarnation 443540379 with 2 nodes, local node number 1, master node rac1, number 1
2019-01-21 12:39:49.250 :    CSSD:2788611840: clssgmCMReconfig: GM master node for incarnation 443540380 is node rac1, number 1, with birth incarnation 443540379, the old master is 1 and new master is 1
2019-01-21 12:39:49.255 :    CSSD:2788611840: clssgmCMReconfig: reconfiguration successful, incarnation 443540380 with 1 nodes, local node number 1, master node rac1, number 1
2019-01-21 12:48:05.570 :    CSSD:832739072: clssgmCMReconfig: GM master node for incarnation 443540883 is node rac1, number 1, with birth incarnation 443540883, the old master is 65535 and new master is 1
2019-01-21 12:48:05.571 :    CSSD:832739072: clssgmCMReconfig: reconfiguration successful, incarnation 443540883 with 1 nodes, local node number 1, master node rac1, number 1
2019-01-21 13:13:57.075 :    CSSD:834316032: clssgmCMReconfig: GM master node for incarnation 443540884 is node rac1, number 1, with birth incarnation 443540883, the old master is 1 and new master is 1
2019-01-21 13:13:59.038 :    CSSD:834316032: clssgmCMReconfig: reconfiguration successful, incarnation 443540884 with 2 nodes, local node number 1, master node rac1, number 1
2019-01-22 23:46:52.347 :    CSSD:2292303616: clssgmCMReconfig: GM master node for incarnation 443666809 is node rac1, number 1, with birth incarnation 443666809, the old master is 65535 and new master is 1
2019-01-22 23:46:52.350 :    CSSD:2292303616: clssgmCMReconfig: reconfiguration successful, incarnation 443666809 with 1 nodes, local node number 1, master node rac1, number 1
2019-02-02 21:20:12.712 :    CSSD:4233729792: clssgmCMReconfig: GM master node for incarnation 444608391 is node <null>, number 2, with birth incarnation 444608390, the old master is 65535 and new master is 2
2019-02-02 21:20:13.044 :    CSSD:4233729792: clssgmCMReconfig: reconfiguration successful, incarnation 444608391 with 2 nodes, local node number 1, master node rac2, number 2
[grid@rac1 trace]$
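Since only the most recent reconfiguration reflects the current master, the last matching line can be pulled out directly:

[grid@rac1 trace]$ grep 'master node' ocssd.trc | tail -1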



[grid@rac1 trace]$ cat crsd.trc |grep 'master'
2019-02-02 21:21:34.370 :  OCRMAS:1811937024: th_master_check_hashids_helper: Comparing device hash IDs between local and master.
2019-02-02 21:21:34.370 :  OCRMAS:1811937024: th_master_check_hashids_helper: Local dev (987031272, 1028247821, 0, 0, 0)
2019-02-02 21:21:34.370 :  OCRMAS:1811937024: th_master_check_hashids_helper: Master dev (987031272, 1028247821, 0, 0, 0)
2019-02-02 21:21:34.370 :  OCRMAS:1811937024: th_connect_master: Using GIPC type to connect
2019-02-02 21:21:34.370 :  OCRMAS:1811937024: th_connect_master:10: Master host name [rac2]
2019-02-02 21:21:34.370 :  OCRMAS:1811937024: proath_connect_master: Attempting to connect to master at address [rac2:1494-d8b5-fb40-8a6a]
2019-02-02 21:21:34.826 :  OCRMAS:1811937024: proath_master: SUCCESSFULLY CONNECTED TO THE MASTER
2019-02-02 21:21:34.826 :  OCRMAS:1811937024: th_master: NEW OCR MASTER IS 2
2019-02-02 21:21:34.829 :  OCRSRV:2758700032: th_reg_master_change: Master change callback registered. Client:[1]
2019-02-02 21:21:34.829 :  OCRSRV:2758700032: th_reg_master_change: Notified master change
2019-02-02 21:21:34.829 :  OCRAPI:2758700032: a_reg_master_change: Registered master change callback. flags:[4]
2019-02-02 21:21:34.842 : CRSMAIN:2758700032:  Registering for mastership change events...
2019-02-02 21:21:34.842 :  OCRSRV:2758700032: th_reg_master_change: Master change callback registered. Client:[0]
2019-02-02 21:21:34.842 :  OCRSRV:2758700032: th_reg_master_change: Notified master change
2019-02-02 21:21:34.843 :  OCRAPI:2758700032: a_reg_master_change: Registered master change callback. flags:[2]
2019-02-02 21:24:24.651 :UiServer:1595901696: {1:60436:2} Master change notification has received. New master: 2
[grid@rac1 trace]$
[grid@rac1 trace]$

[grid@rac1 trace]$ pwd
/u01/app/grid/product/18.0.0.0/grid_base/diag/crs/rac1/crs/trace
[grid@rac1 trace]$



Oracle Cluster Registry (OCR) and Oracle Local Registry (OLR) - OCRCHECK: Oracle Cluster Registry Check utility in RAC



ocrcheck command in RAC
==============================



[root@rac1 bin]# ./ocrcheck -help
Name:
        ocrcheck - Displays health of Oracle Cluster/Local Registry.

Synopsis:
        ocrcheck [-config | -backupfile <backupfilename>] [-details] [-local]

  -config       Displays the configured locations of the Oracle Cluster Registry.
                This can be used with the -local option to display the configured
                location of the Oracle Local Registry
  -details      Displays detailed configuration information.
  -local        The operation will be performed on the Oracle Local Registry.
  -backupfile <backupfilename>  The operation will be performed on the backup file.

Notes:
        * This command for Oracle Cluster Registry is not supported from a Leaf node.

[root@rac1 bin]#

OCR (Oracle Cluster Registry) information
================================================

The Oracle Clusterware (the Grid Infrastructure stack since 11gR2) uses the OCR to manage resources and node membership information. The OCR holds the following information, shared across all nodes in the cluster (see the ocrdump sketch after this list):

ASM diskgroups, volumes, filesystems, and instances
RAC databases and instances information
SCAN listeners and local listeners
SCAN VIPs and Local VIPs
Nodes and node applications

User defined resources
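The individual keys behind these categories can be inspected with the ocrdump utility (a quick sketch; run as root from the Grid home bin directory, and the dump file name is arbitrary):

[root@rac1 bin]# ./ocrdump /tmp/ocr_dump.txt
[root@rac1 bin]# grep -i listener /tmp/ocr_dump.txt | head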


[root@rac1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84796
         Available space (kbytes) :     406888
         ID                       : 1718087688
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@rac1 bin]#




[root@rac1 bin]#
[root@rac1 bin]# ./ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :      +DATA
[root@rac1 bin]# ./ocrcheck -details
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84796
         Available space (kbytes) :     406888
         ID                       : 1718087688
         Device/File Name         : +DATA/rac/OCRFILE/registry.255.993952649
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@rac1 bin]#


[root@rac1 bin]# cat /etc/oracle/ocr.loc
#Device/file +DATA getting replaced by device +DATA/rac/OCRFILE/registry.255.993952649
ocrconfig_loc=+DATA/rac/OCRFILE/registry.255.993952649
local_only=false[root@rac1 bin]#

[root@rac1 bin]#


OLR (Oracle Local Registry) Information
=========================================
The OLR contains node-specific information required by OHASD to start the cluster stack on that node. Every node has its own dedicated OLR file; it is not shared between the nodes.
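The OLR contents can be dumped with the same ocrdump utility by adding the -local flag (a sketch; the dump file name is arbitrary):

[root@rac1 bin]# ./ocrdump -local /tmp/olr_dump.txt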


[root@rac1 bin]# ./ocrcheck -local
Status of Oracle Local Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      83164
         Available space (kbytes) :     408520
         ID                       :  161118435
         Device/File Name         : /u01/app/grid/product/18.0.0.0/grid/cdata/rac1.olr
                                    Device/File integrity check succeeded

         Local registry integrity check succeeded

         Logical corruption check succeeded

[root@rac1 bin]#


[root@rac1 bin]#
[root@rac1 bin]# ./ocrcheck  -local -details
Status of Oracle Local Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      83164
         Available space (kbytes) :     408520
         ID                       :  161118435
         Device/File Name         : /u01/app/grid/product/18.0.0.0/grid/cdata/rac1.olr
                                    Device/File integrity check succeeded

         Local registry integrity check succeeded

         Logical corruption check succeeded

[root@rac1 bin]#

[root@rac1 bin]# cat /etc/oracle/olr.loc
olrconfig_loc=/u01/app/grid/product/18.0.0.0/grid/cdata/rac1.olr
crs_home=/u01/app/grid/product/18.0.0.0/grid
orplus_config=FALSE

[root@rac1 bin]#
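Like the OCR, the OLR can be backed up manually with ocrconfig by adding the -local flag (run as root; the backup is typically written under the Grid home's cdata directory):

[root@rac1 bin]# ./ocrconfig -local -manualbackup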

Friday, January 25, 2019

DB instance terminated after killing the VKTM background process





[oracle@localhost ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jan 25 22:49:39 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1526725696 bytes
Fixed Size                  8657984 bytes
Variable Size             503316480 bytes
Database Buffers         1006632960 bytes
Redo Buffers                8118272 bytes
Database mounted.
Database opened.
SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0

[oracle@localhost ~]$
[oracle@localhost ~]$ ps -ef | grep -i ora
root      2715  2241  0 22:49 ?        00:00:00 sshd: oracle [priv]
oracle    2725  2715  0 22:49 ?        00:00:00 sshd: oracle@pts/0
oracle    2726  2725  0 22:49 pts/0    00:00:00 -bash
root      2773  2241  0 22:49 ?        00:00:00 sshd: oracle [priv]
oracle    2776  2773  0 22:49 ?        00:00:00 sshd: oracle@pts/1
oracle    2777  2776  0 22:49 pts/1    00:00:00 -bash
oracle    2835  2777  0 22:50 pts/1    00:00:00 tail -500f alert_IND.log
oracle    2839     1  0 22:50 ?        00:00:00 ora_pmon_IND
oracle    2841     1  0 22:50 ?        00:00:00 ora_clmn_IND
oracle    2843     1  0 22:50 ?        00:00:00 ora_psp0_IND
oracle    2845     1  2 22:50 ?        00:00:01 ora_vktm_IND
oracle    2849     1  0 22:50 ?        00:00:00 ora_gen0_IND
oracle    2851     1  0 22:50 ?        00:00:00 ora_mman_IND
oracle    2855     1  0 22:50 ?        00:00:00 ora_gen1_IND
oracle    2858     1  0 22:50 ?        00:00:00 ora_diag_IND
oracle    2860     1  0 22:50 ?        00:00:00 ora_ofsd_IND
oracle    2863     1  0 22:50 ?        00:00:00 ora_dbrm_IND
oracle    2865     1  0 22:50 ?        00:00:00 ora_vkrm_IND
oracle    2867     1  0 22:50 ?        00:00:00 ora_svcb_IND
oracle    2869     1  0 22:50 ?        00:00:00 ora_pman_IND
oracle    2871     1  0 22:50 ?        00:00:00 ora_dia0_IND
oracle    2873     1  0 22:50 ?        00:00:00 ora_dbw0_IND
oracle    2875     1  0 22:50 ?        00:00:00 ora_lgwr_IND
oracle    2877     1  0 22:50 ?        00:00:00 ora_ckpt_IND
oracle    2879     1  0 22:50 ?        00:00:00 ora_smon_IND
oracle    2881     1  0 22:50 ?        00:00:00 ora_smco_IND
oracle    2883     1  0 22:50 ?        00:00:00 ora_reco_IND
oracle    2885     1  0 22:50 ?        00:00:00 ora_w000_IND
oracle    2887     1  0 22:50 ?        00:00:00 ora_lreg_IND
oracle    2889     1  0 22:50 ?        00:00:00 ora_w001_IND
oracle    2891     1  0 22:50 ?        00:00:00 ora_pxmn_IND
oracle    2895     1  2 22:50 ?        00:00:02 ora_mmon_IND
oracle    2897     1  0 22:50 ?        00:00:00 ora_mmnl_IND
oracle    2899     1  0 22:50 ?        00:00:00 ora_d000_IND
oracle    2901     1  0 22:50 ?        00:00:00 ora_s000_IND
oracle    2903     1  0 22:50 ?        00:00:00 ora_tmon_IND
oracle    2906     1  0 22:50 ?        00:00:00 ora_m000_IND
oracle    2921     1  0 22:51 ?        00:00:00 ora_tt00_IND
oracle    2923     1  0 22:51 ?        00:00:00 ora_tt01_IND
oracle    2925     1  0 22:51 ?        00:00:00 ora_tt02_IND
oracle    2927     1  0 22:51 ?        00:00:00 ora_w002_IND
oracle    2929     1  0 22:51 ?        00:00:00 ora_w003_IND
oracle    2931     1  0 22:51 ?        00:00:00 ora_aqpc_IND
oracle    2933     1  0 22:51 ?        00:00:00 ora_w004_IND
oracle    2937     1  0 22:51 ?        00:00:00 ora_p000_IND
oracle    2939     1  0 22:51 ?        00:00:00 ora_p001_IND
oracle    2941     1  0 22:51 ?        00:00:00 ora_qm02_IND
oracle    2943     1  0 22:51 ?        00:00:00 ora_qm03_IND
oracle    2945     1  0 22:51 ?        00:00:00 ora_q002_IND
oracle    2947     1  0 22:51 ?        00:00:00 ora_q003_IND
oracle    2949     1  0 22:51 ?        00:00:00 ora_q004_IND
oracle    2951     1  0 22:51 ?        00:00:00 ora_q005_IND
oracle    2953     1  5 22:51 ?        00:00:01 ora_cjq0_IND
oracle    2955     1  0 22:51 ?        00:00:00 ora_s001_IND
oracle    2957     1  0 22:51 ?        00:00:00 /bin/sh /u01/app/oracle/product/18.0.0/db/QOpatch/qopiprep.bat /u01/app/oracle/product/18.0.0/db/QOpatch/qopiprep.bat
oracle    2965  2957  0 22:51 ?        00:00:00 /bin/sh /u01/app/oracle/product/18.0.0/db/OPatch/opatch lsinventory -customLogDir /u01/app/oracle/product/18.0.0/db/rdbms/log -xml /u01/app/oracle/product/18.0.0/db/rdbms/log/xml_file_IND_2957.xml -retry 0 -invPtrLoc /u01/app/oracle/product/18.0.0/db/oraInst.loc
oracle    3094  2965  8 22:51 ?        00:00:00 /u01/app/oracle/product/18.0.0/db/OPatch/jre/bin/java -d64 -Xmx3072m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/u01/app/oracle/product/18.0.0/db/rdbms/log/opatch -cp /u01/app/oracle/product/18.0.0/db/oui/jlib/OraInstaller.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/OraInstallerNet.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/OraPrereq.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/share.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/orai18n-mapping.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/xmlparserv2.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/emCfg.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/ojmisc.jar:/u01/app/oracle/product/18.0.0/db/OPatch/jlib/oracle.opatch.classpath.jar:/u01/app/oracle/product/18.0.0/db/OPatch/auto/core/modules/features/oracle.glcm.oplan.core.classpath.jar:/u01/app/oracle/product/18.0.0/db/OPatch/auto/core/modules/features/oracle.glcm.osys.core.classpath.jar:/u01/app/oracle/product/18.0.0/db/OPatch/ocm/lib/emocmclnt.jar -DOPatch.ORACLE_HOME=/u01/app/oracle/product/18.0.0/db -DOPatch.DEBUG=false -DOPatch.MAKE=false -DOPatch.RUNNING_DIR=/u01/app/oracle/product/18.0.0/db/OPatch -DOPatch.MW_HOME= -DOPatch.WL_HOME= -DOPatch.COMMON_COMPONENTS_HOME= -DOPatch.OUI_LOCATION=/u01/app/oracle/product/18.0.0/db/oui -DOPatch.FMW_COMPONENT_HOME= -DOPatch.OPATCH_CLASSPATH=/u01/app/oracle/product/18.0.0/db/JRE:/u01/app/oracle/product/18.0.0/db/jlib:/u01/app/oracle/product/18.0.0/db/rdbms/jlib -DOPatch.WEBLOGIC_CLASSPATH= -DOPatch.SKIP_OUI_VERSION_CHECK= -DOPatch.PARALLEL_ON_FMW_OH= oracle/opatch/OPatch lsinventory -customLogDir /u01/app/oracle/product/18.0.0/db/rdbms/log -xml /u01/app/oracle/product/18.0.0/db/rdbms/log/xml_file_IND_2957.xml -retry 0 -invPtrLoc /u01/app/oracle/product/18.0.0/db/oraInst.loc
oracle    3105     1  2 22:51 ?        00:00:00 ora_j000_IND
oracle    3107     1  6 22:51 ?        00:00:00 ora_j001_IND
oracle    3109     1  1 22:51 ?        00:00:00 ora_j002_IND
oracle    3111     1  0 22:51 ?        00:00:00 ora_j003_IND
oracle    3113     1  1 22:51 ?        00:00:00 ora_j004_IND
oracle    3115     1 11 22:51 ?        00:00:00 ora_j005_IND
oracle    3117     1  0 22:51 ?        00:00:00 ora_j006_IND
oracle    3119     1  1 22:51 ?        00:00:00 ora_j007_IND
oracle    3121     1  1 22:51 ?        00:00:00 ora_j008_IND
oracle    3123     1  0 22:51 ?        00:00:00 ora_j009_IND
oracle    3125     1  5 22:51 ?        00:00:00 ora_j00a_IND
oracle    3127     1  0 22:51 ?        00:00:00 ora_j00b_IND
oracle    3129     1  0 22:51 ?        00:00:00 ora_j00c_IND
oracle    3131     1  0 22:51 ?        00:00:00 ora_j00d_IND
oracle    3133     1  0 22:51 ?        00:00:00 ora_q006_IND
oracle    3135     1  1 22:51 ?        00:00:00 ora_q007_IND
oracle    3137     1  1 22:51 ?        00:00:00 ora_q008_IND
oracle    3139     1  1 22:51 ?        00:00:00 ora_q009_IND
oracle    3141     1  1 22:51 ?        00:00:00 ora_q00a_IND
oracle    3143     1  1 22:51 ?        00:00:00 ora_q00b_IND
oracle    3145     1  1 22:51 ?        00:00:00 ora_q00c_IND
oracle    3147     1  1 22:51 ?        00:00:00 ora_q00d_IND
oracle    3149     1  1 22:51 ?        00:00:00 ora_q00e_IND
oracle    3151     1  1 22:51 ?        00:00:00 ora_q00f_IND
oracle    3153     1  0 22:51 ?        00:00:00 ora_q00g_IND
oracle    3155     1  1 22:51 ?        00:00:00 ora_q00h_IND
oracle    3157     1  1 22:51 ?        00:00:00 ora_q00i_IND
oracle    3159     1  1 22:51 ?        00:00:00 ora_q00j_IND
oracle    3161     1  1 22:51 ?        00:00:00 ora_q00k_IND
oracle    3163     1  1 22:51 ?        00:00:00 ora_q00l_IND
oracle    3165     1  1 22:51 ?        00:00:00 ora_q00m_IND
oracle    3166  2726  0 22:51 pts/0    00:00:00 ps -ef
oracle    3167  2726  0 22:51 pts/0    00:00:00 grep -i ora
[oracle@localhost ~]$
[oracle@localhost ~]$
[oracle@localhost ~]$ ps -ef | grep -i vktm
root      2632     1  0 22:46 ?        00:00:00 /usr/libexec/devkit-power-daemon
oracle    2845     1  2 22:50 ?        00:00:01 ora_vktm_IND
oracle    3175  2726  0 22:52 pts/0    00:00:00 grep -i vktm
[oracle@localhost ~]$
[oracle@localhost ~]$
[oracle@localhost ~]$
[oracle@localhost ~]$
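Before killing it, the VKTM OS PID can also be cross-checked from inside the database (PNAME and SPID are standard v$process columns):

SQL> SELECT pname, spid FROM v$process WHERE pname = 'VKTM';
-- should return SPID 2845, matching the ps output above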
[oracle@localhost ~]$ kill -9 2845
[oracle@localhost ~]$ ps -ef | grep -i pmon
oracle    3234  2726  0 22:53 pts/0    00:00:00 grep -i pmon
[oracle@localhost ~]$
[oracle@localhost ~]$


The alert log confirms that PMON terminated the instance after detecting the death of VKTM:


========================================================================

2019-01-25T22:52:57.882320+05:30
PMON (ospid: 2839): terminating the instance due to ORA error 56703
Cause - 'Instance is being terminated due to fatal process death (pid: 5, ospid: 2845, VKTM)'
2019-01-25T22:52:57.935374+05:30
System state dump requested by (instance=1, osid=2839 (PMON)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/ind/IND/trace/IND_diag_2858_20190125225257.trc
2019-01-25T22:52:59.755634+05:30
Dumping diagnostic data in directory=[cdmp_20190125225257], requested by (instance=1, osid=2839 (PMON)), summary=[abnormal instance termination].
2019-01-25T22:53:01.126079+05:30
Instance terminated by PMON, pid = 2839
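
VKTM (Virtual Keeper of Time) is one of the fatal background processes, so PMON brings the whole instance down as soon as it dies, as the alert log above shows; the database then has to be restarted with STARTUP. Which background processes are currently running can be checked beforehand (v$bgprocess is the standard view; PADDR <> '00' filters to the active ones):

SQL> SELECT name, description FROM v$bgprocess WHERE paddr <> '00' ORDER BY name;
-- VKTM is listed among the running background processes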