Saturday, February 2, 2019

Oracle Cluster Registry (OCR) and Oracle Local Registry (OLR) - OCRCHECK: Oracle Cluster Registry Check utility in RAC



ocrcheck command in RAC
==============================



[root@rac1 bin]# ./ocrcheck -help
Name:
        ocrcheck - Displays health of Oracle Cluster/Local Registry.

Synopsis:
        ocrcheck [-config | -backupfile <backupfilename>] [-details] [-local]

  -config       Displays the configured locations of the Oracle Cluster Registry.
                This can be used with the -local option to display the configured
                location of the Oracle Local Registry
  -details      Displays detailed configuration information.
  -local        The operation will be performed on the Oracle Local Registry.
  -backupfile <backupfilename>  The operation will be performed on the backup file.

Notes:
        * This command for Oracle Cluster Registry is not supported from a Leaf node.

[root@rac1 bin]#

OCR (Oracle Cluster Registry) information
================================================

The Oracle Clusterware stack (the Oracle Grid Infrastructure, GI, stack from 11gR2 onward) uses the OCR to manage cluster resources and node membership information.
It stores the following information, shared across all nodes in the cluster:

ASM disk groups, volumes, file systems, and instances
RAC database and instance information
SCAN listeners and local listeners
SCAN VIPs and local VIPs
Nodes and node applications
User-defined resources
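Since ocrcheck reports total, used, and available space, its output lends itself to simple monitoring. A minimal sketch that extracts the available space from ocrcheck-style output; the sample here is pasted output from above, whereas on a real cluster you would pipe `ocrcheck` itself (run as root from the Grid home bin directory), and the 100 MB threshold is an arbitrary example value:

```shell
# Sample ocrcheck output (pasted); replace with: ocr_output=$(./ocrcheck)
ocr_output='Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84796
         Available space (kbytes) :     406888'

# Pull the numeric value after the colon on the "Available space" line
avail_kb=$(printf '%s\n' "$ocr_output" | awk -F: '/Available space/ {gsub(/ /,"",$2); print $2}')
echo "OCR available space: ${avail_kb} KB"

# Warn below an (arbitrary) 100 MB threshold
if [ "$avail_kb" -lt 102400 ]; then
    echo "WARNING: OCR free space below 100 MB"
fi
```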


[root@rac1 bin]# ./ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84796
         Available space (kbytes) :     406888
         ID                       : 1718087688
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@rac1 bin]#




[root@rac1 bin]#
[root@rac1 bin]# ./ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name         :      +DATA
[root@rac1 bin]# ./ocrcheck -details
Status of Oracle Cluster Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      84796
         Available space (kbytes) :     406888
         ID                       : 1718087688
         Device/File Name         : +DATA/rac/OCRFILE/registry.255.993952649
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check succeeded

[root@rac1 bin]#


[root@rac1 bin]# cat /etc/oracle/ocr.loc
#Device/file +DATA getting replaced by device +DATA/rac/OCRFILE/registry.255.993952649
ocrconfig_loc=+DATA/rac/OCRFILE/registry.255.993952649
local_only=false[root@rac1 bin]#

[root@rac1 bin]#


OLR (Oracle Local Registry) information
=========================================
The OLR contains the node-specific information required by OHASD (the Oracle High Availability Services daemon). Every node has its own dedicated OLR file; it is not shared between the nodes.


[root@rac1 bin]# ./ocrcheck -local
Status of Oracle Local Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      83164
         Available space (kbytes) :     408520
         ID                       :  161118435
         Device/File Name         : /u01/app/grid/product/18.0.0.0/grid/cdata/rac1.olr
                                    Device/File integrity check succeeded

         Local registry integrity check succeeded

         Logical corruption check succeeded

[root@rac1 bin]#


[root@rac1 bin]#
[root@rac1 bin]# ./ocrcheck  -local -details
Status of Oracle Local Registry is as follows :
         Version                  :          4
         Total space (kbytes)     :     491684
         Used space (kbytes)      :      83164
         Available space (kbytes) :     408520
         ID                       :  161118435
         Device/File Name         : /u01/app/grid/product/18.0.0.0/grid/cdata/rac1.olr
                                    Device/File integrity check succeeded

         Local registry integrity check succeeded

         Logical corruption check succeeded

[root@rac1 bin]#

[root@rac1 bin]# cat /etc/oracle/olr.loc
olrconfig_loc=/u01/app/grid/product/18.0.0.0/grid/cdata/rac1.olr
crs_home=/u01/app/grid/product/18.0.0.0/grid
orplus_config=FALSE

[root@rac1 bin]#
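The olr.loc file shown above is a simple key=value file, so the OLR location and CRS home can be read from it directly. A small sketch using a temporary copy of the contents above; on a real node you would point it at /etc/oracle/olr.loc (readable as root):

```shell
# Stand-in for /etc/oracle/olr.loc, using the contents shown above
olr_loc_file=$(mktemp)
cat > "$olr_loc_file" <<'EOF'
olrconfig_loc=/u01/app/grid/product/18.0.0.0/grid/cdata/rac1.olr
crs_home=/u01/app/grid/product/18.0.0.0/grid
orplus_config=FALSE
EOF

# Extract the value after the first "=" for each key
olr_file=$(grep '^olrconfig_loc=' "$olr_loc_file" | cut -d= -f2)
crs_home=$(grep '^crs_home=' "$olr_loc_file" | cut -d= -f2)
echo "OLR file : $olr_file"
echo "CRS home : $crs_home"
rm -f "$olr_loc_file"
```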

Friday, January 25, 2019

DB instance terminated after killing the VKTM background process





[oracle@localhost ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 18.0.0.0.0 - Production on Fri Jan 25 22:49:39 2019
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area 1526725696 bytes
Fixed Size                  8657984 bytes
Variable Size             503316480 bytes
Database Buffers         1006632960 bytes
Redo Buffers                8118272 bytes
Database mounted.
Database opened.
SQL> exit
Disconnected from Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0

[oracle@localhost ~]$
[oracle@localhost ~]$ ps -ef | grep -i ora
root      2715  2241  0 22:49 ?        00:00:00 sshd: oracle [priv]
oracle    2725  2715  0 22:49 ?        00:00:00 sshd: oracle@pts/0
oracle    2726  2725  0 22:49 pts/0    00:00:00 -bash
root      2773  2241  0 22:49 ?        00:00:00 sshd: oracle [priv]
oracle    2776  2773  0 22:49 ?        00:00:00 sshd: oracle@pts/1
oracle    2777  2776  0 22:49 pts/1    00:00:00 -bash
oracle    2835  2777  0 22:50 pts/1    00:00:00 tail -500f alert_IND.log
oracle    2839     1  0 22:50 ?        00:00:00 ora_pmon_IND
oracle    2841     1  0 22:50 ?        00:00:00 ora_clmn_IND
oracle    2843     1  0 22:50 ?        00:00:00 ora_psp0_IND
oracle    2845     1  2 22:50 ?        00:00:01 ora_vktm_IND
oracle    2849     1  0 22:50 ?        00:00:00 ora_gen0_IND
oracle    2851     1  0 22:50 ?        00:00:00 ora_mman_IND
oracle    2855     1  0 22:50 ?        00:00:00 ora_gen1_IND
oracle    2858     1  0 22:50 ?        00:00:00 ora_diag_IND
oracle    2860     1  0 22:50 ?        00:00:00 ora_ofsd_IND
oracle    2863     1  0 22:50 ?        00:00:00 ora_dbrm_IND
oracle    2865     1  0 22:50 ?        00:00:00 ora_vkrm_IND
oracle    2867     1  0 22:50 ?        00:00:00 ora_svcb_IND
oracle    2869     1  0 22:50 ?        00:00:00 ora_pman_IND
oracle    2871     1  0 22:50 ?        00:00:00 ora_dia0_IND
oracle    2873     1  0 22:50 ?        00:00:00 ora_dbw0_IND
oracle    2875     1  0 22:50 ?        00:00:00 ora_lgwr_IND
oracle    2877     1  0 22:50 ?        00:00:00 ora_ckpt_IND
oracle    2879     1  0 22:50 ?        00:00:00 ora_smon_IND
oracle    2881     1  0 22:50 ?        00:00:00 ora_smco_IND
oracle    2883     1  0 22:50 ?        00:00:00 ora_reco_IND
oracle    2885     1  0 22:50 ?        00:00:00 ora_w000_IND
oracle    2887     1  0 22:50 ?        00:00:00 ora_lreg_IND
oracle    2889     1  0 22:50 ?        00:00:00 ora_w001_IND
oracle    2891     1  0 22:50 ?        00:00:00 ora_pxmn_IND
oracle    2895     1  2 22:50 ?        00:00:02 ora_mmon_IND
oracle    2897     1  0 22:50 ?        00:00:00 ora_mmnl_IND
oracle    2899     1  0 22:50 ?        00:00:00 ora_d000_IND
oracle    2901     1  0 22:50 ?        00:00:00 ora_s000_IND
oracle    2903     1  0 22:50 ?        00:00:00 ora_tmon_IND
oracle    2906     1  0 22:50 ?        00:00:00 ora_m000_IND
oracle    2921     1  0 22:51 ?        00:00:00 ora_tt00_IND
oracle    2923     1  0 22:51 ?        00:00:00 ora_tt01_IND
oracle    2925     1  0 22:51 ?        00:00:00 ora_tt02_IND
oracle    2927     1  0 22:51 ?        00:00:00 ora_w002_IND
oracle    2929     1  0 22:51 ?        00:00:00 ora_w003_IND
oracle    2931     1  0 22:51 ?        00:00:00 ora_aqpc_IND
oracle    2933     1  0 22:51 ?        00:00:00 ora_w004_IND
oracle    2937     1  0 22:51 ?        00:00:00 ora_p000_IND
oracle    2939     1  0 22:51 ?        00:00:00 ora_p001_IND
oracle    2941     1  0 22:51 ?        00:00:00 ora_qm02_IND
oracle    2943     1  0 22:51 ?        00:00:00 ora_qm03_IND
oracle    2945     1  0 22:51 ?        00:00:00 ora_q002_IND
oracle    2947     1  0 22:51 ?        00:00:00 ora_q003_IND
oracle    2949     1  0 22:51 ?        00:00:00 ora_q004_IND
oracle    2951     1  0 22:51 ?        00:00:00 ora_q005_IND
oracle    2953     1  5 22:51 ?        00:00:01 ora_cjq0_IND
oracle    2955     1  0 22:51 ?        00:00:00 ora_s001_IND
oracle    2957     1  0 22:51 ?        00:00:00 /bin/sh /u01/app/oracle/product/18.0.0/db/QOpatch/qopiprep.bat /u01/app/oracle/product/18.0.0/db/QOpatch/qopiprep.bat
oracle    2965  2957  0 22:51 ?        00:00:00 /bin/sh /u01/app/oracle/product/18.0.0/db/OPatch/opatch lsinventory -customLogDir /u01/app/oracle/product/18.0.0/db/rdbms/log -xml /u01/app/oracle/product/18.0.0/db/rdbms/log/xml_file_IND_2957.xml -retry 0 -invPtrLoc /u01/app/oracle/product/18.0.0/db/oraInst.loc
oracle    3094  2965  8 22:51 ?        00:00:00 /u01/app/oracle/product/18.0.0/db/OPatch/jre/bin/java -d64 -Xmx3072m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/u01/app/oracle/product/18.0.0/db/rdbms/log/opatch -cp /u01/app/oracle/product/18.0.0/db/oui/jlib/OraInstaller.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/OraInstallerNet.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/OraPrereq.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/share.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/orai18n-mapping.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/xmlparserv2.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/emCfg.jar:/u01/app/oracle/product/18.0.0/db/oui/jlib/ojmisc.jar:/u01/app/oracle/product/18.0.0/db/OPatch/jlib/oracle.opatch.classpath.jar:/u01/app/oracle/product/18.0.0/db/OPatch/auto/core/modules/features/oracle.glcm.oplan.core.classpath.jar:/u01/app/oracle/product/18.0.0/db/OPatch/auto/core/modules/features/oracle.glcm.osys.core.classpath.jar:/u01/app/oracle/product/18.0.0/db/OPatch/ocm/lib/emocmclnt.jar -DOPatch.ORACLE_HOME=/u01/app/oracle/product/18.0.0/db -DOPatch.DEBUG=false -DOPatch.MAKE=false -DOPatch.RUNNING_DIR=/u01/app/oracle/product/18.0.0/db/OPatch -DOPatch.MW_HOME= -DOPatch.WL_HOME= -DOPatch.COMMON_COMPONENTS_HOME= -DOPatch.OUI_LOCATION=/u01/app/oracle/product/18.0.0/db/oui -DOPatch.FMW_COMPONENT_HOME= -DOPatch.OPATCH_CLASSPATH=/u01/app/oracle/product/18.0.0/db/JRE:/u01/app/oracle/product/18.0.0/db/jlib:/u01/app/oracle/product/18.0.0/db/rdbms/jlib -DOPatch.WEBLOGIC_CLASSPATH= -DOPatch.SKIP_OUI_VERSION_CHECK= -DOPatch.PARALLEL_ON_FMW_OH= oracle/opatch/OPatch lsinventory -customLogDir /u01/app/oracle/product/18.0.0/db/rdbms/log -xml /u01/app/oracle/product/18.0.0/db/rdbms/log/xml_file_IND_2957.xml -retry 0 -invPtrLoc /u01/app/oracle/product/18.0.0/db/oraInst.loc
oracle    3105     1  2 22:51 ?        00:00:00 ora_j000_IND
oracle    3107     1  6 22:51 ?        00:00:00 ora_j001_IND
oracle    3109     1  1 22:51 ?        00:00:00 ora_j002_IND
oracle    3111     1  0 22:51 ?        00:00:00 ora_j003_IND
oracle    3113     1  1 22:51 ?        00:00:00 ora_j004_IND
oracle    3115     1 11 22:51 ?        00:00:00 ora_j005_IND
oracle    3117     1  0 22:51 ?        00:00:00 ora_j006_IND
oracle    3119     1  1 22:51 ?        00:00:00 ora_j007_IND
oracle    3121     1  1 22:51 ?        00:00:00 ora_j008_IND
oracle    3123     1  0 22:51 ?        00:00:00 ora_j009_IND
oracle    3125     1  5 22:51 ?        00:00:00 ora_j00a_IND
oracle    3127     1  0 22:51 ?        00:00:00 ora_j00b_IND
oracle    3129     1  0 22:51 ?        00:00:00 ora_j00c_IND
oracle    3131     1  0 22:51 ?        00:00:00 ora_j00d_IND
oracle    3133     1  0 22:51 ?        00:00:00 ora_q006_IND
oracle    3135     1  1 22:51 ?        00:00:00 ora_q007_IND
oracle    3137     1  1 22:51 ?        00:00:00 ora_q008_IND
oracle    3139     1  1 22:51 ?        00:00:00 ora_q009_IND
oracle    3141     1  1 22:51 ?        00:00:00 ora_q00a_IND
oracle    3143     1  1 22:51 ?        00:00:00 ora_q00b_IND
oracle    3145     1  1 22:51 ?        00:00:00 ora_q00c_IND
oracle    3147     1  1 22:51 ?        00:00:00 ora_q00d_IND
oracle    3149     1  1 22:51 ?        00:00:00 ora_q00e_IND
oracle    3151     1  1 22:51 ?        00:00:00 ora_q00f_IND
oracle    3153     1  0 22:51 ?        00:00:00 ora_q00g_IND
oracle    3155     1  1 22:51 ?        00:00:00 ora_q00h_IND
oracle    3157     1  1 22:51 ?        00:00:00 ora_q00i_IND
oracle    3159     1  1 22:51 ?        00:00:00 ora_q00j_IND
oracle    3161     1  1 22:51 ?        00:00:00 ora_q00k_IND
oracle    3163     1  1 22:51 ?        00:00:00 ora_q00l_IND
oracle    3165     1  1 22:51 ?        00:00:00 ora_q00m_IND
oracle    3166  2726  0 22:51 pts/0    00:00:00 ps -ef
oracle    3167  2726  0 22:51 pts/0    00:00:00 grep -i ora
[oracle@localhost ~]$
[oracle@localhost ~]$
[oracle@localhost ~]$ ps -ef | grep -i vktm
root      2632     1  0 22:46 ?        00:00:00 /usr/libexec/devkit-power-daemon
oracle    2845     1  2 22:50 ?        00:00:01 ora_vktm_IND
oracle    3175  2726  0 22:52 pts/0    00:00:00 grep -i vk
[oracle@localhost ~]$
[oracle@localhost ~]$
[oracle@localhost ~]$
[oracle@localhost ~]$
[oracle@localhost ~]$ kill -9 2845
[oracle@localhost ~]$ ps -ef | grep -i pmon
oracle    3234  2726  0 22:53 pts/0    00:00:00 grep -i pmon
[oracle@localhost ~]$
[oracle@localhost ~]$


In the alert log:


========================================================================

2019-01-25T22:52:57.882320+05:30
PMON (ospid: 2839): terminating the instance due to ORA error 56703
Cause - 'Instance is being terminated due to fatal process death (pid: 5, ospid: 2845, VKTM)'
2019-01-25T22:52:57.935374+05:30
System state dump requested by (instance=1, osid=2839 (PMON)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/oracle/diag/rdbms/ind/IND/trace/IND_diag_2858_20190125225257.trc
2019-01-25T22:52:59.755634+05:30
Dumping diagnostic data in directory=[cdmp_20190125225257], requested by (instance=1, osid=2839 (PMON)), summary=[abnormal instance termination].
2019-01-25T22:53:01.126079+05:30
Instance terminated by PMON, pid = 2839
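A quick way to confirm an abnormal termination like the one above is to scan the alert log for termination messages. A minimal sketch using a pasted sample of the lines shown; on a real system you would grep the actual alert log (here it was being tailed as alert_IND.log):

```shell
# Sample alert log lines (pasted from the excerpt above)
alert_sample='2019-01-25T22:52:57.882320+05:30
PMON (ospid: 2839): terminating the instance due to ORA error 56703
2019-01-25T22:53:01.126079+05:30
Instance terminated by PMON, pid = 2839'

# Count lines mentioning termination ("terminating"/"terminated")
terminations=$(printf '%s\n' "$alert_sample" | grep -c 'terminat')
echo "termination-related lines: $terminations"
```

VKTM (the virtual keeper of time) is a critical background process, which is why killing it brings the whole instance down; the instance then has to be restarted manually.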

Wednesday, January 23, 2019

Increase tmpfs filesystem space on Linux



To increase the size at run time, use the following command (note that this mounts a fresh tmpfs over /dev/shm, hiding its current contents; mount -o remount,size=3g /dev/shm resizes the existing filesystem in place instead):

mount -t tmpfs tmpfs -o size=3g /dev/shm




[root@rac1 ctssd]# cat /etc/fstab

#
# /etc/fstab
# Created by anaconda on Tue Nov 27 15:48:22 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg_rac1-lv_root /                       ext4    defaults        1 1
UUID=1b52ef29-f02f-48e3-9908-6ed29c02ce5a /boot                   ext4    defaults        1 2
/dev/mapper/vg_rac1-lv_home /home                   ext4    defaults        1 2
/dev/mapper/vg_rac1-lv_swap swap                    swap    defaults        0 0
tmpfs                   /dev/shm                tmpfs   defaults,size=3G        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
[root@rac1 ctssd]#
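The persistent size comes from the /dev/shm entry in /etc/fstab shown above, and checking it can be scripted. A small sketch that pulls the size= option out of an fstab-style line; a temporary copy of the relevant line stands in for /etc/fstab here:

```shell
# Stand-in for /etc/fstab, containing the /dev/shm line shown above
fstab=$(mktemp)
echo 'tmpfs                   /dev/shm                tmpfs   defaults,size=3G        0 0' > "$fstab"

# Field 4 holds the mount options; split on "," and keep the size= value
shm_size=$(awk '$2 == "/dev/shm" {print $4}' "$fstab" | tr ',' '\n' | grep '^size=' | cut -d= -f2)
echo "/dev/shm configured size: $shm_size"
rm -f "$fstab"
```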



To make the change persist across reboots, set the size in the /etc/fstab entry as shown above; a reboot (or a remount of /dev/shm) is needed for it to take effect.



[root@rac1 ctssd]# df -h
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vg_rac1-lv_root   50G   34G   14G  71% /
tmpfs                        3.0G  1.1G  2.0G  37% /dev/shm
/dev/sda1                    477M   56M  397M  13% /boot
/dev/mapper/vg_rac1-lv_home   26G   47M   24G   1% /home
[root@rac1 ctssd]#

[root@rac1 ctssd]#

Thursday, January 17, 2019

UDEV SCSI Rules Configuration In Oracle Linux 6






[root@rac1 ~]# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sdd1
[root@rac1 ~]#



[root@rac1 dev]# /sbin/scsi_id -g -u -d /dev/sdb
1ATA_VBOX_HARDDISK_VB6673148e-70a3b8f4
[root@rac1 dev]# /sbin/scsi_id -g -u -d /dev/sdc
1ATA_VBOX_HARDDISK_VBc778fc49-01bbefbb
[root@rac1 dev]#
[root@rac1 ~]# /sbin/scsi_id -g -u -d /dev/sdd
1ATA_VBOX_HARDDISK_VBf8c61d16-a391798e
[root@rac1 ~]#



[root@rac2 dev]#
[root@rac2 dev]# /sbin/scsi_id -g -u -d /dev/sdb
1ATA_VBOX_HARDDISK_VB6673148e-70a3b8f4
[root@rac2 dev]# /sbin/scsi_id -g -u -d /dev/sdc
1ATA_VBOX_HARDDISK_VBc778fc49-01bbefbb
[root@rac2 dev]#
[root@rac2 ~]# /sbin/scsi_id -g -u -d /dev/sdd
1ATA_VBOX_HARDDISK_VBf8c61d16-a391798e
[root@rac2 ~]#




For Oracle Linux 6, add the following rules to /etc/udev/rules.d/99-oracle-asmdevices.rules:


KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VB6673148e-70a3b8f4", NAME="DISK1", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBc778fc49-01bbefbb", NAME="DISK2", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="1ATA_VBOX_HARDDISK_VBf8c61d16-a391798e", NAME="DISK3", OWNER="grid", GROUP="asmadmin", MODE="0660"
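The three rules differ only in the scsi_id value and the device name, so they can be generated rather than hand-edited. A small illustrative helper (not part of any Oracle or udev tooling) that builds one rule line from a scsi_id and a disk name, using the same OWNER/GROUP/MODE values as above:

```shell
# Build a 99-oracle-asmdevices.rules line for a given scsi_id and disk name.
# Owner/group/mode match the rules above; the helper itself is hypothetical.
make_asm_rule() {
    scsi_id=$1
    disk_name=$2
    printf 'KERNEL=="sd?1", BUS=="scsi", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$parent", RESULT=="%s", NAME="%s", OWNER="grid", GROUP="asmadmin", MODE="0660"\n' \
        "$scsi_id" "$disk_name"
}

make_asm_rule "1ATA_VBOX_HARDDISK_VB6673148e-70a3b8f4" "DISK1"
```

After editing the rules file, the rules can typically be picked up without a reboot via udevadm control --reload-rules (and start_udev on Oracle Linux 6).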


Test the rule processing for each partition:

/sbin/udevadm test /block/sdb/sdb1

/sbin/udevadm test /block/sdc/sdc1

/sbin/udevadm test /block/sdd/sdd1





https://oracle-base.com/articles/linux/udev-scsi-rules-configuration-in-oracle-linux