Friday, November 14, 2014

ASM MONITORING COMMANDS

///////////TO CHECK DISKGROUP SPACE INFORMATION

SQL> SELECT name, free_mb/1024, total_mb/1024, free_mb/total_mb*100 as percentage FROM v$asm_diskgroup;
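
A slightly more readable variant of the same query, rounded to GB (it uses only the standard v$asm_diskgroup columns shown above):

SQL> SELECT name,
            ROUND(total_mb/1024,2) AS total_gb,
            ROUND(free_mb/1024,2) AS free_gb,
            ROUND(free_mb/total_mb*100,2) AS pct_free
     FROM v$asm_diskgroup;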


////////////////////IDENTIFY CANDIDATE DISK



 SELECT
    NVL(a.name, '[CANDIDATE]') as disk_group_name
  , b.path as disk_file_path
  , b.name as disk_file_name
  , b.failgroup as disk_file_fail_group
 FROM
    v$asm_diskgroup a RIGHT OUTER JOIN v$asm_disk b USING (group_number)
 ORDER BY
    a.name;
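
The header_status column of v$asm_disk is another quick way to spot unused disks: CANDIDATE or PROVISIONED means the disk is not yet part of any group, FORMER means it was dropped from one.

SQL> SELECT path, header_status, mode_status FROM v$asm_disk;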


/////////////////////CHECK ASM FILE INFORMATION

select file_number , sum(bytes)/(1024*1024) from v$asm_file group by file_number;
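
v$asm_file only exposes file numbers; to map them to readable names it can be joined to v$asm_alias on the standard group_number/file_number columns (a sketch, output depends on your diskgroups):

SQL> select a.name, round(f.bytes/1024/1024,2) as size_mb
     from v$asm_file f
     join v$asm_alias a
       on f.group_number = a.group_number
      and f.file_number = a.file_number;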

ORACLE RAC MONITORING COMMANDS

/////////////////check services

dbserver2:oracle$ crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....SM1.asm application    ONLINE    ONLINE    dbserver1
ora....R1.lsnr application    ONLINE    ONLINE    dbserver1
ora....er1.vip application    ONLINE    ONLINE    dbserver1
ora....SM2.asm application    ONLINE    ONLINE    dbserver2
ora....R2.lsnr application    ONLINE    ONLINE    dbserver2
ora....er2.gsd application    ONLINE    ONLINE    dbserver2
ora....er2.ons application    ONLINE    ONLINE    dbserver2
ora....er2.vip application    ONLINE    ONLINE    dbserver2
ora....ogdb.db application    ONLINE    ONLINE    dbserver1
ora....ebdb.cs application    ONLINE    ONLINE    dbserver1
ora....db1.srv application    ONLINE    ONLINE    dbserver1
ora....db2.srv application    ONLINE    ONLINE    dbserver2
ora....b1.inst application    ONLINE    ONLINE    dbserver1
ora....b2.inst application    ONLINE    ONLINE    dbserver2
dbserver2:oracle$
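
Note that crs_stat -t truncates long resource names. To see a resource's full name and attributes, pass its name to crs_stat; the resource name below is assumed from the weblogdb database shown later in this post:

dbserver2:oracle$ crs_stat ora.weblogdb.db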




/////////////////////////////cluster health status


dbserver1:oracle$ crsctl check crs
CSS appears healthy
CRS appears healthy
EVM appears healthy
dbserver1:oracle$
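
The individual daemons can also be checked one at a time (10g-style syntax):

dbserver1:oracle$ crsctl check cssd
dbserver1:oracle$ crsctl check crsd
dbserver1:oracle$ crsctl check evmd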



/////////////////ocr backup info


dbserver2:oracle$ ocrconfig -showbackup

dbserver1     2014/10/07 11:47:21     /u01/app/oracle/product/10.2.0/crs/cdata/crs

dbserver1     2014/10/07 07:47:21     /u01/app/oracle/product/10.2.0/crs/cdata/crs

dbserver1     2014/10/07 03:47:19     /u01/app/oracle/product/10.2.0/crs/cdata/crs

dbserver1     2014/10/05 19:47:11     /u01/app/oracle/product/10.2.0/crs/cdata/crs

dbserver1     2014/09/29 07:46:33     /u01/app/oracle/product/10.2.0/crs/cdata/crs
dbserver2:oracle$
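
To check the integrity of the OCR itself (not just its automatic backups), run ocrcheck as well:

dbserver2:oracle$ ocrcheck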





//////////////////////all node cluster database status

dbserver1:oracle$
dbserver1:oracle$ srvctl status database -d weblogdb
Instance weblogdb1 is running on node dbserver1
Instance weblogdb2 is running on node dbserver2
dbserver1:oracle$
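
The clusterware registration of the database can also be listed; srvctl config shows the nodes, instance names and Oracle home:

dbserver1:oracle$ srvctl config database -d weblogdb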


///////////////////////////instance status

////node1

srvctl status instance -d weblogdb -i weblogdb1

dbserver1:oracle$ srvctl status instance -d weblogdb -i weblogdb1
Instance weblogdb1 is running on node dbserver1
dbserver1:oracle$


////node2

srvctl status instance -d weblogdb -i weblogdb2

dbserver2:oracle$
dbserver2:oracle$ srvctl status instance -d weblogdb -i weblogdb2
Instance weblogdb2 is running on node dbserver2
dbserver2:oracle$




////////////////////asm instance status/////////////////////////////////






////node1

srvctl status asm -n dbserver1

dbserver1:oracle$
dbserver1:oracle$ srvctl status asm -n dbserver1
ASM instance +ASM1 is running on node dbserver1.
dbserver1:oracle$


////////node2

srvctl status asm -n dbserver2

dbserver2:oracle$
dbserver2:oracle$ srvctl status asm -n dbserver2
ASM instance +ASM2 is running on node dbserver2.
dbserver2:oracle$


///////////////////cluster nodeapps status/////////////////////////////////////////////





////node1

srvctl status nodeapps -n dbserver1
dbserver1:oracle$
dbserver1:oracle$ srvctl status nodeapps -n dbserver1
VIP is running on node: dbserver1

///node2

srvctl status nodeapps -n dbserver2
dbserver2:oracle$
dbserver2:oracle$ srvctl status nodeapps -n dbserver2
VIP is running on node: dbserver2
GSD is running on node: dbserver2
Listener is running on node: dbserver2
ONS daemon is running on node: dbserver2
dbserver2:oracle$



Tuesday, July 29, 2014

CRON SCHEDULING



1. Scheduling a Job For a Specific Time

The basic usage of cron is to execute a job at a specific time, as shown below. This will execute the full backup shell script (full-backup) on June 10th at 08:30 AM.

Please note that the hour field uses 24-hour format: for 8 AM use 8, and for 8 PM use 20.

30 08 10 06 * /home/ramesh/full-backup
30 – 30th Minute
08 – 08 AM
10 – 10th Day
06 – 6th Month (June)
* – Every day of the week

2. Schedule a Job For More Than One Instance (e.g. Twice a Day)

The following entry takes an incremental backup twice a day, every day.

This example executes the specified incremental backup shell script (incremental-backup) at 11:00 and 16:00 every day. A comma-separated value in a field means the command runs at each of the listed values.

00 11,16 * * * /home/ramesh/bin/incremental-backup
00 – 0th Minute (Top of the hour)
11,16 – 11 AM and 4 PM
* – Every day
* – Every month
* – Every day of the week

3. Schedule a Job for a Specific Range of Time (e.g. Only on Weekdays)

If you want a job to run every hour within a specific range of time, use the following.

Cron Job every day during working hours
This example checks the status of the database every day (including weekends) during working hours, 9 a.m. – 6 p.m.

00 09-18 * * * /home/ramesh/bin/check-db-status
00 – 0th Minute (Top of the hour)
09-18 – 9 am, 10 am, 11 am, 12 pm (noon), 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
* – Every day
* – Every month
* – Every day of the week
Cron Job every weekday during working hours
This example checks the status of the database every weekday (i.e., excluding Sat and Sun) during working hours, 9 a.m. – 6 p.m.

00 09-18 * * 1-5 /home/ramesh/bin/check-db-status
00 – 0th Minute (Top of the hour)
09-18 – 9 am, 10 am, 11 am, 12 pm (noon), 1 pm, 2 pm, 3 pm, 4 pm, 5 pm, 6 pm
* – Every day
* – Every month
1-5 – Mon, Tue, Wed, Thu and Fri (every weekday)

4. How to View Crontab Entries?

View Current Logged-In User’s Crontab Entries
To view your crontab entries, type crontab -l from your Unix account, as shown below.
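
For example, if the entries from sections 2 and 3 above were installed for this account (a hypothetical listing; your own output will differ):

$ crontab -l
00 11,16 * * * /home/ramesh/bin/incremental-backup
00 09-18 * * 1-5 /home/ramesh/bin/check-db-status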




# * * * * *  command to execute
# ┬ ┬ ┬ ┬ ┬
# │ │ │ │ │
# │ │ │ │ │
# │ │ │ │ └───── day of week (0 - 6) (0 to 6 are Sunday to Saturday, or use names; 7 is Sunday, the same as 0)
# │ │ │ └────────── month (1 - 12)
# │ │ └─────────────── day of month (1 - 31)
# │ └──────────────────── hour (0 - 23)
# └───────────────────────── min (0 - 59)                                         


http://en.wikipedia.org/wiki/Cron

Monday, July 28, 2014

ORA-19809 ERROR CHANGE DB_RECOVERY_FILE_DEST_SIZE

When the flash recovery area fills up, the database throws:

ORA-19809: limit exceeded for recovery files

Check the current setting, then raise the limit:

SQL> show parameter db_recovery

SQL> alter system set db_recovery_file_dest_size=15G scope=both;
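
Before raising the limit, it helps to see how much of the recovery area is actually in use and how much is reclaimable (standard v$recovery_file_dest view):

SQL> select name,
            space_limit/1024/1024 as limit_mb,
            space_used/1024/1024 as used_mb,
            space_reclaimable/1024/1024 as reclaimable_mb
     from v$recovery_file_dest;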




Friday, June 27, 2014

CHECK HOW MANY DML OPERATIONS WERE PERFORMED ON A TABLE IN THE DATABASE


SQL> execute dbms_stats.flush_database_monitoring_info;

SQL> select table_owner,table_name,inserts,updates,deletes from dba_tab_modifications where table_name='TEST';

TABLE_OWNER                    TABLE_NAME                        INSERTS    UPDATES    DELETES
------------------------------ ------------------------------ ---------- ---------- ----------
ANURAG                         TEST                                    1          0          0
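
Between the two snapshots one more row was inserted and committed, which is why INSERTS moves from 1 to 2 below. A hypothetical example of the intervening DML (the actual values depend on how TEST is defined):

SQL> insert into anurag.test values (2);  -- hypothetical row; TEST's column list is not shown above
SQL> commit;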


SQL> execute dbms_stats.flush_database_monitoring_info;

PL/SQL procedure successfully completed.

SQL>  select table_owner,table_name,inserts,updates,deletes from dba_tab_modifications where table_name='TEST';

TABLE_OWNER                    TABLE_NAME                        INSERTS    UPDATES    DELETES
------------------------------ ------------------------------ ---------- ---------- ----------
ANURAG                         TEST                                    2          0          0


Tuesday, June 24, 2014

SHARE A FOLDER REMOTELY IN SOLARIS

bash-3.2#
bash-3.2# mkdir db_exp_backup
bash-3.2#
bash-3.2# chmod -R 777 /opt/db_exp_backup/
bash-3.2#
bash-3.2# df -h
Filesystem             size   used  avail capacity  Mounted on
/dev/dsk/c0t0d0s0      7.9G   441M   7.4G     6%    /
/devices                 0K     0K     0K     0%    /devices
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                    22G   996K    22G     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/dev/dsk/c0t0d0s4      7.9G   3.3G   4.5G    43%    /usr
/usr/lib/libc/libc_hwcap1.so.1
                       7.9G   3.3G   4.5G    43%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
/dev/dsk/c0t0d0s3      7.9G   180M   7.6G     3%    /var
swap                    22G    64K    22G     1%    /tmp
swap                    22G    28K    22G     1%    /var/run
/dev/dsk/c0t0d0s5       90G    48G    42G    54%    /opt
bash-3.2# share
bash-3.2# vi /etc/dfs/dfstab
"/etc/dfs/dfstab" 12 lines, 397 characters

#       Place share(1M) commands here for automatic execution
#       on entering init state 3.
#
#       Issue the command 'svcadm enable network/nfs/server' to
#       run the NFS daemon processes and the share commands, after adding
#       the very first entry to this file.
#
#       share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource]
#       .e.g,
#       share  -F nfs  -o rw=engineering  -d "home dirs"  /export/home2
share -F nfs -o rw,anon=0 /opt/db_exp_backup


bash-3.2# svcs -a |grep -i nfs
disabled       Feb_15   svc:/network/nfs/server:default
online         Feb_15   svc:/network/nfs/cbd:default
online         Feb_15   svc:/network/nfs/mapid:default
online         Feb_15   svc:/network/nfs/status:default
online         Feb_15   svc:/network/nfs/nlockmgr:default
online         Feb_15   svc:/network/nfs/client:default
online         Feb_15   svc:/network/nfs/rquota:default
bash-3.2# svcadm enable nfs/server
bash-3.2# svcs -a |grep -i nfs
online         Feb_15   svc:/network/nfs/cbd:default
online         Feb_15   svc:/network/nfs/mapid:default
online         Feb_15   svc:/network/nfs/status:default
online         Feb_15   svc:/network/nfs/nlockmgr:default
online         Feb_15   svc:/network/nfs/client:default
online         Feb_15   svc:/network/nfs/rquota:default
online         17:39:17 svc:/network/nfs/server:default
bash-3.2# share
-               /opt/db_exp_backup   rw,anon=0   ""
bash-3.2#




///////////////////////////////////check shared folder details

bash-3.2# cat /etc/dfs/dfstab

#       Place share(1M) commands here for automatic execution
#       on entering init state 3.
#
#       Issue the command 'svcadm enable network/nfs/server' to
#       run the NFS daemon processes and the share commands, after adding
#       the very first entry to this file.
#
#       share [-F fstype] [ -o options] [-d "<text>"] <pathname> [resource]
#       .e.g,
#       share  -F nfs  -o rw=engineering  -d "home dirs"  /export/home2
share -F nfs -o rw,anon=0 /opt/db_exp_backup




///////////////////////On target server

bash-3.2# mkdir /db_exp_backup
bash-3.2# mount 10.1.1.130:/opt/db_exp_backup  /db_exp_backup
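
To confirm the export is visible from the client and the mount succeeded (dfshares queries the NFS server's share list; the IP is the one used above):

bash-3.2# dfshares 10.1.1.130
bash-3.2# df -h /db_exp_backup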


Monday, May 12, 2014

STEPS TO TAKE BACKUP ON TAPE

[root@iptvdb2]mt -f /dev/rmt/0n status
Vendor 'HP      ' Product 'Ultrium 4-SCSI ' tape drive:
   sense key(0x0)= No Additional Sense   residual= 0   retries= 0
   file no= 2   block no= 0
[root@iptvdb2]


[root@iptvdb2]tar -cvf /dev/rmt/0n /nasbackup/RMANBACKUP/Nov_04_2013_backup/

a /nasbackup/RMANBACKUP/Nov_04_2013_backup// 0 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_10_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_11_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_12_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_13_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_14_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_15_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_16_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_17_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_18_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_19_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_1_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_20_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_21_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_22_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_23_1.bak 1080960 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_2_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_3_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_4_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_5_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_6_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_7_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_8_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j6oo3v2s_9_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j7oo42i2_1_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j7oo42i2_2_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j7oo42i2_3_1.bak 5101856 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j8oo42u3_1_1.bak 10240000 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j8oo42u3_2_1.bak 1047248 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_j9oo4341_1_1.bak 9100064 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//weekly_ORCL_jaoo4396_1_1.bak 6004224 tape blocks
a /nasbackup/RMANBACKUP/Nov_04_2013_backup//archive_d_jboo43de_1_1.bak 5870552 tape blocks
[root@iptvdb2]
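
To verify what was written, rewind the tape and list the archive contents without extracting them (same no-rewind device as above):

[root@iptvdb2]mt -f /dev/rmt/0n rewind
[root@iptvdb2]tar -tvf /dev/rmt/0n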




SOME OTHER mt COMMANDS

# mt status      Print status information about the tape unit.

# mt rewind      Rewind the tape.

# mt erase       Erase the tape.

# mt retension   Re-tension the tape (one full wind forward and back).

# mt fsf 1       Forward space by one file; the count can be any number.

The -f option can be used with mt to specify a different device. On
Solaris, /dev/rmt/0 is the default device.

  # mt -f /dev/rmt/1n fsf 3
