Friday, January 26, 2018

Introduction to Flashback Technology

Oracle Flashback Technology


You can use the various features of Oracle Flashback to view past states of data and rewind your database without restoring backups or performing point-in-time recovery. In general, flashback features are more efficient and less disruptive than media recovery in most situations in which they apply.



Flashback technology is a set of features in the Oracle database that makes it easy to view past states of data or to return database objects to a previous state without using point-in-time media recovery.


Viewing past states of data or moving database objects to a previous state means, for example, that you have performed an operation such as DML + COMMIT and now want to roll that operation back. This can be done easily through Flashback technology without using point-in-time media recovery.





Flashback Database depends on flashback logs to perform flashback. Flashback Drop uses the recycle bin. All other techniques use undo data. Not all flashback features modify the database; some are simply methods to query other versions of data. These are tools to investigate a problem and aid in recovery. The results of flashback queries help you do one of two things:
  • Determine the type of database-modifying flashback operation to perform to fix the problem.
  • Feed the result set of these queries into an INSERT, UPDATE, or DELETE statement that enables you to easily repair the erroneous data.
In addition, Flashback Data Archive enables you to use the logical flashback features to access data from far back in the past.



The following features are supported by Flashback technology:


  • Flashback Query
  • Flashback Version Query
  • Flashback Transaction Query
  • Flashback Table
  • Flashback Drop (Recycle Bin)
  • Flashback Database
  • Flashback Query Functions

Flashback Query


Using Oracle Flashback Query, you can retrieve committed data as it was at a past point in time. With the Flashback Query feature, you can perform queries as of a certain time: by using the AS OF clause of the SELECT statement, you specify the timestamp for which to view the data. This is useful for analyzing a data discrepancy.

Note: TIMESTAMP and SCN are both valid options for the AS OF clause.


Query table data from a specific point in time using a timestamp or an SCN:

SQL> SELECT current_scn, SCN_TO_TIMESTAMP(current_scn),TO_CHAR(SYSDATE,'dd-mon-yyyy hh24:mi:ss') AS current_time FROM v$database;

CURRENT_SCN SCN_TO_TIMESTAMP(CURRENT_SCN)                                               CURRENT_TIME
----------- --------------------------------------------------------------------------- -----------------------------
    1622677 26-JAN-18 03.33.06.000000000 AM                                             26-jan-2018 03:33:08

SQL> select * from emp;

     EMPNO NAME
---------- --------------------------------------------------
       111 ABDD
       222 BCFF
       333 DDGG
       444 EEE
       555 FFFF
       555 FFFF

6 rows selected.

SQL> insert into emp values(666,'GGGG');

1 row created.

SQL> /
1 row created.
SQL> /
1 row created.
SQL> select * from emp;

     EMPNO NAME
---------- --------------------------------------------------
       111 ABDD
       222 BCFF
       333 DDGG
       444 EEE
       555 FFFF
       555 FFFF
       666 GGGG
       666 GGGG
       666 GGGG

9 rows selected.

SQL> commit;

Commit complete.

SQL> SELECT current_scn, SCN_TO_TIMESTAMP(current_scn),TO_CHAR(SYSDATE,'dd-mon-yyyy hh24:mi:ss') AS current_time FROM v$database;

CURRENT_SCN SCN_TO_TIMESTAMP(CURRENT_SCN)                                               CURRENT_TIME
----------- --------------------------------------------------------------------------- -----------------------------
    1622718 26-JAN-18 03.34.21.000000000 AM                                             26-jan-2018 03:34:24

SQL> SELECT * FROM EMP AS OF scn 1622677;

     EMPNO NAME
---------- --------------------------------------------------
       111 ABDD
       222 BCFF
       333 DDGG
       444 EEE
       555 FFFF
       555 FFFF

6 rows selected.

SQL> SELECT * FROM EMP AS OF TIMESTAMP TO_TIMESTAMP('26-jan-2018 03:33:08','dd-mon-yyyy hh24:mi:ss');

     EMPNO NAME
---------- --------------------------------------------------
       111 ABDD
       222 BCFF
       333 DDGG
       444 EEE
       555 FFFF
       555 FFFF

6 rows selected.

SQL>
SQL> SELECT * FROM EMP AS OF TIMESTAMP SYSTIMESTAMP - INTERVAL '10'  MINUTE;

     EMPNO NAME
---------- --------------------------------------------------
       111 ABDD
       222 BCFF
       333 DDGG
       444 EEE
       555 FFFF
       555 FFFF

6 rows selected.

SQL>


Flashback Version Query


The Flashback Version Query feature enables you to use the VERSIONS clause to retrieve all the versions of the rows that exist between two points in time or two SCNs.

The rows returned by Flashback Version Query represent a history of changes for the rows across transactions. Flashback Version Query retrieves only committed occurrences of the rows; uncommitted row versions within a transaction are not shown. The rows returned also include deleted and subsequently reinserted versions of the rows. You can use Flashback Version Query to retrieve row history.


Flashback Version Query returns a table with a row for each version of the row that existed at any time during the time interval you specify. Each row in the table includes pseudocolumns of metadata about the row version; the available pseudocolumns are listed below, followed by a minimal query template:

VERSIONS_XID                  : Identifier of the transaction that created the row version
VERSIONS_OPERATION            : Operation performed (I for Insert, U for Update, D for Delete)
VERSIONS_STARTSCN             : System Change Number (SCN) at which the row version was created
VERSIONS_STARTTIME            : Timestamp at which the row version was created
VERSIONS_ENDSCN               : SCN at which the row version expired
VERSIONS_ENDTIME              : Timestamp at which the row version expired
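
As a minimal query template (using the emp table from the examples above), the special bounds MINVALUE and MAXVALUE make the query span the whole interval covered by the available undo data:

select versions_xid, versions_operation, versions_startscn,
       versions_endscn, empno, name
from   emp
       versions between scn minvalue and maxvalue
order  by versions_startscn;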




SQL>
SQL> column versions_starttime format a16
column versions_endtime format a16
set linesize 120
select versions_xid,versions_starttime,versions_endtime,
versions_operation,name from emp versions between
timestamp TO_TIMESTAMP ('26-01-2018 04:08:19','DD-MM-YYYY HH24:MI:SS')
and TO_TIMESTAMP ('26-01-2018 04:15:19','DD-MM-YYYY HH24:MI:SS') where empno=999;
VERSIONS_XID     VERSIONS_STARTTIME    VERSIONS_ENDTIME      V NAME
---------------- --------------------- --------------------- - ------
0200010096040000 26-JAN-18 04.12.57 AM                       I KUMAR


SQL>
SQL>
SQL> update emp set name='ANURAG' WHERE EMPNO=999;

1 row updated.

SQL> COMMIT;

Commit complete.

SQL> SELECT TO_CHAR(sysdate, 'DD-MM-YYYY HH24:MI:SS')   from dual;

TO_CHAR(SYSDATE,'DD
-------------------
26-01-2018 04:19:33

SQL>
column versions_starttime format a16
column versions_endtime format a16
set linesize 120
select versions_xid,versions_starttime,versions_endtime,
versions_operation,name from emp versions between
timestamp TO_TIMESTAMP ('26-01-2018 04:08:19','DD-MM-YYYY HH24:MI:SS')
and TO_TIMESTAMP ('26-01-2018 04:19:33','DD-MM-YYYY HH24:MI:SS') where empno=999;
VERSIONS_XID     VERSIONS_STARTTIME    VERSIONS_ENDTIME      V NAME
---------------- --------------------- --------------------- - ------
0500010097040000 26-JAN-18 04.19.13 AM                       U ANURAG
0200010096040000 26-JAN-18 04.12.57 AM 26-JAN-18 04.19.13 AM I KUMAR


SQL> select timestamp_to_scn(to_timestamp('26/01/2018 04:12:57','DD/MM/YYYY HH24:MI:SS')) as scn from dual;

       SCN
----------
   1624146

SQL> select timestamp_to_scn(to_timestamp('26/01/2018 04:19:13','DD/MM/YYYY HH24:MI:SS')) as scn from dual;

       SCN
----------
   1624327

SQL>

  
SELECT versions_startscn, versions_starttime, versions_endscn, versions_endtime, versions_xid, versions_operation, empno,name
     FROM emp
     VERSIONS BETWEEN SCN 1624146 AND 1624327
WHERE empno = 999;
 
VERSIONS_STARTSCN VERSIONS_STARTTIME    VERSIONS_ENDSCN VERSIONS_ENDTIME      VERSIONS_XID     V      EMPNO NAME
----------------- --------------------- --------------- --------------------- ---------------- - ---------- ------
          1624328 26-JAN-18 04.19.13 AM                                       0500010097040000 U        999 ANURAG
          1624147 26-JAN-18 04.12.57 AM         1624328 26-JAN-18 04.19.13 AM 0200010096040000 I        999 KUMAR



The VERSIONS clause cannot be used to query the following types of tables:
  • External tables
  • Temporary tables
  • Fixed tables

Tuesday, January 23, 2018

Database backup using RMAN in NOARCHIVELOG mode / Cold backup




If a database runs in NOARCHIVELOG mode, then the only valid database backup is a consistent backup. For the backup to be consistent, the database must be mounted after a consistent shutdown. No recovery is required after restoring the backup.


1. Shut down the database
2. Start the database in mount state
3. Connect using RMAN
4. Perform the database backup using the "backup database" command
5. Open the database


SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG

SQL>
SQL>
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$
[oracle@localhost ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Sun Jan 21 22:43:55 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1620115456 bytes
Fixed Size                  2253704 bytes
Variable Size             989858936 bytes
Database Buffers          620756992 bytes
Redo Buffers                7245824 bytes
Database mounted.
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
[oracle@localhost ~]$


[oracle@localhost ~]$ rman target /

Recovery Manager: Release 11.2.0.4.0 - Production on Sun Jan 21 22:57:56 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DUCAT (DBID=3782973220, not open)

RMAN> 

RMAN> backup database;

Starting backup at 21-JAN-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=20 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/u01/app/oracle/oradata/ducat/test01.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/ducat/system01.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ducat/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/ducat/example01.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ducat/undotbs01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ducat/users01.dbf
channel ORA_DISK_1: starting piece 1 at 21-JAN-18
channel ORA_DISK_1: finished piece 1 at 21-JAN-18
piece handle=/u01/app/oracle/fast_recovery_area/DUCAT/backupset/2018_01_21/o1_mf_nnndf_TAG20180121T230212_f69mnx3g_.bkp tag=TAG20180121T230212 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:02:16
Finished backup at 21-JAN-18

Starting Control File and SPFILE Autobackup at 21-JAN-18
piece handle=/u01/app/oracle/fast_recovery_area/DUCAT/autobackup/2018_01_21/o1_mf_s_966032932_f69ms6k7_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 21-JAN-18

RMAN>

RMAN>

RMAN> exit


Recovery Manager complete.
[oracle@localhost ~]$
[oracle@localhost ~]$
[oracle@localhost ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 Production on Sun Jan 21 23:08:25 2018

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter database open;

Database altered.

SQL>
SQL>  select name ,open_mode,log_mode from v$database;

NAME      OPEN_MODE            LOG_MODE
--------- -------------------- ------------
DUCAT     READ WRITE           NOARCHIVELOG

SQL>


Sunday, January 21, 2018

Backup and Recovery : #03


Whole database backup:

The whole backup that is taken when the database is closed (after the database is shut down using the NORMAL, IMMEDIATE, or TRANSACTIONAL options) is called a consistent backup. In such a backup, all the database file headers are consistent with the control file, and when restored completely, the database can be opened without any recovery. When the database is operated in Noarchivelog mode, only a consistent whole database backup is valid for restore and recovery.

When the database is open and operational, the datafile headers are not consistent with the control file unless the database is open in read-only mode. When the database is shut down with the ABORT option, this inconsistency persists. Backups of the database in such a state are termed inconsistent backups. Inconsistent backups need recovery to bring the database into a consistent state. When databases need to be available 7 days a week, 24 hours a day, you have no option but to use inconsistent backups, and this can be performed only on databases running in Archivelog mode.



Tablespace Backup:
A tablespace backup is a backup of the datafiles that make up a tablespace. Tablespace backups are valid only if the database is in Archivelog mode because redo entries will be required to make the datafiles consistent with the rest of the database. You can make tablespace backups when the tablespace is read-only or offline-normal in Noarchivelog mode.
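
For example, assuming the database is running in Archivelog mode and RMAN is connected to the target database, a single tablespace can be backed up with one command (the tablespace name users is only an illustration):

RMAN> backup tablespace users;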

Datafile Backups:
You can make backups of a single datafile if your database is in Archivelog mode.
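
A minimal sketch, by file number or by name; the file number and path below are taken from the earlier transcript and are only illustrative:

RMAN> backup datafile 4;
RMAN> backup datafile '/u01/app/oracle/oradata/ducat/users01.dbf';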

Control File Backups:
You can configure RMAN for automatic backups of the control file after a BACKUP or COPY command is issued. The control file can also be backed up through SQL commands.
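
A short sketch of both approaches; the target path in the second command is hypothetical, and the TRACE variant writes a CREATE CONTROLFILE script to the trace directory:

RMAN> configure controlfile autobackup on;

SQL> alter database backup controlfile to '/u01/backup/control.bkp';
SQL> alter database backup controlfile to trace;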

User-Managed Backup and Recovery

User-managed backup and recovery does not use Recovery Manager. Operating system commands are used to make backups of the database files and to restore them in a recovery situation. The recovery commands are issued in a SQL*Plus session.

Querying Views to Obtain Database File Information:
Obtain information about the files of the database by querying the V$DATAFILE, V$CONTROLFILE, V$LOGFILE, and V$TABLESPACE views.
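
For example, to list the files that a backup must cover:

SQL> select name from v$datafile;
SQL> select member from v$logfile;
SQL> select name from v$controlfile;
SQL> select name from v$tablespace;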

Backup Method:
A user-managed database backup is an operating system backup of database files while the database is open or closed.

Physical Backup Methods
Operating system backup without archiving is used to recover to the point of the last backup after a media failure.
Operating system backup with archiving is used to recover to the point of failure after a media failure

Consistent Whole Database Backup
A consistent whole database backup, also known as a closed database backup, is a backup that is taken of all the datafiles and control files that constitute an Oracle database while the database is closed. It can also include the online redo log files, parameter file, and the password file. Ensure that the complete pathnames of the files are noted and used appropriately in the backup.

* It is not necessary to include the online redo log files as part of a whole database backup if the database has been shut down cleanly by using a normal, transactional, or immediate option. However, in cases where it is necessary to restore the entire database, the process is simplified if they have been backed up.

Advantages of Making Consistent Whole Database Backups:

A consistent whole database backup is conceptually simple because all you need to do is:

– Shut down the database
– Copy all required files to the backup location
– Open the database

Disadvantages of Making Consistent Whole Database Backups:
  • For business operations where the database must be continuously available, a consistent whole database backup is unacceptable because the database is shut down and unavailable during the backup.
  • The amount of time that the database is unavailable is affected by the size of the database, the number of datafiles, and the speed with which the copy operations on the data files can be performed. If this amount of time exceeds the allowable downtime, you must choose another type of backup.
  • The recovery point is only to the last full consistent whole database backup, and lost transactions may have to be entered manually following a recovery operation.

Performing a Consistent Whole Database Backup:

Perform a consistent whole database backup while the Oracle server instance is shut down.

1. Compile an up-to-date listing of all relevant files to back up.
2. Shut down the Oracle instance with the SHUTDOWN NORMAL, SHUTDOWN IMMEDIATE, or SHUTDOWN TRANSACTIONAL command.
3. Back up all datafiles and control files by using an operating system backup utility. You can also include the redo log files, although this is not required. You should also back up the parameter file and the password file (a minimal OS-level sketch follows these steps).
4. Restart the Oracle instance.
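
A minimal OS-level sketch of steps 2-4, assuming a Linux host and an illustrative backup directory /u01/backup (the datafile location matches the earlier transcript; adjust both paths to your own layout):

$ sqlplus / as sysdba
SQL> shutdown immediate
SQL> exit
$ cp /u01/app/oracle/oradata/ducat/*.dbf /u01/backup/
$ cp /u01/app/oracle/oradata/ducat/*.ctl /u01/backup/
$ cp /u01/app/oracle/oradata/ducat/*.log /u01/backup/
$ sqlplus / as sysdba
SQL> startup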

Backup and Recovery : #02

Instance and Media Recovery Structures



Oracle Instance

An Oracle instance consists of memory areas (mainly the System Global Area [SGA]) and background processes, namely PMON, SMON, DBWn, LGWR, and CKPT. An instance is created during the nomount stage of database startup, after the parameter file has been read. If any of these processes terminates, the instance shuts down.

Memory Structures


Database buffer cache:
Memory area used to store blocks read from data files. Data is read into the buffers by server processes and written out by DBWn asynchronously.

Shared pool:
Stores parsed versions of SQL statements, PL/SQL procedures, and data dictionary information

Log buffer:
Memory containing before and after image copies of changed data to be written to the redo logs.

Large pool:
An optional area in the SGA that provides large memory allocations for backup and restore operations, I/O server processes, and session memory for the shared server and Oracle XA.

Background Processes


Database writer(DBWn):
Writes dirty buffers from the data buffer cache to the data files. This activity is asynchronous.

Log writer (LGWR):
Writes data from the redo log buffer to the redo log files

System monitor(SMON):
Performs automatic instance recovery. Recovers space in temporary segments when they are no longer in use. Merges contiguous areas of free space depending on parameters that are set.

Process monitor(PMON):
Cleans up the connection/server process dedicated to an abnormally terminated user process. Performs rollback and releases the resources held by the failed process.

Checkpoint (CKPT):
Synchronizes the headers of the data files and control files with the current redo log and checkpoint numbers.

Archiver (ARCn)(optional):
A process that automatically copies redo logs that have been marked for archiving.

The User Process:
The user process is created when a user starts a tool such as SQL*Plus, Oracle Forms Developer, Oracle Reports Developer, Oracle Enterprise Manager, and so on. This process might be on the client or server, and provides an interface for the user to enter commands that interact with the database.

The Server Process
The server process accepts commands from the user process and performs steps to complete user requests. If the database is not in a shared server configuration, a server process is created on the machine containing the instance when a valid connection is established.

Oracle Database
An Oracle database consists of the following physical files:

Datafiles(Binary):
Physical storage of data. At least one file is required per database. This file stores the system tablespace.

Redo logs file (Binary):
Contain before and after image copies of changed data, for recovery purposes. At least two groups are required

Control files(Binary):
Record the status of the database, physical structure, and RMAN meta data

Parameter file (Text):
Store parameters required for instance startup

Server parameter file(Binary):
Store persistent parameters required for instance startup

Password file(optional)(Binary):
Store information on users who can start, stop, and recover the database

Archive logs(optional)(Binary):
Physical copies of the online redo log files. Created when the database is set in Archivelog mode. Used in recovery.


Large Pool


The large pool is used to allocate sequential I/O buffers from shared memory for I/O slaves and for Oracle backup and restore operations.
Recovery Manager (RMAN) uses the large pool for backup and restore when you set the DBWR_IO_SLAVES or BACKUP_TAPE_IO_SLAVES parameter to simulate asynchronous I/O.

Sizing the Large Pool:

If LARGE_POOL_SIZE is set, then Oracle attempts to allocate these buffers from the large pool.
If the LARGE_POOL_SIZE initialization parameter is not set, then the Oracle server attempts to allocate the shared memory buffers from the shared pool in the SGA.

DBWR_IO_SLAVES: This parameter specifies the number of I/O slaves used by the DBWn process. The DBWn process and its slaves always write to disk. By default, the value is 0 and I/O slaves are not used.

BACKUP_TAPE_IO_SLAVES: This parameter specifies whether I/O slaves are used by Recovery Manager to back up, copy, or restore data to tape.

To determine how the large pool is being used, query V$SGASTAT.
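
A hedged example of sizing the large pool and checking its usage (the values are illustrative; DBWR_IO_SLAVES is a static parameter, so it requires SCOPE=SPFILE and an instance restart):

SQL> alter system set large_pool_size=64M;
SQL> alter system set dbwr_io_slaves=2 scope=spfile;
SQL> select pool, name, bytes from v$sgastat where pool = 'large pool';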


Database Buffer Cache, DBWn and Datafiles:



Function of the Database Buffer Cache:

The database buffer cache is an area in the SGA that is used to store the most recently used data blocks.
The server process reads tables, indexes, and rollback segments from the data files into the buffer cache where it makes changes to data blocks when required.
The Oracle server uses a least recently used (LRU) algorithm to determine which buffers can be overwritten to accommodate new blocks in the buffer cache.

Function of the DBWn Background Process:

The database writer process (DBWn) writes the dirty buffers from the database buffer cache to the data files. It ensures that sufficient numbers of free buffers— buffers that can be overwritten when server processes need to read in blocks from the data files—are available in the database buffer cache.

The database writer regularly synchronizes the database buffer cache and the data files; this is the checkpoint event, which is triggered in various situations.

Although one database writer process is adequate for most systems, you can configure additional processes to improve write performance if your system modifies data heavily.
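
For instance, additional writers can be configured through the DB_WRITER_PROCESSES parameter; it is static, so SCOPE=SPFILE and a restart are required, and the value 2 below is only an example:

SQL> alter system set db_writer_processes=2 scope=spfile;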

Data Files
Data files store both system and user data on disk. This data may be committed or uncommitted.

Data Files Containing Only Committed Data:
This is normal for a closed database, except when a failure has occurred or the "shutdown abort" option has been used. If the instance is shut down using the normal, immediate, or transactional option, the data files contain only committed data. This is because all uncommitted data is rolled back, and a checkpoint is issued to force all committed data to disk.

Data Files Containing Uncommitted Data:
While an instance is running, data files can contain uncommitted data. This happens when data has been changed but not committed (the changed data is now in the cache) and more space is needed in the cache, forcing the uncommitted data to disk. Only when all users eventually commit will the data files contain only committed data. In the event of failure, the redo logs and rollback segments are used during subsequent recovery to synchronize the data files.

Configuring Tablespaces:
Tablespaces contain one or more datafiles. It is important that tablespaces are created carefully to provide a flexible and manageable backup and recovery strategy.


System: Backup and recovery is more flexible if system and user data is contained in different tablespaces.
Temporary: If the tablespace containing temporary segments (used in sort, and so on) is lost, it can be re-created, rather than recovered.
Undo: The procedures for backing up undo tablespaces are exactly the same as for backing up any other read/write tablespace. Because the automatic undo tablespace is so important for recovery and for read consistency, you should back it up frequently.
Read-only data: Backup time can be reduced because a tablespace must be backed up only when the tablespace is made read-only.
Highly volatile data: This tablespace should be backed up more frequently, also reducing recovery time.
Index data: Tablespaces to store only index segments should be created. These tablespaces can often be re-created instead of recovered.


Redo Log Buffer, LGWR and Redo Log Files:



Function of the Redo Log Buffer
The redo log buffer is a circular buffer that holds information about changes made to the database. This information is stored in redo entries.
Redo entries contain the information necessary to reconstruct, or redo, changes made to the database by INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP operations. Redo entries are used for database recovery, if necessary.
Redo entries are copied by Oracle server processes from the user’s memory space to the redo log buffer.

Function of the LGWR Background Process
The log writer (LGWR) writes redo entries from the redo log buffer to the redo log files as follows:
  • When the redo log buffer is one-third full
  • When a timeout occurs (every three seconds)
  • When there is 1 MB of redo
  • Before DBWn writes modified blocks in the database buffer cache to the data files 
  • When a transaction commits


Redo Log Files:

Redo log files store all changes made to the database. If the database is to be recovered to a point in time when it was operational, redo logs are used to ensure that all committed transactions are committed to disk, and all uncommitted transactions are rolled back. The important points relating to redo log files are as follows:
LGWR writes to redo log files in a circular fashion; this behavior eventually results in all members of a log file group being overwritten.
Although it is mandatory to have at least two log groups to support this cyclic behavior, in most cases you will need more than two redo log groups.

Redo Log Switches
  • At a log switch, the current redo log group is assigned a log sequence number that identifies the information stored in that redo log group and is also used for synchronization.
  • A log switch occurs when LGWR has filled one log file group and stops writing to it to begin writing to another.
  • A DBA can force a log switch by using the ALTER SYSTEM SWITCH LOGFILE command, as shown below.
  • A checkpoint occurs automatically at a log switch.
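
For example, forcing a switch and observing the result in V$LOG (group numbers, sequence numbers, and statuses will differ on your system):

SQL> select group#, sequence#, status from v$log;
SQL> alter system switch logfile;
SQL> select group#, sequence#, status from v$log;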


CKPT Process

Database checkpoints ensure that all modified database buffers are written to the database files.
The data file headers are then marked current, and the checkpoint sequence number is recorded in the control file. Checkpoints synchronize the buffer cache by writing to disk all buffers whose corresponding redo entries were part of the log file being checkpointed.

Checkpoint Process (CKPT) Features
  • The CKPT process is always enabled.
  • The CKPT process updates file headers at checkpoint completion.
  • More frequent checkpoints reduce the time needed for recovering from instance failure, at the possible expense of performance; the sketch below shows the parameter most commonly used to influence checkpoint frequency.
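
From Oracle 9i onward, checkpoint frequency is most commonly influenced through the FAST_START_MTTR_TARGET parameter, which sets a target instance-recovery time in seconds (300 below is only an example):

SQL> alter system set fast_start_mttr_target=300;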

Synchronization

  • At each checkpoint, the checkpoint number is updated in every database file header and in the control file.
  • The checkpoint number acts as a synchronization marker for redo, control, and data files. If they have the same checkpoint number, the database is considered to be in a consistent state.
  • Information in the control file is used to confirm that all files are at the same checkpoint number during database startup. Any inconsistency between the checkpoint numbers in the various file headers results in a failure, and the database cannot be opened; recovery is required. A quick way to inspect these numbers is sketched below.
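
A quick way to compare checkpoint numbers between the control file and the data file headers; for a consistent database, the numbers returned by the two queries should agree:

SQL> select checkpoint_change# from v$database;
SQL> select file#, checkpoint_change# from v$datafile_header;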