You may see any of these errors in case of redo log corruption:
ORA-16038 log %s sequence# %s cannot be archived
ORA-367 checksum error in log file header
ORA-368 checksum error in redo log block
ORA-354 corrupt redo log block header
ORA-353 log corruption near block change time
Solution: Try to clear the log file without shutting down the database.
You have to be careful when using 'alter database clear logfile', because the command erases all data in the logfile.
eg: alter database clear logfile group 1;
alter database clear unarchived logfile group 1;
Dropping or clearing a redo log is not possible if there are only two log groups, or if the corrupt logfile belongs to the CURRENT or ACTIVE group, because it may be needed for instance recovery; in that case you may receive an ORA-1624 error.
If you receive ORA-1624, you have to perform incomplete recovery, stopping just before the corrupted redo log file.
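Before clearing anything, confirm which group is corrupt and whether it is CURRENT or ACTIVE. A minimal sketch using the V$LOG and V$LOGFILE views (output will vary by database):
SQL> SELECT group#, sequence#, status, archived FROM v$log;
SQL> SELECT group#, member FROM v$logfile ORDER BY group#;
If the corrupt group shows INACTIVE it is generally safe to clear; if it shows CURRENT or ACTIVE, plan for incomplete recovery as described above.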
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/
Wednesday, May 23, 2012
Oracle data pump expdp/impdp
The Oracle Data Pump utility is used for exporting data and metadata into a set of operating system files, and it is a newer, faster, and more flexible alternative to the original export/import utilities.
The Oracle Data Pump utility was introduced in Oracle 10g Release 1 and can be invoked using the expdp (export) and impdp (import) commands. You need to specify export/import parameters to determine the operation, and the parameters can be given on the command line or in a parameter file.
The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP package to execute the commands, and the DBMS_METADATA package is used to move the metadata.
Please note that it is not possible to start or restart Data Pump jobs on one instance in Oracle RAC if there are jobs currently running on other instances.
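If you need to check for running or stopped Data Pump jobs before starting a new one, a quick sketch against the DBA_DATAPUMP_JOBS view:
SQL> SELECT owner_name, job_name, operation, state FROM dba_datapump_jobs;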
Oracle Data Pump Export:
1. Create directory object as SYS user.
SQL> create or replace directory export_dir as '/oradata/export';
2. Grant read/write privileges on the directory to the user who invokes the Data Pump export.
SQL> grant read,write on directory export_dir to test_user;
3. Take Data Pump Export
Click here to see Roles/privileges required for Export modes.
Oracle data pump export examples for all 5 modes.
(i) Full Database Export
$ expdp test_user/test123 full=y directory=export_dir dumpfile=expdp_fulldb.dmp logfile=expdp_fulldb.log
(ii) Schema Export
$ expdp test_user/test123 schemas=test_user directory=export_dir dumpfile=expdp_test_user.dmp logfile=expdp_test_user.log
If you want to export more than one schema, specify the schema names separated by commas.
(iii) Table Export
$ expdp test_user/test123 tables=emp,dept directory=export_dir dumpfile=expdp_tables.dmp logfile=expdp_tables.log
You can specify more than one table.
(iv) Tablespace Export
$ expdp test_user/test123 tablespaces=test_user_tbs directory=export_dir dumpfile=expdp_tbs.dmp logfile=expdp_tbs.log
You can specify more than one tablespace.
(v) Transportable tablespace
$ expdp test_user/test123 transport_tablespaces=test_user_tbs transport_full_check=y directory=export_dir dumpfile=expdp_trans_tbs.dmp logfile=expdp_trans_tbs.log
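Before running a large export, you can estimate the dump size without writing any data by using the ESTIMATE_ONLY parameter; a small sketch (no dump file is produced, and the log file name is illustrative):
$ expdp test_user/test123 schemas=test_user estimate_only=y directory=export_dir logfile=expdp_estimate.log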
Click here to learn more on Transportable Tablespace with examples.
Oracle Data Pump Import:
The Data Pump Import utility is used for loading export dump files into a target system, and one or more files can be loaded.
Copy the dump file to the target system where you want to import.
1. Create directory object as SYS user.
SQL> create directory import_dir as '/oradata/import';
2. Grant read/write privileges on the directory to the user who invokes the Data Pump import.
SQL> grant read,write on directory import_dir to test_user;
3. Import the data using Data Pump Import.
Oracle data pump import examples for all 5 modes.
(i) Full Database Import
$ impdp test_user/test123 full=y directory=import_dir dumpfile=expdp_fulldb.dmp logfile=imp_fulldb.log
(ii) Schema Import
$ impdp test_user/test123 schemas=test_user directory=import_dir dumpfile=expdp_test_user.dmp logfile=impdp_test_user.log
(iii) Table Import
$ impdp test_user/test123 tables=emp,dept directory=import_dir dumpfile=expdp_tables.dmp logfile=impdp_tables.log
From 11g, you can rename a table during the import using the REMAP_TABLE parameter:
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename
$ impdp test_user/test123 remap_table=test_user.emp:emp1 directory=import_dir dumpfile=expdp_tables.dmp logfile=impdp_tables.log
Tables will not be remapped if they already exist, even if TABLE_EXISTS_ACTION is set to TRUNCATE or APPEND.
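A whole schema can be remapped in the same way with the REMAP_SCHEMA parameter. A minimal sketch, where new_user is a hypothetical target schema that will receive test_user's objects:
$ impdp test_user/test123 remap_schema=test_user:new_user directory=import_dir dumpfile=expdp_test_user.dmp logfile=impdp_remap_schema.log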
(iv) Tablespace Import
$ impdp test_user/test123 tablespaces=test_user_tbs directory=import_dir dumpfile=expdp_tbs.dmp logfile=impdp_tbs.log
The above example imports all tables that have data in the tablespace test_user_tbs, and it assumes that the tablespace already exists.
(v) Transportable Tablespace
Click here to import data using the Transportable Tablespace method.
Common errors with the Data Pump import (impdp) utility:
1. ORA-31631: privileges are required
ORA-39122: Unprivileged users may not perform REMAP_SCHEMA remapping
Cause: A user attempted to remap objects during an import but lacked the IMPORT_FULL_DATABASE privilege.
Action: Retry the job from a schema that owns the IMPORT_FULL_DATABASE privilege.
2. ORA-31631: privileges are required
ORA-39161: Full database jobs require privileges
Cause: Either an attempt to perform a full database export without the EXP_FULL_DATABASE role or an attempt to perform a full database import over a network link without the IMP_FULL_DATABASE role.
Action: Retry the operation in a schema that has the required roles.
3. ORA-01950: no privileges on tablespace "string"
Cause: User does not have privileges to allocate an extent in the specified tablespace.
Action: Grant the user the appropriate system privileges or grant the user space resource on the tablespace.
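For example, either of the following statements resolves the error (the quota size and names are illustrative):
SQL> ALTER USER test_user QUOTA 100M ON test_user_tbs;
SQL> GRANT UNLIMITED TABLESPACE TO test_user;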
Click here to learn Roles/ privileges required for Data pump Export and Import.
4. import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
IMP-00017: following statement failed with ORACLE error 3113:
"BEGIN "
"SYS.DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE SYS.DBMS_RULE_ADM.CREATE_EVALUATIO" "N_CONTEXT_OBJ, 'SYS',TRUE);"
Cause: Import fails while executing the above statement.
Action: Log in as SYS and run the following scripts:
$ORACLE_HOME/rdbms/admin/dbmsread.sql
$ORACLE_HOME/rdbms/admin/prvtread.plb
5. Import failed with the errors below
IMP-00003: ORACLE error 6550 encountered
ORA-06550: line 2, column 1:
PLS-00201: identifier 'CTXSYS.DRIIMP' must be declared
Cause: The user that you are importing does not have privileges on the CTXSYS.DRIIMP package, or the CTXSYS user does not exist.
Action: Create the CTXSYS user or grant the required permissions.
Please see the Data pump Export and Import related documents:
Click here for Data Pump Export modes and Interfaces.
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/
Transportable tablespace export and import
Transportable tablespace export and import works across platforms, and only metadata is exported. With cross-platform transportable tablespaces, data movement is simpler and faster.
This mode requires that you have the EXP_FULL_DATABASE role.
Please note that:
1. The source and target databases must use the same character set and national character set.
2. You cannot transport a tablespace to a target database in which a tablespace with the same name already exists.
3. Transportable tablespace exports cannot be restarted once stopped.
4. The target database must be at the same or a higher release level as the source database.
Transportable tablespace export and import on same endian platforms
Step 1: Find the Operating system byte order on Source and Target Database
SQL> select * from v$transportable_platform order by platform_id;
PLATFORM_ID PLATFORM_NAME                    ENDIAN_FORMAT
----------- -------------------------------- -------------
1 Solaris[tm] OE (32-bit) Big
2 Solaris[tm] OE (64-bit) Big
3 HP-UX (64-bit) Big
4 HP-UX IA (64-bit) Big
5 HP Tru64 UNIX Little
6 AIX-Based Systems (64-bit) Big
7 Microsoft Windows IA (32-bit) Little
8 Microsoft Windows IA (64-bit) Little
9 IBM zSeries Based Linux Big
10 Linux IA (32-bit) Little
11 Linux IA (64-bit) Little
12 Microsoft Windows 64-bit for AMD Little
For example, suppose you want to transport the tablespace “test_user_tbs” from a Linux 64-bit (little endian) machine TESTLINUX to a Microsoft Windows 64-bit (little endian) machine TESTWIN. Both the source and target platforms are of the little endian type. The data file for the tablespace “test_user_tbs” is “test_user_tbs01.dbf”.
Step 2: Make the tablespace READ ONLY
SQL> alter tablespace test_user_tbs read only;
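Before the export you can also verify that the tablespace set is self-contained using the DBMS_TTS package; run it as a suitably privileged user (typically SYS) and check the TRANSPORT_SET_VIOLATIONS view. A minimal sketch:
SQL> EXEC DBMS_TTS.TRANSPORT_SET_CHECK('test_user_tbs', TRUE);
SQL> SELECT * FROM transport_set_violations;
No rows returned means the tablespace set can be transported.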
Step 3: Export metadata
(i) Using the export utility
$ exp test_user/test123 tablespaces=test_user_tbs transport_tablespace=y file=exp_test_user_tbs.dmp log=exp_test_user_tbs.log
(ii) Using the Data Pump export utility
$ expdp system/password TRANSPORT_TABLESPACES=test_user_tbs TRANSPORT_FULL_CHECK=Y DIRECTORY=export_dir DUMPFILE=expdp_test_user_tbs.dmp LOGFILE=expdp_test_user_tbs.log
TRANSPORT_FULL_CHECK=Y specifies that a check is performed for dependencies between the objects inside the transportable tablespace and those outside it.
The file “exp_test_user_tbs.dmp” or “expdp_test_user_tbs.dmp” contains only metadata.
Step 4: Copy the files to Target system
If you are using FTP, use the binary option.
Step 5: Initiate Import to plug the tablespace into the database.
(i) Using the import utility
$ imp test_user/test123 tablespaces=test_user_tbs transport_tablespace=y file=exp_test_user_tbs.dmp datafiles='test_user_tbs01.dbf' log=imp_test_user_tbs.log
(ii) Using impdp utility
Click here to see the Instructions to create Directory and grant privileges.
$ impdp test_user/test123 TRANSPORT_DATAFILES='test_user_tbs01.dbf' DIRECTORY=import_dir DUMPFILE=expdp_test_user_tbs.dmp LOGFILE=impdp_test_user_tbs.log
You can use REMAP_SCHEMA=source:target if you want to import into another schema.
Step 6: Put the tablespace in read/write mode.
SQL> ALTER TABLESPACE TEST_USER_TBS READ WRITE;
Now the database has a tablespace named “test_user_tbs”, and the objects of the tablespace are available.
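As a quick sanity check after the import, you can confirm that the tablespace and its datafile are visible and available (output will vary by database):
SQL> SELECT tablespace_name, status FROM dba_tablespaces WHERE tablespace_name = 'TEST_USER_TBS';
SQL> SELECT file_name, status FROM dba_data_files WHERE tablespace_name = 'TEST_USER_TBS';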
Transportable Tablespace Import Common Errors:
1. Oracle Error : EXP-00044: must be connected "AS SYSDBA" to do Point-in-time Recovery or Transportable Tablespace import
Cause: The user must log in "as SYSDBA" to perform transportable tablespace imports or Point-In-Time Recovery imports.
Action: Ask your database administrator to perform the Transportable Tablespace import or
the Tablespace Point-in-time Recovery import.
2. IMP-00017: following statement failed with ORACLE error 19721:
IMP-00003: ORACLE error 19721 encountered
ORA-06512: at "SYS.DBMS_PLUGTS", line 2065
ORA-06512: at line 1
Cause: A duplicated datafile name in the import parameter file was causing the issue.
Action: Correct the datafile name in the import parameter file.
Please see the Data pump Export/Import related documents:
Click here for Data Pump Export modes and Interfaces.
Click here for Data Pump Export/Import with Examples.
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/
Flashback Drop and Recycle Bin
Oracle 10g provides the ability to reinstate an accidentally dropped table; this feature is called Flashback Drop.
When a table is dropped, the database does not immediately remove the space associated with that table. Instead, the table is renamed to a system-defined name and placed in the recycle bin. The Flashback Drop operation recovers the table from the recycle bin.
SQL> DROP TABLE employee_tbl;
You can see the system-defined name of the table.
SQL> select * from tab;
TNAME TABTYPE CLUSTERID
------------------------------ ------- ----------
BIN$gXxxELu7aQ/gQAoKd5l2Hg==$0 TABLE
You can see the dropped table in the recycle bin using
SQL> SHOW RECYCLEBIN
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
-------------- ------------------------------ ------------ ----------------
EMPLOYEE_TBL BIN$gXxxELu7aQ/gQAoKd5l2Hg==$0 TABLE 2010-03-01:09:10:00
Recover the dropped table from Recycle bin using
SQL> FLASHBACK TABLE employee_tbl TO BEFORE DROP;
Recover the dropped table from Recycle bin with new name using
SQL> FLASHBACK TABLE employee_tbl TO BEFORE DROP RENAME TO employee_tmp;
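If the table was dropped and recreated several times, the recycle bin may hold multiple versions under different BIN$ names. In that case you can flash back a specific version by quoting its recycle bin name (the BIN$ value below is taken from the SHOW RECYCLEBIN output above):
SQL> FLASHBACK TABLE "BIN$gXxxELu7aQ/gQAoKd5l2Hg==$0" TO BEFORE DROP RENAME TO employee_tbl_old;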
You can also drop a table permanently, bypassing the recycle bin, using
SQL> DROP TABLE employee_tbl PURGE;
The space is not freed in the tablespace; the table will remain until the recycle bin is purged. You can purge a table explicitly from the recycle bin using
SQL> PURGE TABLE employee_tbl;
Purge all tables in recycle bin using
SQL> PURGE RECYCLEBIN;
A DBA can purge all objects in all users' recycle bins using
SQL> PURGE DBA_RECYCLEBIN;
Purge all the objects in the recycle bin belonging to a tablespace using
SQL> PURGE TABLESPACE employee_tbs;
Purge the recycle bin objects in a tablespace for a specific user only using
SQL> PURGE TABLESPACE employee_tbs USER emp_user;
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com
Flashback Table Feature in Oracle 10g
Oracle 9i Database introduced the Flashback Query option to retrieve data from a point in time in the past. Oracle 10g provides the ability to recover a table or a set of tables to a specified point in time in the past; this feature is called Flashback Table.
The Oracle Flashback Table operation is very quick: you do not have to restore any data from backups, and the rest of your database remains available while the Flashback Table operation is performed.
Flashback Table depends on undo information retained in the undo tablespace. For example, if you set UNDO_RETENTION=3600 (one hour), Oracle will try not to overwrite undo data for one hour, so users can recover from their mistakes only within that window.
Flashback table feature has some prerequisites:
•Row movement must be enabled on the table.
SQL> ALTER TABLE table_name ENABLE ROW MOVEMENT;
•You must have SELECT, INSERT, DELETE, and ALTER privileges on the table.
•You must have FLASHBACK ANY TABLE privilege or the FLASHBACK object privilege on the table.
Use the commands below to restore the table to its state at the time specified by an SCN or a point in time.
SQL> FLASHBACK TABLE employee_tbl TO SCN 786;
or
SQL> FLASHBACK TABLE employee_tbl TO TIMESTAMP TO_TIMESTAMP('2010-03-01 09:00:00', 'YYYY-MM-DD HH:MI:SS');
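To pick a sensible target, you can capture the current SCN before a risky change, or map an SCN to a timestamp afterwards (the SCN below is the one from the example, and the mapping only works within the undo retention window):
SQL> SELECT current_scn FROM v$database;
SQL> SELECT SCN_TO_TIMESTAMP(786) FROM dual;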
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com
How to recover undo or rollback tablespace
While working with the undo tablespace you need to be extra cautious due to active transactions in the undo segments. You need to follow a different approach depending upon the scenario.
Scenario 1: Undo/rollback datafile damaged or dropped accidentally when the database is up
In this scenario the lost or damaged datafile may contain active transactions, and you will not be able to offline or drop the undo/rollback datafile.
• Startup mount
SQL> STARTUP MOUNT
• Check the status of the datafile
SQL> SELECT FILE#, NAME, STATUS FROM V$DATAFILE;
If the datafile is offline, you must bring it online before you recover.
• Restore and Recover the datafile
$ rman TARGET / CATALOG rman/*****@rman
RMAN> RESTORE DATAFILE 'fullpath_and_filename';
RMAN> RECOVER DATAFILE 'fullpath_and_filename';
• Open the database.
SQL> ALTER DATABASE OPEN;
Scenario 2: Undo/rollback datafile damaged or dropped accidentally when the database is down
• If using automatic undo management, comment out the UNDO_MANAGEMENT parameter in the init.ora file. If using rollback segments, comment out the ROLLBACK_SEGMENTS parameter.
• Mount the database in restricted mode
SQL> STARTUP RESTRICT MOUNT
• Offline the undo or rollback datafile and drop the file
SQL> ALTER DATABASE DATAFILE 'fullpath_and_filename' OFFLINE DROP;
• Open the database and drop the UNDO tablespace or the tablespace which contains the rollback segments
SQL> ALTER DATABASE OPEN;
SQL> DROP TABLESPACE tablespace_name INCLUDING CONTENTS;
• Recreate the undo tablespace. If you are using rollback segments, recreate the rollback segment tablespace and rollback segments. Make sure to bring the rollback segments online.
If using Undo tablespace
SQL> CREATE UNDO TABLESPACE undotbs2 DATAFILE '/oradata/undotbs2_01.dbf' SIZE 100M REUSE AUTOEXTEND ON;
If using rollback segment tablespace
SQL> CREATE TABLESPACE rollback_tbs DATAFILE '/oradata/rollback_tbs01.dbf' SIZE 100M EXTENT MANAGEMENT LOCAL;
SQL> CREATE ROLLBACK SEGMENT rollback1 TABLESPACE rollback_tbs;
SQL> ALTER ROLLBACK SEGMENT rollback1 ONLINE;
• Modify the parameter file settings
If using UNDO tablespace
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=new_undo_tablespace_name
If you are using the rollback segment tablespace, include the rollback segments that you created in the previous step in the ROLLBACK_SEGMENTS parameter in the init.ora file.
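For example, the init.ora entry might look like this (the segment name comes from the previous step):
ROLLBACK_SEGMENTS = (rollback1)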
• Take the database out of restricted mode.
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;
Oracle 11g introduced a new feature called RMAN UNDO Bypass: the RMAN backup command no longer backs up UNDO data that is not needed for recovery. Prior to Oracle 11g, undo for transactions that were already committed was also backed up. This undo optimization reduces backup time and storage.
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com
Thursday, May 27, 2010
Controlfile and Server parameter file (spfile) autobackup
RMAN can be configured to automatically back up the control file and server parameter file (spfile) whenever the database structure in the control file changes and whenever a backup record is added. The autobackup feature enables RMAN to recover the database even if the current control file, catalog, and server parameter file are lost. RMAN can search the autobackup path and restore the server parameter file from a backup. Once the instance is started with the restored spfile, use RMAN to restore the control file from the autobackup. After you mount the database using the restored control file, use the RMAN repository information from the control file to restore and recover the data files.
You can turn ON or OFF the autobackup feature by using the following commands:
RMAN>CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN>CONFIGURE CONTROLFILE AUTOBACKUP OFF;
If control file autobackup is ON and the backup includes datafile 1, RMAN writes the control file and SPFILE to a separate backup set. If autobackup is OFF and the backup includes datafile 1, then RMAN includes the current control file and SPFILE in the same backup set as the datafiles. After the backup completes, the database writes a message containing the complete path of the backup piece and the device type to the alert log.
The control file autobackup filename has a default format of %F for all device types, so that RMAN can deduce the file location and restore the control file without a repository. All autobackup formats must include the %F variable.
The format %F translates into c-IIIIIIIIII-YYYYMMDD-QQ, where:
IIIIIIIIII stands for the DBID.
YYYYMMDD is a timestamp of the day the backup was generated.
QQ is a hexadecimal sequence that starts with 00 and has a maximum of FF.
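For example, with a hypothetical DBID of 1234567890, the first autobackup generated on May 27, 2010 would be named:
c-1234567890-20100527-00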
Use the following command to configure the control file autobackup format:
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/oradata/cf_spfile_%F';
Use the following command to write to an Automatic Storage Management disk group:
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '+DGROUP1/%F';
Use the following commands to clear control file autobackup formats for a device:
RMAN>CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;
RMAN>CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE sbt CLEAR;
You can use the SET CONTROLFILE AUTOBACKUP FORMAT command to override the configured autobackup format at the session level:
RMAN> SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backup/cf_spfile_%F';
RMAN> BACKUP DATABASE;
If you have configured control file autobackup, you do not need a recovery catalog or target database control file to restore the control file if you lose all your control files.
Use the command below to restore the control file from autobackup.
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
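If the instance is completely lost, the typical sequence is to set the DBID, restore the spfile, restore the control file, and then recover the database. A hedged sketch (the DBID is hypothetical, and the autobackup must be reachable through the configured format or default location):
RMAN> SET DBID 1234567890;
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE SPFILE FROM AUTOBACKUP;
RMAN> STARTUP FORCE NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;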
DATA PUMP sample
To use Data Pump, the DBA has to create a directory on the server machine and create a directory object in the database mapping to the directory created in the file system. The following example creates a directory in the filesystem, creates a directory object in the database, and grants privileges on the directory object to the SCOTT user.
$ mkdir my_dump_dir
$ sqlplus
Enter User: / as sysdba
SQL> create directory data_pump_dir as '/u01/oracle/my_dump_dir';
Now grant access on this directory object to the SCOTT user:
SQL> grant read,write on directory data_pump_dir to scott;
To export the full database, give the following command:
$ expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob
The above command will export the full database and create the dump file full.dmp in the directory /u01/oracle/my_dump_dir on the server.
In some cases, where the database is terabytes in size, the above command will not be feasible, since the dump file size will be larger than the operating system limit and the export will fail. In this situation you can create multiple dump files by typing the following command:
$ expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full%U.dmp FILESIZE=5G LOGFILE=myfullexp.log JOB_NAME=myfullJob
This will create multiple dump files named full01.dmp, full02.dmp, full03.dmp, and so on. The FILESIZE parameter specifies the maximum size of each dump file.
To export all the objects of SCOTT's schema you can run the following Data Pump export command:
$ expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT
You can omit the SCHEMAS parameter, since the default mode of Data Pump export is a schema-mode export of the current user.
If you want to export objects of multiple schemas you can specify the following command:
$ expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT,HR,ALI
You can use the Data Pump Export utility to export individual tables. The following example shows the syntax to export tables:
$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp TABLES=employees,jobs,departments
Exporting Tables located in a Tablespace
If you want to export tables located in a particular tablespace, you can type the following command:
$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp TABLESPACES=tbs_4,tbs_5,tbs_6
The above will export all the objects located in tbs_4, tbs_5, and tbs_6.
You can exclude objects while performing an export by using the EXCLUDE option of the Data Pump utility. For example, if you are exporting a schema and don't want to export tables whose names start with "A", you can type the following command:
$ expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT EXCLUDE=TABLE:"like 'A%'"
Then all tables in SCOTT's schema whose names start with "A" will not be exported.
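On many shells the quotes in EXCLUDE (and QUERY) clauses are stripped or mangled, so it is often easier to put such parameters in a parameter file. A minimal sketch, assuming a hypothetical file exp_scott.par:
$ cat exp_scott.par
DIRECTORY=data_pump_dir
DUMPFILE=scott_schema.dmp
SCHEMAS=SCOTT
EXCLUDE=TABLE:"like 'A%'"
$ expdp scott/tiger parfile=exp_scott.par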
Similarly, you can use the INCLUDE option to export only certain objects, like this:
$ expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT INCLUDE=TABLE:"like 'A%'"
This is the opposite of the EXCLUDE option, i.e. it will export only those tables of SCOTT's schema whose names start with "A".
Similarly, you can also exclude INDEXES, CONSTRAINTS, GRANTS, USER, and SCHEMA objects.
You can use the QUERY option to export only the required rows. For example, the following will export only those rows of the emp table whose salary is above 10000 and whose dept_id is greater than 10:
$ expdp hr/hr QUERY=emp:'"WHERE dept_id > 10 AND sal > 10000"' NOLOGFILE=y DIRECTORY=dpump_dir1 DUMPFILE=exp1.dmp
You can suspend running export jobs and later resume or kill them using Data Pump Export. You can start a job on one client machine and then, because of some other work, suspend it. Afterwards, when your work is finished, you can continue the job from the same client where you stopped it, or you can restart the job from another client machine.
For example, suppose a DBA starts a full database export by typing the following command at one client machine, CLNT1:
$ expdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob
After some time, the DBA wants to stop this job temporarily, so he presses CTRL+C to enter interactive mode. He will then get the Export> prompt, where he can type interactive commands. Now he wants to stop this export job, so he types the following command:
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y
The job is placed in a stopped state and exits the client.
After finishing his other work, the DBA wants to resume the export job, but the client machine from where he actually started the job is unavailable because that user has locked his cabin. So the DBA goes to another client machine and reattaches to the job by typing the following command:
$ expdp scott/tiger@mydb ATTACH=myfulljob
After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job:
Export> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output to the client.
Note: After reattaching to the job, a DBA can also kill the job by typing KILL_JOB if he doesn't want to continue with the export job.