You may see any of these errors in case of redo log corruption:
ORA-16038 log %s sequence# %s cannot be archived
ORA-367 checksum error in log file header
ORA-368 checksum error in redo log block
ORA-354 corrupt redo log block header
ORA-353 log corruption near block change time
Solution:- Try to clear the log file without shutting down the database.
You have to be careful when using 'alter database clear logfile', because the command erases all data in the logfile.
eg: alter database clear logfile group 1;
alter database clear unarchived logfile group 1;
Dropping or clearing a redo log is not possible if there are only two log groups, or if the corrupt logfile belongs to the CURRENT/ACTIVE group, because it may be needed for instance recovery. In these cases you may receive an ORA-1624 error.
If you receive ORA-1624, you have to perform incomplete recovery, stopping just before the corrupted redo log file.
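Before clearing anything, it helps to confirm which group is corrupt and whether it is CURRENT or ACTIVE. A minimal sketch, assuming the corrupt log turns out to be the INACTIVE group 2:
SQL> SELECT GROUP#, SEQUENCE#, STATUS, ARCHIVED FROM V$LOG;
SQL> ALTER DATABASE CLEAR UNARCHIVED LOGFILE GROUP 2;
After clearing an unarchived log, take a full backup immediately, because the cleared redo can no longer be used for media recovery.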
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/
Wednesday, May 23, 2012
Oracle data pump expdp/impdp
The Oracle Data Pump utility is used for exporting data and metadata into a set of operating system files, and it is a newer, faster and more flexible alternative to the original export/import utilities.
The Oracle Data Pump utility was introduced in Oracle 10g Release 1, and it can be invoked using the expdp (export) and impdp (import) commands. Users need to specify export/import parameters to determine the operation, and the parameters can be specified on the command line or in a parameter file.
The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP package to execute the commands, and the DBMS_METADATA package is used to move the metadata.
Please note that it is not possible to start or restart Data Pump jobs on one instance in Oracle RAC if there are jobs currently running on other instances.
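For larger jobs a parameter file keeps the command line manageable. A minimal sketch, assuming a hypothetical file named export.par that reuses the directory and user from the examples below:
$ cat export.par
schemas=test_user
directory=export_dir
dumpfile=expdp_test_user.dmp
logfile=expdp_test_user.log
$ expdp test_user/test123 parfile=export.par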
Oracle Data Pump Export :-
1. Create directory object as SYS user.
SQL> create or replace directory export_dir as '/oradata/export';
2. Grant read/write privileges on the directory to the user who invokes the Data Pump export.
SQL> grant read,write on directory export_dir to test_user;
3. Take Data Pump Export
Click here to see Roles/privileges required for Export modes.
Oracle data pump export examples for all 5 modes.
(i) Full Database Export
$ expdp test_user/test123 full=y directory=export_dir dumpfile=expdp_fulldb.dmp logfile=expdp_fulldb.log
(ii) Schema Export
$ expdp test_user/test123 schemas=test_user directory=export_dir dumpfile=expdp_test_user.dmp logfile=expdp_test_user.log
If you want to export more than one schema, specify the schema names separated by commas (for example, schemas=test_user,hr).
(iii) Table Export
$ expdp test_user/test123 tables=emp,dept directory=export_dir dumpfile=expdp_tables.dmp logfile=expdp_tables.log
You can specify more than one table.
(iv) Tablespace Export
$ expdp test_user/test123 tablespaces=test_user_tbs directory=export_dir dumpfile=expdp_tbs.dmp logfile=expdp_tbs.log
You can specify more than one tablespace.
(v) Transportable tablespace
$ expdp test_user/test123 transport_tablespaces=test_user_tbs transport_full_check=y directory=export_dir dumpfile=expdp_trans_tbs.dmp logfile=expdp_trans_tbs.log
Click here to learn more on Transportable Tablespace with examples.
Oracle Data Pump Import :-
The Data Pump Import utility is used for loading export dump files into a target system; one or more files can be loaded.
Copy the dump file to the target system where you want to import.
1. Create directory object as SYS user.
SQL> create directory import_dir as '/oradata/import';
2. Grant Read/Write privilege on the Directory to the user, who invokes the Data Pump import.
SQL> grant read,write on directory import_dir to test_user;
3. Import the data using Data Pump Import.
Oracle data pump import examples for all 5 modes.
(i) Full Database Import
$ impdp test_user/test123 full=y directory=import_dir dumpfile=expdp_fulldb.dmp logfile=imp_fulldb.log
(ii) Schema Import
$ impdp test_user/test123 schemas=test_user directory=import_dir dumpfile=expdp_test_user.dmp logfile=impdp_test_user.log
(iii) Table Import
$ impdp test_user/test123 tables=emp,dept directory=import_dir dumpfile=expdp_tables.dmp logfile=impdp_tables.log
From 11g, you can rename a table during the import:
REMAP_TABLE=[schema.]old_tablename[.partition]:new_tablename
$ impdp test_user/test123 remap_table=test_user.emp:emp1 directory=import_dir dumpfile=expdp_tables.dmp logfile=impdp_tables.log
Tables will not be remapped if they already exist, even if TABLE_EXISTS_ACTION is set to TRUNCATE or APPEND.
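TABLE_EXISTS_ACTION controls what happens when an imported table already exists; valid values are SKIP, APPEND, TRUNCATE and REPLACE. A sketch reusing the dump file from the table example above:
$ impdp test_user/test123 tables=emp directory=import_dir dumpfile=expdp_tables.dmp table_exists_action=append logfile=impdp_append.log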
(iv) Tablespace Import
$ impdp test_user/test123 tablespaces=test_user_tbs directory=import_dir dumpfile=expdp_tbs.dmp logfile=impdp_tbs.log
The above example imports all tables that have data in tablespace test_user_tbs, and it assumes that the tablespace already exists.
(v) Transportable Tablespace
Click here to import data using the Transportable Tablespace method.
Common Errors with Data pump import (impdp) utility:-
1. ORA-31631: privileges are required
ORA-39122: Unprivileged users may not perform REMAP_SCHEMA remapping
Cause: A user attempted to remap objects during an import but lacked the IMPORT_FULL_DATABASE privilege.
Action: Retry the job from a schema that owns the IMPORT_FULL_DATABASE privilege.
2. ORA-31631: privileges are required
ORA-39161: Full database jobs require privileges
Cause: Either an attempt to perform a full database export without the EXP_FULL_DATABASE role or an attempt to perform a full database import over a network link without the IMP_FULL_DATABASE role.
Action: Retry the operation in a schema that has the required roles.
3. ORA-01950: no privileges on tablespace "string"
Cause: User does not have privileges to allocate an extent in the specified tablespace.
Action: Grant the user the appropriate system privileges or grant the user space resource on the tablespace.
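As a sketch of the second option, granting quota on the tablespace usually resolves ORA-01950 (the user and tablespace names here come from the earlier examples):
SQL> ALTER USER test_user QUOTA UNLIMITED ON test_user_tbs;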
Click here to learn Roles/ privileges required for Data pump Export and Import.
4. import done in WE8MSWIN1252 character set and AL16UTF16 NCHAR character set
IMP-00017: following statement failed with ORACLE error 3113:
"BEGIN "
"SYS.DBMS_RULE_ADM.GRANT_SYSTEM_PRIVILEGE SYS.DBMS_RULE_ADM.CREATE_EVALUATIO" "N_CONTEXT_OBJ, 'SYS',TRUE);"
Cause: The import fails while executing the above statement.
Action: Log in as SYS and run the following scripts:
$ORACLE_HOME/rdbms/admin/dbmsread.sql
$ORACLE_HOME/rdbms/admin/prvtread.plb
5. Import failed with below errors
IMP-00003: ORACLE error 6550 encountered
ORA-06550: line 2, column 1:
PLS-00201: identifier 'CTXSYS.DRIIMP' must be declared
Cause: The user that you are importing does not have privileges on the CTXSYS.DRIIMP package, or the CTXSYS user does not exist.
Action: Create the CTXSYS user or grant the required permissions.
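A minimal sketch of the grant option, assuming CTXSYS exists and the importing user is test_user:
SQL> GRANT EXECUTE ON CTXSYS.DRIIMP TO test_user;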
Please see the Data pump Export and Import related documents:
Click here for Data Pump Export modes and Interfaces.
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/
Transportable tablespace export and import
Transportable tablespace export and import is available across platforms, and only metadata is exported. With cross-platform transportable tablespaces, data movement is simpler and faster.
This mode requires that you have the EXP_FULL_DATABASE role.
Please note that:
1. The source and target databases must use the same character set and national character set.
2. You cannot transport a tablespace to a target database where that tablespace already exists.
3. Transportable tablespace exports cannot be restarted once stopped.
4. The target database must be at the same or a higher release level than the source database.
Transportable tablespace export and import on same endian platforms
Step 1: Find the Operating system byte order on Source and Target Database
SQL> select * from v$transportable_platform order by platform_id;
PLATFORM_ID PLATFORM_NAME                    ENDIAN_FORMAT
----------- -------------------------------- -------------
          1 Solaris[tm] OE (32-bit)          Big
          2 Solaris[tm] OE (64-bit)          Big
          3 HP-UX (64-bit)                   Big
          4 HP-UX IA (64-bit)                Big
          5 HP Tru64 UNIX                    Little
          6 AIX-Based Systems (64-bit)       Big
          7 Microsoft Windows IA (32-bit)    Little
          8 Microsoft Windows IA (64-bit)    Little
          9 IBM zSeries Based Linux          Big
         10 Linux IA (32-bit)                Little
         11 Linux IA (64-bit)                Little
         12 Microsoft Windows 64-bit for AMD Little
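To see which of these rows applies to your own database, you can join V$TRANSPORTABLE_PLATFORM with V$DATABASE; run this on both the source and the target:
SQL> SELECT d.platform_name, tp.endian_format FROM v$transportable_platform tp, v$database d WHERE tp.platform_name = d.platform_name;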
For example, suppose you want to transport a tablespace “test_user_tbs” from a Linux 64-bit (little endian) machine TESTLINUX to a Microsoft Windows 64-bit (little endian) machine TESTWIN. Both the source and target platforms are of little endian type. The data file for the tablespace “test_user_tbs” is “test_user_tbs01.dbf”.
Step 2: Make the tablespace READ ONLY
SQL> alter tablespace test_user_tbs read only;
Step 3: Export metadata
(i) Using the export utility:
$ exp test_user/test123 tablespaces=test_user_tbs transport_tablespace=y file=exp_test_user_tbs.dmp log=exp_test_user_tbs.log
(ii) Using the export Data Pump utility:
$ expdp system/password TRANSPORT_TABLESPACES=test_user_tbs TRANSPORT_FULL_CHECK=Y DIRECTORY=export_dir DUMPFILE=expdp_test_user_tbs.dmp LOGFILE=expdp_test_user_tbs.log
TRANSPORT_FULL_CHECK=Y specifies that a check is performed for dependencies between the objects inside the transportable tablespace and those outside it.
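Before the export you can verify that the tablespace set is self-contained using the standard DBMS_TTS package; any offending objects appear in the TRANSPORT_SET_VIOLATIONS view:
SQL> EXEC DBMS_TTS.TRANSPORT_SET_CHECK('test_user_tbs', TRUE);
SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;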
The file “exp_test_user_tbs.dmp” or “expdp_test_user_tbs.dmp” contains only metadata.
Step 4: Copy the files to Target system
If you are using FTP, use the binary option.
Step 5: Initiate Import to plug the tablespace into the database.
(i) Using the import utility:
$ imp test_user/test123 tablespaces=test_user_tbs transport_tablespace=y file=exp_test_user_tbs.dmp datafiles='test_user_tbs01.dbf' log=imp_test_user_tbs.log
(ii) Using impdp utility
Click here to see the Instructions to create Directory and grant privileges.
$ impdp test_user/test123 TRANSPORT_DATAFILES='test_user_tbs01.dbf' DIRECTORY=import_dir DUMPFILE=expdp_test_user_tbs.dmp LOGFILE=impdp_test_user_tbs.log
You can use REMAP_SCHEMA=source:target if you want to import into another schema.
Step 6: Put the tablespace in read/write mode.
SQL> ALTER TABLESPACE TEST_USER_TBS READ WRITE;
Now the database has a tablespace named “test_user_tbs” and the objects of the tablespace will be available.
Transport Tablespace Import Common Errors:-
1. Oracle Error : EXP-00044: must be connected "AS SYSDBA" to do Point-in-time Recovery or Transportable Tablespace import
Cause: The user must log in "as SYSDBA" to perform transportable tablespace imports or point-in-time recovery imports.
Action: Ask your database administrator to perform the Transportable Tablespace import or the Tablespace Point-in-Time Recovery import.
2. IMP-00017: following statement failed with ORACLE error 19721
IMP-00003: ORACLE error 19721 encountered
ORA-06512: at "SYS.DBMS_PLUGTS", line 2065
ORA-06512: at line 1
Cause: A duplicated data file name in the import parameter file causes this issue.
Action: Correct the datafile name in the import parameter file.
Please see the Data pump Export/Import related documents:
Click here for Data Pump Export modes and Interfaces.
Click here for Data Pump Export/Import with Examples.
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/
Flashback Drop and the recycle bin
Oracle 10g provides the ability to reinstate an accidentally dropped table; this feature is called Flashback Drop.
When a table is dropped, the database does not immediately remove the space associated with that table. Instead, the table is renamed to a system-defined name and placed in the recycle bin. The Flashback Drop operation recovers the table from the recycle bin.
SQL> DROP TABLE employee_tbl;
You can see the system-defined name of the table.
SQL> select * from tab;
TNAME TABTYPE CLUSTERID
------------------------------ ------- ----------
BIN$gXxxELu7aQ/gQAoKd5l2Hg==$0 TABLE
You can see the dropped table in the recycle bin using
SQL> SHOW RECYCLEBIN
ORIGINAL NAME RECYCLEBIN NAME OBJECT TYPE DROP TIME
-------------- ------------------------------ ------------ ----------------
EMPLOYEE_TBL BIN$gXxxELu7aQ/gQAoKd5l2Hg==$0 TABLE 2010-03-01:09:10:00
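The same information is available through the USER_RECYCLEBIN (or DBA_RECYCLEBIN) dictionary view, for example:
SQL> SELECT original_name, object_name, type, droptime FROM user_recyclebin;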
Recover the dropped table from Recycle bin using
SQL> FLASHBACK TABLE employee_tbl TO BEFORE DROP;
Recover the dropped table from Recycle bin with new name using
SQL> FLASHBACK TABLE employee_tbl TO BEFORE DROP RENAME TO employee_tmp;
You can also drop a table permanently, bypassing the recycle bin, using
SQL> DROP TABLE employee_tbl PURGE;
The space is not freed up; the table remains in the tablespace until the recycle bin is purged. You can purge a table explicitly from the recycle bin using
SQL> PURGE TABLE employee_tbl;
Purge all tables in recycle bin using
SQL> PURGE RECYCLEBIN;
A DBA can purge all the objects in all users' recycle bins using
SQL> PURGE DBA_RECYCLEBIN;
Purge all the objects in recycle bin in a tablespace using
SQL> PURGE TABLESPACE employee_tbs;
Purge all the objects in recycle bin in a tablespace specific to a user only using
SQL> PURGE TABLESPACE employee_tbs USER emp_user;
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com
Flashback Table Feature in Oracle 10g
Oracle 9i Database introduced the Flashback Query option to retrieve data from a point in time in the past. Oracle 10g provides the ability to recover a table or a set of tables to a specified point in time in the past; this feature is called Flashback Table.
The Oracle Flashback Table operation is very quick; you do not have to restore any data from backups, and the rest of your database remains available while the Flashback Table operation is performed.
Flashback Table depends on undo information retained in the undo tablespace. If you set UNDO_RETENTION=3600 (the parameter is specified in seconds, so 3600 is one hour), Oracle will try not to overwrite the undo data for one hour. Users can recover from their mistakes only within the retention period.
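For example, to request a one-hour retention (undo retention is best effort unless the undo tablespace is created with RETENTION GUARANTEE):
SQL> ALTER SYSTEM SET UNDO_RETENTION = 3600;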
Flashback table feature has some prerequisites:
•Row movement must be enabled on the table.
SQL> ALTER TABLE table_name ENABLE ROW MOVEMENT;
•You must have SELECT, INSERT, DELETE, and ALTER privileges on the table.
•You must have FLASHBACK ANY TABLE privilege or the FLASHBACK object privilege on the table.
Use the commands below to restore the table to its state at the time specified by an SCN or a point in time.
SQL> FLASHBACK TABLE employee_tbl TO SCN 786;
or
SQL> FLASHBACK TABLE employee_tbl TO TIMESTAMP TO_TIMESTAMP('2010-03-01 09:00:00', 'YYYY-MM-DD HH24:MI:SS');
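To find a usable SCN you can note the current SCN before a risky change, or convert a timestamp with the standard TIMESTAMP_TO_SCN function:
SQL> SELECT CURRENT_SCN FROM V$DATABASE;
SQL> SELECT TIMESTAMP_TO_SCN(TO_TIMESTAMP('2010-03-01 09:00:00', 'YYYY-MM-DD HH24:MI:SS')) FROM DUAL;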
Regards
Satishbabu Gunukula
http://www.oracleracexpert.com
How to recover undo or rollback tablespace
While handling undo tablespace issues you need to be extra cautious because of active transactions in the undo segments. You need to follow a different approach depending upon the scenario.
Scenario 1: Undo/rollback datafile damaged or dropped accidentally when the database is up
In this scenario the lost or damaged datafile may contain active transactions, and you cannot take the undo/rollback datafile offline or drop it.
• Startup mount
SQL> STARTUP MOUNT
• Check the status of the datafile
SQL> SELECT FILE#, NAME, STATUS FROM V$DATAFILE;
If the datafile is offline, you must bring it online before you recover.
• Restore and Recover the datafile
$ rman TARGET / CATALOG rman/*****@rman
RMAN> RESTORE DATAFILE 'fullpath_and_filename';
RMAN> RECOVER DATAFILE 'fullpath_and_filename';
• Open the database.
SQL> ALTER DATABASE OPEN;
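If the instance still reports problems, check whether any undo segments are marked as needing recovery with a standard dictionary query:
SQL> SELECT SEGMENT_NAME, STATUS FROM DBA_ROLLBACK_SEGS WHERE STATUS = 'NEEDS RECOVERY';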
Scenario 2: Undo/rollback datafile damaged or dropped accidentally when the database is down
• If using automatic undo management, comment out the UNDO_MANAGEMENT parameter in the init.ora file. If using rollback segments, comment out the ROLLBACK_SEGMENTS parameter.
• Mount the database in restricted mode
SQL> STARTUP RESTRICT MOUNT
• Offline the undo or rollback datafile and drop the file
SQL> ALTER DATABASE DATAFILE 'fullpath_and_filename' OFFLINE DROP;
• Open the database and drop the UNDO tablespace or the tablespace which contains the rollback segments
SQL> ALTER DATABASE OPEN;
SQL> DROP TABLESPACE tablespace_name INCLUDING CONTENTS;
• Recreate the undo tablespace. If you are using rollback segments, recreate the rollback segment tablespace and rollback segments. Make sure to bring the rollback segments online.
If using Undo tablespace
SQL> CREATE UNDO TABLESPACE undotbs2 DATAFILE '/oradata/undotbs2_01.dbf' SIZE 100M REUSE AUTOEXTEND ON;
If using rollback segment tablespace
SQL> CREATE TABLESPACE rollback_tbs DATAFILE '/oradata/rollback_tbs01.dbf' SIZE 100M EXTENT MANAGEMENT LOCAL;
SQL> CREATE ROLLBACK SEGMENT rollback1 TABLESPACE rollback_tbs;
SQL> ALTER ROLLBACK SEGMENT rollback1 ONLINE;
• Modify the parameter file settings
If using UNDO tablespace
UNDO_MANAGEMENT=AUTO
UNDO_TABLESPACE=new_undo_tablespace_name
If you are using the rollback segment tablespace, include the rollback segments that you created in the previous step in the ROLLBACK_SEGMENTS parameter in the init.ora file.
• Take the database out of restricted mode.
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;
Oracle 11g introduced a new feature called RMAN UNDO bypass. The RMAN backup command no longer backs up UNDO data that is not needed for recovery. Prior to Oracle 11g, undo for transactions that were already committed was also backed up. This backup undo optimization minimizes backup time and storage.
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com
Thursday, May 27, 2010
Controlfile and Server parameter file (spfile) autobackup
RMAN can be configured to automatically back up the control file and server parameter file (spfile) whenever the database structure in the control file changes and whenever a backup record is added. The autobackup feature enables RMAN to recover the database even if the current control file, catalog, and server parameter file are lost. RMAN can search the autobackup path and restore the server parameter file from a backup. Once the instance is started with the restored spfile, use RMAN to restore the control file from the autobackup. After you mount the database using the restored control file, use the RMAN repository information from the control file to restore and recover the data files.
You can turn ON or OFF the autobackup feature by using the following commands:
RMAN>CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN>CONFIGURE CONTROLFILE AUTOBACKUP OFF;
If control file autobackup is ON and the backup includes datafile 1, RMAN writes the control file and SPFILE to a separate backup set. If autobackup is OFF and the backup includes datafile 1, then RMAN includes the current control file and SPFILE in the same backup set as the datafiles. After backup completion the database writes a message containing the complete path of the backup piece and the device type to the alert log.
The control file autobackup filename has a default format of %F for all device types, so that RMAN can guess the file location and restore the control file without a repository. All autobackup formats must include the %F variable.
The format %F translates into c-IIIIIIIIII-YYYYMMDD-QQ, where:
IIIIIIIIII - the DBID
YYYYMMDD - the timestamp of the day the backup is generated
QQ - a hexadecimal sequence that starts at 00 and has a maximum of FF
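For example, with a hypothetical DBID of 1234567890, the first autobackup taken on May 27, 2010 would produce a piece named c-1234567890-20100527-00.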
Use the following command to configure the Control file Autobackup format
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/oradata/cf_spfile_%F';
Use following command to write to an Automatic Storage Management disk group
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '+DGROUP1/%F';
Use the following commands to clear control file autobackup formats for a device:
RMAN>CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK CLEAR;
RMAN>CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE sbt CLEAR;
You can use the SET CONTROLFILE AUTOBACKUP FORMAT command to override the configured autobackup format at the session level:
RMAN> SET CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backup/cf_spfile_%F';
RMAN> BACKUP DATABASE;
If you have configured control file autobackup, you do not need a recovery catalog or target database control file to restore the control file if you have lost all your control files.
Use the below command to restore the control file autobackup.
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
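Putting it together, a recovery sketch for when the spfile and all control files are lost (the DBID shown is hypothetical; you must supply your own):
RMAN> SET DBID 1234567890;
RMAN> STARTUP FORCE NOMOUNT;
RMAN> RESTORE SPFILE FROM AUTOBACKUP;
RMAN> STARTUP FORCE NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;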
Data Pump Examples
To use Data Pump, the DBA has to create a directory on the server machine and create a directory object in the database mapping to the directory created in the file system.
The following example creates a directory in the filesystem, creates a directory object in the database, and grants privileges on the directory object to the SCOTT user.
$ mkdir my_dump_dir
$ sqlplus
Enter user-name: / as sysdba
SQL> create directory data_pump_dir as '/u01/oracle/my_dump_dir';
Now grant access on this directory object to the SCOTT user:
SQL> grant read,write on directory data_pump_dir to scott;
To export the full database, give the following command:
$ expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob
The above command will export the full database and create the dump file full.dmp in the directory /u01/oracle/my_dump_dir on the server.
In some cases where the database is in terabytes the above command is not feasible, since the dump file size will be larger than the operating system limit, and hence the export will fail. In this situation you can create multiple dump files by typing the following command:
$ expdp scott/tiger FULL=y DIRECTORY=data_pump_dir DUMPFILE=full%U.dmp FILESIZE=5G LOGFILE=myfullexp.log JOB_NAME=myfullJob
This will create multiple dump files named full01.dmp, full02.dmp, full03.dmp and so on. The FILESIZE parameter specifies the maximum size of each dump file.
To export all the objects of SCOTT's schema you can run the following export Data Pump command:
$ expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT
You can omit SCHEMAS since the default mode of Data Pump export is schema mode.
If you want to export objects of multiple schemas you can specify the following command:
$ expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT,HR,ALI
You can use the Data Pump export utility to export individual tables. The following example shows the syntax to export tables:
$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tables.dmp TABLES=employees,jobs,departments
Exporting Tables located in a Tablespace
If you want to export tables located in a particular tablespace you can type the following command:
$ expdp hr/hr DIRECTORY=dpump_dir1 DUMPFILE=tbs.dmp TABLESPACES=tbs_4,tbs_5,tbs_6
The above will export all the objects located in tbs_4, tbs_5 and tbs_6.
You can exclude objects while performing an export by using the EXCLUDE option of the Data Pump utility. For example, if you are exporting a schema and don't want to export tables whose names start with "A", you can type the following command:
$ expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT EXCLUDE=TABLE:"like 'A%'"
Then all tables in SCOTT's schema whose names start with "A" will not be exported.
Similarly you can use the INCLUDE option to export only certain objects, like this:
$ expdp scott/tiger DIRECTORY=data_pump_dir DUMPFILE=scott_schema.dmp SCHEMAS=SCOTT INCLUDE=TABLE:"like 'A%'"
This is the opposite of the EXCLUDE option, i.e. it will export only those tables of SCOTT's schema whose names start with "A".
Similarly you can also exclude INDEXES, CONSTRAINTS, GRANTS, USER and SCHEMA object types.
You can use the QUERY option to export only the required rows. For example, the following will export only those rows of the emp table whose salary is above 10000 and whose department id is greater than 10:
$ expdp hr/hr QUERY=emp:'"WHERE dept_id > 10 AND sal > 10000"' NOLOGFILE=y DIRECTORY=dpump_dir1 DUMPFILE=exp1.dmp
You can suspend running export jobs and later resume or kill these jobs using Data Pump export. You can start a job on one client machine, suspend it if needed, and when your work has finished continue the job from the same client where you stopped it, or restart the job from another client machine.
For example, suppose a DBA starts a full database export by typing the following command at one client machine, CLNT1:
$ expdp scott/tiger@mydb FULL=y DIRECTORY=data_pump_dir DUMPFILE=full.dmp LOGFILE=myfullexp.log JOB_NAME=myfullJob
After some time, the DBA wants to stop this job temporarily, so he presses CTRL+C to enter interactive mode. He will then get the Export> prompt, where he can type interactive commands. To stop the export job he types the following command:
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y
The job is placed in a stopped state and the client exits.
After finishing his other work, the DBA wants to resume the export job, but the client machine from where he actually started the job is locked. So the DBA goes to another client machine and reattaches to the job by typing the following command:
$ expdp hr/hr@mydb ATTACH=myfulljob
After the job status is displayed, he can issue the CONTINUE_CLIENT command to resume logging mode and restart the myfulljob job.
Export> CONTINUE_CLIENT
A message is displayed that the job has been reopened, and processing status is output to the client.
Note: After reattaching to the job a DBA can also kill the job by typing KILL_JOB if he doesn't want to continue with the export job.
Tuesday, May 22, 2012
How to STOP and START processes in Oracle RAC and Log Directory Structure
You need to follow the steps below to stop and start processes in Oracle RAC.
Shutdown RAC Database
You need to shut down the database instances on each node. You can use either Oracle Enterprise Manager or SRVCTL to shut down the instances. If you are using EM Grid Control, set a blackout in Grid Control for the processes that you intend to shut down, so that the records for these processes indicate that the shutdown was planned.
Use below command to stop Enterprise Manager/Grid Control
$ORACLE_HOME/bin/emctl stop dbconsole
Use below command to shutdown all oracle RAC instances on all nodes.
- $ORACLE_HOME/bin/srvctl stop database -d db_name
- $ORACLE_HOME/bin/srvctl stop database -d db_name -i instance_name
Once the database is stopped, proceed with ASM Instance shutdown.
Use below command to shutdown ASM instances on all nodes
- $ORACLE_HOME/bin/srvctl stop asm -n node
Use below command to shutdown node apps on all RAC nodes
- $ORACLE_HOME/bin/srvctl stop nodeapps -n node
You need to shut down Oracle Clusterware or CRS as root by running the below command on each node in the cluster.
- #crsctl stop crs
From 11g R2, you can do this in two steps:
1. Stop the Clusterware stack on the local node
- # crsctl stop cluster
- # crsctl stop cluster -all
-all applies the command to all nodes
-n applies the command to particular nodes
2. Stop the Oracle High Availability Services daemon on each node in the cluster.
- # crsctl stop has
Once all processes are stopped, run the below command to check the status of the CRSD, CSSD and EVMD processes.
- # crsctl check crs
If processes remain, you can force the stop with:
- # crsctl stop crs -f
Follow the reverse sequence to start all processes in oracle RAC
- # crsctl start crs
- $ORACLE_HOME/bin/srvctl start nodeapps -n node
- $ORACLE_HOME/bin/srvctl start asm -n node
- $ORACLE_HOME/bin/srvctl start database -d db_name
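After the startup you can verify that everything came back using the srvctl status commands, for example:
- $ORACLE_HOME/bin/srvctl status database -d db_name
- $ORACLE_HOME/bin/srvctl status asm -n node
- $ORACLE_HOME/bin/srvctl status nodeapps -n node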
Oracle Clusterware Log Directory Structure
CRS_HOME/log/hostname/crsd/ - The log files for the CRS daemon
CRS_HOME/log/hostname/cssd/ - The log files for the CSS daemon
CRS_HOME/log/hostname/evmd/ - The log files for the EVM daemon
CRS_HOME/log/hostname/client/ - The log files for the Oracle Cluster Registry (OCR)
CRS_HOME/log/hostname/racg/ - The log files for the Oracle RAC high availability component
CRS_HOME/log/hostname/alert.log - The alert log for Clusterware issues
Please note that CRS_HOME is the directory in which the Oracle Clusterware software was installed, and hostname is the name of the node.
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com
Tuesday, February 16, 2010
ORA-600 [kfcNullConvert20] error on ASM instance
I have experienced an ORA-600 error on an ASM instance (10.2.0.2), which caused an ASM instance restart on that particular node. As a result, all database instances on that node restarted, because they lost connectivity with ASM.
This issue is a known bug 4682861 in Oracle, and the affected versions are 10.1.0.4, 10.1.0.5, 10.2.0.1 and 10.2.0.2. The bug is fixed in the 10.2.0.3 server patch set and the 11.1.0.6 base release.
Check for Possible Error messages in ASM/Database alert.log and trace files.
If you find the below symptoms then you are hitting bug 4682861.
ASM alert.log errors:
ORA-00600: internal error code, arguments: [kfcNullConvert20], [], [], [], [], [], [], []
DBW0: terminating instance due to error 471
Trace file (ASM DBWR trace file contains Stack Trace: kfcNullConvert and errors):-
error 600 detected in background process
ORA-00600: internal error code, arguments: [kfcNullConvert20], [], [], [], [], [], [], []
ksuitm: waiting for [5] seconds before killing DIAG
Database alert log errors:
ORA-15064: communication failure with ASM instance
ORA-03113: end-of-file on communication channel
ASMB: terminating instance due to error 15064
Cause: Due to bug 4682861, the ASM instance restarted.
Solution:
Option 1:- Upgrade to 10.2.0.3 or higher release
Option 2: Download the patch from MetaLink:
i) Click on Patches & Updates Link.
ii) Enter patch number: 4682861 and Select your O/S
iii) Click Go.
iv) Download the patch for the Oracle release that you experienced this issue.
Please note that ANY ORA-600 error indicates that Oracle has detected an internal inconsistency or a problem which it doesn't know how best to address. These are *NOT* necessarily bugs and can occur for reasons such as running out of some resource, operating system I/O problems, etc.
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com
Monday, October 5, 2009
Create, Drop and Alter ASM disk groups
Create Disk Group:
Create Disk groups using the CREATE DISKGROUP statement and specify the level of redundancy.
Disk group redundancy types:-
NORMAL REDUNDANCY - Two-way mirroring, requiring two failure groups.
HIGH REDUNDANCY - Three-way mirroring, requiring three failure groups.
EXTERNAL REDUNDANCY - No mirroring for disks that are already protected using hardware RAID or mirroring.
- SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
FAILGROUP failure_group_1 DISK '/dev/sda1' NAME dataa1, '/dev/sda2' NAME dataa2
FAILGROUP failure_group_2 DISK '/dev/sdb1' NAME datab1, '/dev/sdb2' NAME datab2;
Drop Disk Group:
Drop disk group using DROP DISKGROUP statement.
- SQL> DROP DISKGROUP data INCLUDING CONTENTS;
Alter Disk Group:
Add or remove disks from disk groups using the ALTER DISKGROUP statement. You can also use the wildcard "*" to reference disks.
Add a disk.
- SQL> ALTER DISKGROUP data ADD DISK '/dev/datac1', '/dev/datac2';
Add all reference disks
- SQL> ALTER DISKGROUP data ADD DISK '/dev/datad*';
Drop/remove a disk.
- SQL> ALTER DISKGROUP data DROP DISK datab2;
The UNDROP command is used to undo only a pending drop of disks. After the drop completes you cannot revert.
- SQL> ALTER DISKGROUP data UNDROP DISKS;
Diskgroup Rebalance:
Disk groups can be rebalanced manually using the REBALANCE clause, and you can modify the default POWER value.
- SQL> ALTER DISKGROUP disk_group_1 REBALANCE POWER 5;
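You can monitor the progress of a rebalance through the standard V$ASM_OPERATION view, for example:
- SQL> SELECT OPERATION, STATE, POWER, EST_MINUTES FROM V$ASM_OPERATION;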
MOUNT and DISMOUNT DiskGroups:
Normally Disk groups are mounted at ASM instance startup and dismounted at shutdown. Using MOUNT and DISMOUNT options you can make one or more Disk Groups available or unavailable.
- SQL> ALTER DISKGROUP data MOUNT;
- SQL> ALTER DISKGROUP data DISMOUNT;
- SQL> ALTER DISKGROUP ALL MOUNT;
- SQL> ALTER DISKGROUP ALL DISMOUNT;
DiskGroup Check:
Use CHECK ALL to verify the internal consistency of disk group metadata and repair in case of any error.
- SQL> ALTER DISKGROUP data CHECK ALL;
DiskGroup resize:
Resize one or all disks in the disk group.
Resize all disks in a failure group.
- SQL> ALTER DISKGROUP data RESIZE DISKS IN FAILGROUP failure_group_1 SIZE 1024G;
Resize a specific disk.
- SQL> ALTER DISKGROUP data RESIZE DISK dataa1 SIZE 1024G;
Resize all disks in a disk group.
- SQL> ALTER DISKGROUP data RESIZE ALL SIZE 1024G;
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/
Migrate Database to ASM Using RMAN
We use RMAN to relocate non-ASM files to ASM, since ASM files cannot be accessed through normal OS interfaces.
Step 1: Query V$CONTROLFILE and V$LOGFILE to get the file names.
- SQL> select * from V$CONTROLFILE;
- SQL> select * from V$LOGFILE;
Step 2: Shutdown the database.
- SQL> SHUTDOWN IMMEDIATE;
Step 3: Modify the target database parameter file:
(i) Remove the CONTROL_FILES parameter from the spfile, so the control files will be created automatically in the ASM disk group specified by the DB_CREATE_FILE_DEST parameter. If you are using a pfile, set the CONTROL_FILES parameter to the appropriate ASM files or aliases.
(ii) Set the DB_CREATE_FILE_DEST and DB_CREATE_ONLINE_LOG_DEST_n parameters to the relevant ASM disk groups.
Step 4: Startup nomount mode and Restore control file
- RMAN> STARTUP NOMOUNT;
Restore the control file into the new location.
- RMAN> RESTORE CONTROLFILE FROM 'old_controlfile_name';
Step 5: Startup mount mode and backup the Database.
- RMAN> ALTER DATABASE MOUNT;
Backup database into ASM disk group.
- RMAN> BACKUP AS COPY DATABASE FORMAT '+diskgroup1';
Step 6: Switch database and create or rename Redo log members
Switch all data files to the new ASM Diskgroup location.
- RMAN> SWITCH DATABASE TO COPY;
- RMAN> SQL "ALTER DATABASE RENAME FILE 'old_redolog_member' TO '+diskgroup2'";
Create new redo logs in ASM Disk group and delete the old redo log files.
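As a sketch of that step (group numbers and size are illustrative), you can add new log groups in the ASM disk group and then drop the old groups once they are INACTIVE:
- SQL> ALTER DATABASE ADD LOGFILE GROUP 4 '+diskgroup2' SIZE 100M;
- SQL> ALTER DATABASE DROP LOGFILE GROUP 1;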
Step 7: Open Database and create temporary tablespace.
Open database using resetlogs
- SQL> ALTER DATABASE OPEN RESETLOGS;
Create temporary tablespace in ASM disk group.
- SQL> CREATE TEMPORARY TABLESPACE temp1 TEMPFILE '+diskgroup1';
Step 8: Drop old database files.
- SQL> DROP TABLESPACE old_temporary_tablespace INCLUDING CONTENTS AND DATAFILES;
Regards
Satishbabu Gunukula
http://www.oracleracexpert.com/
Tuesday, September 29, 2009
Createdisk, Deletedisk and Querydisk in ASM
The /etc/init.d/oracleasm script is used to create, delete and query ASM disks and make disks available.
Create and Delete ASM disk:
Run below command to Create ASM disks.
- # /etc/init.d/oracleasm createdisk DATA1 /dev/sdc
Run below command to Delete ASM disks.
- # /etc/init.d/oracleasm deletedisk DATA1
Query/List/Scan ASM disk:
Run the below querydisk command to see if the Disk/Device is used by ASM:
- # /etc/init.d/oracleasm querydisk /dev/sdc
- # /etc/init.d/oracleasm querydisk DATA1
Run below command to list Existing disks
- # /etc/init.d/oracleasm listdisks
Run the below command to scan the disks created on another node.
- # /etc/init.d/oracleasm scandisks
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/
Posted by Satishbabu Gunukula at 2:26 PM 0 comments

Labels: ASM
Monday, August 31, 2009
Delete Archivelog files without using RMAN
Here I am explaining two methods to delete archive logs and other database files from an ASM disk group without using RMAN, in Oracle 10g/11g.
Method 1: asmcmd - ASM command-line utility
- ASMCMD> rm file_name
- For example: ASMCMD> rm '+dgroup2/testdb/archivelogs/thread_1_seq_363.510.1'
- or
- ASMCMD> rm '+dgroup2/testdb/datafile/USERS.250.5334166963'
- If you use a wildcard, rm deletes all matches except non-empty directories (unless the -r flag is used). The rm command can delete the file or alias only if the file is not currently in use by a client database.
Method 2: SQLPLUS utility
- SQL> ALTER DISKGROUP dgroup2 DROP FILE '+dgroup2/testdb/archivelogs/thread_1_seq_363.510.1';
The "asmcmd" and "sqlplus" methods will not update the database views (V$ARCHIVED_LOG, V$FLASH_RECOVERY_AREA_USAGE), the controlfile, or the recovery catalog to reflect that the files have been removed.
To update the Database views, control file or RMAN Catalog about deleted files you need to run the below command from RMAN.
- RMAN> CROSSCHECK ARCHIVELOG ALL;
- RMAN> DELETE EXPIRED ARCHIVELOG ALL;
Click here to learn How to delete archive logs from ASM.
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/
Monday, August 17, 2009
Install and Configure ASMLib in 10g (Automatic Storage Management)
The Oracle ASM feature was introduced in Oracle 10g Release 1. There are two methods to configure ASM on Linux.
- 1. Configure ASM with ASMLib I/O: This method creates all Oracle database files on raw block devices, which are managed by ASM using ASMLib calls. ASMLib works with block devices and raw devices are not required with this method.
- 2. Configure ASM with Standard Linux I/O: This method creates Oracle database files on raw character devices, which are managed by ASM using standard Linux I/O system calls. It requires creating RAW devices for all disk partitions used by the ASM.
Here we will “Configure ASM with ASMLib I/O” method.
Step 1: Download and Install ASMLib
Download Oracle “ASMLib” software from below link and follow the link for your platform.
http://www.oracle.com/technology/tech/linux/asmlib/index.html
You must install all three packages for the kernel you are running. Use the "uname -r" command to determine the version of your kernel.
oracleasm-support-version.cpu_type.rpm
oracleasm-kernel-version.cpu_type.rpm
oracleasmlib-version.cpu_type.rpm
See the below example to install the packages, running the command as root (the kernel driver package name must match your kernel version from uname -r; the version shown here is illustrative):
- # rpm -ivh oracleasm-support-2.0.3-1.x86_64.rpm \
> oracleasm-2.6.9-42.ELsmp-2.0.3-1.x86_64.rpm \
> oracleasmlib-2.0.2-1.x86_64.rpm
Step 2: Configure and Enable Oracle ASM
Run the below command to configure Oracle ASM; it will ask for the user and group that default to owning the ASM driver access point.
- # /etc/init.d/oracleasm configure
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface []: oracle
Default group to own the driver interface []: dba
Start Oracle ASM library driver on boot (y/n) [n]: y
Fix permissions of Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration [ OK ]
Creating /dev/oracleasm mount point [ OK ]
Loading module "oracleasm" [ OK ]
Mounting ASMlib driver filesystem [ OK ]
Scanning system for ASM disks [ OK ]
This command will load the ASM driver and mount the ASM driver filesystem. By selecting “y” during the configuration, the system will always load the module and mount the file system on system boot.
Run the below command to enable automatic start
- #/etc/init.d/oracleasm enable
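Once the driver is configured and enabled, you can mark a partition for ASM use, as described in the createdisk examples above (the device name here is hypothetical):
- # /etc/init.d/oracleasm createdisk VOL1 /dev/sdb1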
Regards,
Satishbabu Gunukula
http://www.oracleracexpert.com/