Oracle Database can copy data between itself and other Oracle databases, and between itself and external files. When both the source and the target are Oracle databases, data is copied by exporting and importing; when one party is an Oracle database and the other is not, data is copied by unloading and loading.
Exporting is the term used for copying database data to external files for import into another Oracle database only. These files are in a proprietary binary format. Importing is the term used for copying data into the database from external files that were created by exporting from another Oracle database. Data exported from any Oracle Database edition (Express Edition, Standard Edition, or Enterprise Edition) can be imported into any other edition. Unloading is the term used for copying database data to external text files for consumption by another Oracle database, a different database product, or another application (such as a spreadsheet application). The text files are in an industry-standard format such as tab-delimited or comma-delimited (CSV). Loading is the term used for copying data into the database from external text files that are in either a standard delimited format or in any of the formats supported by the Oracle SQL*Loader utility.
Oracle also provides several tools for performing these tasks (depending on the database version), such as the SQL*Loader utility, the Data Pump Export and Data Pump Import utilities, the original Export and Import utilities, and wizards related to load/unload operations.
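To illustrate loading delimited text with SQL*Loader, a minimal control file might look like the following sketch. The table name employees, its columns, and the file names are hypothetical:

```
LOAD DATA
INFILE 'employees.csv'
APPEND
INTO TABLE employees
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(emp_id, emp_name, salary)
```

It would then be invoked from the operating system command line with something like sqlldr userid=username/password control=employees.ctl log=employees.log.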
Oracle Export / Import Utilities:
Export (exp) and Import (imp) are the Oracle utilities (before the 10g database release) that write data from an Oracle database to a proprietary binary dump file, and load such a dump file back into an Oracle database, respectively. These utilities support export and import operations at the level of the full database, a tablespace, a user (all objects belonging to the user), or a table.
If the source and target Oracle database versions are not the same, the exp utility must be that of the lower (source) version, and the imp utility must match the target version.
The Export utility includes the data definition language (DDL) statements for the exported data, so these objects are created automatically on the target database. When an export command is executed, database objects are extracted together with their dependent objects: if a table is extracted, its dependencies such as indexes, comments, and grants are also extracted and written into the export file. There are also some considerations when importing data. First of all, if objects with the same names already exist in the target database, they should be dropped or truncated beforehand; otherwise rows will be appended to the existing tables, and sequences may generate inconsistent values. If there are constraints on the objects, they should be disabled during the import process.
- buffer: Specifies the size, in bytes, of the buffer (array) used to insert the data. High value increases performance. (Ex: 1M, 2M etc.)
- constraints[Y]: Specifies whether table constraints should be exported with table data.
- direct[N]: Determines whether to use direct-path or conventional-path export. Direct-path export enhances performance by bypassing the SQL evaluation buffer; note that it does not read directly from the data files and write to the DMP file. Because WHERE-clause processing would normally take place in the SQL evaluation buffer, a direct-path export cannot use the QUERY parameter to select a subset of rows. (Not used with the query parameter)
- file: The name of the export file. Multiple files may be created. When the file size is exceeded, the export utility begins writing to the next file in the list.
- filesize: The maximum size of the files in the file list.
- full: The entire database is exported.
- grants[Y]: Specifies whether object grants will be exported or not.
- indexes[Y]: Specifies whether the index definitions (NOT index data) will be exported or not.
- log: The name of the log file.
- object_consistent[N]: Specifies whether the export utility uses SET TRANSACTION READ ONLY to ensure data consistency.
- owner: Only the specified owners' objects will be exported.
- parfile: The name of a file containing export parameters instead of specifying them on the command line.
- query: Allows a subset of the rows from a table to be exported, based on a WHERE clause. (Not used with the direct parameter)
- recordlength: Amount of data spooled before writing to the disk. Max value is 64KB.
- rows[Y]: Specifies whether or not table rows will be exported.
- tables: Specifies the export is in the table mode and the specified list of the tables will be exported. Table partitions and sub partitions can also be specified.
- tablespaces: Specifies the export is in the tablespace mode and all the tables in the specified list of the tablespaces will be exported.
- transport_tablespace[N]: Enables the export of the metadata needed for transportable tablespaces.
- triggers[Y]: Specifies whether or not triggers defined on the specified tables will be exported.
- userid: User information of the user that will perform the export process.
Sample Export Command:
[…] denotes the optional part of the command. If the password is not written in the command, it is prompted for.
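As a sketch, a full-database export with exp could look like the following; the file names and size are illustrative:

```
exp userid=system[/password] full=y file=export1.dmp,export2.dmp filesize=1G log=export.log
```

Here two dump files are listed, so once export1.dmp reaches 1 GB the utility continues writing to export2.dmp.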
- buffer: Specifies the size, in bytes, of the buffer (array) used to insert the data. A high value increases performance. (Ex: 20M, 30M etc.)
- commit[N]: Specifies whether import should commit after each array insert. By default, import commits after each table is loaded; however, this uses a lot of rollback segment or undo space for huge tables. For high performance, set this value to Y.
- compile[Y]: Tells import to compile procedural objects when they are imported.
- constraints[Y]: Specifies whether table constraints should also be imported with table data.
- datafiles (only with transport_tablespace): Lists data files to be transported to the database.
- destroy[N]: Overwrite tablespace data file.
- file: The name of the export file to import. Multiple files can be listed, separated by commas. When import finishes reading one file, it continues with the next file in the list.
- filesize: Maximum size of each dump file.
- fromuser: A comma-delimited list of schemas from which to import. If the export file contains many users or even the entire database, the fromuser option enables only a subset of those objects (and data) to be imported.
- full: The entire export file is imported.
- grants[Y]: Specifies to import object grants.
- ignore[N]: Specifies how object creation errors should be handled. If a table already exists and ignore=y, the rows are imported into the existing table; otherwise errors are reported and no rows are loaded into the table.
- indexes[Y]: Determines whether indexes are imported.
- indexfile: Specifies a file name that receives the index creation statements. This file can be used to build the indexes after the import has completed.
- log: The name of the log file.
- parfile: The name of a file containing import parameters instead of specifying them on the command line.
- rows[Y]: Specifies whether or not the table rows should be imported.
- show[N]: Specifies whether the DDL within the export file is displayed or not.
- skip_unusable_indexes[N]: Specifies whether import skips the building of indexes that are in an unusable state.
- tables: Specifies that the import is in table mode and lists the tables to be imported. Table partitions and subpartitions can also be specified.
- tablespaces: When transport_tablespace=y, this parameter provides a list of tablespaces.
- touser: Specifies a list of user schemas that will be targets for imports.
- transport_tablespace[N]: When Y, transportable tablespace metadata will be imported from the export file.
- tts_owners: When transport_tablespace=Y, this parameter lists the users who own the data in the transportable tablespace set.
- userid: Specifies the userid/password of the user performing the import.
Sample Import Command:
[…] denotes the optional part of the command. If the password is not written in the command, it is prompted for.
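As a sketch, importing one schema's objects into another schema with imp could look like this; the schema and file names are illustrative:

```
imp userid=system[/password] file=export1.dmp,export2.dmp fromuser=user1 touser=user2 ignore=y log=import.log
```

With ignore=y, rows are loaded into tables that already exist in the target schema instead of raising object-creation errors.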
If there are any constraints on the target table, these should be disabled during import for better performance.
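Disabling and re-enabling a constraint is plain DDL. For a hypothetical table orders with a foreign key named fk_orders_cust, it would look like:

```sql
ALTER TABLE orders DISABLE CONSTRAINT fk_orders_cust;
-- run the import here
ALTER TABLE orders ENABLE CONSTRAINT fk_orders_cust;
```

Re-enabling the constraint afterwards validates the imported rows against it.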
The Export and Import utilities DO NOT support the FLOAT and DOUBLE data types.
Oracle Data Pump Export / Data Pump Import Utilities:
With database version 10g, Oracle introduced new tools for transferring objects from one Oracle database to another. These tools are expdp for exporting and impdp for importing, which bring additional features and enhancements over the exp and imp utilities. The Data Pump Export utility exports data and metadata into a set of operating system files called a dump file set. The Data Pump Import utility imports an export dump file set into a target Oracle database. A dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format, which means that the dump file set can be imported only by the Data Pump Import utility. The dump file set can be imported into the same database, or it can be moved to another system and loaded into the Oracle database there. Because the dump files are written by the database server, rather than by the Data Pump client application, you must create directory objects for the directories to which the files will be written. A directory object is a database object that is an alias for a directory in the host operating system's file system.
Data Pump Export and Import enable you to move a subset of the data and metadata. This is done by using Data Pump parameters to specify export and import modes, as well as various filtering criteria. Data Pump Export and Import also provide a set of interactive commands so that you can monitor and modify ongoing export and import jobs. Parameters for these tools can be directly written in command line, can be specified in parameter file or can be specified as it is prompted in an interactive command line interface.
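For instance, the command-line parameters could be collected in a parameter file and passed with the parfile parameter; the file name, schema, directory object, and job name below are illustrative:

```
schemas=USER1
directory=exportsdir
dumpfile=schema.dmp
logfile=exp.log
job_name=user1_export
```

Saving this as export.par, the export would be started with expdp system/password parfile=export.par.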
The Data Pump Export and Data Pump Import utilities DO NOT support XML type data. If you need to export or import objects that include XML types, you need to use the Export and Import utilities respectively.
Data Pump should be preferred unless you must import or export XML Type data.
Sample Datapump Export Command:
expdp userid=username/password dumpfile=export.dmp logfile=export.log full=y directory=export
Sample Datapump Import Command:
impdp username/password schemas=Schema directory=export dumpfile=export.dmp logfile=import.log
In order to demonstrate a sample scenario, we are going to export schema USER1 and then import it into another user, USER2 (which does not exist yet).
How To Use expdp:
Create a directory on the database server where the exported files will be placed:
Open a command line, run SQL*Plus to connect to the database, and execute the following SQL statements to create a directory object named exportsdir for the exports directory that you just created, and to grant read and write access on it to the user:
CREATE OR REPLACE DIRECTORY exportsdir AS 'c:\exports';
GRANT READ, WRITE ON DIRECTORY exportsdir TO USER1;
Export the USER1 schema to a dump file named schema.dmp by issuing the following command at the operating system command prompt:
expdp SYSTEM[/password] SCHEMAS=USER1 DIRECTORY=exportsdir DUMPFILE=schema.dmp LOGFILE=exp.log
SYSTEM is the Oracle database administrative user, which has the privileges to perform export and import tasks; password is the password for the SYSTEM user. The schema.dmp file and the exp.log file are written to the exportsdir directory.
How To Use impdp:
To import the dump file schema.dmp from the USER1 schema into another schema, USER2, you need to use the REMAP_SCHEMA parameter to indicate that objects are to be imported into a schema other than their original schema. Because the USER2 user account does not already exist, the import process creates it automatically. You will import everything except constraints, ref_constraints, and indexes. If a table already exists, it is replaced with the table in the export file.
At the operating system command prompt, issue the following command:
impdp SYSTEM[/password] SCHEMAS=USER1 DIRECTORY=exportsdir DUMPFILE=schema.dmp REMAP_SCHEMA=USER1:USER2 EXCLUDE=constraint,ref_constraint,index TABLE_EXISTS_ACTION=replace
The USER2 schema is now populated with data from the USER1 schema.
The last step is to assign a password to the newly created USER2 user account. To do so, start SQL Command Line and, at the SQL prompt, enter the following ALTER USER statement:
ALTER USER USER2 IDENTIFIED BY pass;
Note that Data Pump export and import operations run on the database server, so even if you exit the command prompt during the operation, the Data Pump job continues to run on the server. If you give a name to the Data Pump operation using the job_name parameter, you can later re-attach to the session using:
expdp username/password attach=given_job_name
To stop the job from the command-line interface, press Ctrl+C to reach the interactive prompt, enter STOP_JOB, and confirm:
Are you sure you wish to stop this job ([y]/n): y
To resume the job, attach to it again:
expdp username/password attach=given_job_name
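A resumed session, assuming the job was started with job_name=user1_export, might look like this; START_JOB restarts the stopped job and CONTINUE_CLIENT switches back to logging mode:

```
expdp system/password attach=user1_export
Export> START_JOB
Export> CONTINUE_CLIENT
```

The Export> prompt is Data Pump's interactive-command interface, reached automatically when attaching to an existing job.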