OBIEE – Variables

Oracle Business Intelligence Enterprise Edition (OBIEE) lets us use several types of variables in different components such as analyses, dashboards, KPIs, and conditions.

There are four main types of variables, each with its own usage area and syntax:

  • Session Variables
  • Repository Variables
  • Presentation Variables
  • Request Variables

Session Variables

A session variable is, as its name implies, a variable that is initialized at login time for each user. When a new user logs in to the BI Server, new instances of the session variables are created, initialized, and assigned to that user.

There are two types of session variables:

  • System Session Variables: Session variables that the Oracle BI Server and Oracle BI Presentation Services use for specific purposes. System session variables have reserved names that cannot be used for other kinds of variables.
  • Non-System Session Variables: Session variables that the administrator creates and names in the repository using the Administration Tool.

Syntax:

@{biServer.variables['NQ_SESSION.variablename']}

Sample:

@{biServer.variables['NQ_SESSION.USER']}
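
For instance, the USER system session variable can be used to greet the logged-in user in an analysis title or a dashboard text object (a minimal sketch; the surrounding wording is arbitrary):

Welcome, @{biServer.variables['NQ_SESSION.USER']}!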


Repository Variables

Repository variables are created in the repository by the administrator using the BI Administration Tool. There are two types of repository variables:

  • Static Repository Variable: Static repository variables are constant and they do not change until the administrator user changes them in the repository.
  • Dynamic Repository Variable: Dynamic repository variable values are refreshed with data returned from queries (defined in initialization blocks).

Syntax:

@{biServer.variables.variablename} or @{biServer.variables['variablename']}

Sample:

@{biServer.variables.country} or @{biServer.variables['country']}
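
Similarly, as a hedged sketch (the variable name country is hypothetical, as above), a repository variable can be embedded in an analysis title:

Sales Report for @{biServer.variables['country']}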


Presentation Variables

A presentation variable is a variable that users can create in BI Answers while creating one of the following two types of dashboard prompts:

  • Column prompt: A presentation variable created as part of a column prompt is associated with a column, and the values that it can take come from the column values. To create a presentation variable as part of a column prompt, in the “New Prompt dialog” (or Edit Prompt dialog), you have to select Presentation Variable in the Set a variable field and then enter a name for the variable in the Variable Name field.
  • Variable prompt: A presentation variable created as part of a variable prompt is not associated with any column, and you define the values that it can take. To create a presentation variable as part of a variable prompt, in the “New Prompt dialog” (or Edit Prompt dialog), you have to select Presentation Variable in the Prompt for field and then enter a name for the variable in the Variable Name field.

The value of a presentation variable is populated by the column or variable prompt with which it was created. That is, each time a user selects a value in the column or variable prompt, the value of the presentation variable is set to the value that the user selects.

Syntax:

@{[variables.]variablename}[format]{defaultvalue} or @{scope.variables['variablename']}

Sample:

@{variables.Country}{COUNTRY NOT SELECTED} or @{dashboard.variables['Country']}

  • variablename is the name of the presentation or request variable
  • (optional) format is any valid format mask appropriate to the data type of the variable (for example, #,##0 or MM/DD/YY hh:mm:ss). Note that the format is not applied to the default value
  • (optional) defaultvalue is a constant or variable reference indicating a value to be used if the variable referenced by variablename is not set
  • scope identifies the qualifiers for the variable. You must specify the scope when a variable is used at multiple levels (analyses, dashboard pages, and dashboards) and you want to access a specific value. (If you do not specify the scope, then the order of precedence is analyses, dashboard pages, and dashboards.)
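
Putting these pieces together, here is a hedged sketch of how a presentation variable named Country, set by a dashboard prompt, might appear in an analysis title and in a column formula (the variable, table, and column names are hypothetical):

Title: Sales for @{variables.Country}{COUNTRY NOT SELECTED}
Column formula: "Customers"."Country" = '@{Country}'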

Request Variables

A request variable lets you override the value of a session variable but only for the duration of a database request initiated from a column prompt. You can create a request variable as part of the process of creating a column prompt.
A request variable that is created as part of a column prompt is associated with a column, and the values that it can take come from the column values.

To create a request variable as part of a column prompt, in the “New Prompt dialog” (or Edit Prompt dialog), you have to select Request Variable in the Set a variable field and then enter the name of the session variable to override in the Variable Name field.

The value of a request variable is populated by the column prompt with which it was created. That is, each time a user selects a value in the column prompt, the value of the request variable is set to the value that the user selects. The value, however, is in effect only from the time the user presses the Go button for the prompt until the analysis results are returned to the dashboard.

Syntax:

@{[variables.]variablename}[format]{defaultvalue} or @{scope.variables['variablename']}

Sample:

@{variables.Country}{COUNTRY NOT SELECTED} or @{dashboard.variables['Country']}

The variablename, format, defaultvalue, and scope components are the same as described for presentation variables above.


References:

http://obieetutorialguide.blogspot.com.tr/2012/02/using-variables-in-obiee.html

http://docs.oracle.com/cd/E12096_01/books/AnyUser/AnyUser_Format8.html

OBIEE – Direct Database Request: Using Presentation Variables

In a previous post, we discussed direct database requests (DDR) and their positive and negative properties. If you are interested in those topics, please take some time to read the corresponding post:

OBIEE – Direct Database Requests

In this article, we will explain how to use presentation variables in these requests.
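
As a hedged sketch (table, column, and variable names are hypothetical), the SQL statement of a direct database request can embed a presentation variable reference, which is expanded before the statement is sent to the database:

SELECT COUNTRY_NAME, SUM(SALES_AMOUNT) TOTAL_SALES
FROM SALES_FACT
WHERE COUNTRY_NAME = '@{Country}{Germany}'
GROUP BY COUNTRY_NAME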

OBIEE – Direct Database Request: User Permissions

In a previous post, we discussed direct database requests (DDR) and their positive and negative properties. If you are interested in those topics, please take some time to read the corresponding post:

OBIEE – Direct Database Requests

In this article, we will explain which permissions are required for DDRs and how to assign them to a specific user.

First of all, we need to configure some settings in the repository part using Administration tool:

  • Open the Administration Tool and go to Manage -> Identity
  • Select the BI Repository and, on the right side, select the Users or Application Roles tab
  • Under the selected tab, double-click the desired user or application role name
  • Click the Permissions button in the popup window
  • In the Query Limits tab, change Execute Direct Database Requests from Ignore to Allow
  • Save and close the windows.

Then we need to configure some settings in the BI Server:

  • Using your web browser, open the BI Server (analytics) page and log in as an administrator (the weblogic user)
  • Click the Administration link at the top right of the page
  • Click the Manage Privileges link under the Security section
  • In the Answers section (the list is not alphabetically sorted), click the default value next to Edit Direct Database Analysis and Execute Direct Database Analysis to grant these features to the desired users and groups.

OBIEE – Direct Database Request

The ability to execute SQL statements directly against the physical data source is called a direct database request (DDR). Direct database requests are available in Oracle Business Intelligence Enterprise Edition (OBIEE) but have some limitations:

  • The SQL statement is always executed on the physical data source.
  • The SQL statement is not optimized by the Oracle BI Server engine.

Users with the appropriate permissions can create and issue a direct database request directly against a physical back-end database. The results of the request can be displayed and manipulated within Oracle BI Answers, and subsequently incorporated into Oracle BI Interactive Dashboards.

The Oracle BI Server sends the original, unprocessed, user-entered physical SQL directly to the underlying database. The returned result set is then treated as an Oracle BI request.

Direct database requests completely bypass Oracle BI Server mechanisms such as security and the logical business model. One must BE CAREFUL, since in addition to SELECT queries, direct database requests can also execute CREATE, or even DELETE and DROP, statements on the physical data store.
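
For instance, nothing in the BI layer itself would stop a request like the following (against a hypothetical table) from being executed, which is why the permissions discussed below matter:

DROP TABLE SALES_FACT;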

The statements above list the negative characteristics of direct database requests, but the feature also has some benefits:

  • DDRs allow users to write and execute complex SQL queries.
  • DDRs allow checking physical connectivity to the database and can be used to diagnose performance issues.

How to Create a Direct Database Request:

In order to create a direct database request, open your web browser and navigate to BI Answers (http://localhost:9704/analytics), then log in with a user that has the required permissions (for example, weblogic).

Select Analysis from the New menu at the top-right corner of the page and click the Create Direct Database Request link at the bottom.

On the Criteria tab, provide the Connection Pool name (exactly as created in the repository with the Administration Tool) and the SQL Statement (database-specific SQL syntax is required), then click the Validate SQL and Retrieve Columns button.
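
As a hedged example (the connection pool and table names are hypothetical), the two fields might be filled in as follows:

Connection Pool: "OracleDB"."Connection Pool"
SQL Statement: SELECT EU_COUNTRY_NAME FROM EU_COUNTRIES ORDER BY EU_COUNTRY_NAME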

If everything is OK, you should see the column names listed under the Result Columns section. Be sure to write the connection pool name exactly as it appears in the repository (including spaces, etc.); otherwise you may end up with an Unresolved Connection Pool object: ".." error.

After the SQL statement executes successfully and the result columns are retrieved, click the Results tab and build your analysis as usual.

References:

http://gerardnico.com/wiki/dat/obiee/direct_database_request

https://santoshbidw.wordpress.com/category/obiee-10g/obiee-10g-answers/direct-database-request/

Delete Duplicate Rows in a Table

In some cases, a database table may contain duplicate rows. The definition of a duplicate can be tricky, depending on the table structure. If a table has a primary key or a unique constraint, the chance of duplicate rows is almost zero, since the database software will prevent the same key values from being entered again. But if we have a poor design, or we somehow create redundant data, we can fix it by deleting the duplicate rows.

This article will try to explain how you may proceed to delete duplicate rows.

In one table, a duplicate may mean that all the columns of a row are exactly the same; in another table, it may mean that only one column, or some combination of columns, is the same. The first scenario occurs when there is no key data in the table; in the second, a key exists to distinguish each row from the others.

To explain the scenario we are going to use a sample table. Create the table of European Union member countries by using the following statement:

CREATE TABLE EU_COUNTRIES AS
WITH SAMPLE_DATA AS
(
SELECT 'Germany' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'France' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'United Kingdom' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Italy' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Spain' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Poland' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Romania' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Netherlands' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Belgium' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Greece' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Portugal' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Czech Republic' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Hungary' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Sweden' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Austria' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Bulgaria' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Denmark' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Finland' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Slovakia' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Ireland' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Croatia' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Lithuania' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Slovenia' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Latvia' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Estonia' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Cyprus' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Luxembourg' EU_COUNTRY_NAME FROM DUAL UNION
SELECT 'Malta' EU_COUNTRY_NAME FROM DUAL
)
SELECT * FROM SAMPLE_DATA ORDER BY EU_COUNTRY_NAME
;

This table has no constraints (no primary or unique key definitions), so one can easily insert a country name that is already in the table. Let's do that.

INSERT INTO EU_COUNTRIES (EU_COUNTRY_NAME) VALUES ('Luxembourg');

Now the country Luxembourg exists twice in our table (in Oracle Database, if the tool you use is not set to auto-commit, DO NOT forget to commit after insert, update, and delete statements). Let's check the table for duplicate rows:

SELECT EU_COUNTRY_NAME, COUNT(*) FROM EU_COUNTRIES GROUP BY EU_COUNTRY_NAME HAVING COUNT(*)>1;

The above SQL statement returns a row showing that Luxembourg has a count of 2.

EU_COUNTRY_NAME COUNT(*)
Luxembourg 2

Now we need to delete this duplicate row to ensure data consistency and to prevent an inflated number of rows when joining this table with other tables because of the duplicated country.

Delete Duplicate Rows Using Sub-query:

DELETE FROM EU_COUNTRIES E1 WHERE E1.rowid < ANY
(
SELECT E2.rowid FROM EU_COUNTRIES E2
WHERE
E1.EU_COUNTRY_NAME = E2.EU_COUNTRY_NAME
);

In the outer statement of the above SQL, if we use the < operator, the older rows are deleted and the most recent row is kept (*). If we use the > operator, the newer rows are deleted and the oldest row is kept (*). Let's examine the sub-query part in detail:

SELECT E2.rowid FROM EU_COUNTRIES E2
WHERE
E1.EU_COUNTRY_NAME = E2.EU_COUNTRY_NAME

The WHERE clause is the tricky part. We need to use all the columns that define a row in the WHERE clause. In this particular example, the only column is EU_COUNTRY_NAME. But if we had an EMPLOYEES table with columns like EMPLOYEE_NAME and EMPLOYEE_SURNAME, then the WHERE clause of the sub-query would be:

SELECT E2.rowid FROM EMPLOYEES E2
WHERE
E1.EMPLOYEE_NAME = E2.EMPLOYEE_NAME
AND
E1.EMPLOYEE_SURNAME = E2.EMPLOYEE_SURNAME

Notice that the WHERE clause includes all the columns (EMPLOYEE_NAME and EMPLOYEE_SURNAME) that define a single employee. If only EMPLOYEE_NAME were used, then, since different employees may have the same name, our query would delete different employees sharing a name, which is NOT intended.
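
For completeness, here is a sketch of the full delete against the hypothetical EMPLOYEES table, following the same pattern as above:

DELETE FROM EMPLOYEES E1 WHERE E1.rowid < ANY
(
SELECT E2.rowid FROM EMPLOYEES E2
WHERE
E1.EMPLOYEE_NAME = E2.EMPLOYEE_NAME
AND
E1.EMPLOYEE_SURNAME = E2.EMPLOYEE_SURNAME
);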

Delete Duplicate Rows Using RANK Function:

DELETE FROM EU_COUNTRIES WHERE rowid IN
(
SELECT RID FROM
(
SELECT RANK() OVER (PARTITION BY EU_COUNTRY_NAME ORDER BY rowid ASC) RANK_N, rowid AS RID FROM EU_COUNTRIES WHERE EU_COUNTRY_NAME IN
(SELECT EU_COUNTRY_NAME FROM EU_COUNTRIES GROUP BY EU_COUNTRY_NAME HAVING COUNT(*) > 1)
)
WHERE RANK_N > 1
);

In the above SQL statement, the RANK() analytic function is used to identify the rowids of the duplicate rows. In the sub-query, if you order by rowid ASC, the newer rows are deleted; if you order by rowid DESC, the older rows are deleted. The analytic clause in the sub-query must include all the columns that define a row as unique, both in its PARTITION BY clause and in the GROUP BY clause of the inner-most sub-query.

Delete Duplicate Rows Using Self Joins:

One of the most effective ways to delete duplicate rows is to join the table against itself (a self-join).

DELETE FROM EU_COUNTRIES WHERE rowid IN
(
SELECT E1.rowid FROM EU_COUNTRIES E1
WHERE E1.rowid >
(SELECT MIN(rowid) FROM EU_COUNTRIES E2 WHERE E1.EU_COUNTRY_NAME = E2.EU_COUNTRY_NAME)
);

In the above SQL statement, if you use the > operator with the MIN function in the sub-query, the newer rows are deleted. If you use the < operator with the MAX function, the older rows are deleted. Again, the sub-query must include all the columns that define a row as unique.

Delete Duplicate Rows Using ROW_NUMBER Function:

DELETE FROM EU_COUNTRIES WHERE rowid IN (
SELECT RID FROM
(SELECT rowid AS RID, ROW_NUMBER() OVER (PARTITION BY EU_COUNTRY_NAME ORDER BY rowid ASC) RN FROM EU_COUNTRIES) WHERE RN > 1
);

ROW_NUMBER() is an analytic function, and it can be used as in the above SQL statement to delete duplicate rows. In the sub-query, if you order by rowid ASC, the newer rows are deleted; if you order by rowid DESC, the older rows are deleted. The PARTITION BY clause of the sub-query must include all the columns that define a row as unique.

NOTE:

In the situations demonstrated above, if there is no primary key defined on the table, all column names must be used; that is, the combination of all the columns is assumed to act as the primary key. For this situation, a single query can be used to find the duplicates, as follows:

SELECT * FROM TABLE_WITHOUT_PK WHERE rowid NOT IN (
SELECT MIN(rowid) FROM TABLE_WITHOUT_PK
GROUP BY COL1,COL2,COL3,…., COLN
);
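
Changing SELECT * to DELETE turns this duplicate-finder into the corresponding cleanup statement, which keeps one row per column combination (a sketch using the same hypothetical table):

DELETE FROM TABLE_WITHOUT_PK WHERE rowid NOT IN (
SELECT MIN(rowid) FROM TABLE_WITHOUT_PK
GROUP BY COL1,COL2,COL3,…., COLN
);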

(*) This is not guaranteed, since a rowid may change if "row movement" is enabled for the table.

References:

http://www.dba-oracle.com/t_delete_duplicate_table_rows.htm

Oracle Export / Import Data

Oracle Database can copy data between itself and other Oracle databases, and between itself and external files. Data copying is accomplished by exporting and importing data when both the source and the target are Oracle databases, and by unloading and loading data when one party is an Oracle database and the other is not.

  • Exporting is the term used for copying database data to external files for import into another Oracle database only. These files are in a proprietary binary format.
  • Importing is the term used for copying data into the database from external files that were created by exporting from another Oracle database. Data exported from any Oracle Database edition (Express Edition, Standard Edition, and Enterprise Edition) can be imported into any other edition.
  • Unloading is the term used for copying database data to external text files for consumption by another Oracle database, another database product, or another application (such as a spreadsheet application). The text files are in an industry-standard format such as tab-delimited or comma-delimited (CSV).
  • Loading is the term used for copying data into the database from external text files that are in either a standard delimited format or in any of the formats supported by the Oracle SQL*Loader utility.

Depending on the database version, Oracle also provides tools for these load/unload tasks, such as the SQL*Loader utility, the Data Pump Export and Data Pump Import utilities, the original Export and Import utilities, and related wizards.

Oracle Export / Import Utilities:

Export (exp) and Import (imp) are the Oracle utilities (from before the 10g database) that write data to an Oracle binary-format dump file and read such a dump file into an Oracle database, respectively. These utilities support export and import operations at the level of the full database, a tablespace, a user (all objects belonging to the user), or a table.

If the source and target Oracle database versions are not the same, the exp utility must be the lower of the two versions, and the imp utility must match the target version.

The Export utility includes the data definition (DDL) statements for the exported data, so these objects are created automatically on the target database. When an export command is executed, database objects are extracted together with their dependent objects: if a table is extracted, its dependencies such as indexes, comments, and grants are extracted and written into the export file. There are also some considerations when importing data. First of all, if the target database already contains the exported objects under the same names, these objects should be dropped or truncated beforehand; otherwise data will be added to the existing tables and sequences may generate inconsistent values. If there are constraints on the objects, these should be disabled during the import process.

Export Options:

  • buffer: Specifies the size, in bytes, of the buffer (array) used to insert the data. A high value increases performance. (Ex: 1M, 2M, etc.)
  • constraints [Y]: Specifies whether table constraints should be exported with table data.
  • direct [N]: Determines whether to use direct or conventional path export. Direct path exports enhance performance, because export in direct path mode simply bypasses the SQL evaluation buffer. A direct path export does not read directly from the data files and write to the dump file at the operating system level. Note that in direct path mode you cannot use the QUERY parameter to select a subset of rows, precisely because the WHERE clause processing would normally take place in the SQL evaluation buffer that direct mode bypasses. (Not used with the query parameter)
  • file: The name of the export file. Multiple files may be specified. When the file size is exceeded, the export utility begins writing to the next file in the list.
  • filesize: The maximum size of each file in the file list.
  • full: The entire database is exported.
  • grants [Y]: Specifies whether object grants will be exported.
  • indexes [Y]: Specifies whether index definitions (NOT index data) will be exported.
  • log: The name of the log file.
  • object_consistent [N]: Specifies whether the export utility uses SET TRANSACTION READ ONLY to ensure data consistency.
  • owner: Only the specified owner's objects will be exported.
  • parfile: The name of a file containing export parameters, instead of specifying them on the command line.
  • query: Allows a subset of the rows from a table to be exported, based on a WHERE clause. (Not used with the direct parameter)
  • recordlength: The amount of data spooled before writing to disk. The maximum value is 64 KB.
  • rows [Y]: Specifies whether or not table rows will be exported.
  • tables: Specifies that the export is in table mode and that the specified list of tables will be exported. Table partitions and sub-partitions can also be specified.
  • tablespaces: Specifies that the export is in tablespace mode and that all the tables in the specified list of tablespaces will be exported.
  • transport_tablespace [N]: Enables the export of the metadata needed for transportable tablespaces.
  • triggers [Y]: Specifies whether or not triggers defined on the specified tables will be exported.
  • userid: The connection information of the user that will perform the export.

Sample Export Command:

exp userid=username[/password]@tns_name
file=export.dmp
log=export.log
full=y
statistics=none

[…] denotes the optional part. If the password is not written in the command, it is prompted for.
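
As a hedged sketch (the parameter file name and table names are hypothetical), the same parameters can instead be collected in a parameter file, shown here for a table-mode export:

export.par:

userid=username[/password]@tns_name
tables=EMP,DEPT
file=emp_dept.dmp
log=emp_dept.log

exp parfile=export.par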

Import Options:

  • buffer: Specifies the size, in bytes, of the buffer (array) used to insert the data. A high value increases performance. (Ex: 20M, 30M, etc.)
  • commit [N]: Specifies whether import should commit after each array insert. By default, import commits after each table is loaded; however, this can use a lot of rollback segment or undo space for huge tables. For high performance, set this value to Y.
  • compile [Y]: Tells import to compile procedural objects when they are imported.
  • constraints [Y]: Specifies whether table constraints should also be imported with table data.
  • datafiles (only with transport_tablespace): Lists the data files to be transported to the database.
  • destroy [N]: Overwrite the tablespace data files.
  • file: The name of the export file to import. Multiple files can be listed, separated by commas. When the filesize limit is reached, the next file in the list is used.
  • filesize: The maximum size of each dump file.
  • fromuser: A comma-delimited list of schemas from which to import. If the export file contains many users or even the entire database, the fromuser option enables only a subset of those objects (and data) to be imported.
  • full: The entire export file is imported.
  • grants [Y]: Specifies whether to import object grants.
  • ignore [N]: Specifies how object creation errors should be handled. If a table already exists and ignore=y, the rows are imported into the existing table; otherwise errors are reported and no rows are loaded into the table.
  • indexes [Y]: Determines whether indexes are imported.
  • indexfile: Specifies a file name to receive the index creation statements. This file can be used to build the indexes after the import has completed.
  • log: The name of the log file.
  • parfile: The name of a file containing import parameters, instead of specifying them on the command line.
  • rows [Y]: Specifies whether or not the table rows should be imported.
  • show [N]: Specifies whether the DDL within the export file is displayed.
  • skip_unusable_indexes [N]: Specifies whether import skips building indexes that are in an unusable state.
  • tables: Specifies that the import is in table mode and lists the tables to be imported. Table partitions and sub-partitions can also be specified.
  • tablespaces: When transport_tablespace=y, this parameter provides a list of tablespaces.
  • to_user: Specifies a list of user schemas that will be the targets for the import.
  • transport_tablespace [N]: When Y, transportable tablespace metadata will be imported from the export file.
  • tts_owners: When transport_tablespace=Y, this parameter lists the users who own the data in the transportable tablespace set.
  • userid: Specifies the userid/password of the user performing the import.

Sample Import Command:

imp userid=username[/password]@tns_name
file=export.dmp
log=import.log
full=y
statistics=none

[…] denotes the optional part. If the password is not written in the command, it is prompted for.

If there are any constraints on the target tables, these should be disabled during import for better performance.
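
A hedged sketch of what that looks like (the table and constraint names are hypothetical):

ALTER TABLE EMPLOYEES DISABLE CONSTRAINT EMP_DEPT_FK;
-- run the imp command here, then:
ALTER TABLE EMPLOYEES ENABLE CONSTRAINT EMP_DEPT_FK;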

The Export and Import utilities DO NOT support the FLOAT and DOUBLE data types.

Oracle Datapump Export / Datapump Import Utilities:

With database version 10g, Oracle introduced new tools for transferring objects from one Oracle database to another: expdp for exporting and impdp for importing, which bring additional features and enhancements over the exp and imp utilities. The Data Pump Export utility exports data and metadata into a set of operating system files called a dump file set. The Data Pump Import utility imports an export dump file set into a target Oracle database. A dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format, which means that the dump file set can be imported only by the Data Pump Import utility. The dump file set can be imported into the same database, or it can be moved to another system and loaded into the Oracle database there. Because the dump files are written by the database, rather than by the Data Pump client application, you must create directory objects for the directories to which files will be written. A directory object is a database object that is an alias for a directory in the host operating system's file system.

Data Pump Export and Import enable you to move a subset of the data and metadata. This is done by using Data Pump parameters to specify export and import modes, as well as various filtering criteria. Data Pump Export and Import also provide a set of interactive commands so that you can monitor and modify ongoing export and import jobs. Parameters for these tools can be directly written in command line, can be specified in parameter file or can be specified as it is prompted in an interactive command line interface.

The Data Pump Export and Data Pump Import utilities DO NOT support XMLType data. If you need to export or import objects that include XMLType data, you need to use the original Export and Import utilities respectively.

Data Pump should be preferred unless you must import or export XMLType data.

Sample Datapump Export Command:

expdp userid=username/password dumpfile=export.dmp logfile=export.log full=y directory=export

Sample Datapump Import Command:

impdp username/password schemas=Schema directory=export dumpfile=export.dmp logfile=import.log

To demonstrate a sample scenario, we are going to export the schema USER1 and then import it into another user, USER2 (which does not exist yet).

How To Use expdp:

Create a directory on the database server where the exported files will be placed:

mkdir c:\exports

Open a command prompt, run SQL*Plus to connect to the database, and execute the following SQL statements to create a directory object named exportsdir for the exports directory that you just created and to grant read and write access on it to the user:

CREATE OR REPLACE DIRECTORY exportsdir AS 'c:\exports';
GRANT READ,WRITE ON DIRECTORY exportsdir TO USER;

Export the USER1 schema to a dump file named schema.dmp by issuing the following command at the operating system command prompt:

expdp SYSTEM[/password] SCHEMAS=USER1 DIRECTORY=exportsdir DUMPFILE=schema.dmp LOGFILE=exp.log

SYSTEM is the Oracle database system user, which has the privileges to perform export and import tasks, and password is the password for the SYSTEM user. The schema.dmp file and the exp.log file are written to the exportsdir directory.

How To Use impdp:

To import the dump file schema.dmp from USER1 into another schema, USER2, for this specific scenario you need to use the REMAP_SCHEMA parameter to indicate that objects are to be imported into a schema other than their original one. Because the USER2 user account does not already exist, the import process creates it automatically. We will import everything except constraints, ref_constraints, and indexes. If a table already exists, it is replaced with the table in the export file.

At the operating system command prompt, run the following command:

impdp SYSTEM[/password] SCHEMAS=USER1 DIRECTORY=exportsdir DUMPFILE=schema.dmp
REMAP_SCHEMA=USER1:USER2 EXCLUDE=constraint,ref_constraint,index
TABLE_EXISTS_ACTION=replace LOGFILE=imp.log

SYSTEM is the Oracle database system user, which has the privileges to perform export and import tasks, and password is the password for the SYSTEM user.

The USER2 schema is now populated with data from the USER1 schema.

The last step is to assign a password to the newly created USER2 user account. To do so, start the SQL command line and, at the SQL prompt, enter the following ALTER USER statement:

ALTER USER USER2 IDENTIFIED BY pass;

Note that Data Pump export and import operations run on the database server. So even if you exit the command prompt during an operation, the Data Pump job continues to run on the server. If you give the Data Pump operation a name using the job_name parameter, you can then re-attach to the session using:
expdp username/password attach=given_job_name

To stop the job from the command-line interface, use the following:

press CTRL+C
Export> STOP_JOB=IMMEDIATE
Are you sure you wish to stop this job ([y]/n): y

To resume the job use the following:

expdp username/password attach=given_job_name
Export> CONTINUE_CLIENT

References:

http://docs.oracle.com/cd/B25329_01/doc/admin.102/b25107/impexp.htm