Oracle® Database Utilities 11g Release 1 (11.1) Part Number B28319-01
Oracle Data Pump technology enables very high-speed movement of data and metadata from one database to another.
Oracle Data Pump is made up of three distinct parts:
The command-line clients, expdp and impdp
The DBMS_DATAPUMP PL/SQL package (also known as the Data Pump API)
The DBMS_METADATA PL/SQL package (also known as the Metadata API)
The Data Pump clients, expdp and impdp, invoke the Data Pump Export utility and Data Pump Import utility, respectively. They provide a user interface that closely resembles the original Export (exp) and Import (imp) utilities.
Note:
Dump files generated by the Data Pump Export utility are not compatible with dump files generated by the original Export utility. Therefore, files generated by the original Export (exp) utility cannot be imported with the Data Pump Import (impdp) utility.
In most cases, Oracle recommends that you use the Data Pump Export and Import utilities. They provide enhanced data movement performance in comparison to the original Export and Import utilities.
See Chapter 20, "Original Export and Import" for information about situations in which you should still use the original Export and Import utilities.
The expdp and impdp clients use the procedures provided in the DBMS_DATAPUMP PL/SQL package to execute export and import commands, using the parameters entered at the command line. These parameters enable the exporting and importing of data and metadata for a complete database or for subsets of a database.
When metadata is moved, Data Pump uses functionality provided by the DBMS_METADATA PL/SQL package. The DBMS_METADATA package provides a centralized facility for the extraction, manipulation, and resubmission of dictionary metadata.
The DBMS_DATAPUMP and DBMS_METADATA PL/SQL packages can be used independently of the Data Pump clients.
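For example, a schema-mode export can be started directly from PL/SQL with an anonymous block such as the following sketch. The job name HR_EXPORT_API, the directory object DPUMP_DIR1, and the HR schema are assumptions, and error handling is omitted:
DECLARE
  h1 NUMBER;
BEGIN
  -- Open a schema-mode export job (hypothetical job name)
  h1 := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA',
                           job_name  => 'HR_EXPORT_API');
  -- Write to a dump file in an existing directory object (assumed: DPUMP_DIR1)
  DBMS_DATAPUMP.ADD_FILE(handle => h1, filename => 'hr_api.dmp',
                         directory => 'DPUMP_DIR1');
  -- Limit the job to the HR schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h1, name => 'SCHEMA_EXPR',
                                value => 'IN (''HR'')');
  -- Start the job, then detach; the job continues running on the server
  DBMS_DATAPUMP.START_JOB(h1);
  DBMS_DATAPUMP.DETACH(h1);
END;
/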
Note:
All Data Pump Export and Import processing, including the reading and writing of dump files, is done on the system (server) selected by the specified database connect string. This means that, for nonprivileged users, the database administrator (DBA) must create directory objects for the Data Pump files that are read and written on that server file system. For privileged users, a default directory object is available. See Default Locations for Dump, Log, and SQL Files for more information about directory objects.
Note:
Data Pump Export and Import are not supported on physical or logical standby databases except for initial table instantiation on a logical standby.
See Also:
Oracle Database PL/SQL Packages and Types Reference for descriptions of the DBMS_DATAPUMP and DBMS_METADATA packages
Data Pump uses four mechanisms for moving data in and out of databases. They are as follows, in order of decreasing speed:
Data file copying
Direct path
External tables
Network link import
Note:
Data Pump will not load tables with disabled unique indexes. If the data needs to be loaded into the table, the indexes must be either dropped or reenabled.
Note:
There are a few situations in which Data Pump will not be able to load data into a table using either direct path or external tables. This occurs when there are conflicting table attributes. For example, a conflict occurs if a table contains a column of datatype LONG (which requires the direct path access method) but also has a condition that prevents use of direct path access. In such cases, an ORA-39242 error message is generated. To work around this, prior to import, create the table with a LOB column instead of a LONG column. You can then perform the import and use the TABLE_EXISTS_ACTION parameter with a value of either APPEND or TRUNCATE.
The following sections briefly explain how and when each of these data movement mechanisms is used.
The fastest method of moving data is to copy the database data files to the target database without interpreting or altering the data. With this method, Data Pump Export is used to unload only structural information (metadata) into the dump file. This method is used in the following situations:
The TRANSPORT_TABLESPACES parameter is used to specify a transportable mode export. Only metadata for the specified tablespaces is exported.
The TRANSPORTABLE=ALWAYS parameter is supplied on a table mode export (specified with the TABLES parameter). Only metadata for the tables, partitions, and subpartitions specified on the TABLES parameter is exported.
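For example, the following commands sketch both situations. The tablespace tbs_1, the table hr.employees, and the directory object dpump_dir1 are assumptions, and a tablespace must be placed in read-only mode before a transportable tablespace export:
> expdp system/password DIRECTORY=dpump_dir1 DUMPFILE=tts.dmp TRANSPORT_TABLESPACES=tbs_1
> expdp system/password DIRECTORY=dpump_dir1 DUMPFILE=emp.dmp TABLES=hr.employees TRANSPORTABLE=ALWAYS
In both cases, only metadata is written to the dump file; the data itself travels in the copied data files.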
When an export operation uses data file copying, the corresponding import job always also uses data file copying. During the ensuing import operation, you will be loading both the data files and the export dump file.
When data is moved by using data file copying, the character sets must be identical on both the source and target databases. Therefore, in addition to copying the data, you may need to prepare it by using the Recovery Manager (RMAN) CONVERT command to perform some data conversions. You can generally do this at either the source or target database.
See Also:
Oracle Database Backup and Recovery Reference for information about the RMAN CONVERT command
Oracle Database Administrator's Guide for a description and example (including how to convert the data) of transporting tablespaces between databases
After data file copying, direct path is the fastest method of moving data. In this method, the SQL layer of the database is bypassed and rows are moved to and from the dump file with only minimal interpretation. Data Pump automatically uses the direct path method for loading and unloading data when the structure of a table allows it. Note that if the table has any columns of datatype LONG, then direct path must be used.
The following sections describe situations in which direct path cannot be used for loading and unloading.
Situations in Which Direct Path Load Is Not Used
If any of the following conditions exist for a table, Data Pump uses external tables rather than direct path to load the data for that table:
A global index on multipartition tables exists during a single-partition load. This includes object tables that are partitioned.
A domain index exists for a LOB column.
A table is in a cluster.
There is an active trigger on a pre-existing table.
Fine-grained access control is enabled in insert mode on a pre-existing table.
A table contains BFILE columns or columns of opaque types.
A referential integrity constraint is present on a pre-existing table.
A table contains VARRAY columns with an embedded opaque type.
The table has encrypted columns.
The table into which data is being imported is a pre-existing table and at least one of the following conditions exists:
There is an active trigger
The table is partitioned
Fine-grained access control is in insert mode
A referential integrity constraint exists
A unique index exists
Supplemental logging is enabled and the table has at least one LOB column.
The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.
Situations in Which Direct Path Unload Is Not Used
If any of the following conditions exist for a table, Data Pump uses the external table method to unload data, rather than direct path:
Fine-grained access control for SELECT is enabled.
The table is a queue table.
The table contains one or more columns of type BFILE or opaque, or an object type containing opaque columns.
The table contains encrypted columns.
The table contains a column of an evolved type that needs upgrading.
The table contains a column of type LONG or LONG RAW that is not last.
The Data Pump command for the specified table used the QUERY, SAMPLE, or REMAP_DATA parameter.
When data file copying is not selected and the data cannot be moved using direct path, the external table mechanism is used. The external table mechanism creates an external table that maps to the dump file data for the database table. The SQL engine is then used to move the data. If possible, the APPEND hint is used on import to speed the copying of the data into the database. The representation of data for direct path data and external table data is the same in a dump file. Therefore, Data Pump might use the direct path mechanism at export time, but use external tables when the data is imported into the target database. Similarly, Data Pump might use external tables for the export, but use direct path for the import.
In particular, Data Pump uses external tables in the following situations:
Loading and unloading very large tables and partitions in situations where parallel SQL can be used to advantage
Loading tables with global or domain indexes defined on them, including partitioned object tables
Loading tables with active triggers or clustered tables
Loading and unloading tables with encrypted columns
Loading tables with fine-grained access control enabled for inserts
Loading tables that are partitioned differently at load time and unload time
Note:
When Data Pump uses external tables as the data access mechanism, it uses the ORACLE_DATAPUMP access driver. However, it is important to understand that the files that Data Pump creates when it uses external tables are not compatible with files created when you manually create an external table using the SQL CREATE TABLE ... ORGANIZATION EXTERNAL statement. One of the reasons for this is that a manually created external table unloads only data (no metadata), whereas Data Pump maintains both data and metadata information for all objects involved.
When the Export NETWORK_LINK parameter is used to specify a network link for an export operation, a variant of the external tables method is used. In this case, data is selected from across the specified network link and inserted into the dump file using an external table.
See Also:
NETWORK_LINK for information about using the Export NETWORK_LINK parameter
Oracle Database SQL Language Reference for information about using the APPEND hint
When the Import NETWORK_LINK parameter is used to specify a network link for an import operation, SQL is used directly to move the data using an INSERT SELECT statement. The SELECT clause retrieves the data from the remote database over the network link. The INSERT clause uses SQL to insert the data into the target database. There are no dump files involved.
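For example, the following sketch pulls the employees table directly from a remote database. The database link source_db, the connect string source_tns, and the directory object dpump_dir1 (needed only for the log file) are assumptions:
SQL> CREATE DATABASE LINK source_db CONNECT TO hr IDENTIFIED BY password USING 'source_tns';
> impdp hr/password TABLES=employees DIRECTORY=dpump_dir1 NETWORK_LINK=source_db LOGFILE=emp_net.log
Because no dump file is involved, the DUMPFILE parameter is not specified.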
When you perform an export over a database link, the data from the source database instance is written to dump files on the connected database instance. The source database can be a read-only database.
Because the link can identify a remotely networked database, the terms database link and network link are used interchangeably.
Because reading over a network is generally slower than reading from a disk, network link is the slowest of the four access methods used by Data Pump and may be undesirable for very large jobs.
Supported Link Types
The following types of database links are supported for use with Data Pump Export and Import:
Public (both public and shared)
Fixed-user
Connected user
Unsupported Link Types
The Current User database link type is not supported for use with Data Pump Export or Import.
See Also:
The Export NETWORK_LINK parameter for information about performing exports over a database link
The Import NETWORK_LINK parameter for information about performing imports over a database link
Oracle Database SQL Language Reference for information about database links
Data Pump jobs use a master table, a master process, and worker processes to perform the work and keep track of progress.
For every Data Pump Export job and Data Pump Import job, a master process is created. The master process controls the entire job, including communicating with the clients, creating and controlling a pool of worker processes, and performing logging operations.
While the data and metadata are being transferred, a master table is used to track the progress within a job. The master table is implemented as a user table within the database. The specific function of the master table for export and import jobs is as follows:
For export jobs, the master table records the location of database objects within a dump file set. Export builds and maintains the master table for the duration of the job. At the end of an export job, the content of the master table is written to a file in the dump file set.
For import jobs, the master table is loaded from the dump file set and is used to control the sequence of operations for locating objects that need to be imported into the target database.
The master table is created in the schema of the current user performing the export or import operation. Therefore, that user must have the CREATE TABLE system privilege and a sufficient tablespace quota for creation of the master table. The name of the master table is the same as the name of the job that created it. Therefore, you cannot explicitly give a Data Pump job the same name as a preexisting table or view.
For all operations, the information in the master table is used to restart a job.
The master table is either retained or dropped, depending on the circumstances, as follows:
Upon successful job completion, the master table is dropped.
If a job is stopped using the STOP_JOB interactive command, the master table is retained for use in restarting the job.
If a job is killed using the KILL_JOB interactive command, the master table is dropped and the job cannot be restarted.
If a job terminates unexpectedly, the master table is retained. You can delete it if you do not intend to restart the job.
If a job stops before it starts running (that is, before any database objects have been copied), the master table is dropped.
See Also:
JOB_NAME for more information about how job names are formed.
Within the master table, specific objects are assigned attributes such as name or owning schema. Objects also belong to a class of objects (such as TABLE, INDEX, or DIRECTORY). The class of an object is called its object type. You can use the EXCLUDE and INCLUDE parameters to restrict the types of objects that are exported and imported. The filtering can be based upon the name of the object or the name of the schema that owns the object. You can also specify data-specific filters to restrict the rows that are exported and imported.
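For example, a hypothetical parameter file such as emp.par (the directory object, dump file name, and table names are assumptions) restricts an export to two named tables; putting the INCLUDE clause in a parameter file avoids operating-system quoting problems:
DIRECTORY=dpump_dir1
DUMPFILE=emp.dmp
SCHEMAS=hr
INCLUDE=TABLE:"IN ('EMPLOYEES', 'DEPARTMENTS')"
> expdp hr/password PARFILE=emp.par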
When you are moving data from one database to another, it is often useful to perform transformations on the metadata for remapping storage between tablespaces or redefining the owner of a particular set of objects. This is done using the following Data Pump Import parameters: REMAP_DATAFILE, REMAP_SCHEMA, REMAP_TABLE, REMAP_TABLESPACE, TRANSFORM, and PARTITION_OPTIONS.
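For example, the following hypothetical import re-creates the exported HR objects in the SCOTT schema and moves them from the USERS tablespace to the SANDBOX tablespace (the dump file, directory object, and tablespace names are assumptions):
> impdp system/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp REMAP_SCHEMA=hr:scott REMAP_TABLESPACE=users:sandbox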
Data Pump can employ multiple worker processes, running in parallel, to increase job performance. Use the PARALLEL parameter to set a degree of parallelism that takes maximum advantage of current conditions. For example, to limit the effect of a job on a production system, the database administrator (DBA) might wish to restrict the parallelism. The degree of parallelism can be reset at any time during a job. For example, PARALLEL could be set to 2 during production hours to restrict a particular job to only two degrees of parallelism, and during nonproduction hours it could be reset to 8. The parallelism setting is enforced by the master process, which allocates work to be executed to worker processes that perform the data and metadata processing within an operation. These worker processes operate in parallel. In general, the degree of parallelism should be set to no more than twice the number of CPUs on an instance.
Note:
The ability to adjust the degree of parallelism is available only in the Enterprise Edition of Oracle Database.
The worker processes are the ones that actually unload and load metadata and table data in parallel. Worker processes are created as needed until the number of worker processes is equal to the value supplied for the PARALLEL command-line parameter. The number of active worker processes can be reset throughout the life of a job.
Note:
The value of PARALLEL is restricted to 1 in the Standard Edition of Oracle Database.
When a worker process is assigned the task of loading or unloading a very large table or partition, it may choose to use the external tables access method to make maximum use of parallel execution. In such a case, the worker process becomes a parallel execution coordinator. The actual loading and unloading work is divided among some number of parallel I/O execution processes (sometimes called slaves) allocated from the Oracle RAC-wide pool of parallel I/O execution processes.
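For example, a job might be started with a low degree of parallelism during production hours and raised later from interactive-command mode. The job name used with ATTACH below is the system-generated default for a schema-mode export and is an assumption:
> expdp hr/password DIRECTORY=dpump_dir1 DUMPFILE=exp%U.dmp SCHEMAS=hr PARALLEL=2
> expdp hr/password ATTACH=sys_export_schema_01
Export> PARALLEL=8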
The Data Pump Export and Import utilities can be attached to a job in either interactive-command mode or logging mode. In logging mode, real-time detailed status about the job is automatically displayed during job execution. The information displayed can include the job and parameter descriptions, an estimate of the amount of data to be exported, a description of the current operation or item being processed, files used during the job, any errors encountered, and the final job state (Stopped or Completed).
Job status can be displayed on request in interactive-command mode. The information displayed can include the job description and state, a description of the current operation or item being processed, files being written, and a cumulative status.
A log file can also be optionally written during the execution of a job. The log file summarizes the progress of the job, lists any errors that were encountered along the way, and records the completion status of the job.
An alternative way to determine job status or to get other information about Data Pump jobs is to query the DBA_DATAPUMP_JOBS, USER_DATAPUMP_JOBS, or DBA_DATAPUMP_SESSIONS views. See Oracle Database SQL Language Reference for descriptions of these views.
Data Pump operations that transfer table data (export and import) maintain an entry in the V$SESSION_LONGOPS dynamic performance view indicating the job progress (in megabytes of table data transferred). The entry contains the estimated transfer size and is periodically updated to reflect the actual amount of data transferred.
Use of the COMPRESSION, ENCRYPTION, ENCRYPTION_ALGORITHM, ENCRYPTION_MODE, ENCRYPTION_PASSWORD, QUERY, REMAP_DATA, and SAMPLE parameters will not be reflected in the determination of estimate values.
The usefulness of the estimate value for export operations depends on the type of estimation requested when the operation was initiated, and it is updated as required if exceeded by the actual transfer amount. The estimate value for import operations is exact.
The V$SESSION_LONGOPS columns that are relevant to a Data Pump job are as follows:
USERNAME - job owner
OPNAME - job name
TARGET_DESC - job operation
SOFAR - megabytes (MB) transferred thus far during the job
TOTALWORK - estimated number of megabytes (MB) in the job
UNITS - 'MB'
MESSAGE - a formatted status message of the form:
'job_name: operation_name : nnn out of mmm MB done'
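For example, a query such as the following sketch reports the progress of Data Pump jobs owned by the current user:
SQL> SELECT sl.opname, sl.target_desc, sl.sofar, sl.totalwork, sl.message
2 FROM v$session_longops sl, user_datapump_jobs dj
3 WHERE sl.opname = dj.job_name;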
Data Pump jobs manage the following types of files:
Dump files to contain the data and metadata that is being moved
Log files to record the messages associated with an operation
SQL files to record the output of a SQLFILE operation. A SQLFILE operation is invoked using the Data Pump Import SQLFILE parameter and results in all of the SQL DDL that Import would execute, based on other parameters, being written to a SQL file (see the example after this list).
Files specified by the DATA_FILES parameter during a transportable import.
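For example, the following hypothetical command writes the DDL contained in hr.dmp to the file hr_ddl.sql without importing any objects (the dump file and directory object are assumptions):
> impdp hr/password DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp SQLFILE=hr_ddl.sql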
An understanding of how Data Pump allocates and handles these files will help you to use Export and Import to their fullest advantage.
For export operations, you can specify dump files at the time the job is defined, as well as at a later time during the operation. For example, if you discover that space is running low during an export operation, you can add additional dump files by using the Data Pump Export ADD_FILE command in interactive mode.
For import operations, all dump files must be specified at the time the job is defined.
Log files and SQL files will overwrite previously existing files. For dump files, you can use the Export REUSE_DUMPFILES parameter to specify whether or not to overwrite a preexisting dump file.
Because Data Pump is server-based, rather than client-based, dump files, log files, and SQL files are accessed relative to server-based directory paths. Data Pump requires you to specify directory paths as directory objects. A directory object maps a name to a directory path on the file system.
For example, the following SQL statement creates a directory object named dpump_dir1 that is mapped to a directory located at /usr/apps/datafiles.
SQL> CREATE DIRECTORY dpump_dir1 AS '/usr/apps/datafiles';
The reason that a directory object is required is to ensure data security and integrity. For example:
If you were allowed to specify a directory path location for an input file, you might be able to read data that the server has access to, but to which you should not have access.
If you were allowed to specify a directory path location for an output file, the server might overwrite a file that you might not normally have privileges to delete.
On Unix and Windows NT systems, a default directory object, DATA_PUMP_DIR, is created at database creation or whenever the database dictionary is upgraded. By default, it is available only to privileged users.
If you are not a privileged user, before you can run Data Pump Export or Data Pump Import, a directory object must be created by a database administrator (DBA) or by any user with the CREATE ANY DIRECTORY privilege.
After a directory is created, the user creating the directory object needs to grant READ or WRITE permission on the directory to other users. For example, to allow the Oracle database to read and write files on behalf of user hr in the directory named by dpump_dir1, the DBA must execute the following command:
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir1 TO hr;
Note that READ or WRITE permission to a directory object only means that the Oracle database will read or write that file on your behalf. You are not given direct access to those files outside of the Oracle database unless you have the appropriate operating system privileges. Similarly, the Oracle database requires permission from the operating system to read and write files in the directories.
Data Pump Export and Import use the following order of precedence to determine a file's location:
If a directory object is specified as part of the file specification, then the location specified by that directory object is used. (The directory object must be separated from the filename by a colon.)
If a directory object is not specified for a file, then the directory object named by the DIRECTORY parameter is used.
If a directory object is not specified, and if no directory object was named by the DIRECTORY parameter, then the value of the environment variable, DATA_PUMP_DIR, is used. This environment variable is defined using operating system commands on the client system where the Data Pump Export and Import utilities are run. The value assigned to this client-based environment variable must be the name of a server-based directory object, which must first be created on the server system by a DBA. For example, the following SQL statement creates a directory object on the server system. The name of the directory object is DUMP_FILES1, and it is located at '/usr/apps/dumpfiles1'.
SQL> CREATE DIRECTORY DUMP_FILES1 AS '/usr/apps/dumpfiles1';
Then, a user on a UNIX-based client system using csh can assign the value DUMP_FILES1 to the environment variable DATA_PUMP_DIR. The DIRECTORY parameter can then be omitted from the command line. The dump file employees.dmp, as well as the log file export.log, will be written to '/usr/apps/dumpfiles1'.
%setenv DATA_PUMP_DIR DUMP_FILES1
%expdp hr/password TABLES=employees DUMPFILE=employees.dmp
If none of the previous three conditions yields a directory object and you are a privileged user, then Data Pump attempts to use the value of the default server-based directory object, DATA_PUMP_DIR. This directory object is automatically created at database creation or when the database dictionary is upgraded. You can use the following SQL query to see the path definition for DATA_PUMP_DIR:
SQL> SELECT directory_name, directory_path FROM dba_directories
2 WHERE directory_name='DATA_PUMP_DIR';
If you are not a privileged user, access to the DATA_PUMP_DIR directory object must have previously been granted to you by a DBA.
Do not confuse the default DATA_PUMP_DIR directory object with the client-based environment variable of the same name.
If you use Data Pump Export or Import with Automatic Storage Management (ASM) enabled, you must define the directory object used for the dump file so that the ASM disk-group name is used (instead of an operating system directory path). A separate directory object, which points to an operating system directory path, should be used for the log file. For example, you would create a directory object for the ASM dump file as follows:
SQL> CREATE or REPLACE DIRECTORY dpump_dir as '+DATAFILES/';
Then you would create a separate directory object for the log file:
SQL> CREATE or REPLACE DIRECTORY dpump_log as '/homedir/user1/';
To enable user hr to have access to these directory objects, you would assign the necessary privileges, for example:
SQL> GRANT READ, WRITE ON DIRECTORY dpump_dir TO hr;
SQL> GRANT READ, WRITE ON DIRECTORY dpump_log TO hr;
You would then use the following Data Pump Export command:
> expdp hr/password DIRECTORY=dpump_dir DUMPFILE=hr.dmp LOGFILE=dpump_log:hr.log
See Also:
The Export DIRECTORY parameter
The Import DIRECTORY parameter
Oracle Database SQL Language Reference for information about the CREATE DIRECTORY command
Oracle Database Administrator's Guide for more information about Automatic Storage Management (ASM)
For export and import operations, the parallelism setting (specified with the PARALLEL parameter) should be less than or equal to the number of dump files in the dump file set. If there are not enough dump files, the performance will not be optimal because multiple threads of execution will be trying to access the same dump file.
The PARALLEL parameter is valid only in the Enterprise Edition of Oracle Database.
Instead of, or in addition to, listing specific filenames, you can use the DUMPFILE parameter during export operations to specify multiple dump files, by using a substitution variable (%U) in the filename. This is called a dump file template. The new dump files are created as they are needed, beginning with 01 for %U, then using 02, 03, and so on. Enough dump files are created to allow all processes specified by the current setting of the PARALLEL parameter to be active. If one of the dump files becomes full because its size has reached the maximum size specified by the FILESIZE parameter, it is closed, and a new dump file (with a new generated name) is created to take its place.
If multiple dump file templates are provided, they are used to generate dump files in a round-robin fashion. For example, if expa%U, expb%U, and expc%U were all specified for a job having a parallelism of 6, the initial dump files created would be expa01.dmp, expb01.dmp, expc01.dmp, expa02.dmp, expb02.dmp, and expc02.dmp.
For import and SQLFILE operations, if dump file specifications expa%U, expb%U, and expc%U are specified, then the operation will begin by attempting to open the dump files expa01.dmp, expb01.dmp, and expc01.dmp. It is possible for the master table to span multiple dump files, so until all pieces of the master table are found, dump files continue to be opened by incrementing the substitution variable and looking up the new filenames (for example, expa02.dmp, expb02.dmp, and expc02.dmp). If a dump file does not exist, the operation stops incrementing the substitution variable for the dump file specification that was in error. For example, if expb01.dmp and expb02.dmp are found but expb03.dmp is not found, then no more files are searched for using the expb%U specification. Once the entire master table is found, it is used to determine whether all dump files in the dump file set have been located.
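For example, the following commands sketch a parallel export that writes to two dump file templates, limiting each file to 2 gigabytes, and the matching import (the directory object and schema are assumptions):
> expdp hr/password DIRECTORY=dpump_dir1 DUMPFILE=expa%U.dmp,expb%U.dmp FILESIZE=2G PARALLEL=4 SCHEMAS=hr
> impdp hr/password DIRECTORY=dpump_dir1 DUMPFILE=expa%U.dmp,expb%U.dmp PARALLEL=4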
Because most Data Pump operations are performed on the server side, if you are using any version of the database other than COMPATIBLE, you must provide the server with specific version information. Otherwise, errors may occur. To specify version information, use the VERSION parameter.
Keep the following information in mind when you are using Data Pump Export and Import to move data between different database versions:
If you specify a database version that is older than the current database version, certain features may be unavailable. For example, specifying VERSION=10.1 will cause an error if data compression is also specified for the job because compression was not supported in 10.1.
On a Data Pump export, if you specify a database version that is older than the current database version, then a dump file set is created that you can import into that older version of the database. However, the dump file set will not contain any objects that the older database version does not support. For example, if you export from a version 10.2 database to a version 10.1 database, comments on indextypes will not be exported into the dump file set.
Data Pump Import can always read dump file sets created by older versions of the database.
Data Pump Import cannot read dump file sets created by a database version that is newer than the current database version, unless those dump file sets were created with the VERSION parameter set to the version of the target database. Therefore, the best way to perform a downgrade is to perform your Data Pump export with the VERSION parameter set to the version of the target database.
When operating across a network link, Data Pump requires that the remote database version be either the same as the local database or one version older, at the most. For example, if the local database is version 11.1, the remote database must be either version 10.2 or 11.1.
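For example, to create a dump file set from an 11.1 database that a release 10.2 database can import, you might run the following sketch (the directory object and schema are assumptions):
> expdp hr/password DIRECTORY=dpump_dir1 DUMPFILE=hr_102.dmp SCHEMAS=hr VERSION=10.2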