Oracle® Universal Installer and OPatch User's Guide 11g Release 1 (11.1) for Windows and UNIX Part Number B31207-01 |
This chapter provides information about Oracle cloning using Oracle Universal Installer (OUI). This chapter contains the following topics:
Creating an Oracle Real Application Cluster Environment using Cloning
Adding Nodes Using Cloning in Oracle Real Application Clusters Environments
Cloning is the process of copying an existing Oracle installation to a different location and updating the copied bits to work in the new environment. The changes made by applying one-off patches on the source Oracle home are also present after the clone operation. The source and the destination path (host to be cloned) need not be the same. During cloning, OUI replays the actions that were run to install the home. Cloning is similar to installation, except that OUI runs the actions in a special mode referred to as clone mode. Some situations in which cloning is useful are:
Creating an installation that is a copy of a production, test, or development installation. Cloning enables you to create a new installation with all patches applied to it in a single step. This contrasts with going through the installation process by performing separate steps to install, configure, and patch the installation.
Rapidly deploying an instance and the applications that it hosts.
Preparing an Oracle home and deploying it to many hosts.
The cloned installation behaves the same as the source installation. For example, the cloned Oracle home can be removed using OUI or patched using OPatch. You can also use the cloned Oracle home as the source for another cloning operation. You can create a cloned copy of a test, development, or production installation by using the command-line cloning scripts. The default cloning procedure is adequate for most usage cases. However, you can also customize various aspects of cloning, for example, to specify custom port assignments, or to preserve custom settings.
The cloning process copies all of the files from the source Oracle home to the destination Oracle home. Thus, any files used by the source instance located outside the source Oracle home's directory structure are not copied to the destination location.
The size of the binaries at the source and the destination may differ because these are relinked as part of the clone operation, and the operating system patch levels may also differ between the two locations. Additionally, the number of files in the cloned home increases, because several files copied from the source, specifically those being instantiated, are backed up as part of the clone operation.
OUI cloning is more beneficial than using the tarball approach, because cloning configures the Central Inventory and the Oracle home inventory in the cloned home. Cloning also makes the home manageable and allows the paths in the cloned home and the target home to be different.
The cloning process uses the OUI cloning functionality. This operation is driven by a set of scripts and add-ons that are included in the respective Oracle software. The cloning process has two phases: the source preparation phase and the cloning phase.
To prepare the source Oracle home to be cloned, perform the following steps:
At the source, run a script called prepare_clone.pl. This is a Perl script that prepares the source for cloning by recording the information required for cloning. This script is generally found in the following location:
$ORACLE_HOME/clone/bin/prepare_clone.pl
During this phase, prepare_clone.pl parses files in the source Oracle home to extract and store the required values. For more information about the parameters to be passed, see the section Cloning Script Variables and their Definitions.
Note:
The need to perform the preparation phase depends on the Oracle product you are installing. This script needs to be run only for Application Server cloning; Database and CRS Oracle home cloning do not require it.
Archive and compress the source Oracle home using your preferred archiving tool. For example, you can use WinZip on Microsoft Windows computers and tar or gzip on UNIX. Make sure that the tool that you use preserves the permissions and file timestamps. When archiving the home, skip the *.log, *.dbf, listener.ora, sqlnet.ora, and tnsnames.ora files. Also ensure that you do not archive the following folders:
$ORACLE_HOME/<Hostname>_<SID>
$ORACLE_HOME/oc4j/j2ee/OC4J_DBConsole_<Hostname>_<SID>
The following sample shows an exclude file list:
$ cat excludedFileList.txt
./install/make.log
./cfgtoollogs/cfgfw/CfmLogger_2007-07-13_12-03-16-PM.log
./cfgtoollogs/cfgfw/oracle.server_2007-07-13_12-03-17-PM.log
./cfgtoollogs/cfgfw/oracle.network.client_2007-07-13_12-03-18-PM.log
./cfgtoollogs/cfgfw/oracle.has.common_2007-07-13_12-03-18-PM.log
./cfgtoollogs/cfgfw/oracle.assistants.server_2007-07-13_12-03-18-PM.log
./cfgtoollogs/cfgfw/OuiConfigVariables_2007-07-13_12-03-18-PM.log
./cfgtoollogs/cfgfw/oracle.sysman.console.db_2007-07-13_12-03-18-PM.log
./cfgtoollogs/cfgfw/oracle.sqlplus.isqlplus_2007-07-13_12-03-18-PM.log
./cfgtoollogs/oui/cloneActions2007-07-13_11-52-19AM.log
./cfgtoollogs/oui/silentInstall2007-07-13_11-52-19AM.log
The following example shows how to archive and compress the source for various platforms:
To archive and compress:
tar cpf - . | compress -fv > temp_dir/archiveName.tar.Z    (for AIX and HP-UX)
tar cpfX - excludeListFile . | compress -fv > temp_dir/archiveName.tar.Z    (for remaining UNIX-based systems)
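As a concrete illustration of the exclude-list mechanism, the following sketch builds a throwaway directory standing in for the source Oracle home, archives it with tar while excluding one file, and verifies the result. All paths and file names here are illustrative stand-ins, not part of a real Oracle home, and the example assumes GNU tar.

```shell
# Sketch only: a mock "Oracle home" with one file to keep and one to exclude.
set -e
SRC=$(mktemp -d)                                  # stands in for $ORACLE_HOME
mkdir -p "$SRC/install" "$SRC/network/admin"
echo "build output" > "$SRC/install/make.log"     # volatile file: exclude it
echo "keep me"      > "$SRC/network/admin/keep.dat"
# The exclude list uses paths relative to the home, matching tar's member names.
cat > /tmp/excludedFileList.txt <<'EOF'
./install/make.log
EOF
# c=create, p=preserve permissions, f=archive file, X=exclude-list file
( cd "$SRC" && tar cpfX /tmp/archiveName.tar /tmp/excludedFileList.txt . )
tar tf /tmp/archiveName.tar > /tmp/archive_contents.txt
grep -q 'keep.dat' /tmp/archive_contents.txt && echo "kept file archived"
grep -q 'make.log' /tmp/archive_contents.txt || echo "excluded file skipped"
```

With old-style bundled options, the arguments after `cpfX` are consumed in flag order: `f` takes the archive name and `X` takes the exclude-list file, which is why the documented form `tar cpfX - excludeListFile .` writes the archive to standard output.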
Note:
Do not use the jar utility to archive and compress the Oracle home.
On the destination system, unarchive the Oracle home and run the clone.pl script. This Perl script performs all parts of the cloning operation automatically by running OUI and various other utilities. The script uses the cloning functionality in OUI. When you run the clone.pl script, it handles the specifics that OUI may have missed. The Central Inventory of the computer where the home is being cloned is updated, as is the Oracle home inventory ($ORACLE_HOME/inventory).
The following example shows how to unarchive and decompress the source for various platforms:
To unarchive:
mkdir Destination_oracle_home
cd Destination_oracle_home
zcat temp_dir/archiveName.tar.Z | tar xpf -    (for HP-UX)
zcat temp_dir/archiveName.tar.Z | tar xBpf -   (for remaining UNIX-based systems)
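To see why the p flag matters on extraction, the sketch below round-trips a file with non-default permissions through tar. The directories are temporary stand-ins created for illustration; in a real clone you would extract the transported archive into the destination Oracle home instead. GNU tar and GNU stat are assumed.

```shell
# Sketch only: demonstrate that tar xpf preserves file permissions.
set -e
SRC=$(mktemp -d)    # stands in for the source home
DEST=$(mktemp -d)   # stands in for Destination_oracle_home
echo '#!/bin/sh' > "$SRC/runme.sh"
chmod 777 "$SRC/runme.sh"                       # deliberately non-default mode
( cd "$SRC"  && tar cpf /tmp/home.tar . )       # archive with permissions
( cd "$DEST" && tar xpf /tmp/home.tar )         # x=extract, p=ignore umask, keep modes
stat -c '%a' "$DEST/runme.sh" > /tmp/extracted_mode.txt   # GNU stat syntax
cat /tmp/extracted_mode.txt                     # prints 777
```

Without p, the extracting user's umask would be applied to the restored files, which can silently change execute and write bits in the cloned home.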
You must have Perl 5.6 or higher installed on your system to enable cloning. Also ensure that you set the path environment variable to the correct Perl executable.
Note:
The cloned home and the source home will not be identical in size, because the cloned home contains additional files created during the cloning operation.
The cloning script runs multiple tools, each of which may generate its own log files. However, the following log files that OUI and the cloning scripts generate are the key log files of interest for diagnostic purposes:
<Central_Inventory>/logs/cloneActions<timestamp>.log: Contains a detailed log of the actions that occur during the OUI part of the cloning.
<Central_Inventory>/logs/oraInstall<timestamp>.err: Contains information about errors that occur when OUI is running.
<Central_Inventory>/logs/oraInstall<timestamp>.out: Contains other miscellaneous messages generated by OUI.
$ORACLE_HOME/clone/logs/clone<timestamp>.log: Contains a detailed log of the actions that occur during the pre-cloning and cloning operations.
$ORACLE_HOME/clone/logs/error<timestamp>.log: Contains information about errors that occur during the pre-cloning and cloning operations.
To find the location of the Oracle inventory directory:
On all UNIX computers except Linux and IBM AIX, look in the /var/opt/oracle/oraInst.loc file.
On IBM AIX and Linux-based systems, look in the /etc/oraInst.loc file.
On Windows computers, you can obtain the location from the Windows Registry key HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\INST_LOC.
After the clone.pl
script finishes running, refer to these log files to obtain more information about the cloning process.
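When diagnosing a failed clone, a simple scan of the log directories for error markers saves opening each file by hand. The sketch below runs against a mock log directory with a fabricated entry; substitute your real Central Inventory logs directory (and $ORACLE_HOME/clone/logs), and note that the log line shown is invented purely for illustration, not real OUI output.

```shell
# Sketch only: grep the cloning logs for likely failure markers.
set -e
LOGDIR=$(mktemp -d)    # stands in for <Central_Inventory>/logs
printf 'INFO: copying files\nSEVERE: mock failure for illustration\n' \
    > "$LOGDIR/cloneActions2007-07-13_11-52-19AM.log"
# -i catches mixed-case markers; -H prefixes each hit with its file name
grep -iH 'severe\|error\|fail' "$LOGDIR"/cloneActions*.log > /tmp/clone_scan.txt || true
cat /tmp/clone_scan.txt
```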
There are two steps involved in cloning an Oracle Database 11.1 Oracle home:
To prepare the source Oracle home to be cloned, perform the following steps:
Ensure that the Oracle Database installation whose home you want to clone has been successful.
For Windows computers, you can check the status of the installation by reviewing the installActions<date_time>.log file for the installation session, where <date_time> represents the date and time when the file was created; for example, installActions2007-05-30_10-28-04PM.log. This log file is normally located in the c:\Program Files\Oracle\Inventory\logs directory.
For Linux-based systems, the logs are kept in the <inventory location>/logs directory. To determine the location of the Central Inventory, see "Locating and Viewing Log Files".
If you have installed patches, you can check their status by running the following commands:
For Windows-based system computers:
c:\ORACLE_BASE\ORACLE_HOME\OPatch> set ORACLE_HOME=ORACLE_HOME_using_patch
c:\ORACLE_BASE\ORACLE_HOME\OPatch> opatch lsinventory
For Linux-based and UNIX-based systems:
/ORACLE_BASE/ORACLE_HOME/OPatch> setenv ORACLE_HOME ORACLE_HOME_using_patch
/ORACLE_BASE/ORACLE_HOME/OPatch> ./opatch lsinventory
Archive and compress the source Oracle home, using your preferred tool for archiving. For more information on this, see "Source Preparation Phase".
To clone the 11.1 Oracle Database, perform the following steps:
Copy the compressed zip or archive file to the target computer.
Extract the contents of the compressed zip or archive file in the target computer. For more information on extracting the contents, see "Cloning Phase".
On the target computer, go to the $ORACLE_HOME/clone/bin
directory and run clone.pl
. This is a Perl script that performs all parts of the cloning operation automatically by calling various utilities and OUI. This script uses the cloning functionality in OUI.
Note:
The clone.pl script clones the software only, not the database instance.
The following command shows the syntax for the clone.pl script:
For Windows-based systems:
perl <Oracle_Home>\clone\bin\clone.pl ORACLE_HOME=<Path to the Oracle_Home being_cloned> ORACLE_HOME_NAME=<Oracle_Home_Name for the Oracle_Home being cloned> [-command_line_arguments]
For Linux-based and UNIX-based systems:
perl <Oracle_Home>/clone/bin/clone.pl ORACLE_HOME=<Path to the Oracle_Home being_cloned> ORACLE_HOME_NAME=<Oracle_Home_Name for the Oracle_Home being cloned> [-command_line_arguments]
The preceding command accepts optional command-line arguments. Table 6-1 describes these arguments.
Table 6-1 Command-line arguments in the clone.pl script
Command-line Argument | Description |
---|---|
-O | Anything following this argument is passed to the OUI clone command line. For example, you can use this option to pass the location of the oraparam.ini file: '-O -paramFile C:\OraHome_1\oui\oraparam.ini' |
-debug | Runs the script in debug mode. |
-help | Prints the help for the clone script. |
You can also pass values to the OUI command line by using the $ORACLE_HOME/clone/config/cs.properties file. Enter values in the line clone_command_line=<value>. The values entered here are appended to the OUI command line that is run to perform the clone operation.
For example, to specify a non-default location for the Oracle inventory file on UNIX computers, you can add the following line to the cs.properties file:
clone_command_line= -invptrloc /private/oracle/oraInst.loc
Note:
To specify multiple arguments, separate each argument with a space.
If desired, locate the log file after OUI starts and records the cloning actions in the cloneActions<timestamp>.log file:
For Windows-based systems, this log file is normally located in the following directory:
c:\Program Files\Oracle\Inventory\logs
For Linux-based and UNIX-based systems, this log file is normally located in the following directory:
<inventory location>/logs
To configure the connection information for the new database, run the Net Configuration Assistant:
On Windows-based systems, select Start > Programs > Oracle - HOME_NAME > Configuration and Migration Tools > Net Configuration Assistant.
On Linux-based and UNIX-based systems, set the ORACLE_HOME variable and run $ORACLE_HOME/bin/netca.
To create a new database for the newly cloned Oracle home, run the Oracle Database Configuration Assistant:
On Windows-based systems, select Start > Programs > Oracle - HOME_NAME > Configuration and Migration Tools > Database Configuration Assistant.
On Linux-based and UNIX-based systems, run $ORACLE_HOME/bin/dbca.
After cloning, you can view the status of the clone operation by navigating to the $ORACLE_HOME/clone/logs directory and reviewing the *.log and *.err files. For more information, see the section "Locating and Viewing Log Files".
This section explains how to create an Oracle Real Application Clusters (RAC) environment by using Oracle cloning. The following topics explain how to use cloning for both UNIX and Linux system environments, as well as Windows system environments:
Creating Oracle RAC Environments on UNIX and Linux System-Based Environments
Creating Oracle RAC Environments on Windows System-Based Environments
Before proceeding, note the following advisory information when cloning the Oracle Database with Oracle RAC:
The order of the nodes specified should always be the same on all hosts.
Oracle Clusterware should be installed on the cluster nodes before starting an Oracle RAC installation.
For a shared home, you also need to provide a value for the -cfs parameter on the command line.
This section explains how to clone an Oracle RAC environment by using Oracle cloning as described in the following procedures:
Cloning Oracle Clusterware on UNIX and Linux System-Based Environments
Cloning Oracle RAC Software on UNIX and Linux System-Based Environments
Complete the following steps to clone Oracle Clusterware on UNIX and Linux systems:
Skip this step if you have a shared Oracle Clusterware home. If you do not have a shared Oracle Clusterware home, tar the Oracle Clusterware home from an existing node and copy it to the new node. Use $CRS_HOME
as the destination Oracle Clusterware home on the new node.
Note:
For more information on archiving, see "Source Preparation Phase".
Unarchive the home on the new nodes. In the case of shared homes, unarchive the home only once on the nodes.
Note:
For more information on unarchiving, see "Cloning Phase".
If you do not have a shared Oracle Clusterware home, navigate to the $ORACLE_HOME/clone/bin directory on the new node and run the following command, where Oracle_home_name is the name of the Oracle home, new_node is the name of the new node, new_node-priv is the private interconnect protocol address of the new node, new_node-vip is the virtual interconnect protocol address of the new node, and central_inventory_location is the location of the Oracle Central Inventory:
perl clone.pl ORACLE_HOME=<$CRS_HOME> ORACLE_HOME_NAME=Oracle_home_name '-On_storageTypeVDSK=2' '-On_storageTypeOCR=2' '-O"sl_tableList={new_node:new_node-priv:new_node-vip}"' '-O-noConfig' '-O"INVENTORY_LOCATION=central_inventory_location"'
If you have a shared Oracle Clusterware home, append the -cfs option to the command example in this step and provide a complete path location for the cluster file system. Ensure that n_storageTypeOCR and n_storageTypeVDSK have been set to 2 for redundant storage, or to 1 for non-redundant storage. In the non-redundant case, the mirror locations must also be specified.
On the new node, go to the directory that contains the central Oracle inventory. Run the orainstRoot.sh script to populate the /etc/oraInst.loc file with information about the Central Inventory location.
On the new node, go to the $CRS_HOME directory and run ./root.sh. This starts the Oracle Clusterware on the new node.
Determine the remote port to use in the next step by running the following command from the $CRS_HOME/opmn/conf
directory:
cat ons.config | grep remoteport
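The remoteport value can also be captured into a variable for use in the racgons command in the next step. The sketch below parses a mock ons.config written to a temporary file; the contents are illustrative, though the key=value layout matches the format the grep above relies on.

```shell
# Sketch only: pull the remote port out of a mock ons.config.
set -e
CONF=$(mktemp)     # stands in for $CRS_HOME/opmn/conf/ons.config
printf 'localport=6100\nremoteport=6200\nloglevel=3\n' > "$CONF"
REMOTE_PORT=$(grep '^remoteport=' "$CONF" | cut -d= -f2)
echo "$REMOTE_PORT" > /tmp/remoteport.txt
# The captured value would then be used as: ./racgons add_config new_node:$REMOTE_PORT
cat /tmp/remoteport.txt
```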
On the new node, run the following command from the $CRS_HOME/bin
directory, where racgons
is the Oracle RAC Notification Service Utility, new_node
is the name of the new node, and remote_port
is the value from the output of the previous step:
./racgons add_config new_node:<Remote_Port>
Execute the following command to get the interconnect information. You can use this information in the next step.
$CRS_HOME/bin/oifcfg iflist -p
Execute the oifcfg command as follows:
oifcfg setif -global <interface_name>/<subnet>:public <interface_name>/<subnet>:cluster_interconnect [<interface_name>/<subnet>:public <interface_name>/<subnet>:cluster_interconnect ...]
Note:
Oracle Clusterware cloning can only be performed in silent mode.
Complete the following steps to clone Oracle Database with RAC software on UNIX and Linux systems:
If you do not have a shared Oracle Database home, tar the Oracle RAC home from the existing node and copy it to the new node. Assume that the location of the destination Oracle RAC home on the new node is $ORACLE_HOME. Otherwise, skip this step.
Note:
For more information on archiving, see "Source Preparation Phase".
Unarchive the home on the new nodes. In the case of shared homes, unarchive the home only once on the nodes.
Note:
For more information on unarchiving, see "Cloning Phase".
On the new nodes, go to the $ORACLE_HOME/clone/bin directory and run the following command, where new_node2 and new_node3 are the names of the new nodes, and Oracle_home_name is the name of the Oracle home:
perl clone.pl ORACLE_HOME=<Path to the Oracle_Home being cloned> ORACLE_HOME_NAME=<Oracle_Home_Name for the Oracle_Home being cloned> '-O"CLUSTER_NODES={new_node2,new_node3}"' '-O"LOCAL_NODE=new_node2"'
If you have a shared Oracle Database home, append the -cfs
option to the command example in this step and provide a complete path location for the cluster file system.
Note:
Set LOCAL_NODE to the node on which you run the clone command.
On the new node, go to the $ORACLE_HOME directory and run the following command:
./root.sh
On the new node, run the Net Configuration Assistant (NETCA) to add a listener.
From the node that you cloned, run the Database Configuration Assistant (DBCA) to add the new instance.
This section explains how to clone an Oracle RAC environment by using Oracle cloning as described in the following procedures:
Cloning Oracle Clusterware on Windows System-Based Environments
Cloning Oracle RAC Software on Windows System-Based Environments
Complete the following steps to clone Oracle Clusterware on Windows system computers:
Skip this step if you have a shared Oracle Clusterware home. If you do not have a shared Oracle Clusterware home, zip the Oracle Clusterware home from the existing node and copy it to the new node. Unzip the home on the new node in the equivalent directory structure as the directory structure in which the Oracle Clusterware home resided on the existing node. For example, assume that the location of the destination Oracle Clusterware home on the new node is %CRS_HOME%.
Note:
For more information on zipping and unzipping, see "Source Preparation Phase" and "Cloning Phase".
If you do not have a shared Oracle Clusterware home, navigate to the %CRS_HOME%\clone\bin directory on the new node and run the following command, where Oracle_home_name is the name of the Oracle home, new_node is the name of the new node, new_node-priv is the private interconnect protocol address of the new node, new_node-vip is the virtual interconnect protocol address of the new node, and central_inventory_location is the location of the Oracle Central Inventory:
perl clone.pl ORACLE_HOME=<CRS_HOME> ORACLE_HOME_NAME=<CRS_HOME_NAME> '-On_storageTypeVDSK=2' '-On_storageTypeOCR=2' '-O"sl_tableList={node2:node2-priv:node2-vip, node3:node3-priv:node3-vip}"' '-O"ret_PrivIntrList=<private interconnect list>"' '-O"sl_OHPartitionsAndSpace_valueFromDlg={partition and space information}"' '-O-noConfig'
If you have a shared Oracle Clusterware home, append the -cfs option to the command example in this step and provide a complete path location for the cluster file system. Ensure that n_storageTypeOCR and n_storageTypeVDSK have been set to 2 for redundant storage, or to 1 for non-redundant storage. In the non-redundant case, the mirror locations must also be specified. On the other nodes, execute the same command by passing the additional argument PERFORM_PARTITION_TASKS=FALSE:
perl clone.pl ORACLE_HOME=<CRS_HOME> ORACLE_HOME_NAME=<CRS_HOME_NAME> '-On_storageTypeVDSK=2' '-On_storageTypeOCR=2' '-O"sl_tableList={node2:node2-priv:node2-vip, node3:node3-priv:node3-vip}"' '-O"ret_PrivIntrList=<private interconnect list>"' '-O"sl_OHPartitionsAndSpace_valueFromDlg={partition and space information}"' '-O-noConfig' '-OPERFORM_PARTITION_TASKS=FALSE'
From the %CRS_HOME%\cfgtoollogs
directory on the existing node, run the following command:
<CRS_HOME>\cfgtoollogs\cfgToolAllCommands
This instantiates the Virtual Protocol Configuration Assistant (VIPCA), the Oracle RAC Notification Service Utility (racgons
), Oracle Clusterware Setup (crssetup), and oifcfg.
Note:
Oracle Clusterware cloning can only be performed in silent mode.
Complete the following steps to clone Oracle Database with RAC software on Windows system computers:
Skip this step if you have a shared Oracle Database home. If you do not have a shared Oracle Database home, zip the Oracle Database home with Oracle RAC on the existing node and copy it to the new node. Unzip the Oracle Database with Oracle RAC home on the new node in the same directory in which the Oracle Database home with Oracle RAC resided on the existing node. For example, assume that the location of the destination Oracle RAC home on the new node is %ORACLE_HOME%.
Note:
For more information on zipping and unzipping, see "Source Preparation Phase" and "Cloning Phase".
On the new node, go to the %ORACLE_HOME%\clone\bin directory and run the following command, where Oracle_Home is the Oracle Database home, Oracle_Home_Name is the name of the Oracle Database home, existing_node is the name of the existing node, and new_node is the name of the new node:
perl clone.pl ORACLE_HOME=Oracle_Home ORACLE_HOME_NAME=Oracle_Home_Name '-O"CLUSTER_NODES={existing_node,new_node}"' '-OLOCAL_NODE=new_node' '-O-noConfig'
If you have a shared Oracle Database home with Oracle RAC, append the -O-cfs
option to the command example in this step and provide a complete path location for the cluster file system. Repeat this step for all nodes.
On the new node, run NETCA to add a listener.
From the node that you cloned, run DBCA to add the database instance to the new node.
This section explains how to add nodes to existing Oracle RAC environments by using Oracle cloning. These following topics explain how to use cloning for both UNIX and Linux system environments, as well as Windows system environments:
Cloning Oracle RAC Environments on UNIX and Linux System-Based Environments
Cloning Oracle RAC Environments on Windows System-Based Environments
These procedures assume that you have successfully installed and configured an Oracle RAC environment to which you want to add nodes and instances. To add nodes to a UNIX or Linux system Oracle RAC environment using cloning, extend the Oracle Clusterware configuration, extend the Oracle Database software with RAC, and then add the listeners and instances by running the Oracle assistants as described in the following procedures:
Cloning Oracle Clusterware on UNIX and Linux System-Based Environments
Cloning Oracle RAC Software on UNIX and Linux System-Based Environments
Complete the following steps to clone Oracle Clusterware on UNIX and Linux systems:
Skip this step if you have a shared Oracle Clusterware home. If you do not have a shared Oracle Clusterware home, tar the Oracle Clusterware home from an existing node and copy it to the new node. Use $CRS_HOME
as the destination Oracle Clusterware home on the new node.
Note:
For more information on archiving and unarchiving, see "Source Preparation Phase" and "Cloning Phase".
If you do not have a shared Oracle Clusterware home, navigate to the $ORACLE_HOME/clone/bin directory on the new node and run the following command, where Oracle_home_name is the name of the Oracle home, new_node is the name of the new node, new_node-priv is the private interconnect protocol address of the new node, new_node-vip is the virtual interconnect protocol address of the new node, and central_inventory_location is the location of the Oracle Central Inventory:
perl clone.pl ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=Oracle_home_name '-O"sl_tableList={new_node:new_node-priv:new_node-vip}"' '-O-noConfig' '-O"INVENTORY_LOCATION=central_inventory_location"'
If you have a shared Oracle Clusterware home, append the -cfs
option to the command example in this step and provide a complete path location for the cluster file system.
Note:
Provide a value only for the sl_tableList variable. The clone.pl script takes all other variable settings from the archived Oracle Clusterware home. This is only true, however, if the source of the archived home was an existing node of the cluster that you are extending.
If you use any other Oracle RAC environment as your cloning source, that is, if you clone from a node in a cluster other than the one that you are extending, you must provide values for all of the arguments, including the Oracle Cluster Registry and voting disk location arguments. You must do this because the value for sl_tableList is used as shown in the command example in this step. Also note that you should specify values only for the new node in the sl_tableList options.
On the new node, go to the directory that contains the central Oracle inventory. Run the orainstRoot.sh
script to populate the /etc/oraInst.loc
file with information about the Central Inventory location.
Run the following command on the existing node, where new_node
is the name of the new node, new_node-priv
is the private interconnect protocol address for the new node, and new_node-vip
is the virtual interconnect protocol address for the new node:
$ORACLE_HOME/oui/bin/addNode.sh -silent "CLUSTER_NEW_NODES={new_node}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={new_node-priv}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={new_node-vip}" -noCopy
Note:
Because the clone.pl script has already been run on the new node, this step only updates the inventories on the nodes and instantiates scripts on the local node.
On the existing node, run the rootaddnode.sh script from the $ORACLE_HOME/install directory.
On the new node, go to the $ORACLE_HOME
directory and run the ./root.sh
script to start the Oracle Clusterware on the new node.
Determine the remote port to use in the next step by running the following command from the $CRS_HOME/opmn/conf
directory:
cat ons.config | grep remoteport
From the $CRS_HOME/bin
directory on an existing node, run the following command, where racgons
is the Oracle RAC Notification Service Utility, new_node
is the name of the new node, and remote_port
is the value from the output of the previous step:
./racgons add_config new_node:remote_port
Complete the following steps to clone Oracle Database with RAC software on UNIX and Linux systems:
If you do not have a shared Oracle Database home, tar the Oracle RAC home from the existing node and copy it to the new node. Assume that the location of the destination Oracle RAC home on the new node is $ORACLE_HOME. Otherwise, skip this step.
Note:
For more information on archiving and unarchiving, see "Source Preparation Phase" and "Cloning Phase".
If you do not have a shared Oracle Database home, navigate to the $ORACLE_HOME/clone/bin directory on the new node and run the following command, where existing_node is the name of the node that you are cloning, new_node2 and new_node3 are the names of the new nodes, and Oracle_home_name is the name of the Oracle home:
perl clone.pl '-O"CLUSTER_NODES={existing_node,new_node2,new_node3}"' '-O"LOCAL_NODE=new_node2"' ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=Oracle_home_name '-O-noConfig'
If you have a shared Oracle Database home, append the -cfs
option to the command example in this step and provide a complete path location for the cluster file system.
Run the following command on the existing node from the $ORACLE_HOME/oui/bin
directory, where existing_node
is the name of the original node that you are cloning, and new_node2
and new_node3
are the names of the new nodes:
./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={existing_node,new_node2,new_node3}"
On the new node, go to the $ORACLE_HOME
directory and run the following command:
./root.sh
On the new node, run the Net Configuration Assistant (NETCA) to add a listener.
From the node that you cloned, run the Database Configuration Assistant (DBCA) to add the new instance.
These procedures assume that you have successfully installed and configured an Oracle RAC environment to which you want to add nodes and instances. To add nodes to a Windows system Oracle RAC environment using cloning, extend the Oracle Clusterware configuration, extend the Oracle Database software with RAC, and then add the listeners and instances by running the Oracle assistants as described in the following procedures:
Cloning Oracle Clusterware on Windows System-Based Environments
Cloning Oracle RAC Software on Windows System-Based Environments
Complete the following steps to clone Oracle Clusterware on Windows system computers:
Skip this step if you have a shared Oracle Clusterware home. If you do not have a shared Oracle Clusterware home, zip the Oracle Clusterware home from the existing node and copy it to the new node. Unzip the home on the new node in the equivalent directory structure as the directory structure in which the Oracle Clusterware home resided on the existing node. For example, assume that the location of the destination Oracle Clusterware home on the new node is %CRS_HOME%.
Note:
For more information on zipping and unzipping, see "Source Preparation Phase" and "Cloning Phase".
On the new node, go to the %CRS_HOME%\clone\bin
directory and run the following command, where CRS_HOME
is the location of the Oracle Clusterware home, CRS_HOME_NAME
is the name of the Oracle Clusterware home, and where new_node
, new_node-priv
and new_node-vip
are the name of the new node, the private interconnect protocol address of the new node, and the virtual interconnect protocol address of the new node, respectively:
perl clone.pl ORACLE_HOME=CRS_HOME ORACLE_HOME_NAME=CRS_HOME_NAME '-O"sl_tableList={new_node:new_node-priv:new_node-vip}"' '-O-noConfig' '-OPERFORM_PARTITION_TASKS=FALSE'
If you have a shared Oracle Clusterware home, append the -O-cfs
option to the command example in this step and provide a complete path location for the cluster file system.
Note:
Provide a value only for the sl_tableList variable. The clone.pl script takes all other variable settings from the zipped Oracle Clusterware home. This is only true, however, if the source of the zipped home was an existing node of the cluster that you are extending.
If you use any other Oracle RAC environment as your cloning source; that is, if you clone from a node in a cluster other than the one that you are extending, you must provide values for all of the arguments. This includes values for the Oracle Cluster Registry (OCR) and voting disk location arguments. You must do this because the value for sl_tableList
is used as shown in the command example in this step. You must also specify the OCR and voting disk locations using the sl_OHPartitionsAndSpace_valueFromDlg
variable as well as provide values for the PERFORM_PARTITION_TASKS
argument. Only specify values for the new node for the sl_tableList
options.
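To make the shape of this command concrete, here is a minimal Python sketch that assembles the argument list, including the sl_tableList value for one new node. The helper name and the example paths are illustrative assumptions, not Oracle tools; the quoting mirrors the documented command example:

```python
def crs_clone_command(crs_home, home_name, node, node_priv, node_vip):
    """Assemble the clone.pl arguments for extending Oracle Clusterware
    to one new node (illustrative helper; quoting follows the documented
    command example)."""
    table_list = f"{{{node}:{node_priv}:{node_vip}}}"
    return [
        "perl", "clone.pl",
        f"ORACLE_HOME={crs_home}",
        f"ORACLE_HOME_NAME={home_name}",
        f'-O"sl_tableList={table_list}"',
        "-O-noConfig",
        "-OPERFORM_PARTITION_TASKS=FALSE",
    ]

# Hypothetical home location and names, for illustration only.
cmd = crs_clone_command(r"C:\oracle\product\11.1.0\crs", "OraCrs11g_home",
                        "node3", "node3-priv", "node3-vip")
print(" ".join(cmd))
```

Building the command as a list keeps the node triplet and the -O flags clearly separated before they are joined into the final command line.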
Run the following command on the existing node, where new_node, new_node-priv, and new_node-vip are the name of the new node, the private interconnect protocol address of the new node, and the virtual interconnect protocol address of the new node, respectively:
%ORACLE_HOME%\oui\bin\addNode.bat -silent "CLUSTER_NEW_NODES={new_node}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={new_node-priv}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={new_node-vip}" -noCopy -noRemoteActions
Note:
Because you have already run the clone.pl script on the new node, this step only updates the inventories on the nodes and instantiates scripts on the local node.
From the %CRS_HOME%\install directory on the existing node, run the crssetup.add.bat script to instantiate the Virtual Internet Protocol Configuration Assistant (VIPCA) and the Oracle RAC Notification Service Utility (racgons).
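The silent addNode.bat invocation above can be sketched the same way. The following hypothetical Python helper builds just the argument list from the example; addNode.bat itself lives under %ORACLE_HOME%\oui\bin on the existing node:

```python
def add_node_args(new_node, new_node_priv, new_node_vip):
    """Build the silent addNode.bat arguments used to update the node
    inventories (illustrative helper; flags taken from the documented
    command example)."""
    return [
        "-silent",
        f"CLUSTER_NEW_NODES={{{new_node}}}",
        f"CLUSTER_NEW_PRIVATE_NODE_NAMES={{{new_node_priv}}}",
        f"CLUSTER_NEW_VIRTUAL_HOSTNAMES={{{new_node_vip}}}",
        "-noCopy",
        "-noRemoteActions",
    ]

args = add_node_args("node3", "node3-priv", "node3-vip")
```

The -noCopy and -noRemoteActions flags reflect that the home bits were already placed on the new node by the clone step.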
Complete the following steps to clone Oracle Database with RAC software on Windows system computers:
Skip this step if you have a shared Oracle Database home. If you do not have a shared Oracle Database home, zip the Oracle Database home with Oracle RAC on the existing node and copy it to the new node. Unzip the Oracle Database with Oracle RAC home on the new node in the same directory in which the Oracle Database home with Oracle RAC resided on the existing node. For example, assume that the location of the destination Oracle RAC home on the new node is %ORACLE_HOME%.
Note:
For more information on zipping and unzipping, see "Source Preparation Phase" and "Cloning Phase".
On the new node, go to the %ORACLE_HOME%\clone\bin directory and run the following command, where Oracle_Home is the Oracle Database home, Oracle_Home_Name is the name of the Oracle Database home, existing_node is the name of the existing node, and new_node is the name of the new node:
perl clone.pl ORACLE_HOME=Oracle_Home ORACLE_HOME_NAME=Oracle_Home_Name '-O"CLUSTER_NODES={existing_node,new_node}"' '-OLOCAL_NODE=new_node' '-O-noConfig'
If you have a shared Oracle Database home with Oracle RAC, append the -O-cfs option to the command example in this step and provide a complete path location for the cluster file system.
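The database-home variant of the command can be sketched the same way, including the optional shared-home case. Note that the helper and the `-O-cfs=path` spelling are illustrative assumptions; the guide only says to append -O-cfs and provide a complete cluster file system path:

```python
def db_clone_command(oracle_home, home_name, existing_node, new_node,
                     shared_cfs_path=None):
    """Assemble the clone.pl arguments for an Oracle RAC database home
    (illustrative helper)."""
    cmd = [
        "perl", "clone.pl",
        f"ORACLE_HOME={oracle_home}",
        f"ORACLE_HOME_NAME={home_name}",
        f'-O"CLUSTER_NODES={{{existing_node},{new_node}}}"',
        f"-OLOCAL_NODE={new_node}",
        "-O-noConfig",
    ]
    if shared_cfs_path is not None:
        # Shared home: append -O-cfs with the cluster file system path.
        # The exact spelling of the path argument is an assumption here.
        cmd.append(f"-O-cfs={shared_cfs_path}")
    return cmd

local_cmd = db_clone_command(r"C:\oracle\product\11.1.0\db_1",
                             "OraDb11g_home1", "node1", "node2")
```

Unlike the Clusterware command, this one names both the existing and new nodes in CLUSTER_NODES and marks the new node as the local node.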
On the existing node, from the RAC_HOME\oui\bin directory, run the following command, where Oracle_Home is the Oracle Database home with Oracle RAC, existing_node is the name of the existing node, and new_node is the name of the new node:
setup.exe -updateNodeList ORACLE_HOME=Oracle_Home "CLUSTER_NODES={existing_node,new_node}" LOCAL_NODE=existing_node
On the new node, run NETCA to add a listener.
From the node that you cloned, run DBCA to add the database instance to the new node.
This section describes the clone.pl script variables and their definitions for UNIX and Linux systems, as well as for Windows systems.
Table 6-2 describes the variables that can be passed to clone.pl with the -O option for UNIX and Linux systems.
Table 6-2 UNIX and Linux System-Based Variables for clone.pl with the -O Option
| Variable | Datatype | Description |
|---|---|---|
| | Integer | Set to |
| | Integer | Set to |
| | String | Contains user-entered cluster name information; allow a maximum of 15 characters. |
| | String | Not required in the Oracle Cluster Registry (OCR) dialog. |
| | String | Passes the cluster configuration file information, which is the same file as that specified during installation. You can use this file instead of node1 node1-priv node1-vip node2 node2-priv node2-vip. Note that if you are cloning from an existing installation, you should use |
| | String List | Return value from the Private Interconnect Enforcement table. This variable has values in the format. For example: {"eth0:10.87.24.0:2","eth1:140.87.24.0:1","eth3:140.74.30.0:3"}. You can run the |
| | String List | Set the value of this variable to be equal to the information in the cluster configuration information table. This file contains a comma-separated list of values. The first field designates the public node name, the second field designates the private node name, and the third field designates the virtual host name. Only OUI uses the fourth and fifth fields, and they should default to {"node1:node1-priv:node1-vip:N:Y","node2:node2-priv:node2-vip:N:Y"}. |
| | String | Set the value of this variable to be the location of the voting disk. For example: /oradbshare/oradata/vdisk |
| | String | Set the value of this variable to be the location of the first additional voting disk. You must set this variable if you choose a value of /oradbshare/oradata/vdiskmirror1 |
| | String | Set the value of this variable to the OCR location. Oracle places this value in the /oradbshare/oradata/ocr |
| | String | Set the value of this variable to the value for the OCR mirror location. Oracle places this value in the /oradbshare/oradata/ocrmirror |
| | String | Set the value of this variable to be the location of the second additional voting disk. You must set this variable if you choose a value of /oradbshare/oradata/vdiskmirror2 |
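The five-field sl_tableList entry format described in Table 6-2 can be checked with a short parser. This Python sketch is a hypothetical helper that names the fields per the table's description (public node name, private node name, virtual host name, then two fields used only by OUI):

```python
def parse_table_entry(entry):
    """Split one sl_tableList entry of the form "public:private:vip:N:Y"
    into named fields; the fourth and fifth fields are used only by OUI."""
    fields = entry.strip('"').split(":")
    if len(fields) != 5:
        raise ValueError(f"expected 5 colon-separated fields, got: {entry!r}")
    public, private, vip, field4, field5 = fields
    return {"public": public, "private": private, "vip": vip,
            "field4": field4, "field5": field5}

row = parse_table_entry('"node1:node1-priv:node1-vip:N:Y"')
```

Validating entries this way before passing them to clone.pl catches a malformed node triplet early, instead of letting the clone operation fail mid-run.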
Table 6-3 describes the variables that can be passed to clone.pl with the -O option for Windows system environments.
Table 6-3 Windows System-Based Variables for clone.pl with the -O Option
| Variable | Datatype | Description |
|---|---|---|
| | String List | Represents the cluster node names you selected for installation. For example, if you selected CLUSTER_NODES = {"node1"} |
| | Boolean | Only set this variable when performing a silent installation with a response file. The valid values are |
| | String | Set the value for this variable to be the name of the cluster that you are creating from a cloning operation, using a maximum of 15 characters. Valid characters for the cluster name can be any combination of lowercase and uppercase alphabetic characters |
| | String List | Set the value of this variable to be equal to the information in the cluster configuration information table. This file contains a comma-separated list of values. The first field designates the public node name, the second field designates the private node name, and the third field designates the virtual host name. OUI only uses the fourth and fifth fields, and they should default to {"node1:node1-priv:node1-vip:N:Y","node2:node2-priv:node2-vip:N:Y"} |
| | String List | Set the value for this variable using the following format. For example, to configure the OCR and voting disk on raw devices and to not use a cluster file system for either data or software, set sl_OHPartitionsAndSpace_valueFromDlg = {Disk,Partition,partition size,0,N/A,1,Disk,Partition,partition size,0,N/A,2,.....} |