Oracle® Database 2 Day + Real Application Clusters Guide 11g Release 1 (11.1), Part Number B28252-01
This chapter describes how to add nodes and instances in Oracle Real Application Clusters (Oracle RAC) environments. You can use these methods when configuring a new Oracle RAC cluster, or when scaling up an existing Oracle RAC cluster.
This chapter includes the following sections:
Note:
For this chapter, it is very important that you perform each step in the order shown.
See Also:
Oracle Real Application Clusters Administration and Deployment Guide for more information about adding and removing nodes from your cluster database
To prepare the new node prior to installing the Oracle software, see Chapter 2, "Preparing Your Cluster".
It is critical that you follow the configuration steps in order for the following procedures to work. These steps include, but are not limited to, the following:
Adding the public and private node names for the new node to the /etc/hosts file on the existing nodes, docrac1 and docrac2
Verifying that the new node can be accessed (using the ping command) from the existing nodes
Running the following command on either docrac1 or docrac2 to verify that the new node has been properly configured:
cluvfy stage -pre crsinst -n docrac3
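For reference, the following is a minimal sketch of the kind of /etc/hosts entries and connectivity checks this preparation involves. The IP addresses and the docrac3-priv and docrac3-vip names are illustrative assumptions only; use the addresses and naming convention already in place for docrac1 and docrac2.
# Example /etc/hosts entries added on docrac1 and docrac2 (addresses are examples only)
192.0.2.103   docrac3        # public name of the new node
10.0.0.103    docrac3-priv   # private interconnect name (assumed naming)
192.0.2.113   docrac3-vip    # virtual IP name (assumed naming)
# Verify that the new node responds from an existing node
[docrac1:oracle]$ ping -c 3 docrac3
[docrac1:oracle]$ ping -c 3 docrac3-priv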
Now that the new node has been configured to support Oracle Clusterware, you use Oracle Universal Installer (OUI) to add a CRS home to the node being added to your Oracle RAC cluster. This section assumes that you are adding a node named docrac3 and that you have already successfully installed Oracle Clusterware on docrac1 in a nonshared home, where CRS_home represents the successfully installed Oracle Clusterware home. Adding a new node to an Oracle RAC cluster is sometimes referred to as cloning.
To extend the Oracle Clusterware installation to include the new node:
1. Verify that the ORACLE_HOME environment variable on docrac1 points to the successfully installed CRS home on that node.
2. Go to the CRS_home/oui/bin directory and run the addNode.sh script:
cd /crs/oui/bin
./addNode.sh
OUI starts and first displays the Welcome window.
3. Click Next.
The Specify Cluster Nodes to Add to Installation window appears.
4. Select the node or nodes that you want to add, for example, docrac3. Make sure the public, private, and VIP names are configured correctly for the node you are adding. Click Next.
5. Verify the entries that OUI displays on the Summary window and click Next.
The Cluster Node Addition Progress window appears. During the installation process, you will be prompted to run scripts to complete the configuration.
6. Run the rootaddNode.sh script from the CRS_home/install/ directory on docrac1 as the root user when prompted to do so. For example:
[docrac1:oracle]$ su root
[docrac1:root]# cd /crs/install
[docrac1:root]# ./rootaddNode.sh
This script adds the node applications of the new node to the Oracle Cluster Registry (OCR) configuration.
7. Run the orainstRoot.sh script on the node docrac3 if OUI prompts you to do so. When finished, click OK in the OUI window to continue with the installation.
Another window appears, prompting you to run the root.sh script.
8. Run the CRS_home/root.sh script as the root user on the node docrac3 to start Oracle Clusterware on the new node:
[docrac3:oracle]$ su root
[docrac3:root]# cd /crs
[docrac3:root]# ./root.sh
9. Return to the OUI window after the script runs successfully, then click OK.
OUI displays the End of Installation window.
10. Exit the installer.
11. Obtain the Oracle Notification Services (ONS) port identifier used by the new node, which you need to know for the next step, by viewing the ons.config file in the CRS_home/opmn/conf directory on the docrac1 node, as shown in the following example:
[docrac1:oracle]$ cd /crs/opmn/conf
[docrac1:oracle]$ cat ons.config
(A sample ons.config file is shown after this procedure.) After you locate the ONS port identifier for the new node, you must make sure that the ONS on docrac1 can communicate with the ONS on the new node, docrac3.
12. Add the new node's ONS configuration information to the shared OCR. From the CRS_home/bin directory on the node docrac1, run the ONS configuration utility as shown in the following example, where remote_port is the port identifier from Step 11 and docrac3 is the name of the node that you are adding:
[docrac1:oracle]$ ./racgons add_config docrac3:remote_port
You should now have Oracle Clusterware running on the new node. To verify the installation of Oracle Clusterware on the new node, you can run the following command as the root user on the newly configured node, docrac3:
[docrac3:root]# /opt/oracle/crs/bin/cluvfy stage -post crsinst -n docrac3 -verbose
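For reference, a typical ons.config file (the file you viewed in Step 11) resembles the following sketch. The port numbers are examples only and will differ in your installation; the remoteport value is the identifier you supply to racgons in Step 12.
# Example contents of CRS_home/opmn/conf/ons.config (values are illustrative)
localport=6113
remoteport=6200
loglevel=3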
See Also:
Oracle Real Application Clusters Administration and Deployment Guide for more information about adding and removing nodes from your cluster database
To extend an existing Oracle RAC database to a new node, you must configure the shared storage for the new database instances that will be created on the new node. You must configure access to the same shared storage that is already used by the existing database instances in the cluster. For example, the sales cluster database in this guide uses Automatic Storage Management (ASM) for the database shared storage, so you must configure ASM on the node being added to the cluster.
Because you installed ASM in its own home directory, you must configure an ASM home on the new node using OUI. The procedure for adding an ASM home to the new node is very similar to the procedure you just completed for extending Oracle Clusterware to the new node.
Note:
If the ASM home directory is the same as the Oracle home directory in your installation, then you do not need to complete the steps in this section.
To extend the ASM installation to include the new node:
1. Ensure that you have successfully installed the ASM software on at least one node in your cluster environment. In the following steps, ASM_home refers to the location of the successfully installed ASM software.
2. Go to the ASM_home/oui/bin directory on docrac1 and run the addNode.sh script (see the example following this procedure).
3. When OUI displays the Node Selection window, select the node to be added (docrac3), and then click Next.
4. Verify the entries that OUI displays on the Summary window, and then click Next.
5. Run the root.sh script on the new node, docrac3, from the ASM home directory on that node when OUI prompts you to do so.
You now have a copy of the ASM software on the new node.
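The commands for Step 2 are the same as those used when extending the Oracle Clusterware home, only run from the ASM home. In this sketch, /opt/oracle/asm is an assumed example location for ASM_home; substitute your own path:
# Run the add-node script from the ASM home (path shown is an assumption)
[docrac1:oracle]$ cd /opt/oracle/asm/oui/bin
[docrac1:oracle]$ ./addNode.sh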
See Also:
Oracle Real Application Clusters Administration and Deployment Guide for more information about adding and removing nodes from your cluster database
Now that you have extended the CRS home and ASM home to the new node, you must extend the Oracle home on docrac1 to docrac3. The following steps assume that you have already completed the previous tasks described in this section, and that docrac3 is already a member node of the cluster to which docrac1 belongs.
The procedure for adding an Oracle home to the new node is very similar to the procedure you just completed for extending ASM to the new node.
To extend the Oracle RAC installation to include the new node:
1. Ensure that you have successfully installed the Oracle RAC software on at least one node in your cluster environment. To use these procedures as shown, replace Oracle_home with the location of your installed Oracle home directory.
2. Go to the Oracle_home/oui/bin directory on docrac1 and run the addNode.sh script.
3. When OUI displays the Specify Cluster Nodes to Add to Installation window, select the node to be added (docrac3), and then click Next.
4. Verify the entries that OUI displays in the Cluster Node Addition Summary window, and then click Next.
The Cluster Node Addition Progress window appears.
5. When prompted to do so, run the root.sh script as the root user on the new node, docrac3, from the Oracle home directory on that node.
6. Return to the OUI window and click OK.
The End of Installation window appears.
7. Exit the installer.
After completing these steps, you should have an installed Oracle home on the new node.
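As a quick sanity check, you might confirm that the Oracle software was copied to the new node. This is only a sketch: /opt/oracle/db_1 is an assumed example path, so use your own Oracle_home value.
# Confirm that key Oracle executables now exist on docrac3 (path is an example)
[docrac3:oracle]$ export ORACLE_HOME=/opt/oracle/db_1
[docrac3:oracle]$ ls $ORACLE_HOME/bin/oracle $ORACLE_HOME/bin/srvctl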
See Also:
Oracle Real Application Clusters Administration and Deployment Guide for more information about adding and removing nodes from your cluster database
You can use Enterprise Manager to add an instance to your cluster database. You must first have configured the new node to be a part of the cluster and installed the software on the new node.
To add an instance to the cluster database:
1. From the Cluster Database Home page, click Server.
2. Under the heading Change Database, click Add Instance.
The Add Instance: Cluster Credentials page appears.
3. Enter the host credentials and ASM credentials, then click Next.
The Add Instance: Host page appears.
4. Select the node on which you want to create the new instance, verify that the new instance name is correct, and then click Next.
After the selected host has been validated, the Add Instance: Review page appears.
5. Review the information, then click Submit Job to proceed.
A confirmation page appears.
6. Click View Job to check the status of the submitted job.
The Job Run detail page appears.
7. Click your browser's Refresh button until the job shows a status of Succeeded or Failed.
If the job shows a status of Failed, you can click the name of the step that failed to view the reason for the failure.
8. Click the Database tab to return to the Cluster Database Home page.
The number of instances available in the cluster database is increased by one.
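After the job succeeds, you can also confirm the new instance from the command line. This is a sketch only: sales is the cluster database used in this guide, but the instance name sales3 for the instance on docrac3 is an assumption.
# Check the status of all instances of the sales database, including the new one
[docrac3:oracle]$ srvctl status database -d sales
# Or check just the assumed new instance by name
[docrac3:oracle]$ srvctl status instance -d sales -i sales3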
To delete an instance from the cluster:
1. From the Cluster Database Home page, click Server.
2. On the Server subpage, under the heading Change Database, click Delete Instance.
The Delete Instance: Cluster Credentials page appears.
3. Enter your cluster credentials and ASM credentials, then click Next.
The Delete Instance: Database Instance page appears.
4. Select the instance you want to delete, then click Next.
The Delete Instance: Review page appears.
5. Review the information and, if it is correct, click Submit Job to continue. Otherwise, click Back and correct the information.
A Confirmation page appears.
6. Click View Job to view the status of the instance deletion job.
The Job Run detail page appears.
7. Click your browser's Refresh button until the job shows a status of Succeeded or Failed.
If the job shows a status of Failed, you can click the name of the step that failed to view the reason for the failure.
8. Click the Database tab to return to the Cluster Database Home page.
The number of instances available in the cluster database is reduced by one.
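To confirm from the command line that the instance was removed, you can list the remaining configuration of the database. As before, this is a sketch that assumes the sales cluster database from this guide:
# The deleted instance should no longer appear in the configuration
[docrac1:oracle]$ srvctl config database -d sales
[docrac1:oracle]$ srvctl status database -d sales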