Oracle® Database 2 Day + Data Warehousing Guide 11g Release 1 (11.1), Part Number B28314-01
An Oracle data warehouse is an Oracle Database specifically configured and optimized to handle the size of data and types of queries that are intrinsic to data warehousing. This section discusses how to initially configure your data warehouse environment. It includes the following topics:
In these instructions, you configure an Oracle Database for use as a data warehouse. Subsequently, you install Oracle Warehouse Builder which leverages the Oracle Database and provides graphical user interfaces for designing data management strategies.
To set up a data warehouse system, complete the following steps:
Size and configure your hardware as described in "Preparing the Environment".
Install the Oracle Database software.
Optimize the Database for use as a data warehouse as described in "Setting Up a Database for a Data Warehouse".
Access the Oracle Warehouse Builder software.
Oracle Warehouse Builder is the data integration product that is packaged with the Standard and Enterprise editions of the Oracle Database.
Follow the instructions in "Accessing Oracle Warehouse Builder". Subsequently, you can install a demonstration to assist you in learning how to complete common data warehousing tasks using Warehouse Builder.
The basic components of a data warehouse architecture are the same as for an online transaction processing (OLTP) system. However, due to the sheer size of the data, the individual building blocks must be balanced differently. The starting point for sizing a data warehouse is the throughput that you require from the system. This can be one or both of the following:
The amount of data that is being accessed by queries hitting the system at peak time, in conjunction with the acceptable response time. You may be able to use throughput numbers and experience from an existing application to estimate the required throughput.
The amount of data that is loaded within a window of time.
In general, estimate the highest throughput the system must sustain at any given point in time.
Hardware vendors can recommend balanced configurations for a data warehousing application and can help you with the sizing. Contact your preferred hardware vendor for more details.
A properly sized and balanced hardware configuration is required to maximize data warehouse performance. The following sections discuss some important considerations in achieving this balance:
Central processing units (CPUs) provide the calculation capabilities in a data warehouse. You must have sufficient CPU power to perform the data warehouse operations. Parallel operations are more CPU-intensive than the equivalent serial operation would be.
Use the estimated highest throughput as a guideline for the number of CPUs you need. As a rough estimate, use the following formula:
<number of CPUs> = <maximum throughput in MB/s> / 200
In other words, a CPU can sustain a throughput of up to about 200 MB per second. For example, if a system requires a maximum throughput of 1200 MB per second, then it needs 1200 / 200 = 6 CPUs. A single server with 6 CPUs can service this system; alternatively, a 2-node clustered system could be configured with 3 CPUs in each node.
Memory in a data warehouse is particularly important for processing memory-intensive operations such as large sorts. Access to the data cache is less important in a data warehouse because most queries access vast amounts of data. Memory requirements in a data warehouse are not as critical as those in OLTP applications.
The number of CPUs provides a good guideline for the amount of memory you need. Use the following simplified formula to derive the amount of memory from the number of CPUs you selected:
<amount of memory in GB> = 2 * <number of CPUs>
For example, a system with 6 CPUs needs 2 * 6 = 12 GB of memory. Most standard servers fulfill this requirement.
A common mistake in data warehouse environments is to size the storage based on the maximum capacity needed. Sizing based exclusively on storage requirements will likely create a throughput bottleneck.
Use the throughput you require to determine how many disk arrays you need, and use the storage provider's specifications to find out how much throughput a disk array can sustain. Note that storage providers typically measure throughput in Gbit per second, whereas your initial throughput estimate is in MB per second. An average disk controller has a maximum throughput of 2 Gbit per second, which translates to a sustainable throughput of about (70% * 2 Gbit/s) / 8 = 180 MB/s.
Use the following formula to determine the number of disk arrays you need:
<number of disk arrays> = <throughput in MB/s> / <individual disk array throughput in MB/s>
For example, our system with 1200 MB per second throughput requires at least 1200 / 180 = 7 disk arrays.
Make sure you have enough physical disks to sustain the throughput you require. Ask your disk vendor for the throughput numbers of the disks.
The end-to-end I/O system consists of more components than just the CPUs and disks. A well-balanced I/O system has to provide approximately the same bandwidth across all components in the I/O system. These components include the following:
Host Bus Adapters (HBAs), the connectors between the server and the storage.
Switches, in between the servers and a Storage Area Network (SAN) or Network Attached Storage (NAS).
Ethernet adapters for network connectivity (GigE NIC or Infiniband). In a clustered environment, you need an additional private port for the interconnect between the nodes that you should not include when sizing the system for I/O throughput. The interconnect must be sized separately, taking into account factors such as internode parallel execution.
Wires that connect the individual components.
Each of the components has to be able to provide sufficient I/O bandwidth to ensure a well-balanced I/O system. The initial throughput you estimated and the hardware specifications from the vendors are the basis to determine the quantities of the individual components you need. Use the conversion in the following table to translate the vendors' maximum throughput numbers in bits into sustainable throughput numbers in bytes.
Table 2-1 Throughput Performance Conversion
Component | Bits | Bytes Per Second
---|---|---
HBA | 2 Gbit | 200 MB
16 Port Switch | 8 * 2 Gbit | 1200 MB
Fibre Channel | 2 Gbit | 200 MB
GigE NIC | 1 Gbit | 80 MB
Inf-2 Gbit | 2 Gbit | 160 MB
In addition to having sufficient components to ensure sufficient I/O bandwidth, the layout of the data on disk is also key to success or failure. Even if you configured the system for sufficient throughput across all disk arrays, you still will not achieve the required throughput if all the data a query retrieves resides on a single disk, because that one disk becomes the bottleneck. To avoid such a situation, stripe data across as many disks as possible, ideally all disks. A stripe size of 256 KB to 1 MB provides a good balance between multiblock read operations and spreading data across multiple disks.
About Automatic Storage Management (ASM)
ASM is a component of Oracle Database that you can use to stripe data across disks in a disk group. ASM ensures the data is balanced across all disks. Disks can be added or removed while ASM is operational, and ASM will automatically rebalance the storage across all available disks. ASM can also be used to mirror data on the file system, to avoid loss of data in case of disk failure. The default stripe size for ASM is 1 MB. You can lower the stripe size to 128 KB.
You can perform storage operations without ASM, but this increases the chances of making a mistake. Thus, Oracle recommends you use ASM whenever possible.
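The following is a minimal sketch of creating an ASM disk group with mirroring, assuming four raw devices; the disk group name and device paths are hypothetical and must be replaced with your own storage:

-- Run on the ASM instance while connected with SYSASM privileges.
-- The disk group name and device paths below are examples only.
CREATE DISKGROUP dwh_data NORMAL REDUNDANCY
  DISK '/dev/raw/raw1',
       '/dev/raw/raw2',
       '/dev/raw/raw3',
       '/dev/raw/raw4';

NORMAL REDUNDANCY mirrors the data, and ASM stripes database files across all four disks using its default 1 MB stripe size.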
Before you install Oracle Database, you should verify your setup at the hardware and operating-system level. The key point to understand is that if the operating system cannot deliver the performance and throughput you need, Oracle Database will never be able to perform according to your requirements. Two tools for verifying throughput are the dd utility and Orion, an Oracle-supplied tool.
A very basic way to validate operating system throughput on UNIX or Linux systems is to use the dd utility. The dd utility reads data blocks directly from disk and, because there is almost no overhead involved, its output provides a reliable calibration. Oracle Database will reach a maximum throughput of approximately 90 percent of what the dd utility can achieve.
To use the dd utility:
The most important options for dd are the following:
bs=BYTES: read BYTES bytes at a time; use 1 MB
count=BLOCKS: copy only BLOCKS input blocks
if=FILE: read from FILE; set to your device
of=FILE: write to FILE; set to /dev/null to evaluate read performance (writing to disk would erase all existing data!)
skip=BLOCKS: skip BLOCKS BYTES-sized blocks at the start of input
To estimate the maximum throughput Oracle Database will be able to achieve, you can mimic the workload of a typical data warehouse application, which consists of large sequential disk reads issued at random offsets.
The following dd commands perform sequential disk reads at random offsets across two devices, reading a total of 2 GB. The throughput is 2 GB divided by the time it takes the commands to finish:
dd bs=1048576 count=200 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=200 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=400 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=600 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 skip=800 if=/raw/data_1 of=/dev/null &
dd bs=1048576 count=200 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=200 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=400 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=600 if=/raw/data_2 of=/dev/null &
dd bs=1048576 count=200 skip=800 if=/raw/data_2 of=/dev/null &
In your test, include all the storage devices that you plan to use for your database storage. When you configure a clustered environment, run the dd commands from every node.
Orion is a tool that Oracle provides to mimic an Oracle-like workload on a system in order to calibrate storage throughput. Compared to the dd utility, Orion provides the following advantages:
Orion's simulation is closer to the workload the database will produce.
Orion enables you to perform reliable write and read tests within a single simulation.
Oracle recommends you use Orion to verify the maximum achievable throughput, even if a database has already been installed.
The types of supported I/O workloads are as follows:
small and random
large and sequential
large and random
mixed workloads
For each type of workload, Orion can run tests at different levels of I/O load to measure performance metrics such as MB per second, I/O per second, and I/O latency. A data warehouse workload is typically characterized by sequential I/O throughput, issued by multiple processes. You can run different I/O simulations depending upon which type of system you plan to build. Examples are the following:
daily workloads when users or applications query the system
the data load when users may or may not access the system
index and materialized view builds
backup operations
To download Orion software, point your browser to the following:
http://www.oracle.com/technology/software/tech/orion/index.html
Note that Orion is beta software and is unsupported.
To invoke Orion:
$ orion -run simple -testname mytest -num_disks 8
Typical output is as follows:
Orion VERSION 10.2

Command line:
-run advanced -testname orion14 -matrix point -num_large 4 -size_large 1024
-num_disks 4 -type seq -num_streamIO 8 -simulate raid0 -cache_size 0 -verbose

This maps to this test:
Test: orion14
Small IO size: 8 KB
Large IO size: 1024 KB
IO Types: Small Random IOs, Large Sequential Streams
Number of Concurrent IOs Per Stream: 8
Force streams to separate disks: No
Simulated Array Type: RAID 0
Stripe Depth: 1024 KB
Write: 0%
Cache Size: 0 MB
Duration for each Data Point: 60 seconds
Small Columns:, 0
Large Columns:, 4
Total Data Points: 1

Name: /dev/vx/rdsk/asm_vol1_1500m Size: 1572864000
Name: /dev/vx/rdsk/asm_vol2_1500m Size: 1573912576
Name: /dev/vx/rdsk/asm_vol3_1500m Size: 1573912576
Name: /dev/vx/rdsk/asm_vol4_1500m Size: 1573912576
4 FILEs found.

Maximum Large MBPS=57.30 @ Small=0 and Large=4
In this example, the maximum throughput for this particular workload is 57.30 MB per second.
After you set up your environment and install the Oracle Database software, ensure that the database parameters are set correctly. Note that only a few database parameters must be set.
As a general guideline, avoid changing a database parameter unless you have a good reason to do so. You can use Oracle Enterprise Manager to set up your data warehouse. To view various parameter settings, navigate to the Database page, then click Server. Under Database Configuration, click Memory Parameters or All Initialization Parameters.
At a high level, there are two memory segments:
Shared memory: Also called the system global area (SGA), this is the memory used by the Oracle instance.
Session-based memory: Also called program global area (PGA), this is the memory that is occupied by sessions in the database. It is used to perform database operations, such as sorts and aggregations.
Oracle Database can automatically tune the distribution of the memory components in two memory areas. As a result, you need to set only the following parameters:
SGA_TARGET
The SGA_TARGET parameter is the amount of memory you want to allocate for shared memory. For a data warehouse, the SGA can be relatively small compared to the total memory consumed by the PGA. To get started, assign 25% of the total memory you allow Oracle Database to use to the SGA. The SGA should be, at a minimum, 100 MB.
PGA_AGGREGATE_TARGET
The PGA_AGGREGATE_TARGET parameter is the target amount of memory that you want the total PGA across all sessions to consume. As a starting point, you can use the following formula to define the PGA_AGGREGATE_TARGET value:
PGA_AGGREGATE_TARGET = 3 * SGA_TARGET
If you do not have enough physical memory for the PGA_AGGREGATE_TARGET to fit in memory, then reduce PGA_AGGREGATE_TARGET.
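As an illustration, assume a server where you allow Oracle Database to use 16 GB of memory (a hypothetical figure). The guidelines above yield a 4 GB SGA (25%) and a 12 GB PGA target (3 * 4 GB):

-- Hypothetical 16 GB example; adjust the values to your system.
-- SGA_TARGET can be raised dynamically only up to SGA_MAX_SIZE.
ALTER SYSTEM SET SGA_TARGET = 4G;
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 12G;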
MEMORY_TARGET and MEMORY_MAX_TARGET
The MEMORY_TARGET parameter enables you to set a target memory size, and the related initialization parameter, MEMORY_MAX_TARGET, sets a maximum target memory size. The database then tunes to the target memory size, redistributing memory as needed between the system global area (SGA) and the aggregate program global area (PGA). Because the target memory initialization parameter is dynamic, you can change the target memory size at any time without restarting the database. The maximum memory size serves as an upper limit so that you cannot accidentally set the target memory size too high. Because certain SGA components either cannot easily shrink or must remain at a minimum size, the database also prevents you from setting the target memory size too low.
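As a sketch, again assuming a hypothetical 16 GB of memory for Oracle Database, you could let the database manage the SGA/PGA split automatically instead of setting the two targets individually:

-- MEMORY_MAX_TARGET is static, so it takes effect at the next restart.
ALTER SYSTEM SET MEMORY_MAX_TARGET = 16G SCOPE=SPFILE;
-- MEMORY_TARGET is dynamic, up to MEMORY_MAX_TARGET.
ALTER SYSTEM SET MEMORY_TARGET = 12G;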
You can set an initialization parameter by issuing an ALTER SYSTEM statement, as illustrated by the following:
ALTER SYSTEM SET SGA_TARGET = 1024M;
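To verify the resulting settings, you can query the V$PARAMETER view (or use the SQL*Plus SHOW PARAMETER command):

SELECT name, value
  FROM v$parameter
 WHERE name IN ('sga_target', 'pga_aggregate_target', 'memory_target');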
A good starting point for a data warehouse is the data warehouse template database that you can select when you run the Database Configuration Assistant (DBCA). However, any database is acceptable as long as you take the following initialization parameters into account:
COMPATIBLE
The COMPATIBLE parameter identifies the level of compatibility that the database has with earlier releases. To benefit from the latest features, set the COMPATIBLE parameter to your database release number.
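For example, to raise the compatibility level to the current release (the release number shown is an assumption; use your own), note that COMPATIBLE is a static parameter and therefore requires SCOPE=SPFILE and a restart:

ALTER SYSTEM SET COMPATIBLE = '11.1.0' SCOPE=SPFILE;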
OPTIMIZER_FEATURES_ENABLE
To benefit from advanced cost-based optimizer features such as query rewrite, make sure this parameter is set to the value of the current database version.
DB_BLOCK_SIZE
The default value of 8 KB is appropriate for most data warehousing needs. If you intend to use table compression, consider a larger block size.
DB_FILE_MULTIBLOCK_READ_COUNT
The DB_FILE_MULTIBLOCK_READ_COUNT parameter enables reading several database blocks in a single operating-system read call. Because a typical workload on a data warehouse consists of many sequential I/Os, make sure you can take advantage of fewer large I/Os as opposed to many small I/Os. When setting this parameter, take into account the block size as well as the maximum I/O size of the operating system, and use the following formula:
DB_FILE_MULTIBLOCK_READ_COUNT * DB_BLOCK_SIZE = <maximum operating system I/O size>
Maximum operating-system I/O sizes vary between 64 KB and 1 MB.
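For example, with the default 8 KB block size and a maximum operating-system I/O size of 1 MB (an assumption; check your platform), the formula yields 1048576 / 8192 = 128:

ALTER SYSTEM SET DB_FILE_MULTIBLOCK_READ_COUNT = 128;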
PARALLEL_MAX_SERVERS
The PARALLEL_MAX_SERVERS parameter sets a resource limit on the maximum number of processes available for parallel execution. Parallel operations need at most twice the number of query server processes as the maximum degree of parallelism (DOP) attributed to any table in the operation.
Oracle Database sets the PARALLEL_MAX_SERVERS parameter to a default value that is sufficient for most systems. The default value for PARALLEL_MAX_SERVERS is as follows:
(CPU_COUNT x PARALLEL_THREADS_PER_CPU x (2 if PGA_AGGREGATE_TARGET > 0; otherwise 1) x 5)
This might not be enough for parallel queries on tables with higher DOP attributes. Oracle recommends that users who expect to run queries of higher DOP set PARALLEL_MAX_SERVERS as follows:
2 x DOP x NUMBER_OF_CONCURRENT_USERS
For example, setting the PARALLEL_MAX_SERVERS parameter to 64 allows you to run four parallel queries simultaneously, assuming that each query uses two slave sets with a DOP of eight for each set.
If the hardware system is neither CPU-bound nor I/O-bound, then you can increase the number of concurrent parallel execution users on the system by adding more query server processes. When the system becomes CPU-bound or I/O-bound, however, adding more concurrent users becomes detrimental to the overall performance. Careful setting of the PARALLEL_MAX_SERVERS parameter is an effective method of restricting the number of concurrent parallel operations.
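Continuing the worked example above (2 slave sets * DOP of 8 * 4 concurrent users = 64), the setting would be:

ALTER SYSTEM SET PARALLEL_MAX_SERVERS = 64;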
PARALLEL_ADAPTIVE_MULTI_USER
The PARALLEL_ADAPTIVE_MULTI_USER parameter, which can be TRUE or FALSE, defines whether the server uses an algorithm to dynamically determine the degree of parallelism for a particular statement depending on the current workload. To take advantage of this feature, set PARALLEL_ADAPTIVE_MULTI_USER to TRUE.
QUERY_REWRITE_ENABLED
To take advantage of query rewrite against materialized views, you must set this parameter to TRUE. This parameter defaults to TRUE.
QUERY_REWRITE_INTEGRITY
The default for the QUERY_REWRITE_INTEGRITY parameter is ENFORCED. This means that the database rewrites queries against only fully up-to-date materialized views, and relies only on enabled and validated primary key, unique, and foreign key constraints.
In TRUSTED mode, the optimizer trusts that the data in the materialized views is current and that the hierarchical relationships declared in dimensions and RELY constraints are correct.
STAR_TRANSFORMATION_ENABLED
To take advantage of highly optimized star transformations, make sure to set this parameter to TRUE.
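As a minimal sketch, the following statements apply the recommendations from this section; QUERY_REWRITE_INTEGRITY is shown at its default of ENFORCED, and you would switch it to TRUSTED only if you accept the trust assumptions described above:

ALTER SYSTEM SET PARALLEL_ADAPTIVE_MULTI_USER = TRUE;
ALTER SYSTEM SET QUERY_REWRITE_ENABLED = TRUE;
ALTER SYSTEM SET QUERY_REWRITE_INTEGRITY = ENFORCED;
ALTER SYSTEM SET STAR_TRANSFORMATION_ENABLED = TRUE;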
Oracle Warehouse Builder is a flexible tool that enables you to design and deploy various types of data management strategies, including traditional data warehouses.
To enable Warehouse Builder, complete the following steps:
Ensure that you have access to either an Enterprise Edition or a Standard Edition of Oracle Database 11g.
Oracle Database 11g comes with the Warehouse Builder server components pre-installed. This includes a schema for the Warehouse Builder repository.
To utilize the default Warehouse Builder schema installed in Oracle Database 11g, first unlock the schema.
Connect to SQL*Plus as the SYS user or as another user with SYSDBA privileges, and execute the following commands:
SQL> ALTER USER OWBSYS ACCOUNT UNLOCK;
SQL> ALTER USER OWBSYS IDENTIFIED BY owbsys_passwd;
Launch the Warehouse Builder Design Center.
For Windows, select Start, Programs, Oracle, Warehouse Builder and then select Design Center.
For UNIX, locate owb home/owb/bin/unix and then execute owbclient.sh
Define a workspace and assign a user to the workspace.
In the single Warehouse Builder repository, you can define multiple workspaces with each workspace corresponding to a set of users working on related projects. For instance, you could create a workspace for each of the following environments: development, test, and production.
For simplicity, create one workspace, MY_WORKSPACE, and assign a user to it.
In the Design Center dialog box, click Show Details and then Workspace Management.
The Repository Assistant displays.
Follow the prompts and accept the default settings in the Repository Assistant to create a workspace and assign a user as the workspace owner.
Log into the Design Center with the user name and password you created.
In subsequent topics, this guide uses exercises to illustrate how to consolidate data from multiple flat file sources, transform the data, and load it into a new relational target. To execute the exercises presented in this guide, download the Warehouse Builder demonstration. To facilitate your learning of the product, the demonstration provides you with flat file data and scripts that create various Warehouse Builder objects.
To perform the Warehouse Builder exercises presented in this guide, complete the following steps:
Download the demonstration.
The demonstration consists of a set of files in a zip file called owb_demo.zip, which is available at the following link:
http://www.oracle.com/technology/obe/admin/owb10gr2_gs.htm
The zip file includes a SQL script, two source files in comma-separated values format, and 19 scripts written in Tcl.
Edit the script owbdemoinit.tcl.
The script owbdemoinit.tcl defines and sets variables used by the other Tcl scripts. Edit the following variables to match the values in your computer environment:
set tempspace TEMP
set owbclientpwd workspace_owner
set sysuser sys
set syspwd pwd
set host hostname
set port portnumber
set service servicename
set project owb_project_name
set owbclient workspace_owner
set sourcedir drive:/newowbdemo
set indexspace USERS
set dataspace USERS
set snapspace USERS
set sqlpath drive:/oracle/11.1.0/db_1/BIN
set sid servicename
Execute the Tcl scripts from the Warehouse Builder scripting utility, OMB Plus.
For Windows, select Start, Programs, Oracle, Warehouse Builder and then select OMB Plus.
For UNIX, locate owb home/owb/bin/unix and then execute OMBPlus.sh
At the OMB+> prompt, type the following to change to the directory containing the scripts:
cd drive:\\newowbdemo\\
Execute all the Tcl scripts in the desired sequence by typing the following command:
source loadall.tcl
Launch the Design Center and log into it as the workspace owner, using the credentials you specified in the script owbdemoinit.tcl.
Verify that you successfully set up the Warehouse Builder client to follow the demonstration.
In the Design Center, expand the Locations node, which is on the right side in the Connection Explorer. Expand Databases and then Oracle. The Oracle node should include the following locations:
OWB_REPOSITORY
SALES_WH_LOCATION
When you successfully install the Warehouse Builder demonstration, the Design Center displays with an Oracle module named EXPENSE_WH.