Category Archives: Technical Article

Part 5: RAC/Linux/Firewire – Firewire + OCFS Setup

Firewire + OCFS Setup


In this installment, we’ll discuss how to get the Firewire drive shared between your two Linux boxes.

8. Test Firewire drive

At this point you can test the firewire drive if you like, with the standard Linux driver. You won’t be able to share the drive between the two nodes yet, however.

As root do the following:

$ modprobe ohci1394

$ modprobe ieee1394

$ modprobe sbp2

$ modprobe scsi_mod

Grab a copy of rescan-scsi-bus.sh from here and run it:
http://www.fifi.org/cgi-bin/man2html/usr/share/man/man8/rescan-scsi-bus.sh.8.gz

$ sh ./rescan-scsi-bus.sh

Now partition it with fdisk:

$ fdisk /dev/sda

Now try making an ext2 filesystem with mke2fs

$ mke2fs /dev/sda1

Now mount it

$ mount -t ext2 /dev/sda1 /mnt/test

Now unmount it

$ umount /mnt/test

9. Linux Kernel Setup w/Firewire patch

The Linux kernel is a complex beast, and compiling it can often be a challenge. Though I like rolling my own, I downloaded the patched firewire source distro off of OTN, and try as I might, I could not get those compiled kernels to work. If anyone *DOES* get it to work, please send me your ".config" from the kernel source directory. Also, I've tried to encourage the Oracle/Linux Firewire team to build a patch-only distro which can be applied against a standard Linux source tree. No luck yet.

Assuming you’re not going to roll your own, just download linux-2.4.20rc2-orafw-up.tar.gz from here:

http://otn.oracle.com/tech/linux/open_source.html

Move to the “/” or root directory, and untar the file:

$ tar xvzf linux-2.4.20rc2-orafw-up.tar.gz

Edit your /etc/lilo.conf or /etc/grub.conf file to include the new kernel. Do *NOT* make it the default kernel; it may not boot.
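
For example, on a stock Red Hat install using GRUB, the new entry might look something like the stanza below. The kernel and initrd file names are assumptions based on the 2.4.20-rc2-orafw version string, and the root device matches my machine (/dev/hda2), so check what the tarball actually put in /boot and adjust accordingly; omit the initrd line if none was installed.

# hypothetical /etc/grub.conf entry -- file names and root device are assumptions
title Red Hat Linux (2.4.20-rc2-orafw)
        root (hd0,0)
        kernel /vmlinuz-2.4.20-rc2-orafw ro root=/dev/hda2
        initrd /initrd-2.4.20-rc2-orafw.img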

Reboot. If the machine comes back up, you're in luck: the kernel works for your hardware. Next, edit your /etc/modules.conf to include these lines:

# options for oracle firewire patched kernel

options sbp2 sbp2_exclusive_login=0

post-install sbp2 insmod sd_mod

post-remove sbp2 rmmod sd_mod

As root, load the modules like this:

$ modprobe ieee1394

$ modprobe ohci1394

$ modprobe ide-scsi

$ modprobe sbp2

$ modprobe scsi_mod

If you’re having trouble seeing the device, grab a copy of rescan-scsi-bus.sh from here:
http://www.fifi.org/cgi-bin/man2html/usr/share/man/man8/rescan-scsi-bus.sh.8.gz

If you want to partition, now is a good time. Use fdisk as root like this:

$ fdisk /dev/sda

If you have other SCSI devices, it may be /dev/sdb or /dev/sdc and so on.

10. Go through steps 1-8 on node 2

11. Cluster Filesystem setup (OCFS)

If you wanna play around, use mke2fs on one of the partitions you created with fdisk, and then mount the partition on machine A. Then mount the partition again on machine B. Create a file on one of the two boxes. The other machine *WON'T* reflect it. This is equivalent to unplugging a disk which is mounted, such as a USB device. You can and probably *HAVE* corrupted the filesystem. That's OK, because we don't have anything important on the disk yet. Now unmount on both machines. If you have trouble, you may need to reboot.
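
If you want to try it for yourself, the sequence is roughly the sketch below. I'm assuming /dev/sda1 is the Firewire partition and /mnt/test exists on both machines; again, expect to trash the ext2 filesystem, since that's the point of the exercise.

# on machine A, as root
$ mke2fs /dev/sda1
$ mount -t ext2 /dev/sda1 /mnt/test
$ touch /mnt/test/hello_from_a

# on machine B, as root
$ mount -t ext2 /dev/sda1 /mnt/test
$ ls /mnt/test            # hello_from_a probably won't show up
$ touch /mnt/test/hello_from_b

# when you're done, unmount on both machines
$ umount /mnt/test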

Having gone through the above example, you know why OCFS is so important. Ok, now the fun part. Install OCFS. There are good docs to be found in the linux_ocfs.pdf file here:

http://download.oracle.com/otn/linux/code/ocfs/linux_ocfs.pdf

Without RedHat Advanced Server, the RPMs are *NOT* going to work. Just grab a copy of ocfs-1.0-up.o and put it in
/lib/modules/2.4.20-rc2-orafw/kernel/fs.

Use ocfstool to create the /etc/ocfs.conf file. The pdf doc listed above is pretty good at explaining this.
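
For reference, the file ocfstool writes looks roughly like this; the values are the ones from my setup (they match the load_ocfs output further down), and ocfstool may add other fields. Let it generate the guid rather than typing one in by hand.

# /etc/ocfs.conf -- sketch of what ocfstool generates; values are from my setup
node_name = zenith
ip_address = 192.168.0.9
ip_port = 7000
guid = 72C2AF5CA29FA17CB9CB000AE6312F24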

Load the ocfs kernel module with load_ocfs. If everything goes right, it will report something like this:

$ cd /lib/modules/2.4.20-rc2-orafw/kernel/fs

$ load_ocfs

/sbin/insmod ocfs node_name=zenith ip_address=192.168.0.9 ip_port=7000 cs=1865 guid=72C2AF5CA29FA17CB9CB000AE6312F24

Using /lib/modules/2.4.20-rc2-orafw/kernel/fs/ocfs.o

Next make the filesystem. ocfstool can do this too.

$ mkfs.ocfs -F -b 128 -L /ocfs -m /ocfs -u 1001 -g 1001 -p 0775 /dev/sda1

And finally mount the filesystem!

$ mount -t ocfs /dev/sda1 /ocfs

$ df -k

Filesystem 1K-blocks Used Available Use% Mounted on

/dev/hda2 55439548 20835260 31788096 40% /

/dev/hda1 101089 18534 77336 20% /boot

none 112384 0 112384 0% /dev/shm

/dev/cdrom 122670 122670 0 100% /mnt/cdrom

/dev/sda1 60049024 30080 60018944 1% /ocfs

12. Perform step 11 on node 2.

13. Test ocfs

Here we quickly verify that a file created on one instance is viewable on another.

On node1 do:

$ cd /ocfs

$ touch mytestfile

On node2 do:

$ cd /ocfs

$ ls

mytestfile

$

You’ll see to your astonishment that the file is now visible on node 2!


Part 6: RAC/Linux/Firewire – Cluster Manager Setup

Cluster Manager Setup


The cluster manager software is how the Oracle instances coordinate their activities. Obviously this is an important piece of the puzzle as well. Here I review the configs, then show how to get it up and running on each node. I *DID NOT* patch the cluster manager with the 9.2.0.2 db patch, but your mileage may vary.

Edit file $ORACLE_HOME/oracm/admin/cmcfg.ora

HeartBeat=10000

ClusterName=Oracle Cluster Manager, version 9i

PollInterval=300

PrivateNodeNames=zenith utopia

PublicNodeNames=zenith utopia

ServicePort=9998

HostName=zenith

#CmDiskFile=/ocfs/oradata/foo

MissCount=5

WatchdogSafetyMargin=3000

WatchdogTimerMargin=6000

Note, if you patch oracm to 9.2.0.2, remove the two Watchdog lines, and uncomment and use the CmDiskFile.

Edit file $ORACLE_HOME/oracm/admin/ocmargs.ora

watchdogd -d /dev/null -l 0

oracm /a:0

norestart 1800

Note, if you patch oracm to 9.2.0.2, comment out the watchdog line.

Now *AS ROOT* start up the cluster manager:

$ $ORACLE_HOME/oracm/bin/ocmstart.sh

You should see 8 processes with "ps auxw | grep oracm". Note that if you are running RH8, there's a newer ps which needs a special option "m" to notice threads; apparently oracm is threaded (thanks Wim). This had me pulling my hair out for weeks, and I'm bald! If that's your situation, use "ps auxwm | grep oracm". One more little recommendation: oracm communicates via a port which you define. If you're using iptables/ipchains, or some other firewall solution, I recommend disabling it, at least temporarily, until you know you've configured everything right. Then reenable it, being careful to open just the ports you need, along the lines of the rules sketched below.
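
If you do keep the firewall up, something along these lines should work with iptables; I'm assuming the interconnect is on 192.168.0.0/24 and that you kept the ports from the configs above (9998 for oracm, 7000 for ocfs), so adjust to taste:

# hypothetical iptables rules -- subnet and ports are assumptions based on the configs above
$ iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 9998 -j ACCEPT
$ iptables -A INPUT -p tcp -s 192.168.0.0/24 --dport 7000 -j ACCEPT
$ iptables -A INPUT -p udp -s 192.168.0.0/24 --dport 7000 -j ACCEPT   # ocfs may use udp; open both to be safe
$ service iptables save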

15. Perform step 14 on node 2.


Asterisk Calling Card Applications

Asterisk is a powerful PBX solution, that much we already know. But what else can it do? In this article we'll explain how to set up Asterisk to store Call Detail Records (CDR data) in MySQL. Once you have that configured, there are a number of calling card applications which can be integrated with Asterisk to give you the makings of a serious calling gateway.


Setup Asterisk CDR with MySQL

By default Asterisk pumps all its call detail information into text-based log files. That's fine for normal use, but what if you want to put that data to use in a calling card application? First you have to get Asterisk to use a database. Luckily the support is already there; all you have to do is configure it.


Start by editing your cdr_manager.conf file as follows:


enabled = yes

Next edit your modules.conf file, and somewhere in the [modules] section, add:


load => cdr_addon_mysql.so

We’re going to compile this, don’t worry. Next edit your cdr_mysql.conf file in /etc/asterisk or create it if necessary:


[global]

hostname=localhost

dbname=asteriskcdrdb

user=astxuser

;user=

password=astxpass

;password=

port=3306

sock=/var/lib/mysql/mysql.sock

;sock=/tmp/mysql.sock

userfield=1

Next install MySQL. Luckily for all you lazy bums out there, this is the simplest step of all. You'll need to download and install three RPMs: the latest versions of mysql-server, mysql-client, and mysql-devel.


Next you'll create a database called "asteriskcdrdb" with mysqladmin, create a table named "cdr" with the Asterisk-provided script, and then set user grants, roughly as sketched below.
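
A minimal sketch of those three steps follows. The table-creation script ships with asterisk-addons, but its exact name and location are assumptions here (check your checkout); the user and password match the cdr_mysql.conf above.

$ mysqladmin -u root -p create asteriskcdrdb
$ mysql -u root -p asteriskcdrdb < cdr_mysql_table.sql    # script name/location assumed -- use the one from asterisk-addons
$ mysql -u root -p
mysql> GRANT INSERT, SELECT ON asteriskcdrdb.* TO 'astxuser'@'localhost' IDENTIFIED BY 'astxpass';
mysql> FLUSH PRIVILEGES;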


Now it's time to compile the asterisk-addons package. Be sure you have the zlib-devel and mysql-devel packages installed on your system or you may get errors. Check out the source from CVS. I got some strange errors which I had to track down on the mailing lists, and then edit the Makefile as shown below:


CFLAGS+=-DMYSQL_LOGUNIQUEID
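
After that edit, the build itself is the usual routine; this is a sketch, and the source directory is an assumption (use wherever your CVS checkout landed):

$ cd /usr/src/asterisk-addons    # or wherever you checked out asterisk-addons
$ make clean
$ make
$ make install                   # installs cdr_addon_mysql.so into the Asterisk modules directory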

Now stop Asterisk, start it up again, and monitor the Asterisk logfile for errors as follows:


tail -f /var/log/asterisk/messages
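
If you'd rather restart from the command line than hunt down the process, something like this should do it (assuming Asterisk is running as a daemon):

$ asterisk -rx "stop gracefully"   # or "stop now" if you don't care about active calls
$ asterisk                         # start it back up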

You can finally verify that you are dumping CDR information into MySQL as follows:


$ mysql asteriskcdrdb

mysql> select uniqueid, src, calldate from cdr;



There should be one entry for every call. Make some calls to local extensions and verify that records show up here. New CDR records will still show up in the /var/log/asterisk/cdr-csv/Master.csv file; I'm not sure if this can be disabled.


Calling Card Applications


ASTCC

Though the homepage is just a voip-info wiki page and the download is only available through CVS, this calling card application was updated in late December 2004. It seems to be the winner in terms of popularity on the voip-info wiki. It comes from Digium, it supports MySQL, and setup is pretty straightforward.

AreskiCC

Despite the strange name, it seems a pretty complete system. Last updated at the end of December 2004, it includes a web interface, though no support for MySQL. That's fine, but my MySQL setup instructions above would need to change slightly, as you'd configure Asterisk to dump CDR data into Postgres instead.


Asterisk Billing – Prepaid application

Last updated in July, this one gave me trouble compiling. There is a basic SourceForge download page, but no real homepage. I'm guessing this one is still somewhat in the development stages. Also, it doesn't come with any sound files, so you'll have to record your own, or *borrow* from some of these other applications.

Part 7: RAC/Linux/Firewire – Cluster Database Setup

Cluster Database Setup


Setting up a clustered database is a lot like setting up a normal Oracle database. You have datafiles, controlfiles, redo logs, rollback segments, and so on. With a clustered database you have a few new settings in your init.ora, and a second undo tablespace.

init.ora + config.ora setup

In a RAC environment, we finally see why Oracle has been recommending separate config.ora and init.ora files all these years. config.ora contains instance-specific parameters, such as the dump directories, the name of the undo tablespace (there is one for each instance), and the instance and thread number. init.ora contains all the parameters common to both instances.

# config.ora for WEST instance

background_dump_dest=/home/oracle/admin/WEST/bdump

core_dump_dest=/home/oracle/admin/WEST/cdump

user_dump_dest=/home/oracle/admin/WEST/udump

undo_tablespace=UNDO_WEST

instance_name=WEST

instance_number=1

thread=1

# config.ora for EAST instance

background_dump_dest=/home/oracle/admin/EAST/bdump

core_dump_dest=/home/oracle/admin/EAST/cdump

user_dump_dest=/home/oracle/admin/EAST/udump

undo_tablespace=UNDO_EAST

instance_name=EAST

instance_number=2

thread=2

Notice that there are *TWO* undo tablespaces. In previous versions of Oracle these would have been rollback segment tablespaces. At any rate, each instance needs one. In the section on creating a RAC database below, you'll see when and how these are created.

– initWEST.ora (on node 2 it’s initEAST.ora) –

# this is the only line that changes for each instance

ifile = /home/oracle/admin/WEST/pfile/configWEST.ora

control_files=
(/ocfs/oradata/EASTWEST/cntlEASTWEST01.ctl,

/ocfs/oradata/EASTWEST/cntlEASTWEST02.ctl,

/ocfs/oradata/EASTWEST/cntlEASTWEST03.ctl)

db_block_size=8192

# new Oracle9i parameter to set buffer cache size

db_cache_size=37108864

# if you have more instances, this number will be higher

cluster_database_instances=2

# see below for details

filesystemio_options="directIO"

open_cursors=300

timed_statistics=TRUE

db_domain=localdomain

remote_login_passwordfile=EXCLUSIVE

# some stuff for Java

dispatchers="(PROTOCOL=TCP)(SER=MODOSE)", "(PROTOCOL=TCP)(PRE=Oracle.aurora.server.GiopServer)", "(PROTOCOL=TCP)(PRE=Oracle.aurora.server.SGiopServer)", "(PROTOCOL=TCP)"

compatible=9.0.0

# notice db name is different than instance names

db_name=EASTWEST

java_pool_size=12428800

large_pool_size=10485760

shared_pool_size=47440512

processes=150

fast_start_mttr_target=300

resource_manager_plan=SYSTEM_PLAN

sort_area_size=524288

undo_management=AUTO

cluster_database=true

That should do it. You may have more or less memory, so adjust these values accordingly. Many of them are standard for non-RAC databases, so you'll already be familiar with them. The Oracle docs are decent at explaining these in more detail, so check them for more info.

The init.ora parameter filesystemio_options is no longer a hidden parameter as of Oracle 9.2. The setting I use above is from Wim Coekaerts' documentation. Arup Nanda says that in the OPS days, "setall" was the setting he usually used. Your mileage may vary.

Steve Adams' recommendations with respect to this parameter:

http://www.ixora.com.au/notes/filesystemio_options.htm
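
If you want to confirm what a running instance actually picked up, a quick sanity check from SQL*Plus (not part of the setup itself) is:

SQL> show parameter filesystemio_options
SQL> select value from v$parameter where name = 'filesystemio_options';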

17. Creating the RAC database

This is much like creating a normal database. Most of the special stuff is in the init.ora and config.ora. The only new pieces are creating and enabling a separate undo tablespace, as well as a second set of redo logs. Well, you're probably used to mirroring those anyway. Run this from node1.

-- crEASTWEST.sql --

-- send output to this logfile

spool crEASTWEST.log

startup nomount

-- the big step, creates the initial datafiles

create database EASTWEST

maxinstances 5

maxlogfiles 10

character set "we8iso8859p1"

datafile
'/ocfs/oradata/EASTWEST/sysEASTWEST01.dbf' size 500m reuse

default temporary tablespace tempts tempfile '/ocfs/oradata/EASTWEST/tmpEASTWEST01.dbf' size 50m reuse

undo tablespace UNDO_WEST datafile '/ocfs/oradata/EASTWEST/undEASTWEST01.dbf' size 50m reuse

logfile
'/ocfs/oradata/EASTWEST/logEASTWEST01a.dbf' size 25m reuse,

'/ocfs/oradata/EASTWEST/logEASTWEST01b.dbf' size 25m reuse;

-- create the data dictionary

@?/rdbms/admin/catalog.sql

@?/rdbms/admin/catproc.sql

-- create the second undo tablespace

create undo tablespace UNDO_EAST datafile
'/ocfs/oradata/EASTWEST/undEASTWEST02.dbf' size 50m reuse;

-- create a second set of redo logs

alter database add logfile thread 2 '/ocfs/oradata/EASTWEST/logEASTWEST02a.dbf' size 25m reuse;

alter database add logfile thread 2 '/ocfs/oradata/EASTWEST/logEASTWEST02b.dbf' size 25m reuse;

alter database enable thread 2;

shutdown immediate;
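
To actually run the script, connect as sysdba on node1 with the WEST environment set; a minimal sketch, assuming crEASTWEST.sql is in the current directory:

$ export ORACLE_SID=WEST
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> @crEASTWEST.sql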

18. Startup of all instances

The magic step. Not a lot to it if all the above steps went properly, but exciting nonetheless.

First on node1

$ sqlplus /nolog

SQL> connect / as sysdba

SQL> startup

Then the same thing on node2

$ sqlplus /nolog

SQL> connect / as sysdba

SQL> startup

Voila! You should be up and running at this point.
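
A quick way to confirm that both instances really see each other is to query gv$instance from either node; this isn't from the original writeup, just a standard sanity check. You should get one row for WEST and one for EAST.

SQL> select inst_id, instance_name, host_name, status from gv$instance;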

Errors. If you’re getting ORA-32700 like this:

SQL> startup

ORACLE instance started.

Total System Global Area 93393188 bytes

Fixed Size 450852 bytes

Variable Size 88080384 bytes

Database Buffers 4194304 bytes

Redo Buffers 667648 bytes

ORA-32700: error occurred in DIAG Group Service

It probably means oracm didn't start properly. This would give you trouble *CREATING* a database as well.



Part 8: RAC/Linux/Firewire – Review of Clustered Features + Architecture

Review of Clustered Features + Architecture

Oracle 9iRAC has some important hardware and software components which are distinct from a standard single-instance setup.

On the hardware side, you have the IPC interconnect. On high-end specialized hardware such as Sun clusters, you have a proprietary interconnect. On our low-cost working-man's clustering solution, you simply use a private or public Ethernet network. The Oracle software components, which we'll describe in detail below, use this interconnect for interprocess communication, sending messages to synchronize caches, locks, and data blocks between the instances. This sharing of cache information is called Cache Fusion, and it creates what Oracle calls the Global Cache.

Another important piece of the 9iRAC pie is the storage subsystem and the Oracle Cluster File System. What we've created with our cheap Firewire shared drive is effectively a SAN, or Storage Area Network. In high-end systems this SAN would probably be built with Fibre Channel technology and switches. This storage subsystem is sometimes called a shared-disk subsystem. In order to write to the same disk from two machines, you have your choice of raw devices or OCFS. Raw devices can also be used with a single-instance database. They completely eliminate the OS filesystem, and all its associated caching and management, providing direct raw access to the device. This type of arrangement is more difficult to manage: you don't have ordinary filesystem files to work with, so your backups and database management become a bit more complex. Also, adding a new datafile always means adding a new partition, so datafiles are more difficult to delete, resize, and rearrange. OCFS provides the same functionality, but with the flexibility and simplicity of a filesystem. It's definitely the recommended option.

Oracle’s cluster manager (the oracm process we started above) coordinates activities between the cluster of instances. It monitors resources, and makes sure all the instances are in sync. If one becomes unavailable, it handles that eventuality.

With a 9iRAC database, aside from the normal SMON, PMON, LGWR, CKPT, and DBWR processes, a number of new background processes show up. They are as follows:

PROCESS  NAME                           DESCRIPTION
-------  -----------------------------  --------------------------------------------
LMSn     global cache services          controls the flow of data blocks + messages
LMON     global enqueue monitor         monitors global locks
LMD      global enqueue service daemon  manages remote resource requests
LCK      lock process                   manages local library and row cache requests
DIAG     diagnosability daemon          reports process failures to alert.log
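
On Linux these show up as ora_<name>_<SID> background processes, so a quick way to eyeball them on either node is something like:

$ ps -ef | grep ora_ | egrep -i 'lms|lmon|lmd|lck|diag' | grep -v grep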

In 9iRAC there are two important components which manage shared resources: Global Cache Services (GCS), known as the Block Server Process (BSP) in 8i OPS, and Global Enqueue Services (GES). GCS shares physical blocks from the buffer caches of each instance in the cluster, passing them back and forth as necessary. GES shares locking information.

In the local context you have three types of resource locks: null, shared, and exclusive. A null lock generally escalates to other types of locks and, strange as it may seem, doesn't convey any access rights; multiple instances can hold a null lock. Multiple instances can acquire a shared lock for reading; however, while a resource is held in shared mode, other instances cannot write to it. An exclusive lock can be held by only one instance, and it gives exclusive access for writing.

In the global context, i.e. whenever Cache Fusion is invoked, or whenever two instances in a cluster want the same data, you have those same three locks in two modes: ownership of the current image or of a past image. The issue of the past image comes up because in a single instance another session can construct a past image from undo, whereas in the global context that image has to be put together and passed along to the other instance in the cluster.

The physical database in an Oracle 9iRAC environment has a lot in common with a single-instance database. In 9iRAC, each instance has its own ORACLE_HOME where the Oracle software lives, and its own ORACLE_BASE/admin/ORACLE_SID directory (per OFA) containing the bdump, udump, cdump, pfile, and create directories. Each instance also has its own archive logs, if you are running in archivelog mode. In the example above I was not running in archivelog mode, for simplicity's sake. All the other files which make up your database are shared, including the datafiles for the data and index tablespaces, the redo logs, the system, temp, and other tablespaces, as well as the controlfiles.



Part 9: RAC/Linux/Firewire – A Quick 9iRAC Example

A quick 9iRAC example


The names of the two instances I'm using are EAST and WEST, so I'll use them here to refer to the commands you'll execute at the sqlplus prompt. This test assumes you're logged into the same schema on both instances. I used 'sys', but you can create your own schema if you like.

1. On WEST do:

SQL> create table rac_test (c1 number, c2 varchar2 (64));

2. On EAST do:

SQL> desc rac_test

3. On WEST do:

SQL> insert into rac_test values (1, 'SEAN');

SQL> insert into rac_test values (2, 'MIKE');

SQL> insert into rac_test values (3, 'JANE');

4. On EAST do: (notice no rows are returned)

SQL> select * from rac_test;

5. On WEST do:

SQL> commit;

6. On EAST do: (notice the rows appear now)

SQL> select * from rac_test;

7. On WEST do:

SQL> update rac_test set c2 = 'SHAWN' where c1 = 1;

8. On EAST do: (notice the error Oracle returns)

SQL> select * from rac_test where c1 = 1 for update nowait;

select * from rac_test where c1 = 1 for update nowait

*

ERROR at line 1:

ORA-00054: resource busy and acquire with NOWAIT specified

9. Again on EAST do: (notice Oracle waits…)

SQL> update rac_test set c2 = 'JOE' where c1 = 1;

10. On WEST do:

SQL> commit;

11. On EAST the transaction completes.

This simple exercise illustrates that two sessions running on different instances, against the same database, behave just like two sessions against a database on a single instance or machine. This is key: Oracle must maintain transactional consistency. Oracle maintains the ACID properties, which are Atomicity, Consistency, Isolation, and Durability. Atomicity means a transaction either executes to completion or fails. Consistency means that the database operates in discrete transactions, moving from one consistent state to another. Isolation means that the actions of other transactions are invisible until they are completed (committed). Finally, Durability means that once a transaction has completed and committed, it is permanent.

Our example above demonstrates that Oracle maintains all these promises, even in a clustered environment. How Oracle does this behind the scenes involves the null, shared, and exclusive locks we described above, along with current and past image management. A lot of those details are reserved for a 9iRAC internals article; take a look at Madhu Tumma's article below for more on 9iRAC internals.


Part 10: RAC/Linux/Firewire – Summary

Summary

We covered a lot of ground in this article, and it should serve as an introduction to 9iRAC on cheap Linux hardware. There are plenty of other topics to dig into, including tuning, backup, SQL*Net setup and load balancing, I/O fencing, NIC failover, and so on.

9iRAC is *NOT* a silver bullet, as any good DBA knows. It will protect you from a single-instance failure caused by memory problems, a kernel panic, or an interconnect failure, but there are still cases where your database could go down: for instance, if the cluster manager software fails, or if you lose a datafile to human error or a storage subsystem problem. Further redundancy can help you, but there are still risks, such as an accidentally deleted object or even an Oracle software bug.

Take a look at some of the documents listed below for further reading.

Other References

Red Hat Linux Advanced Server 2.1 – docs

Oracle Technology Network’s Linux Center

Oracle Cluster Filesystem FAQ

Oracle Cluster File System docs

Internals of Real Application Clusters by Madhu Tumma

Oracle9i Real Application Clusters Concepts

Oracle9i Real Application Clusters Administration

Linux HOWTO Docs – Networking, kernel config, hardware compatibility, etc.

