Oracle 9iRAC – Clustering on Linux/Firewire


Ever since the announcement of Oracle 9i, Oracle’s Real Application Clusters feature has created quite a stir. For those not familiar, 9iRAC is a complete overhaul of the Oracle Parallel Server (OPS) of previous database versions into a workable product.

For many DBAs, however, this technology is completely out of reach. Without an employer who has already committed to OPS and wants to upgrade, or a client who would like to venture into the unknown, there’s no way to get hold of an environment on which to test it. The lowest entry option for clustering technology has been Fibre Channel, and unfortunately its cost is prohibitive.

Enter Oracle’s new Linux Firewire project. To some, this announcement is as exciting as Oracle’s first announcement of a port of their RDBMS to the Linux platform. Through the release of various Open Source software components, such as a modified ieee1394 driver for sharing external Firewire disks, a clustered filesystem (OCFS), and a number of other interesting components, this platform is now within our reach at very low cost.

Part 1 – Introduction

Part 2 – Basic Costs + Hardware Platform Outline

Part 3 – Software Requirements, Versions, etc

Part 4 – Initial Oracle Setup

Part 5 – Firewire + OCFS Setup

Part 6 – Cluster Manager Setup

Part 7 – Cluster Database Setup

Part 8 – Review of Clustered Features + Architecture

Part 9 – A quick 9iRAC example

Part 10 – Summary

Tracking the Wily Proxy Hackers

Recently the server that hosts our business was hacked. This interrupted service for the twelve websites we host, as well as our corporate mail. Needless to say, it caused us plenty of headaches, sleepless nights, and frustrating hours. In retrospect, however, it instilled a greater appreciation for computer security, a greater awareness, and a stronger perseverance in keeping the systems locked down.
Watching the news these days, and sites like Security Focus, can be disheartening to say the least. Spam is at an all-time high; Windows viruses, trojans, and malware are wreaking havoc on corporate intranets and the internet at large; and the situation only seems to get worse. Running a server on the internet nowadays is like opening a shop in New York City back in the days of street crime and daily trouble.
Unfortunately, some of us in the Unix and Macintosh world have grown a bit too confident. With all of the vulnerabilities being found in various versions of Windows, IIS, and Internet Explorer, folks on our side of the fence figure they have less to worry about. We may have less to worry about, but that certainly doesn’t mean we have nothing to worry about. So here is the story of what happened to us, and what we did about it.
We upgraded our systems in December of 2004, and figuring Mandrake 9.2 was more stable than 10.x, we installed that. We spent the time recovering all of our websites from backups, rsyncing things across the internet. Each website has its own document root as well as specific configuration lines in the Apache httpd.conf file. In addition, the mail server had to be configured, along with DNS changes. Lastly, once the system was up and running, we mirrored everything on root for redundancy and protection against the loss of a single drive. All told we spent 30+ hours, but we were back up and running soon enough. Much of that time was spent moving data across the internet, unattended.
Around the end of January we started seeing spikes in hits on some of our sites, but didn’t think much of it. A few weeks went by; the systems were generally behaving normally, but starting to get a bit slow. By mid-February we were starting to have problems. The network we are hosting on was having trouble with bandwidth and browsing, and experiencing outages of its own. We also showed up on the Composite Blocking List (CBL) and the Spamhaus List.
When that happened it opened our eyes, if only a bit. We knew something was happening that originated from that network. So we did two things. First, we tested our Postfix mail server for open mail relay. We had experienced this a year earlier with a qmail misconfiguration, and since it is quite common, we thought this might be the problem. However, we were set up correctly, and that was not the issue. Next, we scanned all of the Windows and Macintosh machines on that network for viruses, trojans, and so on. We found a couple of things and fixed them. We then removed ourselves from the CBL and Spamhaus lists.
Once again our mail was flowing out, but a day later the problem struck again. Being the Unix folks we are, we started pointing fingers at the Windows machines. Sometimes Norton, McAfee, et al. don’t catch every virus, and we suspected those pesky Windows machines were the culprit. Many of the malware programs that Windows users unwittingly install relay spam so that spammers can send email out anonymously. Your Windows machine is co-opted as a spam host, sending out thousands of messages a minute.
To get around the problem in the short term, we contacted some associates of ours and arranged to relay mail through them. This is different from an open mail relay, since you are specifically requesting permission to send mail through another agent. So we could once again send mail, and our problem was temporarily solved. However, our server got slower, and so did our websites. It got to the point where the network hosting our server couldn’t send outbound traffic or visit websites. Quite a problem.
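For the curious, routing outbound mail through a willing relay is a one-line change in Postfix; a minimal sketch, with a hypothetical relay hostname:

```
# /etc/postfix/main.cf -- send all outbound mail via our associate's
# server; the brackets suppress MX lookup on the name
relayhost = [mail.example.com]
```

After editing main.cf, a `postfix reload` picks up the change.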
The admin managing that network contacted Verizon, the broadband provider, and discussed the problem with their tech department. They suggested unplugging machines on the network one by one until the traffic spike subsided. He proceeded to do just that, and what do you know: when our server was unplugged, the bandwidth usage dropped to ZERO. The support rep suspected we were streaming audio or video files, which of course we were not, so the only obvious conclusion was spam.
What to do? Well, first, tucking your tail between your legs and admitting that your Unix server has been hacked is a start. Next we rebuilt the server with Mandrake 10.1. There were vulnerabilities in the versions of SSH, Apache, and PHP we were using, so upgrading to the latest Mandrake distro upgraded all these packages in one go. We broke our mirrored drives, installed Mandrake on one of them, and then did a disk-to-disk copy of all the data from /home to the new drive. Once that was complete we started up again, and things were looking good.
Back on the internet, things started slowing down again, so we started monitoring our Apache logs. We saw some strange activity in there, so we blocked HTTP at the router, and found the performance and bandwidth problems eliminated. So we knew there was something wrong with Apache. We searched for bugs but didn’t find anything too heinous. Upon closer examination of the logs, however, we found strange redirects to port 25 on other machines. How was that happening?
Apache has a facility for acting as a proxy. That is, it can fetch web pages, and in fact make other requests of remote machines, and relay the responses back to the originating source. Imagine standing on a mountaintop. You can see the other side of the mountain, and are reading smoke signals from a village there. You then send those same smoke signals to the next village over. They can read your smoke signals, but don’t know the identity of the sender, only that you’re relaying a message to them. They can understand the message, but can’t determine the original sender. Proxying with internet-based servers works much the same way. In fact, the open mail relay we discussed above is exactly that, which is why it’s so important that it be closed.
So we looked over these logs and found, strangely, that Apache was doing the same thing! In fact Apache was an open mail relay, and an open proxy in general. The mod_proxy module came preinstalled with our Apache, and though we had not configured it, it was working nonetheless. We researched the issue and found it was not considered a bug; it is in fact part of the software that, when configured correctly, can come in quite handy. Of course we didn’t need it, so we spent some time disabling it through configuration changes in httpd.conf. Despite these changes, we were still seeing some traffic, so we decided to play rough. We recompiled Apache from scratch with the module completely disabled. Further attempts to configure httpd.conf using that module failed, proving to us that it was indeed no longer present in the software.
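For reference, the configuration-level fix looks roughly like this; a sketch using the standard mod_proxy directives, not our exact httpd.conf:

```
# Refuse to act as a forward proxy, and deny proxied requests outright
# in case the module is loaded anyway.
ProxyRequests Off
<Proxy *>
    Order deny,allow
    Deny from all
</Proxy>
```

Rebuilding Apache with the module left out entirely (for example, `configure --disable-proxy` on Apache 2.x, or `--disable-module=proxy` on 1.3) removes the attack surface for good.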
We disabled the block at our router, and watched things for a couple of days. We were still seeing funny traffic. Paranoid at this point, we blocked at the router again to analyze the logs some more. We could not figure out how this might still be happening, and checked the PHP forums for related bugs. Finding none, and not wanting to just start recompiling modules at random, we looked at the logs again.
We found that our server, when receiving a failed request, was redirecting the user to our homepage. So the proxy requests were failing, but sending the requester to our homepage. Checking the stats confirmed this: we received 5000 hits that day, 1000% above normal. Realizing these scans and attempts to proxy were failing, we began to relax. Knowing we were probably on some spammer’s top-10 hacked sites in North America list, we also figured that their automated systems would remove us from such a list once our server stopped serving proxy requests. And that’s exactly what we found. After a couple of days the hits dropped to 2500, and then back below 1000 before week’s end.

Open Insights 27: Fragile Foundations

Issue 27 – Fragile Foundations
January 5, 2007

by Sean Hull

Founder and Senior Consultant
Heavyweight Internet Group

Happy New Year and welcome back to our Open Insights newsletter. Our readership is
now north of 3000 subscribers and growing every day. Thanks to everyone for your
support and for forwarding us on to friends and colleagues!

Reading from your blackberry or other handheld device? We’ve made some formatting
changes which we hope improve the appearance on mobile devices. Let us know
if you have any suggestions or comments.

In This Issue:

1. Feature: Fragile Foundations
2. New Articles
3. Audio Interviews
4. Current Reading
5. Lightweight Humor
6. Past Issues
7. Technical Articles
8. About Heavyweight Internet Group

1. Feature: Fragile Foundations

The technologies we have surrounded ourselves with are tremendously powerful and liberating. With many of the newest mobile phones, it’s quite easy to carry our office in our pocket. And with undersea cables wiring the globe, we have almost limitless telephone and data capabilities, turning the world into one very global marketplace. But as we’ll see in a couple of very familiar examples, these technologies remain quite fragile.

The Pacific Cable

If you follow international news, you probably heard about the earthquake in Asia that took out a few of the Pacific undersea cables. These provide internet and telephone connectivity from Taiwan to North America. Although there are four cables, two of them were damaged, leaving them at 40% of normal capacity. Very quickly the bottleneck caused havoc in businesses, especially those doing financial transactions. Pen and paper quickly came back in vogue as businesses scrambled to hold things together. Fragile indeed.

A Story Closer to Home

I rang in the New Year with nary a blip on the radar. Clients were happy, and things were generally quiet. Just three days in, I drop my mobile phone. Now I’m not normally one to be careless, but this was not a drop on the concrete; I merely knocked it from the couch to the floor. Sounds harmless enough, until I pick it up to see the screen begin to fizzle out, and then go completely blank. Fear strikes. Panic: what am I going to do without my phone? After trying to turn it on and off a few times, the first thing I thought of was: I need my contacts and numbers. My phone being a T-Mobile Sidekick, I login to their website to find my contacts all faithfully stored and available to me. I dig through the menus and find I cannot export them to a simple datafile. In fact, all I can really do is print them, all 18 pages worth! I choose to print two pages per sheet to simplify things, and then get on the phone with T-Mobile. After the usual struggle to get a helpful person on the line, they attempt to transfer me to the right department, only to drop the call. I can already see this turning into an all-day affair.

After getting back on the line with T-Mobile, I spend three quarters of an hour convincing the manager that I did not do any undue damage to the phone; it effectively just stopped working. He finally accepts my explanation, and provides details on sending it to T-Mobile for replacement. Fortunately there is a UPS store right here in Union Square, and luckily, they’re open for another hour! I rush over to the store, and the woman smiles at me: “We get a lot of those phones in here, battery trouble, screen problems, etc., etc. I know just where to send it. Do you have the Department Number?”

Luckily I have T-Mobile, which uses SIM cards. By removing the SIM and placing it in my old RAZR phone, I’m back online after 20 minutes, albeit without all my contacts. But the experience, as many of you can attest, is not rare. In fact, every few weeks I find a friend or colleague returning a message left on voicemail or text, saying “Sorry, who is this? I lost my phone, and all my contacts…” or something to that effect. Unfortunately mobile phones still are not built to be interchangeable, grabbing information from the internet, though that technology is evolving.

Some Lessons

The first thing you’re likely to think after something like this is behind you is: thank god that’s over. But as quickly as it becomes a faded memory, a systems failure can happen again. What both of these examples illustrate, one small and one huge, is that we should strive first to build robust systems, second to have redundant copies of important information, and lastly to have redundant ways to perform the same business function.

In cases where systems cannot be built as robustly as we would like, we need to be doubly vigilant, and build reasonable expectations into our business and our contracts. In addition, we should consider these risks when we choose a solution. In some cases we may do well to choose lower-tech solutions, built from technologies that have become commodity: those that are easy to replace, and whose problems and issues are already well understood through years of past experience. Those solutions likely won’t be as sexy as cutting-edge technology, but when your business and reputation are at stake, what matters more is consistency and reliability.

2. New Articles

Oracle 10g RAC versus DataGuard for High Availability

3. Audio Interviews

This month we have the opportunity to talk with William Hurley aka Whurley, the Chairman of the Open Management Consortium.

In our interview, we discuss open source and its impact on commercial software and solutions, and wrestle head-on with some of the concerns people have on both sides of the fence.

William Hurley is the CTO at Qlusters, where he launched the openQRM project. He has been awarded IBM’s Master Inventor title and multiple awards for innovation at Apple Computer. Prior to joining Qlusters he was CTO and founder at Symbiot. He holds 11 patents from research and development at IBM, Tivoli Systems, and Apple Computer. He was recently elected Chairman of the Open Management Consortium.

4. Current Reading

If You Don’t Have Big Breasts, Put Ribbons on Your Pigtails: And Other Lessons I Learned from My Mom by Barbara Corcoran

Barbara Corcoran is the CEO of The Corcoran Group, the famous real estate company in New York City. By sharing the lessons she learned from her mother growing up, she puts her wisdom within our grasp. In story after story, she retells how she built her business, offering up the successes and failures for us to benefit from.

The Scientist in the Crib
by Andrew N. Meltzoff and Patricia K. Kuhl

In this very readable book, Meltzoff and Kuhl discuss some of the latest discoveries in language, and human development, offering us insight into who we are, how we learn, and ultimately what makes
us tick.

The 48 Laws of Power by Robert Greene

Greene has distilled human nature down into forty-eight simple laws. We may not like to hear all of them, but I think they resonate with how people actually are, whether or not that’s the way we’d like them to be. Use the laws to limit the power of others over you, or to build your own empire!

5. Lightweight Humor

The Onion strikes again with Americans Celebrate 10 Millionth ‘Bring Yourself To Work Day’

6. Past Issues
Issue 26: Logistical Fitness
Issue 25: Which Red Button
Issue 24: Consulting Conflicts of Interest
Issue 23: Devil In The Details
Issue 22: Beware of Software Fashion
Issue 21: Open Season, Open Sesame?
Issue 20: Better Web Better Business
Archive: Past Issues

7. Technical Articles

Oracle DBA Interview: click here
Tools for the Intrepid DBA: click here
Oracle9i + RAC on Linux/Firewire: click here
Migrating MySQL to Oracle: click here
MySQL Disaster Recovery: click here

8. About Heavyweight Internet Group

In a nutshell, Oracle. We specialize in everything related to and surrounding the database technology, but specifically setup, administration, and tuning of Oracle. I have 10 years of experience with Oracle, wrote a book on the technology, and write and lecture frequently. I’m founder and senior consultant of the company. In capacities where your company might hire Deloitte, AIG, or Oracle Consulting, we can bring the same level of service and experience at about half the price. Simple equation.

Looking for a top-flight DBA? Visit us on the web at

Your Database – A Long-Haul Truck Or A Sports Car?


Databases, even those running high-powered software like Oracle, can be incredibly touchy, demanding, and even fragile in their own way. For that reason we must take care to optimize and tune them based on characteristic usage.

Data Warehouse vs OLTP

In broad terms, database applications are divided into two large classes: Data Warehouse and OLTP (online transaction processing; what a mouthful!). For the purposes of this discussion, let’s call them a heavy-lifting truck and a sports car. Both have powerful engines, but they’re used for very different purposes.

Our Data Warehouse is characterized by large transactions and huge joins, all of which work to produce very large, usually one-off, reports. The reports may be run a handful of times. These databases do mostly read-only activity, occasionally performing large dataloads to add to the archive of data.

On the other hand, OLTP databases are characterized by thousands or tens of thousands of very small transactions. Web sites, for instance, exhibit this characteristic. Each transaction does something quite small, but in aggregate, thousands of users put quite a heavy, repeated load on the database, and they all expect instantaneous response!

We provide these two very different types of databases by laying out the database for its characteristic usage, and then tuning relevant parameters appropriately. We may allocate more memory to sorting, and less to the db cache for a Data Warehouse, whereas a large db cache might help us a lot with an OLTP application. We may enable parallel query, or partition large tables in our Data Warehouse application.
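To make that concrete, here is a rough sketch of how the same parameters get pushed in opposite directions; the values are illustrative, not recommendations:

```sql
-- Data Warehouse leaning: favor sort/hash work areas and parallelism
ALTER SYSTEM SET pga_aggregate_target = 2G   SCOPE=SPFILE;
ALTER SYSTEM SET db_cache_size        = 512M SCOPE=SPFILE;
ALTER SYSTEM SET parallel_max_servers = 16   SCOPE=SPFILE;

-- OLTP leaning: favor a large buffer cache for repeated small reads
ALTER SYSTEM SET db_cache_size        = 2G   SCOPE=SPFILE;
ALTER SYSTEM SET pga_aggregate_target = 512M SCOPE=SPFILE;
```

Since these are memory-sizing parameters, SCOPE=SPFILE defers the change to the next instance restart.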

Choose One Or The Other

If you have a database serving a web-based application, and you are trying to do large ad-hoc reports against it, you will run into trouble. All that memory you’ve set up to cache small web transactions will get wiped out by the first large report you run. What’s more, the heavy disk I/O from reading huge tables, then sorting and aggregating large datasets, will put a huge load on a database you set up specifically for your web application.


There are lots and lots of parameters and features in Oracle best suited to one or the other type of application, to the point where your database really will look like a sports car or a long-haul truck when you’re done tuning it. For that reason it is essential that these types of applications be divided into separate instances of Oracle, preferably on separate servers.

Oracle 10g Laptop/nodeless RAC Howto



Let me start by saying that this whole article is rather unorthodox. That’s why I thought the analogy of hitchhiking was so apt. Also, if you are a hitchhiker, and you happen to be carrying your laptop, well, you can bring along an Oracle 10g Real Application Cluster to show all your friends. Now there’s a road warrior!

Seriously though, this step-by-step guide does not describe any solution supported by Oracle. So what good is it? Well, a lot actually. By taking the uncommon route, you often see the colorful streets, the surprising highways, and the undiscovered nooks. What a great analogy, because this is also true in software!

In 2002 I went to OpenWorld and was very excited to discover the Open Source initiative headed up by Wim Coekaerts. One of their most exciting projects to me was Oracle 9i RAC running on Linux, with a special Firewire driver patched to allow multiple systems to mount the same filesystem. That was a tremendous learning experience; I wrote up an article and presented it at the New York Oracle User Group. Afterward, someone from the audience came to me and asked if I would present at their user group out in Edison, New Jersey. Both presentations were exciting, and I think well received.

Of late I’ve happened upon some articles floating around the internet discussing a single-node, AKA laptop, RAC setup. How could that work, I wondered? Perhaps they install virtualization software such as VMware to allow a single machine to look like more than one. This would certainly work in theory. Once I started digging a bit more I discovered Amit Poddar’s excellent article, and I was intrigued. No virtualization seemed to be required other than some virtual ethernet interfaces.

I decided to dig in and give it a try. After struggling with the illustrious and universally beloved installer for a few months, I finally managed to get all the pieces in place, and got Oracle Real Application Clusters running with no clustering hardware!

Here, my fine friends, are all the gory details of that adventure, which I hope you’ll enjoy as much as I did writing them down.

Oracle’s model of clustering involves multiple instances (software processes) talking to a single database (physical datafiles). We’re doing the same thing here, but both instances will reside on the same machine. This helps you travel light, save on transportation, and learn the concepts, commands, and architecture. Remember this is not an HA solution: there is little redundancy, and with a single disk, you will surely get abysmal performance.

Let’s get started. What will we need to do? Here’s a quick outline of the steps involved:

1. setup ip addresses of the virtual servers
2. setup ssh and rsh with autologin configured
3. setup the raw devices Oracle’s ASM software will use
4. install the clusterware software, and then Oracle’s 10g software
5. setup the listener and an ASM instance
6. create an instance, start it, and register with srvctl
7. create a second instance & undo tablespace, & register it

1. Setup IP Addresses


Oracle wants to have a few interfaces available to it. To follow our analogy of a hitchhiker traveling across America, we’ll name our server route66. So add that name to your /etc/hosts file along with the private and vip names:

route66 route66-priv
# route66-vip

Notice that we’ve commented out route66-vip. We’ll explain more about this later, but suffice it to say now that the clusterware installer is very finicky about this.

In order for these two additional names to be reachable, we need ethernet devices to associate with those IPs. They’re straightforward to create with ifconfig as follows:

$ /sbin/ifconfig eth0:1 netmask broadcast
$ /sbin/ifconfig eth0:2 netmask broadcast

If your IPs or network are configured differently, adjust the IP or broadcast address accordingly.

2. setup ssh and rsh with autologin


Most modern Linux systems do *NOT* come with rsh installed. That’s for good reason: it’s completely insecure and shouldn’t be used at all. Why Oracle’s installer requires it is beyond me, but you’ll need it. You can probably disable it once the clusterware is installed. Head over to and see if you can find a copy for your distro. You might also have luck using up2date or yum if you already have those configured, as they handle dependencies and always download the *right* version. With rpm, install this way:

$ rpm -Uvh rsh-server-0.17-34.1.i386.rpm
$ rpm -Uvh rsh-0.17-34.1.i386.rpm

Next enable autologin by adding names to your /home/oracle/.rhosts file. After starting rsh, you should be able to login as follows:

$ rsh route66-priv

Once that works, move on to the sshd part. Most likely ssh is already on your system, so just start it (as root):

your system, so just start it (as root):

$ /etc/rc.d/init.d/sshd start

Next, as the “oracle” user, generate the keys:

$ ssh-keygen -t dsa

Normally you would copy the public key to a remote system, but for us we just want to login to self. So copy as follows:

$ cd .ssh
$ cp id_dsa.pub authorized_keys
$ chmod 644 authorized_keys

Verify that you can login now:

$ ssh route66-priv

3. setup the raw devices


Most of the time when you think of files on a Unix system, you’re thinking of files as represented through a filesystem. A filesystem provides a way to interact with the underlying disk hardware through the use of files, and it provides buffering to improve I/O performance automatically. However, a database like Oracle already has a sophisticated buffering mechanism of its own, which is smart in that it knows everything about its files and how it wants to read and write to them. So for an application like Oracle, unbuffered I/O is ideal: it bypasses a whole layer of software, making your overall throughput faster!

You achieve this feat of magic using raw devices. We’re going to hand them over to Oracle’s Automatic Storage Manager in a minute, but first let’s get to work creating the device files for our RAC setup.

Create three 2G disks. These will be used as general storage space for our ASM instance:

$ mkdir /asmdisks

$ dd if=/dev/zero of=/asmdisks/disk1 bs=1024k count=2000

$ dd if=/dev/zero of=/asmdisks/disk2 bs=1024k count=2000

$ dd if=/dev/zero of=/asmdisks/disk3 bs=1024k count=2000

Create two more smaller disks, one for the Oracle Cluster Registry, and another for the voting disk:

$ dd if=/dev/zero of=/asmdisks/disk4 bs=1024k count=100

$ dd if=/dev/zero of=/asmdisks/disk5 bs=1024k count=20

Now we use a loopback device to make Linux treat these FILES as raw devices:

$ /sbin/losetup /dev/loop1 /asmdisks/disk1

$ raw /dev/raw/raw1 /dev/loop1

$ chown oracle.dba /dev/raw/raw1

You’ll want to run those same three commands on disk2 through disk5 now.
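A quick loop saves the repetition; a sketch that echoes the losetup/raw/chown trio for disks 2 through 5 so you can review the commands before running them for real (drop the `echo` prefixes and run as root to execute):

```shell
# Print the three setup commands for each remaining ASM disk file.
for i in 2 3 4 5; do
  echo /sbin/losetup /dev/loop$i /asmdisks/disk$i
  echo raw /dev/raw/raw$i /dev/loop$i
  echo chown oracle.dba /dev/raw/raw$i
done
```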

4. Install the Clusterware & Oracle’s 10g Software


Finally we’re done with the operating system setup, and we can move on to Oracle. The first step will be to install the clusterware. I’ll tell you in advance that this was the most difficult step in the entire RAC-on-a-laptop saga. Oracle’s installer tries to *HELP* you all along the way, which really means standing in front of you!

First let’s make a couple of symlinks to our OCR and voting disks:

$ ln -sf /dev/raw/raw4 /home/oracle/product/disk_ocr
$ ln -sf /dev/raw/raw5 /home/oracle/product/disk_vot

As with any Oracle install, you’ll need a user and group already created, and you’ll want to set the usual environment variables such as ORACLE_HOME, ORACLE_SID, etc. Remember that by this point you should already have ssh and rsh autologin working. If you’re not sure, go back and test again; that will certainly hold you up here and give you all sorts of confusing error messages.

If you’re running on an uncertified version of Linux, you may want to fire up the clusterware installer as follows:

$ ./runInstaller -ignoreSysPrereqs

If your Linux distro is still giving you trouble, you might try downloading from where you can find complete ISOs for RHEL, various versions. You can also safely ignore memory warnings during startup. If you’re short on memory, it will certainly slow things down, but we’re hitchhikers, right?

You’ll be asked to specify the cluster configuration details. You’ll want route66-vip to be commented out, so if you haven’t done that and get an error to the effect of “route66-vip already in use”, go ahead and edit your /etc/hosts file.

I also got messages saying “route66-priv not reachable”. Check again that sshd is running, and possibly disable your firewall rules:

$ /etc/rc.d/init.d/iptables stop

Also verify that eth0:1 and eth0:2 are created. Have you rebooted since you created them? Be sure they’re still there with:

$ /sbin/ifconfig -a

Next, specify the network interface. This defaults to PRIVATE; just edit it and specify PUBLIC.

The next two steps ask for the OCR disk and voting disk. Be sure to specify external redundancy. This is your way of telling Oracle that you’ll take care of mirroring these important disks yourself, as loss of either of them will get you in deep doodoo. Of course, we’re hitchhikers, so we’re not trying to build a system that is never going to break down; rather, we want to get the feeling of the wind blowing in our hair. Click through to install and you should be in good shape.

At the completion, the installer will ask you to run the root.sh script. I found this worked fine up until the vipca (virtual IP configuration assistant), which I then ran manually. You’ll need to uncomment route66-vip in your /etc/hosts file as well. Once all configuration assistants have completed successfully, return to the installer and click continue, and it will do various other sanity checks of your cluster configuration.

Since the clusterware install is rather testy, you’ll probably be doing it a few times before you get it right. Here’s the cleanup if you have to run through it again:

$ rm -rf /etc/oracle
$ rm -rf /home/oracle/oraInventory
$ rm -rf $CRS_HOME

# these two commands cleanup the contents of these disks
$ /bin/dd if=/dev/zero of=/asmdisks/disk4 bs=1024k count=100
$ /bin/dd if=/dev/zero of=/asmdisks/disk5 bs=1024k count=20

$ rm /etc/rc.d/init.d/
$ rm /etc/rc.d/init.d/init.crsd
$ rm /etc/rc.d/init.d/init.cssd
$ rm /etc/rc.d/init.d/init.evmd
$ rm /etc/rc.d/rc3.d/
$ rm /etc/rc.d/rc5.d/

Now reboot the server. This will kill any clusterware processes still running.

If you’ve finished the Oracle Universal Installer at this point, and things seem to be working, check at the command line with the ps command:

$ ps auxw | grep 10.2.0s

root 3728 0.0 0.3 2172 708 ? S May30 0:00 /bin/su -l oracle -c sh -c 'ulimit -c unlimited; cd /home/oracle/product/10.2.0s/log/bebel/evmd; exec /home/oracle/product/10.2.0s/bin/evmd '
root 3736 0.0 5.0 509608 11240 ? S May30 4:51 /home/oracle/product/10.2.0s/bin/crsd.bin reboot
oracle 4047 0.0 2.8 192136 6284 ? S May30 0:00 /home/oracle/product/10.2.0s/bin/evmd.bin
root 4172 0.0 0.3 2164 708 ? S May30 0:00 /bin/su -l oracle -c /bin/sh -c 'ulimit -c unlimited; cd /home/oracle/product/10.2.0s/log/bebel/cssd; /home/oracle/product/10.2.0s/bin/ocssd || exit $?'
oracle 4173 0.0 0.4 4180 900 ? S May30 0:00 /bin/sh -c ulimit -c unlimited; cd /home/oracle/product/10.2.0s/log/bebel/cssd; /home/oracle/product/10.2.0s/bin/ocssd || exit $?
oracle 4234 0.0 3.9 180108 8808 ? S May30 0:16 /home/oracle/product/10.2.0s/bin/ocssd.bin
oracle 4476 0.0 2.0 24048 4576 ? S May30 0:00 /home/oracle/product/10.2.0s/bin/evmlogger.bin -o /home/oracle/product/10.2.0s/evm/log/ -l /home/oracle/product/10.2.0s/evm/log/evmlogger.log
oracle 6989 0.0 0.1 2676 404 ? S May30 0:00 /home/oracle/product/10.2.0s/opmn/bin/ons -d
oracle 6990 0.0 1.9 93224 4280 ? S May30 0:00 /home/oracle/product/10.2.0s/opmn/bin/ons -d
oracle 7891 0.0 0.2 3676 660 pts/3 S 23:35 0:00 grep 10.2.0s
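Eyeballing that listing gets old after a few install attempts. Below is a small, strictly illustrative shell helper (the daemon names come from the listing above; the function itself is made up, not part of Oracle's tooling) that greps a ps snapshot for the core clusterware daemons:

```shell
#!/bin/sh
# check_daemons is a hypothetical helper, not an Oracle utility.
# It greps a saved `ps` snapshot for the core CRS daemon names seen above.
check_daemons() {
  snapshot="$1"
  for d in crsd.bin evmd.bin ocssd.bin; do
    # grep -q exits 0 silently when the daemon name is present
    echo "$snapshot" | grep -q "$d" || { echo "MISSING: $d"; return 1; }
  done
  echo "all clusterware daemons present"
}
```

Run it against a live snapshot with `check_daemons "$(ps auxw)"`.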

Also list the nodes you have available to your clusterware software:

$ olsnodes -n
route66 1

The Oracle 10g install itself is very trivial, assuming you've installed Oracle before. Use the -ignoreSysPrereqs flag if necessary to start up the Oracle Universal Installer, and use the software-only option, as we'll be creating our RAC database by hand. Also select Enterprise Edition and things should proceed smoothly. Oracle will recognize that you have the clusterware installed, and let you know during the install.


5. set up the listener and an ASM instance


The listener.ora file is set up as usual; the only difference is you will include both route66 and route66-vip.





(ORACLE_HOME = /home/oracle/product/10.2.0)

(PROGRAM = extproc)













(ADDRESS = (PROTOCOL=TCP)(HOST=route66-vip)(PORT = 1521)(IP = FIRST))
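Only fragments of the listing survive above. As a hedged sketch, a minimal listener.ora for this setup might look like the following (the ADDRESS, ORACLE_HOME, and PROGRAM lines come from the article; the names and surrounding structure are assumptions):

```
# illustrative listener.ora -- listener name and SID details are assumptions
LISTENER_ROUTE66 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL=TCP)(HOST=route66-vip)(PORT = 1521)(IP = FIRST))
      (ADDRESS = (PROTOCOL=TCP)(HOST=route66)(PORT = 1521))
    )
  )

SID_LIST_LISTENER_ROUTE66 =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /home/oracle/product/10.2.0)
      (PROGRAM = extproc)
    )
  )
```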




You may also choose to use the actual IP addresses of these hostnames if you like. One other difference is that you will use Oracle 10g's new srvctl utility to start the listener.

$ srvctl start nodeapps -n route66

Ok, on to the fun stuff. It's time to configure our ASM instance. Our instance name will be +ASM1, and we'll set the ORACLE_SID as usual. In the init+ASM1.ora file specify:






asm_diskstring='/dev/raw/raw1','/dev/raw/raw2','/dev/raw/raw3'
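Only the disk-string line of the file survives above. A minimal init+ASM1.ora sketch might look like this (every parameter other than asm_diskstring is an assumption):

```
# illustrative init+ASM1.ora -- values other than asm_diskstring are assumptions
instance_type=asm
asm_diskstring='/dev/raw/raw1','/dev/raw/raw2','/dev/raw/raw3'
large_pool_size=12M
remote_login_passwordfile=exclusive
```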

Then in sqlplus startup the ASM instance:

SQL> startup nomount

Next tell ASM how we want to utilize the space we have by creating diskgroups:

SQL> create diskgroup DBDATA external redundancy disk '/dev/raw/raw1', '/dev/raw/raw2';

SQL> create diskgroup DBRECO external redundancy disk '/dev/raw/raw3';

Now that you have diskgroups, you want to make a note of them in your init+ASM1.ora file:

SQL> !echo "asm_diskgroups='DBDATA','DBRECO'" >> init+ASM1.ora

Now startup your ASM instance:

SQL> startup force

Exit sqlplus and let srvctl know about the new ASM instance:

$ srvctl add asm -n route66 -i +ASM1 -o /home/oracle/product/

Now you can shutdown in sqlplus, and startup with srvctl:

SQL> shutdown immediate

$ srvctl start asm -n route66

And lastly use the ps command to check for your new instance.

6. create an instance, start it, and register with srvctl


We're getting to our clustered database slowly but surely. We're just getting over the mountains now, and the open road is ahead of us.

We're going to create the first of our two instances and call it BEATNIK. Edit your initBEATNIK.ora as follows:
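The file contents were lost here. As a stand-in, a plausible minimal initBEATNIK.ora for a RAC database on ASM might look like this (every value below is an assumption, not the author's original; note the controlfile names get added later, after the create step):

```
# illustrative initBEATNIK.ora -- all values are assumptions
db_name=KEROUAC
instance_name=BEATNIK
instance_number=1
thread=1
cluster_database=true
cluster_database_instances=2
undo_management=auto
undo_tablespace=UNDOTBS1
db_create_file_dest=+DBDATA
db_recovery_file_dest=+DBRECO
background_dump_dest=/home/oracle/admin/BEATNIK/bdump
user_dump_dest=/home/oracle/admin/BEATNIK/udump
core_dump_dest=/home/oracle/admin/BEATNIK/cdump
```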


















You'll also have to create the /home/oracle/admin/BEATNIK/* and /home/oracle/admin/HIPPY/* directories.

Now edit the file crKEROUAC.sql as follows:








GROUP 2 size 10240k,

GROUP 3 size 10240k;
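Only the GROUP lines of the script survive above. A hedged reconstruction: with db_create_file_dest pointing at a diskgroup, Oracle-managed files need no explicit paths, so the whole script can be as small as this (everything except the two surviving GROUP lines is an assumption):

```sql
-- illustrative crKEROUAC.sql -- relies on Oracle-managed files;
-- only the GROUP 2 and GROUP 3 lines come from the article
create database KEROUAC
  logfile GROUP 1 size 10240k,
          GROUP 2 size 10240k,
          GROUP 3 size 10240k;
```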

There are other parameters you can specify in this create statement, such as maxinstances, maxlogmembers, and the sys password. However, I've tried to simplify it to make it easier to review and understand. Check the Oracle docs for details.

Now start up sqlplus and issue:

SQL> startup nomount pfile=/home/oracle/admin/BEATNIK/pfile/initBEATNIK.ora

Now get the names of your controlfiles from v$parameter and add them to the initBEATNIK.ora file. Then add a couple more parameters to your initBEATNIK.ora file:



Use sqlplus to stop and start the db again:

SQL> shutdown immediate

SQL> startup force

Now register our new database:

$ srvctl add database -d KEROUAC -o /home/oracle/product/
$ srvctl add instance -d KEROUAC -i BEATNIK -n route66

Now one more time, shut down with sqlplus, and then use srvctl to start the db. From now on srvctl can stop + start the db.

$ srvctl start instance -d KEROUAC -i BEATNIK

7. create a second instance & undo tablespace, & register it


Since the database is already created, you don't have to do that step again. At this point all you have to do is create another instance. First fire up sqlplus and create another undo tablespace:

SQL> create undo tablespace hippyundo datafile '+DBDATA' size 100m;

Make a copy of your init.ora for the hippy instance like this:

$ cp initBEATNIK.ora initHIPPY.ora

Then edit the instance-specific parameters (instance name and number, thread, undo tablespace), set your ORACLE_SID, and use sqlplus to start up:

SQL> startup

Finally register with srvctl:

$ srvctl add instance -i HIPPY -d KEROUAC -n route66

8. create data dictionary




10. Some Things to Understand


Automatic Storage Management

Global Cache Services

Global Enqueue Services

Clusterware procs: crsd, evmd, ocssd, oprocd

RAC processes: lms, lmd, lmon, lck0

GV$ data dictionary
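On that last point, the GV$ views are the cluster-wide versions of the familiar V$ views, keyed by an INST_ID column. A quick illustrative query (the view and columns are standard; the output naturally depends on your cluster):

```sql
-- list every instance known to the cluster; gv$instance is v$instance
-- plus an INST_ID column identifying the owning instance
select inst_id, instance_name, host_name, status
from gv$instance
order by inst_id;
```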

11. Further Reading


Clusterware & RAC Install & Configuration Guide for Linux

Clusterware & RAC Administration Deployment Guide

Oracle Technology Network –

Please visit or email me at

Thanks to Amit Poddar &

Apress, Oracle Press books

Part 10: RAC/Linux/Firewire – Summary


We covered a lot of ground in this article, and it should serve as an introduction to 9iRAC on cheap Linux hardware. There are plenty of other topics to dig into, including tuning, backup, SQL*Net setup and load balancing, I/O fencing, NIC failover, and so on.

9iRAC is *NOT* a silver bullet, as any good DBA knows. It will protect you from a single-instance failure caused by memory, a kernel panic, or an interconnect failure, but there are still cases where your database could go down: for instance, if the cluster manager software fails, or you lose a datafile either through human error or a storage subsystem problem. Further redundancy can help you, but there are risks associated with an accidentally deleted object, or even an Oracle software bug.

Take a look at some of the documents listed below for further reading.

Other References

Red Hat Linux Advanced Server 2.1 – docs

Oracle Technology Network’s Linux Center

Oracle Cluster Filesystem FAQ

Oracle Cluster File System docs

Internals of Real Application Clusters by Madhu Tumma

Oracle9i Real Application Clusters Concepts

Oracle9i Real Application Clusters Administration

Linux HOWTO Docs – Networking, kernel config, hardware compatibility etc


Part 9: RAC/Linux/Firewire – A Quick 9iRAC Example

A quick 9iRAC example

The names of the two instances I'm using are EAST and WEST, so I'll use them here to refer to commands you'll execute at the sqlplus prompt. This test assumes you're logged into the same schema on both instances. I used 'sys' but you can create your own schema if you like.

1. On WEST do:

SQL> create table rac_test (c1 number, c2 varchar2 (64));

2. On EAST do:

SQL> desc rac_test

3. On WEST do:

SQL> insert into rac_test values (1, 'SEAN');

SQL> insert into rac_test values (2, 'MIKE');

SQL> insert into rac_test values (3, 'JANE');

4. On EAST do: (notice no rows are returned)

SQL> select * from rac_test;

5. On WEST do:

SQL> commit;

6. On EAST do: (notice the rows appear now)

SQL> select * from rac_test;

7. On WEST do:

SQL> update rac_test set c2 = 'SHAWN' where c1 = 1;

8. On EAST do: (notice the error Oracle returns)

SQL> select * from rac_test where c1 = 1 for update nowait;

select * from rac_test where c1 = 1 for update nowait


ERROR at line 1:

ORA-00054: resource busy and acquire with NOWAIT specified

9. Again on EAST do: (notice Oracle waits…)

SQL> update rac_test set c2 = 'JOE' where c1 = 1;

10. On WEST do:

SQL> commit;

11. On EAST the transaction completes.

This simple exercise illustrates that two sessions running on different instances, against the same database, behave just like two sessions on a single instance or machine. This is key. Oracle must maintain transactional consistency. Oracle maintains the ACID properties: Atomicity, Consistency, Isolation, and Durability. Atomicity means a transaction either executes to completion or fails. Consistency means that the database operates in discrete transactions, and moves from one consistent state to another. Isolation means that the actions of other transactions are invisible until they are completed (committed). Finally, Durability means that once a transaction has completed and committed, it becomes permanent.

Our example above demonstrates that Oracle maintains all these promises, even in a clustered environment. How Oracle does this behind the scenes involves the null, shared, and exclusive locks we described above, and current and past image management. A lot of these details are reserved for a 9iRAC internals article. Take a look below at Madhu Tumma's article for more on 9iRAC internals.


Part 8: RAC/Linux/Firewire – Review of Clustered Features + Architecture

Review of Clustered Features + Architecture

Oracle 9iRAC has some important hardware and software components which are distinct from a standard single-instance setup.

On the hardware side, you have the IPC interconnect. On high-end specialized hardware such as Sun clusters, you have a proprietary interconnect. On our low-cost working man's clustering solution, you simply use a private or public ethernet network. The Oracle software components, which we'll describe in detail below, use this interconnect for interprocess communication, sending messages to synchronize caches, locks, and datablocks between each of the instances. This sharing of cache information is called Cache Fusion, and creates what Oracle calls the Global Cache.

Another important piece of the 9iRAC pie is the storage subsystem, and the Oracle Cluster Filesystem. What we've created with our cheap firewire shared drive is effectively a SAN, or Storage Area Network. In high-end systems this SAN would probably be built with Fibre Channel technology and switches. This storage subsystem is sometimes called a shared-disk subsystem. In order to write to the same disk being accessed by two machines, you have your choice of raw devices or OCFS. Raw devices can also be used with a single-instance database. They completely eliminate the OS filesystem, and all associated caching and management, providing direct raw access to the device. This type of arrangement is more difficult to manage. You don't have ordinary files to work with, so your backups and database management become a bit more complex. Also, adding a new datafile always means adding a new partition, so datafiles are more difficult to delete, resize, and rearrange. OCFS provides the same functionality, but with the flexibility and simplicity of a filesystem. Definitely the recommended option.

Oracle’s cluster manager (the oracm process we started above) coordinates activities between the cluster of instances. It monitors resources, and makes sure all the instances are in sync. If one becomes unavailable, it handles that eventuality.

With a 9iRAC database, aside from the normal SMON, PMON, LGWR, CKPT, + DBWR processes, you have a number of new processes which show up. They are as follows:


Process  Name                            Description
-------  ----                            -----------
LMSn     global cache services           controls the flow of data blocks + messages
LMON     global enqueue monitor          monitors global locks
LMD      global enqueue service daemon   manages remote resource requests
LCK      lock process                    manages local library and row cache requests
DIAG     diagnosability daemon           reports process failures to alert.log

In 9iRAC there are two important components which manage shared resources: Global Cache Services (GCS), which was the Block Server Process (BSP) in 8i OPS, and Global Enqueue Services (GES). GCS shares physical blocks from the buffer caches of each instance in the cluster, passing them back and forth as necessary. GES shares locking information. In the local context you have three types of resource locks: null, shared, and exclusive. A null lock generally escalates to other types of locks and, strange as it may seem, doesn't convey any access rights. Multiple instances can gain a null lock. Multiple instances can also acquire a shared lock for reading; however, while a resource is held in shared mode, other instances cannot write to it. An exclusive lock can be held by only one instance, and gives exclusive access for writing. In the global context, i.e. whenever Cache Fusion is invoked, or whenever two instances in a cluster want the same data, you have those same three locks in two modes: ownership of the current image, or of a past image. The issue of the past image comes up because in a single instance another session can construct the past image from undo; in the global context, however, this has to be put together and passed along to the other instance in the cluster.

The physical database in an Oracle 9iRAC environment has a lot in common with a single-instance database. In 9iRAC, each instance has its own ORACLE_HOME where the Oracle software lives, and its own ORACLE_BASE/admin/ORACLE_SID directory in OFA where the bdump, udump, cdump, pfile, and create directories all are. Each instance also has its own archive logs, if you are running in archivelog mode. In the example above I was not running in archivelog mode, for simplicity's sake. All the other files which make up your database are shared, including datafiles for data, datafiles for indexes, redo, system, temp, and other tablespaces, as well as controlfiles.


Part 7: RAC/Linux/Firewire – Cluster Database Setup

Cluster Database Setup

Setting up a clustered database is a lot like setting up a normal Oracle database. You have datafiles, controlfiles, redologs, rollback segments, and so on. With a clustered database you have a few new settings in your init.ora, and a second undo tablespace.

init.ora + config.ora setup

In a RAC environment, we finally see why Oracle has been recommending separate config.ora and init.ora files all these years. config.ora contains instance-specific parameters, such as the dump directories, the name of the undo tablespace (there is one for each instance), and the instance and thread number. init.ora contains all the parameters common to the database.

# config.ora for WEST instance








# config.ora for EAST instance








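Both config.ora listings were lost above. As a hedged sketch, the pair might look like this (the parameter names are the instance-specific ones the paragraph above describes; the values are assumptions):

```
# illustrative configWEST.ora -- values are assumptions
instance_name=WEST
instance_number=1
thread=1
undo_tablespace=UNDO_WEST
background_dump_dest=/home/oracle/admin/WEST/bdump
user_dump_dest=/home/oracle/admin/WEST/udump

# illustrative configEAST.ora -- values are assumptions
instance_name=EAST
instance_number=2
thread=2
undo_tablespace=UNDO_EAST
background_dump_dest=/home/oracle/admin/EAST/bdump
user_dump_dest=/home/oracle/admin/EAST/udump
```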
Notice that there are *TWO* undo tablespaces. In previous versions of Oracle this was the rollback segment tablespace. At any rate, each instance needs one. In the section on creating a RAC database below, you'll learn when and how these are created.

— initWEST.ora (on node 2 it’s initEAST.ora) —

# this is the only line that changes for each instance

ifile = /home/oracle/admin/WEST/pfile/configWEST.ora





# new Oracle9i parameter to set buffer cache size


# if you have more instances, this number will be higher


# see below for details






# some stuff for Java

dispatchers="(PROTOCOL=TCP)(SER=MODOSE)", "(PROTOCOL=TCP)(PRE=Oracle.aurora.server.GiopServer)", "(PROTOCOL=TCP)(PRE=Oracle.aurora.server.SGiopServer)", "(PROTOCOL=TCP)"


# notice db name is different than instance names










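Most of the initWEST.ora listing was lost; only the comments, the ifile line, and the dispatchers line survive. A hedged reconstruction matching those comments might look like this (every parameter value below is an assumption):

```
# illustrative initWEST.ora -- values are assumptions
ifile = /home/oracle/admin/WEST/pfile/configWEST.ora

# new Oracle9i parameter to set buffer cache size
db_cache_size=50m

# if you have more instances, this number will be higher
cluster_database_instances=2

cluster_database=true

# see below for details
filesystemio_options=directio

# notice db name is different than instance names
db_name=EASTWEST
```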

That should do it. You may have more or less memory, so adjust these values accordingly. Many of them are standard for non-RAC databases, so you'll already be familiar with them. The Oracle docs are decent at explaining these in more detail, so check them for more info.

The init.ora parameter filesystemio_options is no longer a hidden parameter as of Oracle 9.2. The setting I use above is from Wim Coekaerts' documentation. Arup Nanda says that in the OPS days, "setall" was the setting he usually used. Your mileage may vary.

Steve Adams' recommendations with respect to this parameter:

17. Creating the RAC database

This is much like creating a normal database. Most of the special stuff is in the init.ora and config.ora. The only new stuff is creating and enabling a separate undo tablespace, as well as a second set of redologs. Well, you're probably used to mirroring these anyway. Run this from node1.

— crEASTWEST.sql —

— send output to this logfile

spool crEASTWEST.log

startup nomount

— the big step, creates initial datafiles

create database EASTWEST

maxinstances 5

maxlogfiles 10

character set "we8iso8859p1"

datafile '/ocfs/oradata/EASTWEST/sysEASTWEST01.dbf' size 500m reuse

default temporary tablespace tempts tempfile '/ocfs/oradata/EASTWEST/tmpEASTWEST01.dbf' size 50m reuse

undo tablespace UNDO_WEST datafile '/ocfs/oradata/EASTWEST/undEASTWEST01.dbf' size 50m reuse

logfile

'/ocfs/oradata/EASTWEST/logEASTWEST01a.dbf' size 25m reuse,

'/ocfs/oradata/EASTWEST/logEASTWEST01b.dbf' size 25m reuse;

— create the data dictionary



— create the second undo tablespace

create undo tablespace UNDO_EAST datafile
'/ocfs/oradata/EASTWEST/undEASTWEST02.dbf' size 50m reuse;

— create a second set of redologs

alter database add logfile thread 2 '/ocfs/oradata/EASTWEST/logEASTWEST02a.dbf' size 25m reuse;

alter database add logfile thread 2 '/ocfs/oradata/EASTWEST/logEASTWEST02b.dbf' size 25m reuse;

alter database enable thread 2;

shutdown immediate;

18. Startup of all instances

The magic step. Not a lot to it if all the above steps went properly, but exciting nonetheless.

First on node1

$ sqlplus /nolog

SQL> connect / as sysdba

SQL> startup

Then the same thing on node2

$ sqlplus /nolog

SQL> connect / as sysdba

SQL> startup

Voila! You should be up and running at this point.

Errors. If you're getting ORA-32700 like this:

SQL> startup

ORACLE instance started.

Total System Global Area 93393188 bytes

Fixed Size 450852 bytes

Variable Size 88080384 bytes

Database Buffers 4194304 bytes

Redo Buffers 667648 bytes

ORA-32700: error occurred in DIAG Group Service

It probably means oracm didn't start properly. This would probably give you trouble *CREATING* a database as well.


Part 6: RAC/Linux/Firewire – Cluster Manager Setup

Cluster Manager Setup

The cluster manager software is how the Oracle instances communicate their activities. Obviously this is an important piece of the puzzle as well. Here I review the configs, and then show how to get it up and running on each node. I *DID NOT* patch the cluster manager with the db patch, but your mileage may vary.

Edit file $ORACLE_HOME/oracm/admin/cmcfg.ora


ClusterName=Oracle Cluster Manager, version 9i


PrivateNodeNames=zenith utopia

PublicNodeNames=zenith utopia







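Most of this file was also lost above; only the ClusterName and Node-name lines survive. A hedged reconstruction of a typical 9i cmcfg.ora for this two-node setup might look like this (the last five parameters and all their values are assumptions):

```
# illustrative cmcfg.ora -- only the first three lines come from the
# article; the remaining parameters and values are assumptions
ClusterName=Oracle Cluster Manager, version 9i
PrivateNodeNames=zenith utopia
PublicNodeNames=zenith utopia
ServicePort=9998
MissCount=20
HostName=zenith
WatchdogSafetyMargin=5000
WatchdogTimerMargin=60000
#CmDiskFile=/ocfs/oradata/cmdiskfile
```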
Note: if you patch oracm, remove the two Watchdog lines, and uncomment and use the CmDiskFile.

Edit file $ORACLE_HOME/oracm/admin/ocmargs.ora

watchdogd -d /dev/null -l 0

oracm /a:0

norestart 1800

Note: if you patch oracm, comment out the watchdogd line.

Now *AS ROOT* start up the cluster manager:

$ ./$ORACLE_HOME/oracm/bin/

You should see 8 processes with "ps -auxw | grep oracm". Note that if you are running RH8, there's a new ps which needs a special option "m" to notice threads. Apparently oracm is threaded (thanks Wim). This had me pulling my hair out for weeks, and I'm bald! Anyway, if that is the case, use "ps auxwm | grep oracm". One more little recommendation: oracm communicates via a port which you define. If you're using iptables/ipchains, or some other firewall solution, I would recommend disabling it, at least temporarily, until you know you've configured everything right. Then re-enable it, taking care to open just the ports you need.

15. Perform step 14 on node 2.
