
Your Database – A Long-Haul Truck Or A Sports Car?

Introduction

Databases, even those running high-powered software like Oracle, can be incredibly touchy, demanding, and even fragile in their own way. For that reason we must take care to optimize and tune them based on characteristic usage.

Data Warehouse vs OLTP

In broad terms, database applications are divided into two large classes: Data Warehouse and OLTP (online transaction processing – what a mouthful!). For the purposes of this discussion, let's call them a heavy-lifting truck and a sports car. Both have powerful engines, but they're used for very different purposes.

Our Data Warehouse is characterized by large transactions and huge joins, all of which work to produce very large, usually one-off reports; the reports may be run only a handful of times. These databases do mostly read-only activity, occasionally performing large data loads to add to the archive of data.

On the other hand, OLTP databases are characterized by thousands or tens of thousands of very small transactions. Web sites, for instance, exhibit this characteristic. Each transaction does something quite small, but in aggregate, thousands of users put quite a heavy, repeated load on the database, and they all expect instantaneous response!

We provide these two very different types of databases by laying out the database for its characteristic usage, and then tuning relevant parameters appropriately. We may allocate more memory to sorting, and less to the db cache for a Data Warehouse, whereas a large db cache might help us a lot with an OLTP application. We may enable parallel query, or partition large tables in our Data Warehouse application.
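To make that concrete, here's a rough sketch of how a few init.ora parameters might lean one way or the other. The values are illustrative assumptions only, not recommendations for your hardware:

# Data Warehouse leanings (illustrative values only)
sort_area_size=16777216        # generous sort memory for big reporting queries
db_cache_size=134217728        # modest buffer cache; most reads are one-off scans
parallel_max_servers=16        # let parallel query spread big scans across CPUs

# OLTP leanings (illustrative values only)
sort_area_size=524288          # small sorts are the norm
db_cache_size=536870912        # large buffer cache to keep hot blocks in memory
parallel_max_servers=0         # parallel query rarely helps tiny transactions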

Choose One Or The Other

If you have a database serving a web-based application, and you are trying to do large ad-hoc reports against it, you will run into trouble. All that memory you've set up to cache small web transactions will get wiped out with the first large report you run. What's more, the heavy disk I/O you perform reading huge tables, and then sorting and aggregating large datasets, will put a huge load on the database you set up specifically for your web application.

Conclusion

There are lots and lots of parameters and features in Oracle best suited to one or the other type of application, to the point where your database really will look like a sports car or a long-haul truck when you're done tuning it. For that reason it is really essential that these types of applications be divided up into separate instances of Oracle, preferably on separate servers.

Part 4: RAC/Linux/Firewire – Initial Oracle Setup

Initial Oracle Setup


Follow these instructions to get Oracle up and running on your new Linux box.

1. Set up the oracle account and environment as follows.

Create an oracle user on your linux box:

$ adduser oracle

2. Login to the oracle account and edit .oraenv9i as follows (assumes bash):

# oracle environment variables

export ORACLE_BASE="/home/oracle"
export ORACLE_HOME="/home/oracle/product/9.2.0"

# EAST on machine #2
export ORACLE_SID="WEST"

export PATH=$PATH:$ORACLE_HOME/bin
export LD_LIBRARY_PATH="$ORACLE_HOME/lib"
export LD_ASSUME_KERNEL=2.2.5

# US7ASCII is the default, but WE8ISO8859P1 supports more languages
export NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export ORACLE_TERM=xterm
export ORACLE_OWNER=oracle
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib:/usr/local/lib
export TNS_ADMIN=$ORACLE_HOME/network/admin

# run the RH compatibility stuff
. /usr/i386-glibc21-linux/bin/i386-glibc21-linux-env.sh

# setup Java
export JAVA_HOME=/usr/local/java
export CLASSPATH=$ORACLE_HOME/jdbc/lib/classes12.zip:$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib:.

Add this line to your .bash_profile:

. /home/oracle/.oraenv9i

Next install the glibc backward compatibility libs:

compat-egcs-6.2-1.1.2.14.i386.rpm
compat-glibc-6.2-2.1.3.2.i386.rpm
compat-libs-6.2-3.i386.rpm

3. Next install JDK.
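The details depend on which JDK build you grabbed from Sun, so treat this as a sketch only (the archive name below is hypothetical); the point is simply to end up with /usr/local/java matching the JAVA_HOME set in .oraenv9i above:

$ cd /usr/local
$ tar xvzf /tmp/j2sdk1.3.1-linux.tar.gz    # hypothetical archive name; use the file you downloaded
$ ln -s /usr/local/j2sdk1.3.1 /usr/local/java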

4. Edit the hosts file if necessary to include the two machines on your local network by name. I used "utopia" and "zenith".
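For example, a minimal pair of /etc/hosts entries might look like this; the utopia address is an assumption, and 192.168.0.9 for zenith just matches the ocfs.conf output shown later:

192.168.0.8    utopia
192.168.0.9    zenith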

5. Next run the Oracle installer as follows (assuming you unpacked the packages in /tmp). If you're not on the X-window console, connect with SSH and you should have X tunneling by default, and can display remotely:

$ /tmp/Disk1/install/linux/runInstaller

First install the cluster manager. You'll be prompted for the local and remote hostname, a quorum disk, as well as some other things. It doesn't matter what you enter right now, as we're going to go back and edit those files and do things by hand anyway.

6. Next go through the 9.2.0.1 software install. Here are some other notes with various weblinks. Be sure to select "Enterprise Edition" and also "Software Only Install".

I got started with this Oracle 8 install doc (link), which details which compatibility libraries you'll need, how to set up the Oracle account, environment variables, and Java.

I ran into some trouble with LD_ASSUME_KERNEL, and compat libs:

You'll encounter problems with the context makefile, and get this error: "Error in invoking target install of makefile /opt/oracle/product/9.2.0/ctx/lib/ins_ctx.mk". Sad but true, this is *NORMAL* behavior. I suspect making the installer run flawlessly isn't at the top of Oracle's priority list. (link)

I also encountered problems with Oracle Net Configuration Assistant: "Can't find $ORACLE_HOME/jre/1.1.8/bin/../bin/i586/green_threads/jre", so I added this symlink:

$ ln -s $ORACLE_HOME/jre/1.1.8/bin/i686 $ORACLE_HOME/jre/1.1.8/bin/i586

I got a similar error for libjava.so:

$ ln -s $ORACLE_HOME/jre/1.1.8/lib/i686 $ORACLE_HOME/jre/1.1.8/lib/i586

Though you'll probably want to create your own database from scratch later, it's sometimes instructive to let the Oracle Database Configuration Assistant create a starter one for you, and look at what options it uses. As with the rest, you'll run into an error. This time it is "ORA-27123 unable to attach to shared memory segment: Oracle Database Configuration Assistant". Fix it by doing the following as root on your linux box:

$ cat /proc/sys/kernel/shmmax

33554432

$ echo `expr 1024 \* 1024 \* 1024` > /proc/sys/kernel/shmmax

$ cat /proc/sys/kernel/shmmax

1073741824

Obviously you'll have to go through this process (living hell?) on both boxes you'll be using in your cluster.

Next rerun the installer and specify the location of the itty bitty 235M patch, and install that.

7. Enable RAC in Oracle9i

The Real Application Cluster feature is *NOT* enabled by default. Here's how you enable it:

As the oracle user:

$ cd $ORACLE_HOME/rdbms/lib

$ make -f ins_rdbms.mk rac_on

$ make -f ins_rdbms.mk ioracle

As root set permissions on rac_on

$ chown oracle /etc/rac_on

$ chgrp dba /etc/rac_on

Part 1 – Introduction

Part 2 – Basic Costs + Hardware Platform Outline

Part 3 – Software Requirements, Versions, etc

Part 4 – Initial Oracle Setup

Part 5 – Firewire + OCFS Setup

Part 6 – Cluster Manager Setup

Part 7 – Cluster Database Setup

Part 8 – Review of Clustered Features + Architecture

Part 9 – A quick 9iRAC example

Part 10 – Summary

Part 5: RAC/Linux/Firewire – Firewire + OCFS Setup

Firewire + OCFS Setup


In this installment, we’ll discuss how to get the Firewire drive shared between your two Linux boxes.

8. Test Firewire drive

At this point you can test the firewire drive if you like, with the standard Linux driver. You won’t be able to share the drive between the two nodes yet, however.

As root do the following:

$ modprobe ohci1394

$ modprobe ieee1394

$ modprobe sbp2

$ modprobe scsi_mod

Grab a copy of rescan-scsi-bus.sh from here and run it:

http://www.fifi.org/cgi-bin/man2html/usr/share/man/man8/rescan-scsi-bus.sh.8.gz

Now partition it with fdisk:

$ fdisk /dev/sda

Now try making an ext2 filesystem with mke2fs

$ mke2fs /dev/sda1

Now mount it

$ mount -t ext2 /dev/sda1 /mnt/test

Now unmount it

$ umount /mnt/test

9. Linux Kernel Setup w/Firewire patch

The Linux kernel is a complex beast, and compiling it can often be a challenge. Though I like rolling my own, I downloaded the patched firewire source distro off of OTN, and try as I might, I could not get those compiled kernels to work. If anyone *DOES* get it to work, please send me their “.config” from the kernel source directory. Also I’ve tried to encourage the Oracle/Linux Firewire team to build a patch-only distro which can be applied against a standard Linux source tree. No luck yet.

Assuming you’re not going to roll your own, just download linux-2.4.20rc2-orafw-up.tar.gz from here:

http://otn.oracle.com/tech/linux/open_source.html

Move to the “/” or root directory, and untar the file:

$ tar xvzf linux-2.4.20rc2-orafw-up.tar.gz

Edit your /etc/lilo.conf or /etc/grub.conf file to include the new kernel. Do *NOT* make it the default kernel, it may not boot.
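As a sketch, a grub.conf stanza for the new kernel might look roughly like this; the image filename is an assumption (check what the tarball actually dropped into /boot), and the root device matches the /dev/hda2 root partition shown in the df output below:

title Oracle Firewire kernel (2.4.20-rc2-orafw)
        root (hd0,0)
        kernel /vmlinuz-2.4.20-rc2-orafw ro root=/dev/hda2    # path is relative to the /boot partition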

Reboot. If you come up again, you’re in luck, the kernel works for your machine. Next you want to edit your /etc/modules.conf to include these lines:

# options for oracle firewire patched kernel

options sbp2 sbp2_exclusive_login=0

post-install sbp2 insmod sd_mod

post-remove sbp2 rmmod sd_mod

As root, load the modules like this:

$ modprobe ieee1394

$ modprobe ohci1394

$ modprobe ide-scsi

$ modprobe sbp2

$ modprobe scsi_mod

If you’re having trouble seeing the device, grab a copy of rescan-scsi-bus.sh from here:
http://www.fifi.org/cgi-bin/man2html/usr/share/man/man8/rescan-scsi-bus.sh.8.gz

If you want to partition, now is a good time. Use fdisk as root like this:

$ fdisk /dev/sda

If you have other SCSI devices, it may be /dev/sdb or /dev/sdc and so on.

10. Go through steps 1-8 on node 2

11. Cluster Filesystem setup (OCFS)

If you wanna play around, use mke2fs on one of the partitions you created with fdisk, and then mount the partition on machine A. Then mount the partition again on machine B. Create a file on one of the two boxes. The other machine *WON'T* reflect it. This is equivalent to unplugging a disk which is mounted, such as a USB device, or some such. You can, and probably *HAVE*, corrupted the filesystem. That's ok, because we don't have anything important on the disk yet. Ok, unmount on both machines. If you have trouble, you may need to reboot.

Having gone through the above example, you know why OCFS is so important. Ok, now the fun part. Install OCFS. There are good docs to be found in the linux_ocfs.pdf file here:

http://download.oracle.com/otn/linux/code/ocfs/linux_ocfs.pdf

Without RedHat Advanced Server, the RPMs are *NOT* going to work. Just grab a copy of ocfs-1.0-up.o and put it in
/lib/modules/2.4.20-rc2-orafw/kernel/fs.

Use ocfstool to create the /etc/ocfs.conf file. The pdf doc listed above is pretty good at explaining this.
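If you'd rather see what ocfstool writes out, here's a rough example /etc/ocfs.conf for node zenith; the fields mirror the load_ocfs output shown below, your guid will differ since it's generated per node, and ocfstool may include a few additional fields as well:

# /etc/ocfs.conf (example, written by ocfstool)
node_name = zenith
ip_address = 192.168.0.9
ip_port = 7000
guid = 72C2AF5CA29FA17CB9CB000AE6312F24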

Load the ocfs kernel module with load_ocfs. If everything goes right it will tell you like this:

$ cd /lib/modules/2.4.20-rc2-orafw/kernel/fs

$ load_ocfs

/sbin/insmod ocfs node_name=zenith ip_address=192.168.0.9 ip_port=7000 cs=1865 guid=72C2AF5CA29FA17CB9CB000AE6312F24

Using /lib/modules/2.4.20-rc2-orafw/kernel/fs/ocfs.o

Next make the filesystem. ocfstool can do this too.

$ mkfs.ocfs -F -b 128 -L /ocfs -m /ocfs -u 1001 -g 1001 -p 0775 /dev/sda1

And finally mount the filesystem!

$ mount -t ocfs /dev/sda1 /ocfs

$ df -k

Filesystem 1K-blocks Used Available Use% Mounted on

/dev/hda2 55439548 20835260 31788096 40% /

/dev/hda1 101089 18534 77336 20% /boot

none 112384 0 112384 0% /dev/shm

/dev/cdrom 122670 122670 0 100% /mnt/cdrom

/dev/sda1 60049024 30080 60018944 1% /ocfs

12. Perform step 11 on node 2.

13. Test ocfs

Here we quickly verify that a file created on one instance is viewable on another.

On node1 do:

$ cd /ocfs

$ touch mytestfile

On node2 do:

$ cd /ocfs

$ ls

mytestfile

$

You’ll see to your astonishment that the file is now visible on node 2!


Part 6: RAC/Linux/Firewire – Cluster Manager Setup

Cluster Manager Setup


The cluster manager software is how the Oracle instances communicate their activities. Obviously this is an important piece to the puzzle as well. Here I review the configs, and then show how to get it up and running on each node. I *DID NOT* patch the cluster manager with the 9.2.0.2 db patch, but your mileage may vary.

Edit file $ORACLE_HOME/oracm/admin/cmcfg.ora

HeartBeat=10000

ClusterName=Oracle Cluster Manager, version 9i

PollInterval=300

PrivateNodeNames=zenith utopia

PublicNodeNames=zenith utopia

ServicePort=9998

HostName=zenith

#CmDiskFile=/ocfs/oradata/foo

MissCount=5

WatchdogSafetyMargin=3000

WatchdogTimerMargin=6000

Note, if you patch oracm to 9.2.0.2, remove the two Watchdog lines, and uncomment and use the CmDiskFile.

Edit file $ORACLE_HOME/oracm/admin/ocmargs.ora

watchdogd -d /dev/null -l 0

oracm /a:0

norestart 1800

Note, if you patch oracm to 9.2.0.2, comment out the watchdog line.

Now *AS ROOT* start up the cluster manager:

$ $ORACLE_HOME/oracm/bin/ocmstart.sh

You should see 8 processes with "ps -auxw | grep oracm". Note that if you are running RH8, there's a new ps which needs a special option "m" to notice threads. Apparently oracm is threaded (thanks Wim). This had me pulling my hair out for weeks, and I'm bald! Anyway, if that is the case, use "ps auxwm | grep oracm".

One more little recommendation: oracm communicates via a port which you define. If you're using iptables/ipchains, or some other firewall solution, I would recommend disabling it, at least temporarily, until you know you've configured everything right. Then re-enable it, making sure you open just the ports you need.
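If you do need to knock the firewall down while you test, a minimal sketch on a Red Hat style box (assuming the stock iptables init scripts) looks like this:

$ service iptables stop       # stop the firewall for now
$ chkconfig iptables off      # keep it off across reboots; re-enable once the ports are sorted out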

15. Perform step 14 on node 2.


Asterisk Calling Card Applications

Asterisk is a powerful PBX solution, that we already know. But what else can it do? In this article we'll explain how to set up Asterisk to handle Call Data Records (CDR data) in MySQL. Once you have that configured, there are a number of calling card applications which can be integrated with Asterisk to provide you with the makings of a serious calling gateway.


Setup Asterisk CDR with MySQL

By default Asterisk pumps all its call data information to text-based log files. That's fine for normal use, but what if you want to put that data to use in a calling card application? First you have to get Asterisk to use a database. Luckily the support is already there; all you have to do is configure it.


Start by editing your cdr_manager.conf file as follows:


enabled = yes

Next edit your modules.conf file, and somewhere in the [modules] section, add:


load => cdr_addon_mysql.so

We’re going to compile this, don’t worry. Next edit your cdr_mysql.conf file in /etc/asterisk or create it if necessary:


[global]

hostname=localhost

dbname=asteriskcdrdb

user=astxuser

;user=

password=astxpass

;password=

port=3306

sock=/var/lib/mysql/mysql.sock

;sock=/tmp/mysql.sock

userfield=1

Next install MySQL. Luckily for all you lazy bums out there, this is the simplest of all. You’ll need to download three RPMs and install them. You’ll need the latest version of mysql-server, mysql-client and finally mysql-devel.


Next you’ll create a database called “asteriskcdrdb” with mysqladmin, create a table named “cdr” with the Asterisk provided script, and then set user grants.
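Roughly, that sequence looks like the following. The table-creation script is the Asterisk-provided one mentioned above, but its path and filename here are assumptions; the user and password match the cdr_mysql.conf shown earlier:

$ mysqladmin create asteriskcdrdb
$ mysql asteriskcdrdb < asterisk-addons/cdr_mysql_table.sql    # script path/name is an assumption
$ mysql
mysql> GRANT INSERT,SELECT ON asteriskcdrdb.* TO 'astxuser'@'localhost' IDENTIFIED BY 'astxpass';
mysql> FLUSH PRIVILEGES;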


Now it's time to compile the asterisk-addons package. Be sure you have the zlib-devel and mysql-devel packages installed on your system or you may get errors. Check out the source from CVS. I got some strange errors which I had to track down on the email lists, and then edit the makefile as shown below:


CFLAGS+=-DMYSQL_LOGUNIQUEID
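Putting that together, the checkout and build looked roughly like this; the CVS server details are assumptions based on the Digium instructions of the time, and the CFLAGS line above is the edit I added to the Makefile:

$ export CVSROOT=:pserver:anoncvs@cvs.digium.com:/usr/cvsroot   # server path is an assumption
$ cvs login                   # the anonymous password was "anoncvs" at the time
$ cvs checkout asterisk-addons
$ cd asterisk-addons
$ make
$ make install                # installs cdr_addon_mysql.so alongside the other Asterisk modules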

Now stop asterisk, and start it up again, and monitor the asterisk logfile for errors as follows:


tail -f /var/log/asterisk/messages

You can finally verify that you are dumping cdr information into mysql as follows:


$ mysql asteriskcdrdb

mysql> select uniqueid, src, calldate from cdr;



There should be one entry for every call. Make some calls to local extensions and verify that records show up here. New cdr records will still show up in the /var/log/asterisk/cdr-csv/Master.csv file. Not sure if this can be disabled.


Calling Card Applications


ASTCC

Though the homepage is just a voip-info wiki page and the download is available through CVS, this calling card application was updated in late December 2004. This application seems to be the winner in terms of popularity on the voip-info wiki. It comes from Digium, it supports MySQL, and setup is pretty straightforward.

AreskiCC

With a strange name, it nevertheless seems a pretty complete system. Last updated at the end of December 2004, it includes a web interface, though no support for MySQL. That's fine, but my MySQL setup instructions above will need to change slightly, as you'll need to configure Asterisk to dump CDR data into Postgres instead.


Asterisk Billing – Prepaid application

Last updated in July, this one gave me trouble compiling. There is a basic sourceforge download page, but no real homepage. I'm guessing this one is still sort of in the development stages. Also, it doesn't come with any sound files, so you'll have to record your own, or *borrow* some from these other applications.

Part 7: RAC/Linux/Firewire – Cluster Database Setup

Cluster Database Setup


Setting up a clustered database is a lot like setting up a normal Oracle database. You have datafiles, controlfiles, redologs, rollback segments, and so on. With a clustered database you have a few new settings in your init.ora, and a second undo tablespace.

init.ora + config.ora setup

In a RAC environment, we finally see why Oracle has been recommending separate config.ora and init.ora files all these years. config.ora contains instance-specific parameters, such as the dump directories, the name of the undo tablespace (there is one for each instance), and the instance and thread number. init.ora contains all the parameters common to the database.

# config.ora for WEST instance

background_dump_dest=/home/oracle/admin/WEST/bdump

core_dump_dest=/home/oracle/admin/WEST/cdump

user_dump_dest=/home/oracle/admin/WEST/udump

undo_tablespace=UNDO_WEST

instance_name=WEST

instance_number=1

thread=1

# config.ora for EAST instance

background_dump_dest=/home/oracle/admin/EAST/bdump

core_dump_dest=/home/oracle/admin/EAST/cdump

user_dump_dest=/home/oracle/admin/EAST/udump

undo_tablespace=UNDO_EAST

instance_name=EAST

instance_number=2

thread=2

Notice that there are *TWO* undo tablespaces. In previous versions of Oracle this was the rollback segment tablespace. At any rate, each instance needs one. In the "Creating the RAC database" section below, you'll learn when and how these are created.

– initWEST.ora (on node 2 it’s initEAST.ora) –

# this is the only line that changes for each instance

ifile = /home/oracle/admin/WEST/pfile/configWEST.ora

control_files=(/ocfs/oradata/EASTWEST/cntlEASTWEST01.ctl, /ocfs/oradata/EASTWEST/cntlEASTWEST02.ctl, /ocfs/oradata/EASTWEST/cntlEASTWEST03.ctl)

db_block_size=8192

# new Oracle9i parameter to set buffer cache size

db_cache_size=37108864

# if you have more instances, this number will be higher

cluster_database_instances=2

# see below for details

filesystemio_options="directIO"

open_cursors=300

timed_statistics=TRUE

db_domain=localdomain

remote_login_passwordfile=EXCLUSIVE

# some stuff for Java

dispatchers="(PROTOCOL=TCP)(SER=MODOSE)", "(PROTOCOL=TCP)(PRE=Oracle.aurora.server.GiopServer)", "(PROTOCOL=TCP)(PRE=Oracle.aurora.server.SGiopServer)", "(PROTOCOL=TCP)"

compatible=9.0.0

# notice db name is different than instance names

db_name=EASTWEST

java_pool_size=12428800

large_pool_size=10485760

shared_pool_size=47440512

processes=150

fast_start_mttr_target=300

resource_manager_plan=SYSTEM_PLAN

sort_area_size=524288

undo_management=AUTO

cluster_database=true

That should do it. You may have more or less memory so adjust these values accordingly. Many of them are standard for non-RAC databases, so you’ll already be familiar with them. The Oracle docs are decent on explaining these in more detail, so check them for more info.

The init.ora parameter filesystemio_options is no longer a hidden parameter as of Oracle 9.2. The setting I use above is from Wim Coekaerts' documentation. Arup Nanda says that in the OPS days, "setall" was the setting he usually used. Your mileage may vary.

Steve Adams' recommendations with respect to this parameter:

http://www.ixora.com.au/notes/filesystemio_options.htm
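A quick sanity check once an instance is up is to ask it what it's actually using (plain SQL*Plus, nothing RAC-specific):

SQL> show parameter filesystemio_options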

17. Creating the RAC database

This is much like creating a normal database. Most of the special stuff is in the init.ora and config.ora. The only new stuff is creating and enabling a separate undo tablespace, as well as a second set of redologs. Well, you're probably used to mirroring those anyway. Run this from node1.

-- crEASTWEST.sql

-- send output to this logfile
spool crEASTWEST.log

startup nomount

-- the big step, creates initial datafiles
create database EASTWEST
maxinstances 5
maxlogfiles 10
character set "we8iso8859p1"
datafile '/ocfs/oradata/EASTWEST/sysEASTWEST01.dbf' size 500m reuse
default temporary tablespace tempts tempfile '/ocfs/oradata/EASTWEST/tmpEASTWEST01.dbf' size 50m reuse
undo tablespace UNDO_WEST datafile '/ocfs/oradata/EASTWEST/undEASTWEST01.dbf' size 50m reuse
logfile '/ocfs/oradata/EASTWEST/logEASTWEST01a.dbf' size 25m reuse,
'/ocfs/oradata/EASTWEST/logEASTWEST01b.dbf' size 25m reuse;

-- create the data dictionary
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql

-- create the second undo tablespace
create undo tablespace UNDO_EAST datafile '/ocfs/oradata/EASTWEST/undEASTWEST02.dbf' size 50m reuse;

-- create a second set of redologs
alter database add logfile thread 2 '/ocfs/oradata/EASTWEST/logEASTWEST02a.dbf' size 25m reuse;
alter database add logfile thread 2 '/ocfs/oradata/EASTWEST/logEASTWEST02b.dbf' size 25m reuse;

alter database enable thread 2;

shutdown immediate;

18. Startup of all instances

The magic step. Not a lot to it if all the above steps went properly, but exciting none the less.

First on node1

$ sqlplus /nolog

SQL> connect / as sysdba

SQL> startup

Then the same thing on node2

$ sqlplus /nolog

SQL> connect / as sysdba

SQL> startup

Voila! You should be up and running at this point.

Errors. If you’re getting ORA-32700 like this:

SQL> startup

ORACLE instance started.

Total System Global Area 93393188 bytes

Fixed Size 450852 bytes

Variable Size 88080384 bytes

Database Buffers 4194304 bytes

Redo Buffers 667648 bytes

ORA-32700: error occurred in DIAG Group Service

It probably means oracm didn't start properly. This would probably give you trouble *CREATING* a database as well.



Part 8: RAC/Linux/Firewire – Review of Clustered Features + Architecture

Review of Clustered Features + Architecture

Oracle 9iRAC has some important hardware and software components which are distinct from a standard single-instance setup.

On the hardware side, you have the IPC interconnect. On high-end specialized hardware such as Sun clusters, you have a proprietary interconnect. On our low-cost working-man's clustering solution, you simply use a private or public ethernet network. The Oracle software components, which we'll describe in detail below, use this interconnect for interprocess communication, sending messages to synchronize caches, locks, and datablocks between each of the instances. This sharing of cache information is called Cache Fusion, and creates what Oracle calls the Global Cache.

Another important piece of the 9iRAC pie is the storage subsystem and the Oracle Cluster File System. What we've created with our cheap firewire shared drive is effectively a SAN, or Storage Area Network. In high-end systems this SAN would probably be built with fiber-channel technology and switches. This storage subsystem is sometimes called a shared-disk subsystem. In order to write to the same disk being accessed by two machines, you have your choice of raw devices or OCFS. Raw devices can also be used with a single-instance database. They completely eliminate the OS filesystem, and all associated caching and management, providing direct raw access to the device. This type of arrangement is more difficult to manage. You don't have datafiles to work with, so your backups and database management become a bit more complex. Also, adding a new datafile always means adding a new partition, so they are more difficult to delete, resize, and rearrange. OCFS provides you this functionality, but with the flexibility and simplicity of a filesystem. Definitely the recommended option.

Oracle’s cluster manager (the oracm process we started above) coordinates activities between the cluster of instances. It monitors resources, and makes sure all the instances are in sync. If one becomes unavailable, it handles that eventuality.

With a 9iRAC database, aside from the normal SMON, PMON, LGWR, CKPT, + DBWR processes, you have a number of new processes which show up. They are as follows:

PROCESS  NAME                           DESCRIPTION
-------  -----------------------------  --------------------------------------------
LMSn     global cache services          controls the flow of data blocks + messages
LMON     global enqueue monitor         monitors global locks
LMD      global enqueue service daemon  manages remote resource requests
LCK      lock process                   manages local library and row cache requests
DIAG     diagnosability daemon          reports process failures to alert.log

In 9iRAC there are two important components which manage shared resources: Global Cache Services (GCS) (the Block Server Process or BSP in 8i OPS) and Global Enqueue Services (GES). GCS shares physical blocks from the buffer caches of each instance in the cluster, passing them back and forth as necessary. GES shares locking information.

In the local context you have three types of resource locks: null, shared, and exclusive. A null lock generally escalates to other types of locks, and strange as it may seem, doesn't convey any access rights; multiple instances can gain a null lock. Multiple instances can acquire a shared lock for reading, however, while it is in shared mode, other instances cannot write to it. An exclusive lock can be held by only one instance, and gives exclusive access for writing.

In the global context, i.e. whenever Cache Fusion is invoked, or whenever two instances in a cluster want the same data, you have those same three locks in two modes: ownership of the current image, or of a past image. The issue of the past image comes up because in a single instance another session can construct the past image from undo, whereas in the global context it has to be put together and passed along to the other instance in the cluster.
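If you want to see this cluster-wide view from SQL*Plus yourself, the gv$ family of views exposes each instance's v$ data across the whole cluster. For example, run from either node:

SQL> select inst_id, instance_name, host_name, status from gv$instance;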

The physical database in an Oracle 9iRAC environment has a lot in common with a single-instance database. In 9iRAC, each instance has its own ORACLE_HOME where the Oracle software lives, and its own ORACLE_BASE/admin/ORACLE_SID directory in OFA where the bdump, udump, cdump, pfile, and create directories are. Each instance also has its own archive logs, if you are running in archivelog mode. In the example above I was not running in archivelog mode, for simplicity's sake. All the other files which make up your database are shared, including datafiles for data, datafiles for indexes, redo, system, temp, and other tablespaces, as well as controlfiles.

