Part 5: RAC/Linux/Firewire – Firewire + OCFS Setup

In this installment, we’ll discuss how to get the Firewire drive shared between your two Linux boxes.

8. Test Firewire drive

At this point you can test the firewire drive if you like, with the standard Linux driver. You won’t be able to share the drive between the two nodes yet, however.

As root, do the following:

$ modprobe ohci1394

$ modprobe ieee1394

$ modprobe sbp2

$ modprobe scsi_mod

Grab a copy of rescan-scsi-bus.sh from here:

http://www.fifi.org/cgi-bin/man2html/usr/share/man/man8/rescan-scsi-bus.sh.8.gz

Then run it as root:

$ sh rescan-scsi-bus.sh
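At this point the drive should show up as a SCSI disk. A couple of quick checks (assuming you have no other SCSI disks, it will appear as /dev/sda):

$ cat /proc/scsi/scsi    # the firewire disk should be listed here
$ dmesg | grep -i sbp2   # look for the sbp2 driver attaching to the drive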

Now partition it with fdisk:

$ fdisk /dev/sda
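fdisk is interactive. A minimal session that creates one partition spanning the whole disk looks roughly like this (prompts abbreviated; your output will differ):

$ fdisk /dev/sda
Command (m for help): n          # new partition
Primary or extended: p           # primary
Partition number (1-4): 1
First cylinder: <Enter>          # accept the default
Last cylinder: <Enter>           # accept the default (whole disk)
Command (m for help): w          # write the table and exit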

Now try making an ext2 filesystem with mke2fs:

$ mke2fs /dev/sda1

Now create a mount point and mount it:

$ mkdir -p /mnt/test

$ mount -t ext2 /dev/sda1 /mnt/test

Now unmount it:

$ umount /mnt/test
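If you want a bit more assurance before moving on, mount it again and do a quick write/read test (the file name here is just for illustration):

$ mount -t ext2 /dev/sda1 /mnt/test
$ echo hello > /mnt/test/scratch.txt
$ cat /mnt/test/scratch.txt
hello
$ rm /mnt/test/scratch.txt
$ umount /mnt/test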

9. Linux Kernel Setup w/Firewire patch

The Linux kernel is a complex beast, and compiling it can often be a challenge. Though I like rolling my own, I downloaded the patched firewire source distribution from OTN, and try as I might, I could not get those compiled kernels to work. If anyone *DOES* get it to work, please send me the “.config” from your kernel source directory. I’ve also tried to encourage the Oracle/Linux Firewire team to build a patch-only distribution which can be applied against a standard Linux source tree. No luck yet.

Assuming you’re not going to roll your own, just download linux-2.4.20rc2-orafw-up.tar.gz from here:

http://otn.oracle.com/tech/linux/open_source.html

Move to the “/” or root directory, and untar the file:

$ cd /

$ tar xvzf linux-2.4.20rc2-orafw-up.tar.gz
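The tarball drops a prebuilt kernel into /boot and its modules into /lib/modules. A quick sanity check that everything landed (the module directory name matches the 2.4.20-rc2-orafw paths used later in this article; the exact files in /boot may be named differently):

$ ls /lib/modules | grep orafw
2.4.20-rc2-orafw
$ ls /boot    # look for the matching kernel image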

Edit your /etc/lilo.conf or /etc/grub.conf file to include the new kernel. Do *NOT* make it the default kernel; it may not boot.
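For grub, the new entry would look something like this. This is a sketch only: the kernel image name and root= device are assumptions, so match them to what actually landed in /boot and to your own root partition (add an initrd line if the tarball supplies one):

title Oracle Firewire (2.4.20-rc2-orafw)
        root (hd0,0)
        kernel /vmlinuz-2.4.20-rc2-orafw ro root=/dev/hda2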

Reboot. If the machine comes back up, you’re in luck: the kernel works for your hardware. Next, edit your /etc/modules.conf to include these lines (sbp2_exclusive_login=0 is the key setting; it lets both nodes log into the firewire drive at the same time):

# options for oracle firewire patched kernel

options sbp2 sbp2_exclusive_login=0

post-install sbp2 insmod sd_mod

post-remove sbp2 rmmod sd_mod

As root, load the modules like this:

$ modprobe ieee1394

$ modprobe ohci1394

$ modprobe ide-scsi

$ modprobe sbp2

$ modprobe scsi_mod
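As before, you can confirm the modules loaded and the kernel sees the drive:

$ lsmod | grep -E 'sbp2|ohci1394'    # modules resident?
$ cat /proc/scsi/scsi                # drive attached as a SCSI device?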

If you’re having trouble seeing the device, grab a copy of rescan-scsi-bus.sh from here:
http://www.fifi.org/cgi-bin/man2html/usr/share/man/man8/rescan-scsi-bus.sh.8.gz

If you want to partition, now is a good time. Use fdisk as root like this:

$ fdisk /dev/sda

If you have other SCSI devices, it may be /dev/sdb or /dev/sdc and so on.

10. Go through steps 1-9 on node 2 (node 2 needs the patched kernel from step 9 as well)

11. Cluster Filesystem setup (OCFS)

If you wanna play around, use mke2fs on one of the partitions you created with fdisk, and then mount the partition on machine A. Then mount the partition again on machine B. Create a file on one of the two boxes. The other machine *WON’T* reflect it. This is equivalent to unplugging a disk which is mounted, such as a USB device or some such. You can and probably *HAVE* corrupted the filesystem. That’s ok, because we don’t have anything important on the disk yet. Ok, unmount on both machines. If you have trouble, you may need to reboot.
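For the curious, the experiment looks like this. Don’t do this with anything you care about on the disk:

# on node 1:
$ mke2fs /dev/sda1
$ mount -t ext2 /dev/sda1 /mnt/test

# on node 2:
$ mount -t ext2 /dev/sda1 /mnt/test

# on node 1:
$ touch /mnt/test/myfile

# on node 2: myfile does not appear, and the two kernels are now
# scribbling over each other's view of the filesystem
$ ls /mnt/test

# when you're done, unmount on both nodes
$ umount /mnt/test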

Having gone through the above example, you know why OCFS is so important. Ok, now the fun part. Install OCFS. There are good docs to be found in the linux_ocfs.pdf file here:

http://download.oracle.com/otn/linux/code/ocfs/linux_ocfs.pdf

Without RedHat Advanced Server, the RPMs are *NOT* going to work. Just grab a copy of ocfs-1.0-up.o and put it in /lib/modules/2.4.20-rc2-orafw/kernel/fs.

Use ocfstool to create the /etc/ocfs.conf file. The pdf doc listed above is pretty good at explaining this.
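For reference, the file it writes looks something like this. The values below are lifted from the load_ocfs output further down, so treat them as an example only; yours will differ, and ocfstool may write additional parameters:

# /etc/ocfs.conf (generated by ocfstool; values are node-specific)
node_name = zenith
ip_address = 192.168.0.9
ip_port = 7000
guid = 72C2AF5CA29FA17CB9CB000AE6312F24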

Load the ocfs kernel module with load_ocfs. If everything goes right, it will report something like this:

$ cd /lib/modules/2.4.20-rc2-orafw/kernel/fs

$ load_ocfs

/sbin/insmod ocfs node_name=zenith ip_address=192.168.0.9 ip_port=7000 cs=1865 guid=72C2AF5CA29FA17CB9CB000AE6312F24

Using /lib/modules/2.4.20-rc2-orafw/kernel/fs/ocfs.o
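A quick lsmod confirms the module is resident:

$ lsmod | grep ocfs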

Next make the filesystem; ocfstool can do this too. In the command below, -F forces the format, -b 128 sets a 128KB block size, -L and -m set the volume label and mount point, and -u, -g, and -p set the owning uid, gid, and permissions (match the uid and gid to your oracle user and group):

$ mkfs.ocfs -F -b 128 -L /ocfs -m /ocfs -u 1001 -g 1001 -p 0775 /dev/sda1

And finally, create the mount point and mount the filesystem!

$ mkdir -p /ocfs

$ mount -t ocfs /dev/sda1 /ocfs

$ df -k

Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/hda2             55439548  20835260  31788096  40% /
/dev/hda1               101089     18534     77336  20% /boot
none                    112384         0    112384   0% /dev/shm
/dev/cdrom              122670    122670         0 100% /mnt/cdrom
/dev/sda1             60049024     30080  60018944   1% /ocfs

12. Perform step 11 on node 2 (just load the module and mount; do *NOT* run mkfs.ocfs again, since the filesystem only needs to be made once).

13. Test OCFS

Here we quickly verify that a file created on one node is visible on the other.

On node1 do:

$ cd /ocfs

$ touch mytestfile

On node2 do:

$ cd /ocfs

$ ls

mytestfile

$

You’ll see to your astonishment that the file is now visible on node 2!

Part 1 – Introduction

Part 2 – Basic Costs + Hardware Platform Outline

Part 3 – Software Requirements, Versions, etc

Part 4 – Initial Oracle Setup

Part 5 – Firewire + OCFS Setup

Part 6 – Cluster Manager Setup

Part 7 – Cluster Database Setup

Part 8 – Review of Clustered Features + Architecture

Part 9 – A quick 9iRAC example

Part 10 – Summary