The karma project is no longer maintained.
If you’d like more information about Oracle Consulting please visit our Oracle Professional Services page.
The beauty of reading a book from a publisher not sanctioned by Oracle, by an author who doesn’t work for Oracle, is that they can openly mention bugs. And there are oh-so-many! This book is a superb introduction to the Cost Based Optimizer, and is not afraid to discuss its many shortcomings. In so doing it also explains how to patch up those shortcomings by giving the CBO more information, either by creating a histogram here and there, or by using the DBMS_STATS package to insert your own statistics in those specific cases where you need to.
Another interesting thing is how this book illustrates, though accidentally, the challenges of proprietary software systems. Much of this book, and the author’s time, is spent reverse engineering the CBO, Oracle’s bread-and-butter optimizing engine. Source code and details about its inner workings are not published or available. And of course that’s intentional. But what’s clear page after page in this book is that DBAs and system tuners, going about their day-to-day tasks, really need inside information about what the optimizer is doing, and so this book goes on a long journey to illuminate much of what the CBO does, or in some cases to provide very educated guesses and some speculation. In contrast, as we know and hear about often, the Open Source alternative provides free access to source code, though not necessarily to the goods themselves. What this means in a very real way is that a book like this would not need to be written for an alternative open source application, because the internal code would be a proverbial open book. That said, it remains difficult to imagine how a company like Oracle might pursue a more open strategy, given that their bread and butter really is the secrets hidden inside their Cost Based Optimizing engine. At any rate, let’s get back to Jonathan’s book.
Reading this book was like reading a scientist’s notebook. I found it:
o of inestimable value, but sometimes difficult to sift through
o very anecdotal in nature, constantly debugging and demonstrating that the CBO is much more faulty and prone to errors than you might imagine
o not easy to use as a reference; you can’t simply say “I have a query of type X, and it is behaving funny, how do I look up information on this?”
o so good on the evolution of the product that I’ll quote his discussion:
“A common evolutionary path in the optimizer code seems to be the following: hidden by undocumented parameter and disabled in first release; silently enabled but not costed in second release; enabled and costed in third release.”
o has excellent chapter summaries, which were particularly good for sifting, and boiling down the previous pages into a few conclusions
o probably of particular value to Oracle’s own CBO development teams
CH2 – Tablescans
explains how to gather system stats, how to use dbms_stats to set ind. stats manually, bind variables can make the CBO blind, bind variable peeking may not help, partition exchange may break global stats for table, use CPU costing when possible
CH3 – Selectivity
big problem with IN lists in 8i, fixed in 9i/10g, but still prob. with NOT IN, uses very good example of astrological signs overlapping birth months, and associated CBO cardinality problems, reminds us that the optimizer isn’t actually intelligent per se, but merely a piece of software
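The astrological-sign example exposes the CBO’s independence assumption: selectivities of predicates are multiplied as if columns were unrelated, which goes wrong when they overlap, as signs and birth months do. Here is a minimal sketch with made-up data (my illustration, not Oracle code):

```python
# Rows of (star_sign, birth_month).  Every "Aries" row falls in month 3
# or 4, so the two predicates are strongly correlated.
rows = ([("Pisces", 3)] * 60 + [("Aries", 3)] * 40 +
        [("Aries", 4)] * 60 + [("Taurus", 4)] * 40 +
        [("Taurus", 5)] * 100)

n = len(rows)                                            # 300 rows
sel_sign = sum(1 for s, m in rows if s == "Aries") / n   # 100/300
sel_month = sum(1 for s, m in rows if m == 3) / n        # 100/300

# Independence assumption: multiply the two selectivities.
estimated = n * sel_sign * sel_month
# What a correlation-aware estimate would have to match.
actual = sum(1 for s, m in rows if s == "Aries" and m == 3)

print(estimated, actual)
```

The multiplied estimate lands well away from the true count, which is exactly the kind of cardinality error the chapter walks through.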
CH4 BTree Access
cost based on depth, #leaf blocks, and clustering factor, try to use CPU costing (system statistics)
CH5 – Clustering Factor
mainly a measure of the degree of random distribution of your data, very important for costing index scans, use dbms_stats to correct when necessary, just giving CBO better information, freelists (procID problem) + freelist groups discussion with RAC
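The idea behind the clustering factor can be sketched in a few lines: walk the index in key order and count how often the table block changes. A value near the table’s block count means well-clustered data; a value near the row count means scattered data. This is a simplified model of the statistic, not Oracle’s actual code:

```python
def clustering_factor(blocks_in_key_order):
    """blocks_in_key_order: the table block holding each row,
    listed in index (key) order.  Count block transitions."""
    factor = 0
    prev = None
    for blk in blocks_in_key_order:
        if blk != prev:
            factor += 1
            prev = blk
    return factor

well_clustered = [1, 1, 1, 2, 2, 2, 3, 3, 3]  # rows stored in key order
scattered      = [1, 2, 3, 1, 2, 3, 1, 2, 3]  # same rows, spread around

print(clustering_factor(well_clustered))  # 3  (close to block count)
print(clustering_factor(scattered))       # 9  (close to row count)
```

The same nine rows in the same three blocks cost the index scan three times as much once they are scattered, which is why the chapter treats this statistic as so important.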
CH6 – Selectivity Issues
there is a big problem with string selectivity, Oracle uses only the first seven characters, will be even more trouble for URLs all starting with “http://”, and multibyte character sets, trouble when you have db independent apps which use strings for dates, use histograms when you have problems, can use the tuning advisor for “offline optimization”, Oracle uses transitive closure to transform queries to more easily optimized versions, moves predicates around, sometimes runs astray
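The transitive-closure rewrite mentioned above can be illustrated in miniature: given t1.id = t2.id and t2.id = 42, the optimizer can infer t1.id = 42 and apply the predicate to the other table too. A toy sketch of the inference step (far simpler than the real rewrite):

```python
def transitive_closure(column_equalities, literal_predicates):
    """Infer new literal predicates from join equalities.
    column_equalities: list of (col_a, col_b) pairs, e.g. t1.id = t2.id.
    literal_predicates: dict col -> constant, e.g. {"t2.id": 42}.
    Toy model only; the CBO handles many more predicate forms."""
    inferred = dict(literal_predicates)
    changed = True
    while changed:                      # iterate to a fixed point
        changed = False
        for a, b in column_equalities:
            for x, y in ((a, b), (b, a)):
                if x in inferred and y not in inferred:
                    inferred[y] = inferred[x]
                    changed = True
    return inferred

preds = transitive_closure([("t1.id", "t2.id"), ("t2.id", "t3.id")],
                           {"t1.id": 42})
print(preds)  # the literal propagates to every joined column
```

As the chapter notes, moving predicates around this way usually helps, but the rewritten query can occasionally be costed worse than the original.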
CH7 – Histograms
height-balanced histograms when there are more distinct values than buckets (outside Oracle called equi-depth), otherwise frequency histograms, don’t use cursor sharing as it forces bind variables and blinds the CBO, bind variable peeking happens only on the first call, Oracle doesn’t use histograms much, expensive to create, use sparingly, distributed queries don’t pull histograms from the remote site, histograms don’t work well with joins, no impact if you’re using bind vars, if using dbms_stats to hack certain stats be careful of rare codepaths
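To make the frequency versus height-balanced (equi-depth) distinction concrete, here is a toy sketch of my own, not Oracle’s algorithm: a frequency histogram keeps an exact count per distinct value, while an equi-depth histogram records the endpoint value of equal-sized slices of the sorted data, so a popular value shows up as a repeated endpoint:

```python
from collections import Counter

def frequency_histogram(values):
    """One bucket per distinct value: exact counts."""
    return dict(Counter(values))

def height_balanced_endpoints(values, buckets):
    """Equi-depth: sort the data, then record the value at the end of
    each of `buckets` equal-sized slices."""
    data = sorted(values)
    n = len(data)
    return [data[((i + 1) * n) // buckets - 1] for i in range(buckets)]

skewed = [1] * 90 + list(range(2, 12))       # 100 rows, value 1 dominates
print(frequency_histogram([1, 1, 1, 2]))     # {1: 3, 2: 1}
print(height_balanced_endpoints(skewed, 4))  # [1, 1, 1, 11]
```

The repeated endpoint 1 is how an equi-depth histogram signals a popular value to the optimizer even though it never stores per-value counts.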
CH8 – Bitmap Indexes
don’t stop at just one, avoid updates like the plague as they can cause deadlocking, the optimizer assumes 80% of data tightly packed, 20% widely scattered
CH9 – Query Transformation
partly rule based, peeling the onion with views to understand complex queries, natural language queries are often not the most efficient, therefore this transformation process has huge potential upside for Oracle in overall optimization of app code behind the scenes by the db engine, always remember Oracle may rewrite your query, sometimes you want to block that with hints, tell the CBO about uniqueness and NOT NULL if you know this
CH10 – Join Cardinality
makes sensible guess at best first table, continues from there,
don’t hide useful information from the CBO, histograms may help with some difficult queries
CH11 – Nested Loops
fairly straightforward costing based on cardinality of each returned set multiplied together
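The textbook form of that costing is: read the outer row source once, then probe the inner row source (typically via an index) once per outer row. A sketch under that simplified formula (my simplification, not the exact CBO arithmetic):

```python
def nested_loop_cost(outer_cost, outer_cardinality, inner_probe_cost):
    """Textbook nested-loop join cost: one scan of the outer input,
    plus one inner probe for every row the outer input returns."""
    return outer_cost + outer_cardinality * inner_probe_cost

# 100 outer rows, each driving a 2-block index probe into the inner table
print(nested_loop_cost(10, 100, 2))  # 210
```

This is why nested loops shine when the outer cardinality is small and the inner probe is cheap, and degrade quickly when either grows.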
CH12 – Hash Joins
Oracle executes as optimal (all in memory), onepass (doesn’t quite fit so dumped to disk for one pass) and multipass (least attractive sort to disk), avoid scripts writing scripts in prod, best option is to use workarea_size_policy=AUTO, set pga_aggregate_target & use CPU costing
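The optimal (all in memory) case can be sketched as a plain build-and-probe hash join: hash the smaller input, then stream the larger one past it. The onepass and multipass variants differ only in spilling partitions to disk when the workarea is too small. A minimal sketch (hypothetical data, not Oracle internals):

```python
from collections import defaultdict

def hash_join(build_rows, probe_rows, build_key, probe_key):
    """In-memory ('optimal') hash join: build a hash table on the
    smaller input, then probe it once per row of the larger input."""
    table = defaultdict(list)
    for row in build_rows:
        table[row[build_key]].append(row)
    result = []
    for row in probe_rows:
        for match in table.get(row[probe_key], []):
            result.append((match, row))
    return result

depts = [(10, "sales"), (20, "eng")]          # small build input
emps = [("ann", 10), ("bob", 20), ("cal", 20)]  # larger probe input
print(hash_join(depts, emps, 0, 1))
```

Making the whole build input fit in memory is exactly what workarea_size_policy=AUTO with a sensible pga_aggregate_target is trying to arrange for you.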
CH 13 – Sorting + Merge Joins
also uses optimal, onepass, & multipass algorithms, need more than 4x dataset size for in-memory sort, 8x on a 64bit system, increasing sort_area_size will increase CPU utilization so on CPU-bottlenecked machines sorting to disk (onepass) may improve performance, must always use ORDER BY to guarantee sorted output, Oracle may not need to sort behind the scenes, Oracle very good at avoiding sorts, again try to use workarea_size_policy=AUTO
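The optimal/onepass distinction for sorting can be sketched as an external merge sort: when the data exceeds the workarea, sort it in workarea-sized runs (the spill to disk), then merge all the runs in a single pass. With enough memory there is just one run, which is the optimal case. A toy model, not Oracle’s sort code:

```python
import heapq

def onepass_sort(values, workarea):
    """Sort `values` in runs of at most `workarea` items (standing in
    for sorted runs spilled to temp), then merge every run in one pass."""
    runs = [sorted(values[i:i + workarea])
            for i in range(0, len(values), workarea)]
    return list(heapq.merge(*runs))

data = [9, 1, 8, 2, 7, 3, 6, 4, 5]
print(onepass_sort(data, 3))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

A multipass sort is what you get when there are too many runs to merge at once, so the merge itself has to spill, the least attractive case the chapter describes.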
CH 14 – 10053 Trace
reviews various ways to enable it, a detailed rundown of a trace with comments inline, and highlights; even mentions that Volumes 2 and 3 of the book are coming!
be careful when switching from analyze to dbms_stats, in 10g some new histograms will appear with default dbms_stats options, and 10g creates a job to gather stats
I found this book to be full of gems of information that you won’t find anywhere else. If you’re at the more technical end of the spectrum, this is a one-of-a-kind Oracle book and a must-have for your collection. Keep in mind something Jonathan mentions in appendix A: “New features that improve 99% of all known queries may cripple your database because you fall into the remaining 1% of special cases”. If these cases are your concern, then this book will surely prove invaluable for you!
I have a confession to make. I haven’t read an Oracle book cover-to-cover in almost three years. Sure, I skim through the latest titles for what I need, and of course check out the documentation of the latest releases. That’s what good docs provide: quick reference when you need to check syntax, or the details of a particular parameter or feature. But have you ever read some documentation, sifted through a paragraph or a page or two, and said to yourself: that’s great, but what about this situation I have right now? Unfortunately documentation doesn’t always speak to your real everyday needs. It is excellent for reference, but doesn’t have a lot of real-world test cases and practical usage examples. That’s where Tom Kyte’s new book comes in, and boy is it a killer.
I’ve read Tom’s books before, and always enjoyed them. But his new APress title really stands out as an achievement. Page after page and chapter after chapter he uses straightforward examples pasted right from the SQL*Plus prompt to illustrate, demonstrate, and illuminate the concepts he is explaining. It is this practical, hands-on, relentless approach that makes this book 700 pages of goodness.
Already an expert at Oracle? You’ll become more of one after reading this book. With reviewers like Jonathan Lewis, I have to admit I expected this book to be good from the outset. But each chapter delves into a bit more depth on subjects that are central to Oracle programming and administration.
No SCREEN SHOTS!
One of the things I loved most about this book is its complete lack of screenshots! But how does one illustrate a concept then, you might ask? These days, with graphical interfaces becoming more and more popular even among technical folks, I run into the question of the command line over and over again. How can you be doing sophisticated database administration of the latest servers running Oracle with the command line? Or another question I often get: can you really do everything with the command line? The answer to both is a resounding yes; in fact you can do much more with the command line. Luckily for us, Tom is of this school too, and page after page of his book is full of real examples and commands that you can try for yourself, with specific instructions on setting up the environment, using statistics-gathering packages, and so on. In an era of computing where GUIs seem to reign like glossy magazines over the best literature of the day, it is refreshing to see some of the best and most technical minds around Oracle still advocate the best tool, the command line, as the interface of choice. In fact it is the command line examples, and happily the complete lack of screenshots, that make this book a jewel of a find.
As a DBA you might wonder why I’m talking so highly of a book focused more towards developers. There are a couple of reasons. First, this book is about the Oracle architecture as it pertains to developers. In order for developers to best take advantage of the enterprise investment in Oracle, they need to thoroughly understand the architecture: how specific features operate, which features are appropriate, and how to optimize their code for best interaction with them. Of course a DBA who is trying to keep a database operating in tip-top shape needs to be aware of when developers are not best using Oracle, to identify and bring attention to bottlenecks and problem areas in the application. Second, it is often a DBA’s job to tune an existing database, and the very largest benefits come from tuning application SQL. For instance, if a developer has chosen to use a bitmap index on an INSERT/UPDATE-intensive table, they’re in for serious problems. Or if a developer forgot to index a foreign key column. This book directly spearheads those types of questions, and when necessary mentions a thing or two of direct importance to DBAs as well.
Chapter 2 has an excellent example of creating an Oracle database. You simply write one line to your init.ora, “db_name=sean” for example, then from the SQL> prompt issue “startup nomount” followed by “create database”. Looking at the processes Oracle starts, and the files that are created, can do wonders for your understanding of the database, the instance, and Oracle in general.
Chapter 3 covers files, files, and more files. The spfile replaces a text init.ora, allowing parameters to be modified while an instance is running *AND* stored persistently. He covers redolog files, flashback logs, and change tracking files, as well as import/export dump files, and lastly datapump files.
Chapter 4 covers memory, and specifically some of the new auto-magic options, how they work, and what to watch out for.
Chapter 5 covers processes.
Chapters 6, 7, and 8 cover locking/latching, multiversioning, and transactions respectively. I mention them all here together because to me these chapters are the real meat of the book. And that’s coming from a vegetarian! Seriously, these topics are what I consider to be the most crucial to understanding Oracle, and modern databases in general, and the least understood. They are the darkest corners, but Tom illuminates them for us. You’ll learn about optimistic versus pessimistic locking, and about page-level, row-level, and block-level locking in various modern databases such as SQLServer, Informix, Sybase, DB2, and Oracle. Note Oracle is by far in the lead in this department, never locking more than it needs to, which yields the best concurrency, with few situations where users block each other. Readers never block, for instance, because of the way Oracle implements all of this.

He mentions latch spinning, which Oracle does to avoid a context switch, which is more expensive, and how to detect and reduce this type of contention. You’ll learn about dirty reads, phantom reads, and non-repeatable reads, and about Oracle’s read-committed versus serializable modes. What’s more, you’ll learn about the implications of these various models on your applications, and what type of assumptions you may have to unlearn if you’re coming to Oracle from developing on another database. If I were to make any criticism at all, I might mention that in this area Tom becomes ever so slightly preachy about Oracle’s superb implementation of minimal locking and non-blocking reads. This is in large part due, I’m sure, to running into so many folks who are used to developing on databases which do indeed dumb you down *BECAUSE* of their implementation, encouraging bad habits with respect to transactions and auto-commit, for instance. One thing is for sure: you will learn a heck of a lot from these three chapters. I know I did.
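The optimistic-locking idea discussed in these chapters can be sketched with a version column: read without locking, then apply the update only if the version is unchanged, otherwise re-read and retry. This is a hypothetical in-memory model of the pattern, not Oracle’s implementation:

```python
class StaleUpdate(Exception):
    """Raised when the row changed after we read it."""

def optimistic_update(store, key, new_value, expected_version):
    """Apply the update only if the version we read is still current.
    `store` maps key -> (value, version); a stand-in for a table row."""
    value, version = store[key]
    if version != expected_version:
        raise StaleUpdate("row changed since it was read")
    store[key] = (new_value, version + 1)

store = {"emp1": ("clerk", 1)}
optimistic_update(store, "emp1", "manager", 1)       # succeeds, version -> 2
try:
    optimistic_update(store, "emp1", "director", 1)  # stale: version moved on
except StaleUpdate:
    print("re-read the row and retry")
```

Pessimistic locking, by contrast, takes the lock up front (SELECT ... FOR UPDATE) and makes the second writer wait instead of fail.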
Chapter 9 Redo & Undo describes what each is, how to avoid checkpoint not complete and why you want to, how to *MEASURE* undo so as to reduce it, how to avoid log file waits (are you on RAID5, are your redologs on a buffered filesystem?), and what block cleanouts are.
Chapter 10 covers tables. After reading it I’d say the most important types are normal (HEAP), Index Organized, Temporary, and External tables. Use ASSM where possible as it will save you in many ways; use DBMS_METADATA to reverse engineer objects you’ve created to get all the options; don’t use TEMP tables to avoid inline views or complex joins, as your performance will probably suffer; and he covers how to handle LONG/LOB data in tables.
Chapter 11 covers indexes, with topics ranging from height, compression count, DESC sorting, colocated data, bitmap indexes and why you don’t want them in OLTP databases, function-based indexes and how they’re most useful for user-defined functions, why indexing foreign keys is important, and choosing the leading edge of an index. Plus when to rebuild or coalesce, and why.
Chapter 12 covers datatypes: why never to use CHAR, using the NLS features, the CAST function, the number datatypes and precision versus performance, raw_to_hex, date arithmetic, handling LOB data and why not to use LONG, BFILEs, and the new UROWID.
Chapter 13 discusses partitioning. What I like is that he starts the chapter with the caveat that partitioning is not the FAST=TRUE option. That says it all. For OLTP databases you will achieve higher availability and easier administration of large objects, as well as possibly reduced contention on larger objects, but it is NOT LIKELY that you will see query performance improvements, because of the nature of OLTP. With a datawarehouse, you can use partition elimination on queries that do range or full table scans, which can speed up queries dramatically. He discusses range, list, hash, and composite partitioning, local indexing (prefixed and non-prefixed) and global indexing; why datawarehouses tend to use local indexes while OLTP databases tend to use global indexes; and even how you can rebuild your global indexes as you’re doing partition maintenance, avoiding a costly rebuild of THE ENTIRE INDEX, and the associated downtime. He also includes a great auditing example.
Chapter 14 covers parallel execution: parallel DML, parallel DDL, and so on. Here is where a book like Tom’s is invaluable, as he comes straight out with his opinions on a weighty topic. He says these features are most relevant to DBAs doing one-off maintenance and data-loading operations. That is because even in datawarehouses, today’s environments often have many, many users, while the parallel features are designed to allow single-session jobs to utilize the entire system’s resources. He explains that Oracle’s real sweet spot in this realm is parallel DDL, such as CREATE INDEX, CREATE TABLE AS SELECT, ALTER INDEX REBUILD, ALTER TABLE MOVE, and so on.
Chapter 15, the final chapter covers loading and unloading data. A significant portion of the chapter covers SQL*Loader for completeness, but he goes on to celebrate the wonders of external tables for loading data into Oracle. In particular there is an option in SQL*Loader to generate the CREATE statement for an
external table that does the SAME load! This is great stuff. External tables provide advantages over SQL*Loader in almost every way, except perhaps loading over a network, concurrent user access, and handling LOB data. External tables can use complex where clauses, merge data, do fast code lookups, insert into multiple tables, and finally provide a simpler learning curve.
Yum. If you love Oracle, you’ll want to read this book. If you need to know more about Oracle, say, for your job, that’s another reason you might read this book. Oracle is fascinating technology, and Tom’s passion for understanding every last bit of it makes this book both a necessary read and a very gratifying one.
Security experts will probably tell you it’s not a good idea to be a dummy and also in charge of your own firewall. They’re probably right, but it’s a catchy title. In this article, I’ll quickly go over some common firewall rules for iptables under Linux.
First things first. If you don’t have the right kernel, you’re not going to get anywhere. A quick way to find out if all the right pieces are in place is to try to load the iptables NAT kernel module.
$ modprobe iptable_nat
If you get errors, you may need to compile various support into your kernel, and of course you may need to compile the iptable_nat module itself. The easiest way is to download the source RPM for your installed distribution and do ‘make menuconfig’ starting from its default configuration; that way all the things that currently work with your kernel won’t break because you forgot to select them. For details see the Linux Firewall using IPTables HOWTO.
Once the module is loaded, start the service:
$ /etc/rc.d/init.d/iptables start
You will also have to have your interfaces up. I did this as follows:
# startup dhcp
/usr/sbin/dhcpd eth0
# bring up twc cable connection to internet
ifup eth1
You’ll need to set some rules. Be sure to get your internet interface, and local network interface right on these commands. First to setup masquerade which allows multiple machines behind your firewall to all share your single dynamically assigned IP address from your internet provider:
$ iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
On my firewall, eth1 is the device which talks to the ISP, and gets the IP address we’ll use on the internet. The other interface, eth0 is for my local internal network.
Next be sure to enable VPN traffic through the firewall if you have a VPN connection to your office:
iptables -A INPUT -s 10.0.0.0/24 -p 50 -j ACCEPT
iptables -A INPUT -s 10.0.0.0/24 -p 51 -j ACCEPT
iptables -A INPUT -s 10.0.0.0/24 -p udp --dport 500 -j ACCEPT
Lastly enable ip forwarding:
echo 1 > /proc/sys/net/ipv4/ip_forward
OPEN INSIGHTS Newsletter
Issue 40 – Self Taught
January 1, 2008
by Sean Hull
Happy New Year 2008!
It’s now the end of our third year publishing the Open Insights newsletter. We thought we’d celebrate by relaunching our popular blog Oracle + Open Source. The design is new, there’s a lot more content, things are easier to find, and overall it is indexing even better on Google!
Like what you see here? Forward us to a friend. And let us know if you have any suggestions or comments. They are always welcome.
In This Issue:
Recently I was having lunch with a colleague of mine. We were catching up as we hadn’t seen each other in some months. She had recently started a new and challenging job, one she was very excited about. The new challenges involved a lot of open-source technology, so we got to talking about that.
She made a very interesting observation to me, one that has stayed with me, and one that I continue to mull over. She said that open-source is very autodidactic. Ok, I admit, I didn’t know the word either, at first. Maybe some of you can feel my pain! Basically it means self-taught, and what she was getting at is that open-source is this teeming wilderness of software, without instruction manuals, the familiar structural framework of corporations, and business processes that one can latch onto, study, and follow. Of course I thought, how else would you want it? And therein lay my own misunderstandings.
I’ve been involved in open-source technologies for years. Even while at University, most of the software projects that a computer science major comes in contact with are open-source, or sort of do-it-yourself research projects, or experiments on what’s possible and how things work. So for me, jumping into the commercial world of Oracle came later, and it all fit together very well, like a puzzle, as I understood where many of the Oracle programmers’ thinking came from, or could often guess.
Many of the people who come to Oracle professionally did not start out as engineers or computer science majors in their University days. They’ve come to the technology from the top down. This can be a real godsend on the business side, because they understand business processes, can communicate real business needs, and can translate what is happening under the hood from a shared vantage point. But for these folks open-source really is a freewheeling world, one which requires them constantly to learn and relearn, to reverse-engineer pieces and components, and to think outside the box.
I think both types of people are really crucial to business. And I also think that for this reason, the continual embrace of open-source technologies by Oracle corporation, such as Linux, Apache, and PHP will both build Oracle’s business and stock price, and at the same time grow the demand for requisite skilled engineers. This is good news for everyone, both in the commercial Oracle space, and in the open-source space.
For business, it’s worth considering, given this dual nature, which type of folks you’re looking for when you’re hiring full-time employees, consultants, and outsourced resources.
In other news, we’ve submitted a series of five abstracts to the IOUG Collaborate 2008 Conference and another five to the O’Reilly MySQL Conference in April 2008. You can find them on our relaunched Oracle + Open Source site by clicking "abstract", or by following this link.
In our most recent interview we had the opportunity to talk with Norman Yamada CTO of Millburn Corporation.
Norman shares with us his experiences providing world-class computing solutions, and the pros and cons of doing it with open source.
Chip Conley is the CEO of Joie de Vivre Hospitality, a company which manages a boutique collection of hotels in the Bay Area. Surviving the dot-com and post-9/11 downturn was not easy for anyone, but for the hotel industry, especially in the Bay Area, it was a very, very dramatic crash. So to say their company thrived and expanded during this period is really to underline the real-world success of Chip’s ideas. He’s really ironed them out, and presented them here for the rest of us, to learn from and grow our own businesses. Are you in a job, a career, or a calling? This book will challenge you to ask the question.
I warn you in advance, Mr. Pinker will challenge your assumptions, and all of our modern assumptions about human nature. From racism to gender, politics to violence, he pulls out all the stops, and isn’t afraid to touch hot-button issues. A bit dense at times, Pinker is a great modern thinker, and he asks us to think like one too. I definitely don’t agree with all of what he says, but he certainly challenges us to think deeply about our modern conceptions and assumptions about our basic nature, what we have control over, and what is more linked to our own million-year evolution.
A little inane humor to brighten up your day…
Google Announces Plan To Destroy All Information It Can’t Index. It doesn’t get much better than that!
We all want to optimize our sites for Google. I mean other than a select few, that’s where most of our traffic comes from, so the more our site plays well with Google, the more users, readers, customers, and clients will find their way to us.
Most of the SEO material I’ve read has been pretty sparse and unclear. But I’ve been following the topic over at my good friend Felix’s blog, and I’m starting to get it. So you can too! Take a read: Google loves me, again!
If you haven’t been following the news on the topic, take a look over at this NY Times piece: Silicon Valley Start-Ups Awash in Dollars, Again. Personally I don’t think there is much hysteria this time around; sure there’s some, but not much. The industry is more mature now, and computers in general have lost their initial wow factor, so people are generally more sober, and able to step back and see what is actually useful, can make money, is making money, or might well make money. That’s the root of smart investing.
Issue 38: Are You Fast Failing
Issue 37: A Real Open Book
Issue 36: Rarity of Excellence
Issue 34: Hindsight Is Always 20/20
Issue 33: Market For Experts
Issue 32: Different Heritages
Archive: Past Issues
Oracle DBA Interview: click here
Tools for the Intrepid DBA: click here
Oracle9i + RAC on Linux/Firewire: click here
Migrating MySQL to Oracle: click here
MySQL Disaster Recovery: click here
In a nutshell, Oracle. Everything related to and surrounding the database technology we specialize in, but specifically setup, admin, and tuning of Oracle technology. I have 10 years’ experience with Oracle, wrote a book on the technology, and write and lecture frequently. I’m founder and senior consultant of the company. In capacities where your company might hire Deloitte, AIG, or Oracle Consulting, we can bring the same level of service and experience at about half the price. Simple equation.
Looking for a top-flight DBA? Visit us on the web at www.iheavy.com.
Encountering a problem with your Oracle systems? We’re available for emergency support services, and are conveniently located in Manhattan. We can arrange payment via Paypal, as a deposit for a quick start on an urgent matter.
Your production enterprise database needs round the clock monitoring, and support. We can provide the best proactive monitoring, and support, and we’re conveniently located in Manhattan. We provide 100% Service Level Agreement (SLA) for MySQL, and Oracle support services. Call us for details. +1-866-268-9448
Oracle has two very different technologies that implement high availability solutions, each with its own strengths and weaknesses. In choosing between the two, it’s important to factor in the relevant risks, both small and large, to put the entire picture into perspective.
RAC, or Real Application Clusters, is essentially an always-on solution. You have multiple instances, or servers, accessing the same database on shared storage in your network. With existing technology limitations, in practical terms these different servers must be on the same local network, in the same datacenter.
Oracle’s DataGuard technology, called Standby database in previous versions, provides a rolling copy of your production database. The standby database is started in read-only mode, constantly receiving change data sent over from the production database, keeping it in sync at all times, and at most only a few minutes behind. Were the production server to fail, the standby could take over in less time than the DNS change or IP swap itself would take. What’s more, the standby copy can be at another datacenter, or on another continent!
Before we compare the strengths and weaknesses, let’s talk about software risks. In the real-world, you can have operator errors, which means someone made a mistake at the keyboard, or someone decided to drop the wrong table, and realized only later their mistake. None of these solutions protect you from that. You would have to recover either point-in-time, or from an export. You could also encounter bugs in software that could cause a crash (downtime) or corruption (data loss and downtime to repair). There are also potential configuration errors, so the more components you have the more potential problems. And then lastly there is the risk of buying into technologies for which experienced help is hard to find.
You could have hardware failure of your server, motherboard, memory, nic card, or related problems. You could also have failure of a powersupply in the disk subsystem, failure of one of those boards, or of the fibre channel switch or IP switch. Hence redundancy in these areas is crucial as well. But you can also have power failure on that floor or in the datacenter as a whole, or someone could trip the chord.
Also, in a very real sense, the power grid is at some risk. If the Northeast is any indication, 24 hours of outage every 20-30 years is not unusual. Beyond power, there is the potential for fires, earthquakes, and other natural disasters.
Strengths and Weaknesses
For RAC, its strength is its always-on aspect. The second instance is always available, so as far as hardware failure at the server level goes, it protects you very well.
In terms of weaknesses, however, it does not protect you against anything outside the server: disk subsystem failure, power grid failure, or a natural disaster that impacts the hosting facility. Furthermore there are more software components in the mix, so more software that will have bugs, and more hurdles you can stumble over. Lastly, it may be harder to find resources who have experience with RAC, as it is certainly a bigger can of worms to administer.
For DataGuard, its strength is that the failover server can be physically remote, even on another continent. This really brings peace of mind, as everything is physically separate. It will survive any failure in the primary system.
In terms of weaknesses, however, there is a slight lag, depending on network latency, amount of change data being generated, and how in-sync you keep the two systems.
In 10g, Oracle really brings to the table world-class High Availability solutions. Both DataGuard and RAC have their strengths and weaknesses. Some sites even use both. Each makes sense in particular circumstances but more often than not, DataGuard will prove to be a robust solution for most enterprises.