Oracle Database 12c review: Finally, a true cloud database

Pluggable databases bring a new level of efficiency and ease to database consolidation, while a wealth of other new features address performance, availability, and more


In development for roughly four years, Oracle Database 12c introduces so many important new capabilities in so many areas -- database consolidation, query optimisation, performance tuning, high availability, partitioning, backup and recovery -- that even a lengthy review has to cut corners. Nevertheless, in addition to covering the big-ticket items, I'll give a number of the lesser enhancements their due.

Having worked with the beta for many months, I can tell you that the quality of the software is also impressive, starting with a smooth RAC cluster installation. As with any new software release, I did encounter a few minor bugs. Hopefully these have been resolved in the production release that arrived yesterday.

Pluggable databases

Consolidation is an important business strategy for reducing infrastructure and operational costs. On many production database servers, a large portion of CPU cycles goes unused. By consolidating many databases onto fewer database servers, both the hardware and the operational staff can be more effectively utilised.

But database consolidation is easier said than done. Critical issues such as database workload characteristics, the ability to maintain performance service levels, and the point-in-time recovery needs of different databases must be considered during consolidation efforts. Ideally, consolidation would not only reduce the amount of physical CPU, RAM, and I/O that must be purchased and allocated (because physical servers are underutilised), but would also reduce actual resource consumption (because multiple instances share some overhead). In the past, however, co-locating databases on the same physical server has not reduced overall resource usage.

Oracle's new pluggable database feature reduces the risk of consolidation because the DBA can easily plug an existing database into, or unplug it from, a container database. There is no need to change any application code. When the user connects to a pluggable database, the environment looks exactly as if the user had connected to a traditional database.

Further, pluggable databases do lower resource consumption. Memory and processes are owned by the container database and shared by all pluggable databases, improving resource usage overall. It is also easy to unplug a pluggable database and convert it back to a traditional database if required. In addition, you can back up and recover pluggable databases independently of the container database, and you can perform a point-in-time recovery of an individual pluggable database. Finally, Resource Manager can be used to control the resources consumed by each pluggable database.
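
The plug and unplug operations are ordinary SQL DDL issued against the container database. Here is a minimal sketch of the lifecycle, assuming a hypothetical pluggable database named salespdb; the admin user, password, and file paths are placeholders.

    -- From the root of the container database, create a new pluggable
    -- database by cloning the seed.
    CREATE PLUGGABLE DATABASE salespdb
      ADMIN USER sales_admin IDENTIFIED BY sales_admin_pwd
      FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/salespdb/');

    ALTER PLUGGABLE DATABASE salespdb OPEN;

    -- Applications connect to the pluggable database's service as usual;
    -- a privileged session can also switch containers directly.
    ALTER SESSION SET CONTAINER = salespdb;

    -- Unplugging writes an XML manifest describing the database...
    ALTER PLUGGABLE DATABASE salespdb CLOSE IMMEDIATE;
    ALTER PLUGGABLE DATABASE salespdb UNPLUG INTO '/u01/exports/salespdb.xml';

    -- ...which another container database can use to plug it straight back in.
    CREATE PLUGGABLE DATABASE salespdb USING '/u01/exports/salespdb.xml'
      NOCOPY TEMPFILE REUSE;
    ALTER PLUGGABLE DATABASE salespdb OPEN;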

In short, database consolidation can be done much more effectively with pluggable databases than ever before. Finally, we have a true cloud database.

Optimiser features

Version 12c introduces a few useful SQL Optimiser features, and most of these are automatically enabled.

Although the Optimiser has matured over the years, it is still not uncommon for it to choose an inefficient execution plan due to incorrect cardinality estimates or invalid or stale statistics. This can have dire results: a SQL statement expected to run for a few seconds might take hours to execute if the chosen execution plan is not optimal.

Cardinality Feedback -- a feature introduced in Version 11g -- monitors the execution of SQL statements and reoptimises them if the actual cardinality (the number of rows returned by the query) varies greatly from the cardinality estimates. A new feature in 12c called Adaptive Plans takes the next step in SQL auto-tuning. Instead of choosing the final execution plan at parse time, the Optimiser defers the final choice among multiple sub-plans until execution time.

Essentially, the Optimiser introduces a piece of code, aptly named the Statistics Collector, into SQL execution. The Statistics Collector buffers rows from the early steps of the execution plan, and depending upon the number of rows retrieved, the Optimiser chooses the final execution plan. The chosen plan is reused for subsequent executions of the statement as long as the cursor is shared. A just-in-time Optimiser!
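
You can see how an adaptive plan was resolved with the usual plan tools. A rough sketch follows, using a hypothetical join between orders and customers; the ADAPTIVE format flag for DBMS_XPLAN.DISPLAY_CURSOR and the IS_RESOLVED_ADAPTIVE_PLAN column in V$SQL are the 12c additions relied on here.

    -- Run a join the Optimiser may treat adaptively.
    SELECT /* adaptive_demo */ o.order_id, c.cust_name
    FROM   orders o
    JOIN   customers c ON c.cust_id = o.cust_id
    WHERE  o.status = 'OPEN';

    -- Display the plan of the last statement in this session; the ADAPTIVE
    -- format flag also prints the sub-plan steps that were considered but
    -- not chosen.
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format => '+ADAPTIVE'));

    -- V$SQL flags cursors whose final plan was settled at run time.
    SELECT sql_id, child_number, is_resolved_adaptive_plan
    FROM   v$sql
    WHERE  sql_text LIKE '%adaptive_demo%';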

Adaptive Reoptimisation, similar to the Cardinality Feedback feature, affects only subsequent executions of a SQL statement. If the Optimiser's estimates are vastly different from the execution statistics, the Optimiser uses the execution statistics as a feedback mechanism and reparses the SQL statement during the next execution.
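
The feedback loop is visible in the shared pool: a cursor that will be reparsed with execution statistics is flagged in V$SQL. A small sketch, relying only on the IS_REOPTIMIZABLE column added in 12c:

    -- Cursors flagged 'Y' will be hard-parsed again, with the execution
    -- statistics fed back to the Optimiser, the next time they run.
    SELECT sql_id, child_number, is_reoptimizable
    FROM   v$sql
    WHERE  is_reoptimizable = 'Y';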

Generally, the quality of the statistics directly determines the quality of the execution plans generated by the Optimiser, bugs notwithstanding. In Version 12c, if the available statistics are not good enough, the Optimiser can dynamically sample the tables to collect better ones. This dynamic statistics collection uses the same methods as the dynamic sampling available in earlier releases, except that in Database 12c the sampled statistics are also stored for future use.
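
Dynamic statistics are driven by the same control as the older dynamic sampling. A hedged sketch: level 11, new in 12c, tells the Optimiser to decide for itself when sampling is worthwhile; the sales table and its columns are hypothetical.

    -- Let the Optimiser decide when to gather dynamic statistics
    -- (level 11 is new in 12c; level 2 remains the default).
    ALTER SESSION SET optimizer_dynamic_sampling = 11;

    -- The same behaviour can be requested for a single statement with a hint.
    SELECT /*+ dynamic_sampling(s 11) */ COUNT(*)
    FROM   sales s
    WHERE  promo_id = 33 AND channel_id = 4;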

Performance features

Version 12c introduces numerous performance enhancements. I will review just a few of the more important ones.

Traditionally, queries with union or union all branches execute the branches one after another: one branch of the union or union all is executed, followed by the next branch, and so on. Version 12c introduces concurrent execution of union branches, meaning that one set of parallel servers executes one branch while a second set of parallel servers executes a different union all branch, and so on, all at the same time.

This concurrent execution feature will be very useful when the majority of the query execution time is spent outside the database, such as waiting for a SQL*Net message from a remote database link or for a response to an SOA call. Used effectively, this feature can reduce wait time dramatically, improving SQL elapsed time. (Incidentally, with Version 12c, SQL*Net packets can be compressed for database traffic, helping to reduce latency in a WAN environment.)
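
Here is a hedged sketch of the sort of statement that benefits, pulling two union all branches over separate (hypothetical) database links; as I understand it, the PQ_CONCURRENT_UNION hint requests concurrent execution of the branches when the Optimiser does not choose it on its own.

    -- Two branches fetched over remote links; with concurrent union execution,
    -- separate sets of parallel servers work on the branches at the same time.
    SELECT /*+ parallel(4) pq_concurrent_union */ region, SUM(amount)
    FROM (
      SELECT region, amount FROM sales_q1@dwh_east
      UNION ALL
      SELECT region, amount FROM sales_q2@dwh_west
    )
    GROUP BY region;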

A classic problem with parallelism is that all of the parallel servers required for an operation may not be available at the moment a parallel statement starts, leaving the statement to execute across a smaller number of parallel servers. Parallel statement queuing -- a feature introduced in Version 11.2 -- resolved the problem by queuing sessions whenever sufficient parallel servers were not available. With Database 12c, the user can construct multiple parallel statement queues using the database Resource Manager, bypass parallel statement queuing for critical statements, and group multiple parallel statements together to reduce wait time in the parallel statement queues.
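
The queue controls live in the Resource Manager. The following is a rough sketch, with hypothetical plan and consumer group names, of a plan that caps the parallel servers available to a reporting group (statements beyond the cap queue) while letting a critical ETL group bypass the queue; PARALLEL_SERVER_LIMIT and PARALLEL_STMT_CRITICAL are the directive attributes used.

    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

      DBMS_RESOURCE_MANAGER.CREATE_PLAN(
        plan    => 'daytime_plan',
        comment => 'Separate parallel statement queues per workload');

      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('reporting_grp', 'Ad hoc reports');
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('etl_grp', 'Critical ETL');

      -- Reports may use at most half of the parallel servers; beyond that
      -- their parallel statements wait in this group's queue.
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan                  => 'daytime_plan',
        group_or_subplan      => 'reporting_grp',
        comment               => 'Reports queue when servers run out',
        parallel_server_limit => 50);

      -- Critical ETL statements skip the parallel statement queue altogether.
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan                   => 'daytime_plan',
        group_or_subplan       => 'etl_grp',
        comment                => 'Critical statements bypass the queue',
        parallel_stmt_critical => 'BYPASS_QUEUE');

      -- A directive for OTHER_GROUPS is mandatory in every plan.
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan             => 'daytime_plan',
        group_or_subplan => 'OTHER_GROUPS',
        comment          => 'Everything else');

      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /

Sessions still need to be mapped to the consumer groups, and the plan activated via the RESOURCE_MANAGER_PLAN parameter, before the queuing behaviour takes effect.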

Also new in Version 12c, multiple indexes can be created on the same set of columns. For example, the user can create bitmap and b-tree indexes on the same set of columns, or even a unique and a non-unique index on the same set of columns. Multiple indexes are useful whenever you want to convert an index from one type to another, or convert a partitioned index to a non-partitioned index (or vice versa), with minimal downtime. Of course, the user can also choose to maintain multiple indexes on the same set of columns permanently.
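
Only one of the overlapping indexes can be visible at any given time, so the typical pattern is to build the replacement as invisible and then swap. A short sketch, with made-up table and index names:

    -- Existing b-tree index on orders(status).
    CREATE INDEX orders_status_ix ON orders (status);

    -- Build the replacement as invisible so the Optimiser ignores it
    -- until you are ready to switch.
    CREATE BITMAP INDEX orders_status_bix ON orders (status) INVISIBLE;

    -- Swap: hide the old index, expose the new one.
    ALTER INDEX orders_status_ix  INVISIBLE;
    ALTER INDEX orders_status_bix VISIBLE;

    -- Once satisfied, drop the original.
    DROP INDEX orders_status_ix;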

High availability

In Oracle Database, a service is a connection property specified in order to reach a desired instance. We commonly use services to balance the workload among the instances of a Real Application Cluster, for example. Version 12c introduces Global Data Services, which balances the workload not only among instances, but also among databases.

Imagine a global environment in which different databases serve different segments of users. For example, a database in New York serves users on the East Coast, while a database in San Francisco serves users on the West Coast, and the two databases are synchronised by replication software. In Database 12c, services become truly global, and a global equivalent of the familiar SCAN (Single Client Access Name) listener, called the Global Data Listener, redirects the application connection to the database that can best serve the specific client. This feature also improves availability, because new connections to a failed database can quickly be redirected to a surviving database.

After a service fails over to another instance, applications usually do not know the status of in-flight transactions. While the changes made by a committed transaction are permanent, as dictated by the ACID properties of Oracle Database, the commit status messages sent to the application are transient. The result is that an instance failure creates a classic dilemma: if the application reissues an already committed transaction, it can cause logical corruption, but if the application does not reissue a failed transaction, changes can be permanently lost.

Database 12c resolves this dilemma with Transaction Guard, a new feature that maintains transaction status permanently. Transaction Guard assigns a unique global transaction ID to each transaction, and maintains the status of that global transaction for a pre-defined period of time. After a failover, the application can requery the status of a transaction and take corrective action deterministically.
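
The status check itself is a single PL/SQL call. A sketch, assuming the application has already obtained the logical transaction ID (LTXID) of its last in-flight transaction from the client driver and passes it in as a bind; DBMS_APP_CONT.GET_LTXID_OUTCOME is the server-side API.

    -- After a failover, ask the database whether the in-doubt transaction
    -- identified by the client-held LTXID actually committed.
    DECLARE
      l_committed      BOOLEAN;
      l_call_completed BOOLEAN;
    BEGIN
      DBMS_APP_CONT.GET_LTXID_OUTCOME(
        client_ltxid        => :ltxid,   -- RAW value supplied by the client driver
        committed           => l_committed,
        user_call_completed => l_call_completed);

      IF l_committed THEN
        DBMS_OUTPUT.PUT_LINE('Transaction committed; do not resubmit.');
      ELSE
        DBMS_OUTPUT.PUT_LINE('Transaction did not commit; safe to resubmit.');
      END IF;
    END;
    /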

Oracle did not stop at merely providing a mechanism for identifying transaction status. Version 12c also introduces Application Continuity. With this feature, a new client-side replay driver remembers submitted SQL statements, and after a failure is detected, it replays them to apply the failed transactions to the database. Note that code changes may be required to safely integrate the replay driver with the application.
