I’ve been AWOL from the blogging world for quite a while. I’ve been busy for the past few weeks: working on another ITC white paper about RAC, assembling and delivering some internal 11g presentations, and right now I’m in DC working with a client… it has all added up and kept me from blogging much.
I had the privilege to attend two days of partner training downtown in Chicago right before 11g was released. In fact Greg Pike from Piocon was in the same class with me and blogged about it a few weeks ago (part one and part two). There was quite a bit of information packed into the class and most of it is available right in the manuals on OTN so it’s fair game for blogging. I would strongly recommend reading his posts – they provide a really good overview of 11g new features.
I personally filled 16 sheets of notebook paper during the course of both days and so I’ll try to get a few of my highlights into this post.
They had a sales briefing at the very beginning but I skipped it (I’m more interested in the technical aspects). However it did seem like someone mentioned that a major theme for 11g will be agility. Trying to piggyback on the whole Agile coding thing I guess.
Here’s one way of breaking down many of the new features into categories:
- High Availability: Data Guard
- Grid and OLTP: RAC
- VLDB and ILM: Partitioning
- BI and DW: ETL Options
- Mgmt and Change: Mgmt Packs
Flashback Data Archive / Total Recall
There are two big caveats with this feature in my opinion. But before I can explain them, a bit about how it works.
First, you pick a tablespace in which to “archive” undo data. When you enable flashback data archive for an object you specify the archive tablespace. Once this is setup, a new background process called FBDA wakes up every so often (default is every 5 minutes but it tunes itself somehow) and copies data from the rollback segments into the flashback tablespace(s).
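The setup, as I noted it in class, looks something like this (the archive, tablespace, and table names are made up for illustration):

```sql
-- Create the archive in a dedicated tablespace with a retention period
CREATE FLASHBACK ARCHIVE fba1
  TABLESPACE fba_ts
  RETENTION 5 YEAR;

-- Enable archiving for a table
ALTER TABLE orders FLASHBACK ARCHIVE fba1;

-- History is then available through ordinary flashback query syntax
SELECT * FROM orders
  AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' DAY);
```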
The two big caveats both revolve around the fact that this feature is architected to GUARANTEE data retention:
1. In order to make any major changes – including nearly any DDL – you must delete your entire history for the object. I don’t know if there’s any way to export or save the history but it didn’t sound like it. This sounds like a big weakness to me.
2. You now have another process – like the log archiver – that could potentially hang the database if there’s no free space. I asked this question point blank during the class and the instructor said that he didn’t know the answer. Seems pretty important to me.
There are two big Data Guard improvements, and Greg also covered them in his posts. First, Physical Standby with Real-Time Query: the database can be open for query while applying logs at the same time. Second is Snapshot Standby: open the database read-write, then flash it back and convert it back into a physical standby. This was possible in 10g, however the big improvement in 11g is that it’s SO MUCH EASIER!!! I just have to show you how simple it is:
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
If you’ve ever done this in 10g you realize that we’ve eliminated a BUNCH of steps here. Cool.
Additionally, SQL Apply will now work with more data types.
It is claimed that performance will be up to twice as fast just from upgrading 10.2 -> 11.1.
Flashback Transaction Backout – a new flashback feature that will back out a transaction and all “dependent” transactions.
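From my notes, the backout goes through DBMS_FLASHBACK – a rough sketch (the XID below is made up; you’d normally look up the real transaction ID in FLASHBACK_TRANSACTION_QUERY first):

```sql
DECLARE
  v_xids SYS.XID_ARRAY := SYS.XID_ARRAY('0500090033000000');  -- made-up XID
BEGIN
  DBMS_FLASHBACK.TRANSACTION_BACKOUT(
    numtxns => 1,
    xids    => v_xids,
    options => DBMS_FLASHBACK.CASCADE);  -- also back out dependent transactions
END;
/
```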
Great new feature – you can now back up a single file in parallel. Also, it is claimed that compression will be much faster.
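This works through the new SECTION SIZE clause in RMAN – something like this, where each channel backs up a different section of the same datafile (the file number and sizes are made up):

```sql
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  ALLOCATE CHANNEL c2 DEVICE TYPE DISK;
  -- Split datafile 7 into 2GB sections; the channels work on them in parallel
  BACKUP SECTION SIZE 2G DATAFILE 7;
}
```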
Adding column with default value
Another cool new feature – if you’ve ever added a new column with a default value you know that Oracle goes out there and writes the value to every row in the table – which can take forever. Starting with 11g it’s only a data dictionary change!! Should be nearly instantaneous!
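One note from class: the fast, dictionary-only path applies when the new column is declared NOT NULL with a default – something like this (table and column names made up):

```sql
-- 11g: no table rows are touched; the default is recorded in the dictionary
ALTER TABLE orders ADD (status VARCHAR2(10) DEFAULT 'NEW' NOT NULL);
```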
People have already been talking about invisible indexes in the blogosphere. It’s basically an index that gets maintained but isn’t used by the optimizer. It can be used to test the effects of dropping an index while avoiding having to rebuild it if you want it back.
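In practice it looks roughly like this (index and table names made up):

```sql
-- Create an index that is maintained but ignored by the optimizer
CREATE INDEX orders_ix ON orders (order_date) INVISIBLE;

-- Test "what if this index existed" in one session only
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;

-- Make it visible for real (or invisible again) without a rebuild
ALTER INDEX orders_ix VISIBLE;
```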
It seems that there are no big RAC-related changes in 11.1 – however it was suggested that 11g release 2 may include a few big changes.
There were some optimizations to the cache fusion protocol. It was claimed that one internally tested artificial read-intensive workload saw a 70% performance improvement.
Also DB Control and ADDM have been improved in the RAC area. I think that there are a few new “cluster view” screens on the console and ADDM can come up with cluster-level recommendations. (I’m pretty sure that ADDM was not cluster-aware in 10g.)
Lastly, an interesting statistic – apparently a figure was floating around that 60% of Oracle’s RAC customers are running their databases on ASM. That’s a bold claim about the adoption of ASM; I still think OCFS is strong technology – especially for shops with good SAs – but it seems that ASM is catching on.
Note that these new caching features are all turned off by default and must be explicitly enabled.
PL/SQL Result Cache: remembers the results of deterministic PL/SQL functions and shares these between sessions. As Jonathan Lewis has pointed out, the CBO already did this internally within queries but now the sharing can potentially be much broader.
SQL Query Result Cache: remembers the results of query blocks. This means that it can remember partial query results. Very immature technology; doesn’t have good cache management features yet. No touch counts or LRU lists – it looks like when the cache fills up it stops caching stuff. Interesting implications for DBLinks – won’t cache them by default but could save BIG time there, at the risk of inconsistent results.
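For reference, here’s roughly what both look like in use (the table and function names are made up):

```sql
-- PL/SQL result cache: the RESULT_CACHE clause on a function
CREATE OR REPLACE FUNCTION get_rate(p_ccy VARCHAR2) RETURN NUMBER
  RESULT_CACHE RELIES_ON (rates)  -- invalidate when the RATES table changes
IS
  v_rate NUMBER;
BEGIN
  SELECT rate INTO v_rate FROM rates WHERE ccy = p_ccy;
  RETURN v_rate;
END;
/

-- SQL query result cache: enabled per query block with a hint
SELECT /*+ RESULT_CACHE */ ccy, AVG(rate)
FROM rates
GROUP BY ccy;
```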
Both of these cache results can live in two places:
Server Cache: takes space out of the shared pool. This is ENABLED by default even though caching will not get used by default! So just remember that some of your shared pool will get reserved for something you may not use.
OCI Consistent Client Cache: configured in the client’s SQLNET.ORA file and lives in the OCI client process’s memory. One interesting note about this: since ITC is under the beta program NDA, I had access to the beta documentation but can’t talk about the interesting notes that were in the margins. But let’s just say that I’ve been saying ever since I heard about this feature (before I saw any beta software) that Oracle stole the idea from MySQL – and I haven’t changed my mind on that.
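The client-side settings go into SQLNET.ORA; the sizes here are made up, and note that the server-side CLIENT_RESULT_CACHE_SIZE parameter also has to be non-zero for the client cache to work:

```
# sqlnet.ora on the client
OCI_RESULT_CACHE_MAX_SIZE = 10485760       # total client cache size in bytes
OCI_RESULT_CACHE_MAX_RSET_SIZE = 1048576   # max size of one cached result set
OCI_RESULT_CACHE_MAX_RSET_ROWS = 5000      # max rows in one cached result set
```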
Lots more coming on this. I did a presentation with detailed info about this feature and plan to post some stuff about it next week.
SQL Plan Management: big new feature. Stores and verifies new plans before changing them at run-time! I will also post more info on this next week.
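A quick taste until then – turning on baseline capture and checking what’s been stored:

```sql
-- Start automatically capturing plan baselines for repeated statements
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;

-- See which plans have been stored and which are accepted for use
SELECT sql_handle, plan_name, enabled, accepted
FROM dba_sql_plan_baselines;
```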
Controlled statistics publishing: after gathering new stats they can go into a “pending” state, to be verified before becoming “published”.
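The flow goes through DBMS_STATS (schema and table names made up):

```sql
-- Tell DBMS_STATS to keep new stats for this table in the pending state
EXEC DBMS_STATS.SET_TABLE_PREFS('SCOTT', 'ORDERS', 'PUBLISH', 'FALSE');
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'ORDERS');

-- Try out the pending stats in one session before committing to them
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;

-- Happy? Publish them for everyone
EXEC DBMS_STATS.PUBLISH_PENDING_STATS('SCOTT', 'ORDERS');
```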
Extended statistics: hoping to address the well-documented problem of how the CBO deals with correlated columns. Creates a virtual column on more than one column and gathers stats on that virtual column.
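Creating the column group also goes through DBMS_STATS (again, names made up):

```sql
-- Create a virtual "column group" over two correlated columns
SELECT DBMS_STATS.CREATE_EXTENDED_STATS('SCOTT', 'ORDERS', '(state, zip)')
FROM dual;

-- Subsequent stats gathering on the table covers the column group too
EXEC DBMS_STATS.GATHER_TABLE_STATS('SCOTT', 'ORDERS', method_opt => 'FOR ALL COLUMNS SIZE AUTO FOR COLUMNS (state, zip)');
```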
Better statistics sampling: it is claimed that there’s very little reason to “compute stats” anymore.
New Background Processes
There are a bunch of them! KATE, MARK, PSPO, RMSn, RMSN, and SMC0/n are a few that I wrote down… not sure how accurate that list is though. Need to double-check with the docs and a few running instances.
Also, apparently the LMDx processes run in real-time now. This is pretty key and I plan to investigate that a bit.
Case-sensitive Database Passwords
Finally!!! It’s about time. Here’s how it works for upgrades: passwords remain case-insensitive until you change them. New passwords on upgraded databases are of course case-sensitive.
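Two handy bits on this one – the behavior is controlled by a new parameter, and you can see which accounts still carry only the old-style verifier:

```sql
-- Case sensitivity is controlled by a new parameter (TRUE by default in 11g)
ALTER SYSTEM SET sec_case_sensitive_logon = TRUE;

-- See which accounts still have only the 10g (case-insensitive) verifier
SELECT username, password_versions FROM dba_users;
```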
Stored outlines are officially deprecated. SQL Plan Management provides all their functionality – as long as you own the appropriate license. Boo, Oracle – boo. You’ve taken away important functionality that’s been included for years and made it an extra-cost option.
Well that’s all I have time for right now… have to get over to work! Hopefully I’ll have some time to write a second part next week!