DB2 10 for Linux, UNIX and Windows Bootcamp – Day 2 – What have we done – my perspective

Today, I am sure, would have been very informative if I had not had so many production issues to resolve; I did not get to pay much attention, bad times. There were more lectures and more labs that I would have loved to take a more active part in, but it was not to be. Below is a high-level look at what was covered.
DB2 Backup and Recovery
So due to today’s errors I will be partaking in some recovery over the weekend. The first slide in this section was interesting, as it extolled the virtues of taking backups; surely that is a no-brainer? The opening part was a little basic, covering the concepts of backup, recovery and logging. Most of the concepts in this talk I already knew about or use every day.
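For my own recovery weekend, the basic round trip looks something like the sketch below, using the standard DB2 CLP commands; the database name SAMPLE, the paths and the timestamp are just placeholders:

```sql
-- Take an online backup (requires archive logging to be enabled)
BACKUP DATABASE sample ONLINE TO /db2/backups COMPRESS;

-- Restore from that backup image (timestamp identifies the image)
RESTORE DATABASE sample FROM /db2/backups TAKEN AT 20120601120000;

-- Replay the archived logs to the end, then bring the DB out of
-- rollforward-pending state
ROLLFORWARD DATABASE sample TO END OF LOGS AND COMPLETE;
```

With circular logging only offline backups and version-level restores are possible, which is exactly why the archive-logging setup matters.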
DB2 Storage Optimisation
I did not get to listen to any of this or partake in the lab, but we already make use of storage optimisation and the data compression it brings. I am excited about adaptive compression and what it will bring. Judging from the slides, this section also had a bit of a sales pitch at the end; well, Storage Optimisation is a paid-for feature!
Adaptive compression looks like it will be a good thing: it is the default on new tables in your V10 DB, but on an upgrade it will take an ALTER statement and a reorg with a dictionary recreation, which may be a little hard to sell to managers if your tables are going to be offline for a while! Apparently we can expect overall storage savings on a single DB of between 50% and 65%, which is very impressive.
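For an existing table, the upgrade path the slides describe would look roughly like this (a sketch; the SALES table name is invented, and the COMPRESS/RESETDICTIONARY pair is the 10.1 mechanism as I understand it):

```sql
-- Flag the table for adaptive (table-level plus page-level) compression
ALTER TABLE sales COMPRESS YES ADAPTIVE;

-- Rebuild the table and recreate its compression dictionary --
-- this is the offline step that may be hard to sell to managers
REORG TABLE sales RESETDICTIONARY;

-- Refresh statistics so the optimizer sees the new table layout
RUNSTATS ON TABLE sales;
```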
Data Partitioning in DB2
Again I did not get to listen to all of this or partake in the lab, due to the production issues. This session did not have a lot in it that I have not come across, read about or implemented myself. It covered DPF, range partitioning, MDC tables and the ways to combine these three to reduce a theoretical 64-page search to a 4-page search of just the relevant rows. This basically comes down to breaking your data down so much that there is very little searching needed and DB2 can find your data very quickly.
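Combining all three on one table might look something like this (a hedged sketch; the SALES table, its columns and the date ranges are invented for illustration):

```sql
CREATE TABLE sales (
    sale_date  DATE         NOT NULL,
    region     CHAR(10)     NOT NULL,
    cust_id    INTEGER      NOT NULL,
    amount     DECIMAL(9,2)
)
DISTRIBUTE BY HASH (cust_id)          -- DPF: spread rows across database partitions
PARTITION BY RANGE (sale_date)        -- range partitioning: prune whole date ranges
    (STARTING ('2012-01-01') ENDING ('2012-12-31') EVERY (1) MONTH)
ORGANIZE BY DIMENSIONS (region);      -- MDC: cluster blocks by region
```

A query with predicates on cust_id, sale_date and region can then be routed to one database partition, one date range and one MDC cell, which is exactly the “very little searching” idea above.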
DB2 Temporal Data Management
The final topic of the day, and again one that I would have liked to take more part in but was unable to. I think this feature will be very good for historical fact tables in a data warehouse, and for the usually advertised reason of auditing. The way these tables work seems reasonably self-explanatory; one GOTCHA is that DB2 assumes you want the current data, not the “as of” business or system time, so watch out in your stored procedures!!
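The GOTCHA is easy to show: against a temporal table, a plain SELECT only sees current rows, and you have to ask for the history explicitly (a sketch; the POLICY table, the id and the timestamps are invented):

```sql
-- Plain query: current rows only, the history is silently ignored
SELECT * FROM policy WHERE id = 1111;

-- Time-travel query: rows as they existed at a point in system time
SELECT * FROM policy
    FOR SYSTEM_TIME AS OF '2012-01-01-00.00.00'
WHERE id = 1111;

-- The equivalent for application-period (business time) tables
SELECT * FROM policy
    FOR BUSINESS_TIME AS OF '2012-01-01'
WHERE id = 1111;
```

There is also a CURRENT TEMPORAL SYSTEM_TIME special register that can point an unmodified stored procedure at a moment in time, which may be the kinder fix for existing code.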