Dangerous DBA: a blog for those DBAs who live on the edge

Category Archives: Bootcamp

DB2 LUW Exam 611 – Holiday reading

April 5, 2013 1:30 am / Leave a Comment / dangerousDBA

This is a short post, and I hope to get back into blogging properly once I am back from my hols. It's been a while since I last posted, but seeing as we are on holiday and it is currently too hot to move, I thought I would do a post. My holiday reading generally consists of technical manuals, papers, and generally interesting stuff about history, etc. This holiday has been no different: I am currently trying to wade through the only IBM-published material so far for the new IBM 611 DB2 LUW DBA exam.

Preparation guide for DB2 10.1 LUW Exam 611

So this is the view that I currently have most days while trying to wade through the treacle that is this material:

This guide can be found here. While I don't doubt that it is all very good stuff, and I have to admit some of it is bloggable when I get the chance, there is too much detail compared to the past offerings from Roger Sanders.

The red, green and purple books offered the right level of detail for the exam, but also enough to go off to the Info Centre and find out all the nitty-gritty for yourself if needed. On the other hand, IBM have gone the whole hog in this guide and effectively included the whole Info Centre at 1,121 pages, giving you too much detail and no indication of what might actually be on the exam!

Too much detail

There are 45 or so pages on temporal tables (p253 – p301), whereas indexes get fewer than 30 (p327 – p351). So does this mean that temporal tables are new, so IBM have devoted a lot more to them, while indexes are old and everyone knows about them? Or does it mean that there will be a lot more questions on the topics that have more pages in this guide? Any offers?

I am also sorry to say the style offers no inspiration to carry on. It took me the best part of two days to clear the temporal table section, but on the plus side I am now fully caught up on any sleep I may have needed!

Please someone release a less verbose updated version of this guide, like the good old green and purple books.

Posted in: DB2, DB2 Administration, DB2 Temporal Data Management, Exam, IBM, IBM DB2 LUW, Uncategorized / Tagged: 611, DB2, DB2 Administration, Exam, IBM DB2 LUW, V10.1

Record the size of your DB2 tables – SYSIBMADM.ADMINTABINFO

February 21, 2013 8:00 am / 2 Comments / dangerousDBA

If you don't know how your tables are growing or shrinking over time, then this article should help you. It uses the built-in DB2 administrative view SYSIBMADM.ADMINTABINFO, so there is nothing too complicated to do here; full details about SYSIBMADM.ADMINTABINFO can be found in the IBM Help Centre.

Below I will go through the DB2 objects that I have created to record this info and how you can implement this yourself.

The view using SYSIBMADM.ADMINTABINFO

This view gives me something I can query during the day after I have added quantities of data, or can use in a stored procedure to record the daily table sizes:


CREATE VIEW DB_MAIN.TABLE_SIZES AS (
    SELECT CURRENT_DATE AS STATS_DATE,
           TABNAME,
           TABSCHEMA,
           TABTYPE,
           TOTAL_SIZE AS TOTAL_OBJECT_P_SIZE,
           DATA_SIZE  AS DATA_OBJECT_P_SIZE,
           DICT_SIZE  AS DICTIONARY_SIZE,
           INDEX_SIZE AS INDEX_OBJECT_P_SIZE,
           LOB_SIZE   AS LOB_OBJECT_P_SIZE,
           LONG_SIZE  AS LONG_OBJECT_P_SIZE,
           XML_SIZE   AS XML_OBJECT_P_SIZE
    FROM (SELECT TABNAME,
                 TABSCHEMA,
                 TABTYPE,
                 -- ADMINTABINFO reports sizes in KB; divide by 1024 for MB
                 DECIMAL(((DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE
                         + LONG_OBJECT_P_SIZE + LOB_OBJECT_P_SIZE
                         + XML_OBJECT_P_SIZE) / 1024.0), 10, 3) AS TOTAL_SIZE,
                 DECIMAL((DATA_OBJECT_P_SIZE / 1024.0), 10, 3) AS DATA_SIZE,
                 DECIMAL((DICTIONARY_SIZE / 1024.0), 10, 2) AS DICT_SIZE,
                 DECIMAL((INDEX_OBJECT_P_SIZE / 1024.0), 10, 3) AS INDEX_SIZE,
                 DECIMAL((LOB_OBJECT_P_SIZE / 1024.0), 10, 3) AS LOB_SIZE,
                 DECIMAL((LONG_OBJECT_P_SIZE / 1024.0), 10, 3) AS LONG_SIZE,
                 DECIMAL((XML_OBJECT_P_SIZE / 1024.0), 10, 3) AS XML_SIZE
          FROM SYSIBMADM.ADMINTABINFO
          WHERE TABSCHEMA NOT LIKE 'SYS%'
            AND TABSCHEMA NOT LIKE 'SNAP%') AS TABLESIZE
)

The view does not include all the columns that are available in SYSIBMADM.ADMINTABINFO, just the ones that are the most useful for general day-to-day usage; there are many more that you could use. The values come back in KB, so the view divides by 1024 to get MB. The other GOTCHA is that partitioned tables will appear as one row per partition.
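Because of that one-row-per-partition GOTCHA, if you want a single figure per partitioned table you can roll the view up yourself. A minimal sketch, assuming the view above:

```sql
-- Roll partitioned tables up to one row per table per day by
-- summing the per-partition sizes.
SELECT STATS_DATE,
       TABSCHEMA,
       TABNAME,
       SUM(TOTAL_OBJECT_P_SIZE) AS TOTAL_MB
FROM DB_MAIN.TABLE_SIZES
GROUP BY STATS_DATE, TABSCHEMA, TABNAME
```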

Table sizes record table

Rubbish section title, I know, but I have tried several different names. This is the meta table that will record the information from the cut-down version of the view, populated by the stored procedure below.


CREATE TABLE DB_MAIN.TABLE_SIZES_STATS  ( 
	STATS_DATE         	DATE NOT NULL,
	TABNAME            	VARCHAR(128),
	TABSCHEMA          	VARCHAR(128),
	TABTYPE            	CHARACTER(1),
	TOTAL_OBJECT_P_SIZE	DECIMAL(10,3),
	DATA_OBJECT_P_SIZE 	DECIMAL(10,3),
	DICTIONARY_SIZE    	DECIMAL(10,2),
	INDEX_OBJECT_P_SIZE	DECIMAL(10,3),
	LOB_OBJECT_P_SIZE  	DECIMAL(10,3),
	LONG_OBJECT_P_SIZE 	DECIMAL(10,3),
	XML_OBJECT_P_SIZE  	DECIMAL(10,3) 
	)
IN DB_MAIN_TS
COMPRESS YES

Please note that if you do not have the "Storage Optimisation Feature" licensed from IBM then do not include the line "COMPRESS YES"; otherwise, if Big Blue comes to do an audit you could be in trouble. The best way to avoid this is to set licence enforcement to hard.
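Setting enforcement to hard is done with db2licm. A sketch from memory; the product identifier varies by edition, so check yours first with db2licm -l:

```shell
# List installed licences and their product identifiers
db2licm -l

# Set the enforcement policy to hard for, e.g., Enterprise Server Edition
db2licm -e db2ese HARD
```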

Stored procedure for recording table sizes using SYSIBMADM.ADMINTABINFO

This is the stored procedure that I use to record the table sizes at the time the SP is run.

CREATE PROCEDURE DB_MAIN.ADD_TABLE_SIZES_STATS   ()
LANGUAGE SQL
BEGIN
    INSERT INTO DB_MAIN.TABLE_SIZES_STATS
    SELECT *
    FROM DB_MAIN.TABLE_SIZES
    WITH UR;
END
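You can run the procedure by hand, or have DB2 run it for you every day via the administrative task scheduler. A sketch, assuming the scheduler is enabled (DB2_ATS_ENABLE=YES) and using an illustrative 06:00 schedule:

```sql
-- Run it manually:
CALL DB_MAIN.ADD_TABLE_SIZES_STATS();

-- Or schedule it daily with the administrative task scheduler:
CALL SYSPROC.ADMIN_TASK_ADD(
    'Daily table sizes',           -- task name
    NULL, NULL, NULL,              -- begin, end, max invocations
    '0 6 * * *',                   -- UNIX-cron style: 06:00 every day
    'DB_MAIN',                     -- procedure schema
    'ADD_TABLE_SIZES_STATS',       -- procedure name
    NULL, NULL, NULL);             -- input, options, remarks
```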

What to do next

As stated earlier, you can use this to record day-to-day table sizes, or if you are in the process of compressing your tables you can use it to record the sizes before and after. In a future article I will use the objects created here to show how much table sizes decreased when implementing adaptive compression.
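For the before-and-after case, a self-join on the stats table does the job. A sketch; the dates are illustrative:

```sql
-- Compare each table's recorded size on two dates to see what
-- compression saved.
SELECT b.TABSCHEMA,
       b.TABNAME,
       b.TOTAL_OBJECT_P_SIZE AS BEFORE_MB,
       a.TOTAL_OBJECT_P_SIZE AS AFTER_MB,
       b.TOTAL_OBJECT_P_SIZE - a.TOTAL_OBJECT_P_SIZE AS SAVED_MB
FROM DB_MAIN.TABLE_SIZES_STATS b
JOIN DB_MAIN.TABLE_SIZES_STATS a
  ON a.TABSCHEMA = b.TABSCHEMA
 AND a.TABNAME   = b.TABNAME
WHERE b.STATS_DATE = '2013-02-01'   -- before compressing
  AND a.STATS_DATE = '2013-02-28'   -- after compressing
ORDER BY SAVED_MB DESC
```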



Posted in: Blogging, DB2, DB2 Administration, DB2 Built in commands, DB2 built in Views, DB2 Data Types, DB2 Maintenance, DB2 Storage Optimisation, db2licm, Decimal, IBM, SYSIBMADM.ADMINTABINFO / Tagged: DB2, DB2 Administration, DB2 Development, db2licm, IBM DB2 LUW, Meta Data, SYSIBMADM.ADMINTABINFO, V10.1, V9.7

DB2 10 for Linux, UNIX and Windows Bootcamp – Day 4

September 24, 2012 9:56 pm / Leave a Comment / dangerousDBA

Last day on the course: the realisation of leaving the most excellent Hursley campus, which reminded me a lot of where I went to university at Keele, and that the migration from DB2 V9.7 to V10 starts on Tuesday. As promised on Thursday, here is the late post for the final day of the course. A lot was covered on the last day, so a quick round-up.

DB2 10 for Linux, UNIX and Windows Bootcamp – Day 4 – What have we done – my perspective

Enhancing Analytics with DB2

The new tools were my main interest in this section, but that's not to belittle the other work that IBM have done in this area with the improved index scanning and query performance.

You can now create Java-based table functions that can be called to return the contents of a flat file in the FROM part of your query. At the most basic level you could have something like:


SELECT *
FROM TABLE(TABFUNC.GET_DATA_CSV('Some File Name')) AS T (Col1 INTEGER, Col2 VARCHAR(20))

This returns a table that can be queried, joined, or used in a sub-query, etc.

Continuous Data Ingest for Large Environments

The new INGEST utility. There was a little confusion over the licensing according to the instructor, who said it needs the AESE edition of DB2, but the material supplied on the course says it comes at no additional cost! The hands-on lab showed it off brilliantly, along with the light ETL abilities of the tool; I can't wait to start using it.
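For anyone who has not seen it, the basic shape of an INGEST statement is below; a sketch, with an illustrative file and table:

```sql
-- Continuously ingest a delimited file into a table; INGEST commits
-- as it goes, so the table stays online and queryable throughout.
INGEST FROM FILE sales_feed.csv
    FORMAT DELIMITED
    ($id INTEGER EXTERNAL, $amount DECIMAL(10,2) EXTERNAL)
    INSERT INTO MYSCHEMA.SALES (ID, AMOUNT)
    VALUES ($id, $amount);
```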

DB2 pureScale

This is touted as being better than HADR, as it is nearly impossible to take offline short of a major site failure; and to get round even that, IBM have developed the product offering to be geographically dispersed, with examples (available on request) of getting it to work over 30 to 40 km. This means that your OLTP processing system can have more 9's added to the end of your 99%.

Workload Management in DB2

IBM have introduced a new way to regulate your users: along with all the old features, there is now a way to use your storage groups to help, via the new concept of a DATA TAG. With a DATA TAG you can give your storage groups a relatively higher or lower priority. So the SSD storage group (this quarter's data) could have a DATA TAG of 1, a SATA drive (this year's data) a DATA TAG of 5, and a "really" old PATA drive (the past five years' data) a TAG of 10. You could then have two queries: one that touches only the most recent data (DATA TAG 1), set up in workload manager to run a lot quicker, while the other, comparing this month to a month five years ago, gets limited and slowed because it has hit the DATA TAG of 10, which has a relatively lower priority than 1.
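The tags themselves are set when you create (or alter) the storage group. A sketch with illustrative paths and names:

```sql
-- Tag storage by the relative speed/age of the data it holds.
CREATE STOGROUP SSD_SG  ON '/db2/ssd'  DATA TAG 1;
CREATE STOGROUP SATA_SG ON '/db2/sata' DATA TAG 5;
CREATE STOGROUP PATA_SG ON '/db2/pata' DATA TAG 10;

-- Tablespaces placed in a storage group inherit its tag, and
-- workload management work class sets can then match on the tag
-- (e.g. WORK TYPE DML DATA TAG LIST CONTAINS 10) to throttle
-- queries that touch the oldest storage.
```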

Data Concurrency & DB2 pureXML Overview

There has not been a lot of change here, so I have grouped these two topics together as I can't think of anything to mention, sorry IBM.

Bootcamp Overview and Exam results

This was well worth attending; thank you to Mark for presenting and proctoring the exams. It gave a great overview of the new features, and hands-on experience of features that I do not get to use at work. I am looking forward to moving on to V10 tomorrow and testing out new features like adaptive compression. I also passed the "DB2 V10 Fundamentals" exam, so maybe at some point I will get round to doing the DBA exam.



Posted in: Bootcamp, Continuous Data Ingest for Large Environments, Data Concurrency, DB2 Administration, DB2 Built in commands, DB2 built in functions, DB2 Built-in Stored Procedures, DB2 Data Types, DB2 Development, DB2 pureScale, DB2 pureXML Overview, Enhancing Analytic's with DB2, IBM DB2 LUW, V10, V9.7, Workload Management in DB2 / Tagged: Bootcamp, DB2, DB2 Administration, DB2 Development, IBM DB2 LUW, V10.1, V9.7

DB2 10 for Linux, UNIX and Windows Bootcamp – Day 3

September 20, 2012 11:11 pm / 3 Comments / dangerousDBA

It looked like today was not going to be useful: the agenda covered the differences between Oracle and DB2 (trying to get you to convert) and security, with a test at the end that was really for business partners, so it looked like I could take a nap. Actually, each part of the day had some useful content.

DB2 10 for Linux, UNIX and Windows Bootcamp – Day 3 – What have we done – my perspective

Breaking free from Oracle with DB2 10

From the title this sounded like it would have absolutely no content of relevance to me, but it had several things in it that will be very useful, if not straight away. The first is that there are features developed for DB2 compatibility with Oracle that you can use without having the compatibility mode (ORA) on! One of these is the DB configuration parameter AUTO_REVAL, which can be set to DEFERRED_FORCE; this means you don't have to create your database objects in order, e.g. you can create a view that uses a table and then create the table afterwards.
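A sketch of what that looks like in practice, with an illustrative database and objects:

```sql
-- CLP: allow objects to be created out of dependency order.
UPDATE DB CFG FOR mydb USING AUTO_REVAL DEFERRED_FORCE;

-- With DEFERRED_FORCE this view creates successfully even though
-- its table does not exist yet; it is revalidated on first use.
CREATE VIEW MYSCHEMA.V_ORDERS AS
    SELECT ORDER_ID, AMOUNT FROM MYSCHEMA.ORDERS;

CREATE TABLE MYSCHEMA.ORDERS (ORDER_ID INTEGER, AMOUNT DECIMAL(10,2));
```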

The second is that you can turn on parts of compatibility mode without having the full ORA or MYSQL set, which means you can take advantage of just the TRUNCATE command that IBM developed for Oracle compatibility. Most useful: no more LOAD into a table from /dev/null!
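TRUNCATE itself is a one-liner; the only GOTCHA is that it must be the first statement in its unit of work. Table name illustrative:

```sql
-- Empties the table without logging each deleted row.
TRUNCATE TABLE MYSCHEMA.STAGING_TAB IMMEDIATE;
COMMIT;
```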

IBM have created a free tool where you just supply the appropriate drivers and it will move data from nearly any mainstream database (MS SQL, MySQL, Postgres, Oracle, etc.) to your DB2. The IBM Data Movement Tool looks very useful, with many features and functions for getting data from your other databases to DB2. It will generate DDL and export and load data; loading takes place through files or pipes, and it generates files so you can review progress and see what it has done.

DB2 Security

This was a little boring, but I got two things out of it: ROLEs, which are not new, and TRUSTED CONTEXTs. ROLEs look useful, and if we get the chance to design and implement them properly they could sort out a lot of issues we have. TRUSTED CONTEXTs look useful for catching people out, by either 1) recording that a user's connection is not coming from the right place, or 2) stopping people completely from using accounts from the wrong location (or other connection attribute). Watch out, devs at HX.
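A sketch of a TRUSTED CONTEXT, with an illustrative authid and address, that only trusts the application account when it connects from the application server:

```sql
CREATE TRUSTED CONTEXT APP_ONLY
    BASED UPON CONNECTION USING SYSTEM AUTHID APPUSER
    ATTRIBUTES (ADDRESS '192.0.2.10')
    ENABLE;
```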

High Availability and Disaster Recovery

This again is nothing new to us, apart from the extra features that come with DB2 V10, like being able to have multiple standbys and the new "super asynchronous" mode. I also questioned the instructor over the delicacy of the standby when the log copy files are not present, and the way it takes tablespaces offline so that they get fried and you have to start again. He had a smile and then said that was one of the features and it may be addressed in future; but it was good to know that I am not doing anything wrong and it is not recoverable from that state.
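For the multiple-standby setup, the V10 shape is roughly as follows; host names and ports are illustrative, and this is a sketch rather than a full HADR configuration:

```sql
-- CLP, on the primary: one principal standby plus auxiliaries,
-- separated by '|'.
UPDATE DB CFG FOR mydb USING
    HADR_TARGET_LIST 'standby1:4001|standby2:4002|standby3:4003';

-- Auxiliary standbys always run super asynchronously; HADR_SYNCMODE
-- applies to the principal standby.
UPDATE DB CFG FOR mydb USING HADR_SYNCMODE SUPERASYNC;
```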

N18 IBM Information Management DB2 10 Technical Mastery Test v3

This was an exam more for business partners than DBAs, and was free to take, so what the hell. I passed; brilliant, I now have "1 point" towards business partner status?

Late post

Tomorrow I will be travelling home, so the likelihood is that I will not be able to post about day four until maybe Sunday. Have a nice weekend all.



Posted in: Bootcamp, Breaking free from Oracle with DB2 10, DB2 Administration, DB2 Built in commands, DB2 built in functions, DB2 Built-in Stored Procedures, DB2 Data Types, DB2 Development, DB2 Security, High Availability and Disaster Recovery, IBM DB2 LUW, N18 IBM Information Management DB2 10 Technical Mastery Test v3, V10, V9.7 / Tagged: Bootcamp, DB2, DB2 Administration, DB2 Development, IBM DB2 LUW, V10.1, V9.7

DB2 10 for Linux, UNIX and Windows Bootcamp – Day 2

September 19, 2012 11:53 pm / Leave a Comment / dangerousDBA

Today, I am sure, would have been very informative if I did not have so many production issues to resolve; I did not get to pay too much attention, bad times. There were more lectures and more labs that I would have loved to take a more active part in, but it was not to be. Below is a high-level look at what was covered.

DB2 10 for Linux, UNIX and Windows Bootcamp – Day 2 – What have we done – my perspective

DB2 Backup and Recovery

Due to today's errors I will be partaking in some recovery over the weekend. The first slide in this section was interesting, as it was extolling the virtues of using a backup; surely that is a no-brainer? The beginning part was a little basic, covering the concepts of backup, recovery and logging. Most of the concepts in this talk I already knew about or use every day.

DB2 Storage Optimisation

I did not get to listen to any of this or partake in the lab, but we already make use of storage optimisation and the compression of data it brings. I am excited about adaptive compression and what that will bring. Looking at the slides, this section seems to have had a bit of a sales pitch at the end as well; then again, storage optimisation is a paid-for feature!

Adaptive compression looks like it will be a good thing: it is the default on new tables in your V10 DB, but in an upgrade it means an ALTER statement and a reorg with a dictionary recreation, which may be a little hard to sell to managers if your tables are going to be offline for a while! Apparently we can expect overall storage savings on a single DB of between 50% and 65%, very impressive.
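For an upgraded database, that looks roughly like the following sketch; the table name is illustrative, and note that the classic reorg takes the table offline for the duration:

```sql
-- Enable adaptive (page-level plus classic row) compression.
ALTER TABLE MYSCHEMA.BIG_FACT COMPRESS YES ADAPTIVE;

-- Rebuild the table and recreate the compression dictionary.
REORG TABLE MYSCHEMA.BIG_FACT RESETDICTIONARY;
```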

Data Partitioning in DB2

Again, I did not get to listen to all of this or partake in the lab, due to the production issues. This also did not have a lot in it that I have not come across, read about or implemented myself. It covered DPF, range partitioning, and MDC tables, and the ways to combine these three to reduce a theoretical 64-page search to a 4-page search of all the rows. This basically comes down to breaking your data down so much that there is very little searching needed and DB2 can find your data very quickly.

DB2 Temporal Data Management

The final topic of the day, and again one that I would have liked to take more part in but was unable to. I think this feature will be very good for historical fact tables in a data warehouse, and for the normally advertised reason of auditing. The way they work seems reasonably self-explanatory; one GOTCHA is that DB2 assumes you want the current data, not the "as of" business or system time, so watch out in your stored procedures!!
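The GOTCHA in query form; the table and timestamps are illustrative:

```sql
-- Without a period clause DB2 returns only the current rows:
SELECT * FROM MYSCHEMA.POLICY;

-- You must ask explicitly for a point in time:
SELECT * FROM MYSCHEMA.POLICY
    FOR SYSTEM_TIME AS OF '2012-01-01-00.00.00';

SELECT * FROM MYSCHEMA.POLICY
    FOR BUSINESS_TIME AS OF '2012-01-01';
```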



Posted in: Bootcamp, Data Partitioning in DB2, DB2 Administration, DB2 Backup and Recovery, DB2 Built in commands, DB2 built in functions, DB2 Built-in Stored Procedures, DB2 Data Types, DB2 Development, DB2 Storage Optimisation, DB2 Temporal Data Management, IBM DB2 LUW, V10, V9.7 / Tagged: Bootcamp, DB2, DB2 Administration, DB2 Development, IBM DB2 LUW, V10.1, V9.7
