Dangerous DBA – A blog for those DBAs who live on the edge

Tag Archives: Db2licm

Getting an estimate – DB2 LUW V10.1 Compression

May 20, 2013 8:00 am / Leave a Comment / dangerousDBA

So you want to add compression to your house: you need to get a tradesman in to give you an estimate and then to carry out the work, and DB2 can do all of this for you. Just like building an extension, you need to make sure that you have all the appropriate permissions from the “council” (IBM) in place: you either need to buy Storage Optimisation as a “feature” or get it as part of the Advanced Enterprise Edition of DB2. Please be careful when trying to use compression, because as soon as you include “COMPRESS YES” on a table DB2 will mark the compression feature as used, and if you get audited you could face a hefty bill.

Benefits of extending to compression

At a high level there are three ways of looking at this:

No compression
Benefits: No licensing fee to pay to IBM for compression.
Costs: Large amounts of disk space used for the data, and no extra rows fit in your bufferpools because the pages are not made any smaller.

Classic compression
Benefits: Data is compressed on disk, which saves you space there; data is also compressed in the bufferpools, so more rows fit per page, meaning less I/O and quicker queries. Data is also compressed in the backup images.
Costs: Licensing fee to IBM. A slight increase in CPU usage for the compression dictionary. You need to reset the dictionary with a REORG from time to time to make sure that you get the most out of the compression.

Adaptive compression
Benefits: Data is compressed on disk; data is also compressed in the bufferpools, so more rows fit per page, meaning less I/O and quicker queries. Data is also compressed in the backup images. Data is continually compressed, with no need for the RESETDICTIONARY REORG in the same way as classic compression.
Costs: Licensing fee to IBM. An increase in CPU usage for the compression dictionaries. Only available in the latest DB2 V10.1.
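Before you get an estimate it is worth knowing which of these modes your tables are already in. Below is a minimal sketch of how you might check, assuming DB2 V10.1, where SYSCAT.TABLES exposes the ROWCOMPMODE column:

-- Which compression mode is each user table currently using?
-- ROWCOMPMODE: blank = no row compression, S = classic (static), A = adaptive
SELECT TABSCHEMA,
       TABNAME,
       COMPRESSION,
       ROWCOMPMODE
FROM SYSCAT.TABLES
WHERE TYPE = 'T'
  AND TABSCHEMA NOT LIKE 'SYS%'
ORDER BY TABSCHEMA, TABNAME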

Here’s what you could be saving – SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO

Handily, IBM have included a very useful table function, SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO. The full information for this can be found in the information centre here. This table function will estimate the savings that you will get with no compression, “classic” (static) compression and adaptive compression; the GOTCHAs for this are below:

SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO – GOTCHAs

  1. Tables that are partitioned will come through the table function as multiple rows, one per partition. You do get a partition ID which you can either join out to or look up in SYSCAT.DATAPARTITIONS (see the sketch after this list).
  2. If the table has one or more XML columns then you will get an additional row in the results: a “DATA” and an “XML” compression estimation row. Combined with the first gotcha, you could end up with a lot of rows returned for a partitioned table with XML columns.
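If you hit the first gotcha, the sketch below shows one way of putting a partition name against each row by joining out to SYSCAT.DATAPARTITIONS; {SCHEMA NAME} and {TABLE NAME} are placeholders for your own objects:

SELECT C.TABSCHEMA,
       C.TABNAME,
       P.DATAPARTITIONNAME,
       C.OBJECT_TYPE,
       C.PCTPAGESSAVED_ADAPTIVE
FROM TABLE(SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO('{SCHEMA NAME}', '{TABLE NAME}')) AS C
LEFT JOIN SYSCAT.DATAPARTITIONS P
       ON P.TABSCHEMA = C.TABSCHEMA
      AND P.TABNAME = C.TABNAME
      AND P.DATAPARTITIONID = C.DATAPARTITIONID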

Getting an estimate – SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO

This table function can be used to get information on either a single table or an entire schema; obviously the latter can take some time to run, from what I have found, especially when the tables are large. The simplest form of the call is:


SELECT * 
FROM TABLE(SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO('{SCHEMA NAME}', '{TABLE NAME}'))

This will get you a result a little like this (shown transposed here so that it fits on the page):


TABSCHEMA                 SCHEMA
TABNAME                   TABLE
DBPARTITIONNUM            0
DATAPARTITIONID           0
OBJECT_TYPE               DATA
ROWCOMPMODE               S
PCTPAGESSAVED_CURRENT     0
AVGROWSIZE_CURRENT        495
PCTPAGESSAVED_STATIC      65
AVGROWSIZE_STATIC         173
PCTPAGESSAVED_ADAPTIVE    65
AVGROWSIZE_ADAPTIVE       170

The example above shows that this table is currently using “classic” compression, represented by the S in ROWCOMPMODE; a blank would mean no row compression and an A would be the new adaptive compression in DB2 V10.1. As you can see, it gives you an estimate of the average row size under the different compression modes; this is in bytes, so you will then need to work out what the full GB / MB saving might be based on the cardinality of the table.

The table function is telling us that there are potentially 65% savings to be made with both adaptive and classic compression, but there is a 3 byte difference in average row size, and adaptive compression in my opinion is far better, so I would ALTER the TABLE to COMPRESS YES ADAPTIVE.
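For completeness, this is roughly what that change looks like; a sketch only, assuming a hypothetical table MYSCHEMA.MYTABLE, DB2 V10.1 and the Storage Optimisation licence in place. The REORG with RESETDICTIONARY is what actually rebuilds the existing rows with the new dictionaries:

-- Switch the table to adaptive compression
ALTER TABLE MYSCHEMA.MYTABLE COMPRESS YES ADAPTIVE;

-- Rebuild the rows and the compression dictionaries so the saving is realised now
CALL SYSPROC.ADMIN_CMD('REORG TABLE MYSCHEMA.MYTABLE RESETDICTIONARY');

-- Refresh the statistics afterwards
CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE MYSCHEMA.MYTABLE WITH DISTRIBUTION AND INDEXES ALL');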

If you want to run the table function against an entire schema, pass an empty string for the table name:


SELECT * 
FROM TABLE(SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO('{SCHEMA NAME}', ''))

This will get you a row per table in the schema (plus any extras for XML columns and partitioned tables).
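To turn those percentages into something you can show the business, the sketch below joins the schema-wide estimates to SYSCAT.TABLES and uses the table cardinality (CARD, so it assumes reasonably fresh RUNSTATS) to make a rough guess at the MB saved per table with adaptive compression; {SCHEMA NAME} is a placeholder, and partitioned tables will appear once per partition:

SELECT C.TABNAME,
       C.PCTPAGESSAVED_ADAPTIVE,
       DECIMAL((T.CARD * (C.AVGROWSIZE_CURRENT - C.AVGROWSIZE_ADAPTIVE)) / (1024.0 * 1024.0), 12, 2) AS EST_MB_SAVED
FROM TABLE(SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO('{SCHEMA NAME}', '')) AS C
JOIN SYSCAT.TABLES T
       ON T.TABSCHEMA = C.TABSCHEMA
      AND T.TABNAME = C.TABNAME
WHERE C.OBJECT_TYPE = 'DATA'
  AND T.CARD >= 0          -- CARD is -1 where RUNSTATS has never been run
ORDER BY EST_MB_SAVED DESC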

The future

In a future post I will look at using this table function to record the values for all tables; you can then compare before and after and so prove that the change in compression and the associated REORGs have worked.



Posted in: DB2, DB2 Administration, DB2 Built in commands, DB2 Built-in Stored Procedures, DB2 Maintenance, db2licm, IBM, IBM DB2 LUW, SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO / Tagged: ADMIN_GET_TAB_COMPRESS_INFO, DB2, DB2 Administration, db2licm, IBM DB2 LUW, Stored Procedures, SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO, V10.1, V9.7, XML

Record the size of your DB2 tables – SYSIBMADM.ADMINTABINFO

February 21, 2013 8:00 am / 2 Comments / dangerousDBA

If you don’t know how your tables are growing or shrinking over time, then this article should help you, and it uses a built-in DB2 administrative view called SYSIBMADM.ADMINTABINFO, so there is nothing too complicated to do here; full details about SYSIBMADM.ADMINTABINFO can be found in the IBM Help Centre.

Below I will go through the DB2 objects that I have created to record this info and how you can implement this yourself.

The view using SYSIBMADM.ADMINTABINFO

This gives me something I can query during the day, after I have added quantities of data, or use in a stored procedure to record the daily table sizes:


CREATE VIEW DB_MAIN.TABLE_SIZES AS (
    SELECT CURRENT_DATE AS STATS_DATE,
           TABNAME,
           TABSCHEMA,
           TABTYPE,
           TOTAL_SIZE AS TOTAL_OBJECT_P_SIZE,
           DATA_SIZE  AS DATA_OBJECT_P_SIZE,
           DICT_SIZE  AS DICTIONARY_SIZE,
           INDEX_SIZE AS INDEX_OBJECT_P_SIZE,
           LOB_SIZE   AS LOB_OBJECT_P_SIZE,
           LONG_SIZE  AS LONG_OBJECT_P_SIZE,
           XML_SIZE   AS XML_OBJECT_P_SIZE
    FROM TABLE(SELECT
               TABNAME,
               TABSCHEMA,
               TABTYPE,
               DECIMAL(((DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE + LONG_OBJECT_P_SIZE + LOB_OBJECT_P_SIZE + XML_OBJECT_P_SIZE) / 1024.0), 10, 3) AS TOTAL_SIZE,
               DECIMAL((DATA_OBJECT_P_SIZE / 1024.0), 10, 3) AS DATA_SIZE,
               DECIMAL((DICTIONARY_SIZE / 1024.0), 10, 2) AS DICT_SIZE,
               DECIMAL((INDEX_OBJECT_P_SIZE / 1024.0), 10, 3) AS INDEX_SIZE,
               DECIMAL((LOB_OBJECT_P_SIZE / 1024.0), 10, 3) AS LOB_SIZE,
               DECIMAL((LONG_OBJECT_P_SIZE / 1024.0), 10, 3) AS LONG_SIZE,
               DECIMAL((XML_OBJECT_P_SIZE / 1024.0), 10, 3) AS XML_SIZE
           FROM SYSIBMADM.ADMINTABINFO
           WHERE TABSCHEMA NOT LIKE 'SYS%'
             AND TABSCHEMA NOT LIKE 'SNAP%') AS TABLESIZE
)

The view does not include all of the columns that are available in SYSIBMADM.ADMINTABINFO, just the ones that are the most useful for general day-to-day usage; there are many more here that you could use. The values are reported in KB, so they are divided by 1024 to get them to MB. The other GOTCHA is that partitioned tables will appear as one row per partition.
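As a quick sanity check once the view is created, something like the sketch below (sizes in MB after the division above) lists the ten largest tables it reports:

SELECT TABSCHEMA,
       TABNAME,
       TOTAL_OBJECT_P_SIZE AS TOTAL_MB,
       DATA_OBJECT_P_SIZE AS DATA_MB,
       INDEX_OBJECT_P_SIZE AS INDEX_MB
FROM DB_MAIN.TABLE_SIZES
ORDER BY TOTAL_OBJECT_P_SIZE DESC
FETCH FIRST 10 ROWS ONLY
WITH UR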

Table sizes record table

Rubbish section title, I know, but I have tried several different names. This is the meta table that will record the information from the view above via the stored procedure below.


CREATE TABLE DB_MAIN.TABLE_SIZES_STATS  ( 
	STATS_DATE         	DATE NOT NULL,
	TABNAME            	VARCHAR(128),
	TABSCHEMA          	VARCHAR(128),
	TABTYPE            	CHARACTER(1),
	TOTAL_OBJECT_P_SIZE	DECIMAL(10,3),
	DATA_OBJECT_P_SIZE 	DECIMAL(10,3),
	DICTIONARY_SIZE    	DECIMAL(10,2),
	INDEX_OBJECT_P_SIZE	DECIMAL(10,3),
	LOB_OBJECT_P_SIZE  	DECIMAL(10,3),
	LONG_OBJECT_P_SIZE 	DECIMAL(10,3),
	XML_OBJECT_P_SIZE  	DECIMAL(10,3) 
	)
IN DB_MAIN_TS
COMPRESS YES

Please note that if you do not have the Storage Optimisation feature licensed from IBM then do not include the line “COMPRESS YES”, otherwise if Big Blue comes to do an audit you could be in trouble. The best way to avoid this is to set the licence enforcement policy to hard (db2licm -e db2ese HARD, covered in the db2licm post below).

Stored procedure for recording table sizes using SYSIBMADM.ADMINTABINFO

This is the stored procedure that I use to record the size of the tables at the time of running the SP.

CREATE PROCEDURE DB_MAIN.ADD_TABLE_SIZES_STATS   ()
LANGUAGE SQL
BEGIN
    INSERT INTO DB_MAIN.TABLE_SIZES_STATS
    SELECT *
    FROM DB_MAIN.TABLE_SIZES
    WITH UR;
END
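Running it is then just a CALL, which you could schedule (cron, or the DB2 administrative task scheduler) to get a daily snapshot; a minimal usage sketch:

-- Take today's snapshot of table sizes
CALL DB_MAIN.ADD_TABLE_SIZES_STATS();

-- Confirm the rows have landed
SELECT STATS_DATE, COUNT(*) AS TABLES_RECORDED
FROM DB_MAIN.TABLE_SIZES_STATS
GROUP BY STATS_DATE
ORDER BY STATS_DATE DESC;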

What to do next

As stated earlier, you can use this to record the day-to-day table sizes, or if you are in the process of compressing your tables you can use it to record the sizes before and after. In a future article I will use the objects created here to show how much the table sizes decreased after implementing adaptive compression.
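A sketch of that before-and-after comparison, assuming you took one snapshot before the compression work and one after; the two dates are placeholders for your own snapshot dates:

SELECT B.TABSCHEMA,
       B.TABNAME,
       B.TOTAL_OBJECT_P_SIZE AS MB_BEFORE,
       A.TOTAL_OBJECT_P_SIZE AS MB_AFTER,
       B.TOTAL_OBJECT_P_SIZE - A.TOTAL_OBJECT_P_SIZE AS MB_SAVED
FROM DB_MAIN.TABLE_SIZES_STATS B
JOIN DB_MAIN.TABLE_SIZES_STATS A
       ON A.TABSCHEMA = B.TABSCHEMA
      AND A.TABNAME = B.TABNAME
WHERE B.STATS_DATE = '2013-01-01'   -- before the compression and REORG
  AND A.STATS_DATE = '2013-02-01'   -- after
ORDER BY MB_SAVED DESC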



Posted in: Blogging, DB2, DB2 Administration, DB2 Built in commands, DB2 built in Views, DB2 Data Types, DB2 Maintenance, DB2 Storage Optimisation, db2licm, Decimal, IBM, SYSIBMADM.ADMINTABINFO / Tagged: DB2, DB2 Administration, DB2 Development, db2licm, IBM DB2 LUW, Meta Data, SYSIBMADM.ADMINTABINFO, V10.1, V9.7

db2licm – DB2 Licensing command – First steps

August 24, 2012 2:00 pm / Leave a Comment / dangerousDBA

In the beginning

Back in the day of DB2 9.5, IBM introduced data compression. This was when I was first starting with DB2, having moved jobs and come from SQL Server 2000/2005, and nearly straight away I was put on a project to migrate the old DB2 V8.1 Linux machine to a V9.5 Unix architecture. Being new, and having been told by the senior team member to use the wonderful Control Centre, I noticed in the create table dialog box a lovely little check box that said “Compress Data”. Apparently, though, this is a paid-for extra!!

Obviously this was not something I wanted to fall foul of again. I think having soft licensing is a very bad thing; it is equally an education and knowledge thing, as I now know. DB2 soft licensing is something you have to be very aware of so that you do not fall foul of it, and to that end you need to get the db2licm command into your server’s life!

db2licm – the command

To this end I hopefully have some useful advice here on using the db2licm command; the full official command reference can be found here, and some more here. The first one I always use now is:

db2licm -e db2ese HARD

The command above will not let your DB2 Enterprise Server Edition use any features that you have not applied a licence key for. So, to apply a licence key:

db2licm -a db2ese.lic

This will apply the licence file for your DB2 Enterprise Server Edition. The same command can be used with any licence file; you just have to supply the full path to the file if you are not running the command from the directory where the file is held. Finally, you may want to produce a report to make sure that you are compliant:

db2licm -g db2lic-report.txt

This will generate a report at the file path specified; you must be a DBA user or have SYSADM authority. The final command just tells you the version and level of the product that is installed, and so is especially useful for servers that you have just taken over or that some developer “has set up for you”:

db2licm -l
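If you would rather check the same thing from SQL, the SYSIBMADM.ENV_PROD_INFO administrative view reports, to my knowledge, broadly the same information as db2licm -l (installed products and whether a licence is registered); treat this as an assumption and check the information centre for your version:

-- Installed DB2 products and licence status, the SQL-side cousin of db2licm -l
SELECT * FROM SYSIBMADM.ENV_PROD_INFO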

Hopefully this has given you a quick overview of the command, what it can do for you and the costs it could save you.

Posted in: DB2 Administration, DB2 Built in commands, db2licm, IBM DB2 LUW / Tagged: DB2, DB2 Administration, db2licm, IBM DB2 LUW

DB2 Licence Audit – When the big blue comes a knocking

August 22, 2012 4:31 pm / Leave a Comment / dangerousDBA

So we all know that each year we pay out a not insignificant amount of money for ongoing licencing and maintenance costs, and you may or may not know that every three to five years (apparently) IBM come round and want to check that what you pay for is what you use. In this article I want to cover what I / we have experienced so far, and I hope to do a future post on the outcomes.

What Happened?

The main IBM contact at our business was contacted by Deloitte and told that we were going to be audited. This was not a particularly good time for us, as we were moving our data centre and moving to the cloud, so we genuinely asked them if it would be OK to postpone for a couple of weeks, and comfortingly it was. The first meeting they had with us was to introduce the process and the people that would be involved. The process is that Deloitte will come in and gather the facts about your DB2 estate, but they will equally want to make sure you are fully up to date on all other IBM and “IBM company” products, e.g. Lotus Notes etc. They will send you a copy of the factual report for “agreement on the factual accuracy” before they “step out” of the process and hand the report to IBM, and then you either get a pat on the head for being a good little customer, or you pay up at the going rate for what you are over-using, with no special bid process.

What I did to survive!

For me this was a particularly interesting process, in that the person that used to deal with the actual licensing had left the company and it had not really been picked up by anyone, so once the audit was looming it fell to me to look into it and sort it out. We licence with the CPU option so I can only comment on that, but this is the pre-audit preparation that I carried out:

1. Work out how many PVUs a single core of the processor in your server(s) is worth, using the Processor Value Unit calculator provided by IBM.

2. Go to Passport Advantage, log in and download (print?) your latest licence documents, which will show the number of PVUs that you have been paying for and which products they cover, either base products (DB2 ESE) or features (DB2 Storage Optimisation).

3. Work out how many cores DB2 is using, either physical or virtual. So if one core is 50 PVUs and you have two quad-core processors in your server, then it would be 50 * 4 * 2 = 400 PVUs to licence a product on that server that can’t be “core limited” (if there are any). The thing to note here is that if you are running Virtual Machines (VMs) and you over-allocate processors (i.e. not a 1-to-1 relationship between the VMs and the physical system), then you need extra PVUs for that. If you have an 8-core physical machine but you allocate 3 VMs with 3 cores each (3 * 3 = 9), then you will need 50 * 9 = 450 PVUs. This “sub-capacity licencing” apparently should be covered by the reports from the Tivoli Licence Compliance Manager, which we had not installed anywhere. I was never asked for them, but I found some exceptions, which I think is the reason the previous incumbent of licencing never told me to install it; they can be found here.

4. Compare what you think you are using to what your licencing documents say and see if there is a mismatch in the number of PVUs. Make sure you account for the right product on the right servers by running db2licm -l. Make sure that you have enough for the product and any features: if your server is 400 PVUs and runs DB2 ESE with Storage Optimisation, you will need 400 PVUs of DB2 ESE and 400 PVUs of Storage Optimisation. If you are in the red then it is probably time to contact your vendor and get PVUs bought (I will come to why later); if you break even, or are in the black for some reason, then there is probably no need to worry. Make sure you also work out the PVUs used and licensed for all the features that you have (e.g. Storage Optimisation), as they will check those too.

5. Apply your licence keys to all your instances for the product and its features, and make sure that when you run db2licm -l or db2licm -g <Output file name> everything comes back as in compliance. Otherwise, get on the Passport Advantage site again and download and apply those keys.

6. Find out from all parties in your business that may have an instance or two of DB2, or a legacy system that might run DB2 (basically any DB2 that you do not manage), whether they know anything about it, whether it is free (Express-C) or licensed, and who looks after that aspect. If the answer is “don’t know”, it is probably best you take more work onto your broad shoulders, say you will take a look at it, and carry out the above steps for those systems, because (as sod’s law goes) they will choose that server to report on as part of their subset and find any unlicensed product.

What do Deloitte do

So the first thing to note here is that Deloitte are a middle man: they just “gather facts”, and these are the facts on the day that they (or you) come round to check the system, so if you are in the red, as in point four above, it might be possible to get PVUs bought before they come round. If you do this before the audit you may be able to get them at less than list price; if it is after the audit (as a consequence of it) then it will be at the IBM list price! The other point to note is that they will want you to do most of the checking yourself, so not only are they getting paid by IBM to do the audit, they expect you to collect their evidence for them. The spreadsheet they sent us was for the wrong licensing format (user as opposed to CPU), so we got them to come in and “do what they needed to do”, but as it turns out we still had to do most of it ourselves, as they were not allowed to, and provide them with the evidence for them to collect or put into a spreadsheet!

The evidence they wanted from me as the DBA was the output from db2licm -g and information on the number of cores DB2 was running on. From our system admins they wanted scripts run on a subset of our whole server estate and screenshots of various things.

The stage we are at currently is that we are waiting for the report on factual accuracy, so that we can sign it off before IBM see it and give us either a pat on the head or a caning from the big blue headmaster.

Pass or Fail

Well, in the end we passed, which was good. There was some wrangling over the fact that we had fewer than 1000 employees and over the type of licensing monitoring we needed to do, which was less than they thought.

Future Posts

A look at some more of the db2licm command’s features to make auditing easier.



Posted in: DB2 Administration, DB2 Built in commands, db2licm, IBM DB2 LUW / Tagged: DB2, DB2 Administration, db2licm, Deloitte, IBM, IBM DB2 LUW


