Dangerous DBA A blog for those DBAs who live on the edge

Category Archives: Blogging

What's been going on in the world of the Dangerous DBA:

May 11, 2020 3:19 pm / Leave a Comment / dangerousDBA

It’s been a while ….. again:

Since I last posted, way back in 2016 around QCon, a lot of water has passed under the bridge in terms of technologies used and abused, either taken into the stack or discarded onto the, ummm, it-nearly-worked pile.

Things that have changed in that time:

Where I work has seen a massive increase in the number of people in the data team, in response to the business becoming far more data driven. The team has gone from two or three of us to three distinct sections, each with a minimum of four people.

We now embrace what was just emerging as a widespread phenomenon in 2016, streaming data, and make good use of it for microservices and data products.

I have now progressed to a “senior” role in the business, with juniors and plain (no prefix or suffix) data engineers in the data products team.

Things that have NOT changed in that time:

We are still moving off DB2. Thanks to several large-scale events over the years and the increasing size of the team, this is rapidly gathering pace, but as a business we still need the old beast, though it has been on somewhat of a diet!

Where are things going:

Well, for us: out of the data centre, through Amazon Web Services (AWS), and on to Google Cloud. We are making extensive use of everything it has to offer; because of its nature we face far fewer of the sizing issues that we had in the data centre, or even on AWS with some of its offerings.

What are my interests (white whales) now:

Well I am going to be looking at a few things now:

  1. Google Cloud management – is it possible to create a tool that will aid this, from a greenfield perspective and also a brownfield one? It seems (not that I can Google well, so they need to make a better effort at SEO) that there is not a product out there that does this in an obvious way.
  2. IoT – this is going to be something that is “easily implemented” but then has individual elements that can be taken WAY further. I am currently working on something, so look out for upcoming posts.

Posted in: 2020, Big Data, Blogging, General Blog / Tagged: AWS, GCP

Record the size of your DB2 tables – SYSIBMADM.ADMINTABINFO

February 21, 2013 8:00 am / 2 Comments / dangerousDBA

Don’t know how your tables are growing or shrinking over time? Then this article should help you. It uses the built-in DB2 administrative view SYSIBMADM.ADMINTABINFO, so there is nothing too complicated to do here; full details about SYSIBMADM.ADMINTABINFO can be found in the IBM Help Centre.

Below I will go through the DB2 objects that I have created to record this info and how you can implement this yourself.

The view using SYSIBMADM.ADMINTABINFO

This gives me something I can query during the day, after I have added quantities of data, or use in a stored procedure to record the daily table sizes:


CREATE VIEW DB_MAIN.TABLE_SIZES AS (
    SELECT CURRENT_DATE AS STATS_DATE,
           TABNAME,
           TABSCHEMA,
           TABTYPE,
           TOTAL_SIZE AS TOTAL_OBJECT_P_SIZE,
           DATA_SIZE  AS DATA_OBJECT_P_SIZE,
           DICT_SIZE  AS DICTIONARY_SIZE,
           INDEX_SIZE AS INDEX_OBJECT_P_SIZE,
           LOB_SIZE   AS LOB_OBJECT_P_SIZE,
           LONG_SIZE  AS LONG_OBJECT_P_SIZE,
           XML_SIZE   AS XML_OBJECT_P_SIZE
    FROM TABLE(
        SELECT TABNAME,
               TABSCHEMA,
               TABTYPE,
               -- ADMINTABINFO reports sizes in KB; divide by 1024 for MB
               DECIMAL(((DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE + LONG_OBJECT_P_SIZE + LOB_OBJECT_P_SIZE + XML_OBJECT_P_SIZE) / 1024.0), 10, 3) AS TOTAL_SIZE,
               DECIMAL((DATA_OBJECT_P_SIZE / 1024.0), 10, 3) AS DATA_SIZE,
               DECIMAL((DICTIONARY_SIZE / 1024.0), 10, 2) AS DICT_SIZE,
               DECIMAL((INDEX_OBJECT_P_SIZE / 1024.0), 10, 3) AS INDEX_SIZE,
               DECIMAL((LOB_OBJECT_P_SIZE / 1024.0), 10, 3) AS LOB_SIZE,
               DECIMAL((LONG_OBJECT_P_SIZE / 1024.0), 10, 3) AS LONG_SIZE,
               DECIMAL((XML_OBJECT_P_SIZE / 1024.0), 10, 3) AS XML_SIZE
        FROM SYSIBMADM.ADMINTABINFO
        WHERE TABSCHEMA NOT LIKE 'SYS%'
          AND TABSCHEMA NOT LIKE 'SNAP%'
    ) AS TABLESIZE
)

The view does not expose all the columns that are available in SYSIBMADM.ADMINTABINFO, just the ones that are the most useful for general day-to-day usage; there are many more that you could use. The values are reported in KB, so they need dividing by 1024 to get them to MB. The other GOTCHA is that partitioned tables will appear as one row per partition.
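Because of that last gotcha, when you want one row per table you can aggregate the per-partition rows. A minimal sketch against the view created above (the 20-row limit is just for illustration):

```sql
-- Roll partitioned tables up to one row per table by summing
-- the per-partition rows returned by DB_MAIN.TABLE_SIZES
SELECT TABSCHEMA,
       TABNAME,
       SUM(TOTAL_OBJECT_P_SIZE) AS TOTAL_SIZE_MB
FROM DB_MAIN.TABLE_SIZES
GROUP BY TABSCHEMA, TABNAME
ORDER BY TOTAL_SIZE_MB DESC
FETCH FIRST 20 ROWS ONLY;
```

For non-partitioned tables the SUM is over a single row, so the same query works across the whole database.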

Table sizes record table

Rubbish section title, I know, but I have tried several different names. This is the meta table that will record the information from the cut-down version of the view, via the stored procedure below.


CREATE TABLE DB_MAIN.TABLE_SIZES_STATS  ( 
	STATS_DATE         	DATE NOT NULL,
	TABNAME            	VARCHAR(128),
	TABSCHEMA          	VARCHAR(128),
	TABTYPE            	CHARACTER(1),
	TOTAL_OBJECT_P_SIZE	DECIMAL(10,3),
	DATA_OBJECT_P_SIZE 	DECIMAL(10,3),
	DICTIONARY_SIZE    	DECIMAL(10,2),
	INDEX_OBJECT_P_SIZE	DECIMAL(10,3),
	LOB_OBJECT_P_SIZE  	DECIMAL(10,3),
	LONG_OBJECT_P_SIZE 	DECIMAL(10,3),
	XML_OBJECT_P_SIZE  	DECIMAL(10,3) 
	)
IN DB_MAIN_TS
COMPRESS YES

Please note that if you do not have the “Storage Optimisation Feature” from IBM then do not include the line “COMPRESS YES”; otherwise, if Big Blue comes to do an audit, you could be in trouble. The best way to avoid this is to set licence enforcement to hard (e.g. db2licm -e <product-identifier> HARD), so that unlicensed features cannot be used by accident.

Stored procedure for recording table sizes using SYSIBMADM.ADMINTABINFO

This is the stored procedure that I use to record the size of the tables at the time of running the SP.

CREATE PROCEDURE DB_MAIN.ADD_TABLE_SIZES_STATS   ()
LANGUAGE SQL
BEGIN
    INSERT INTO DB_MAIN.TABLE_SIZES_STATS
    SELECT *
    FROM DB_MAIN.TABLE_SIZES
    WITH UR;
END
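To get the daily snapshot without remembering to run it, you can call the procedure from cron via the CLP, or, if you have the administrative task scheduler enabled (DB2_ATS_ENABLE=YES), register it with SYSPROC.ADMIN_TASK_ADD. A sketch, with the task name, schedule and remarks made up; check the parameter list against your DB2 version:

```sql
-- Schedule DB_MAIN.ADD_TABLE_SIZES_STATS at 06:00 every day
-- using the DB2 administrative task scheduler
CALL SYSPROC.ADMIN_TASK_ADD(
    'DAILY_TABLE_SIZES',       -- task name
    NULL, NULL, NULL,          -- begin timestamp, end timestamp, max invocations
    '0 6 * * *',               -- cron-style schedule
    'DB_MAIN',                 -- procedure schema
    'ADD_TABLE_SIZES_STATS',   -- procedure name
    NULL,                      -- procedure input parameters
    NULL,                      -- options
    'Record daily table sizes' -- remarks
);
```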

What to do next

As stated earlier, you can use this to record the day-to-day table sizes, or if you are in the process of compressing your tables you can use it to record the sizes before and after. In a future article I will use the objects created here to show how much the table sizes decreased after implementing adaptive compression.
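As a taster, once a few snapshots have been recorded, a self-join of the stats table shows the change between any two dates. A sketch, with illustrative dates you would swap for ones you have actually recorded:

```sql
-- Growth (or shrinkage) of each table between two snapshot dates
SELECT b.TABSCHEMA,
       b.TABNAME,
       b.TOTAL_OBJECT_P_SIZE AS BEFORE_MB,
       a.TOTAL_OBJECT_P_SIZE AS AFTER_MB,
       a.TOTAL_OBJECT_P_SIZE - b.TOTAL_OBJECT_P_SIZE AS CHANGE_MB
FROM DB_MAIN.TABLE_SIZES_STATS b
JOIN DB_MAIN.TABLE_SIZES_STATS a
  ON a.TABSCHEMA = b.TABSCHEMA
 AND a.TABNAME = b.TABNAME
WHERE b.STATS_DATE = '2013-02-01'
  AND a.STATS_DATE = '2013-02-21'
ORDER BY CHANGE_MB DESC;
```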



Posted in: Blogging, DB2, DB2 Administration, DB2 Built in commands, DB2 built in Views, DB2 Data Types, DB2 Maintenance, DB2 Storage Optimisation, db2licm, Decimal, IBM, SYSIBMADM.ADMINTABINFO / Tagged: DB2, DB2 Administration, DB2 Development, db2licm, IBM DB2 LUW, Meta Data, SYSIBMADM.ADMINTABINFO, V10.1, V9.7

DB2 10 for Linux, UNIX and Windows Bootcamp – Day 1

September 18, 2012 10:49 pm / 2 Comments / dangerousDBA

Today is the first day of the DB2 10 for Linux, UNIX and Windows bootcamp at IBM Hursley, and so far it has been informative. A special thank you has to go out to Triton Consulting and Iqbal Goralwalla (@iqbalgoralwalla), Head of Midrange at Triton, for getting IBM to put one of these on in the UK, with a thank you also going to IBM for doing it all for free!

DB2 10 for Linux, UNIX and Windows Bootcamp – Day 1 – What have we done – my perspective

The first gem the course instructor let us in on, which I had not spotted anywhere so far (Twitter, ChannelDB2, etc.), is that there is already a fixpack for 10, fixing some early bugs that have been reported.

DB2 Features Spotlight

Basically this went through all the details and features that make DB2 V10 a good proposition, whetting your appetite like any good sales pitch should. They are promising improvements to the pillars of DB2: low-cost operation, ease of development and reliability. Under low-cost operation they are offering things like adaptive compression and multi-temperature data management. Multi-temperature data management is the most interesting here, as it raises the question: if you have a SAN that is clever enough to spot the hot blocks of data, is it better to let the SAN handle hot data, or to set up storage groups and define hot storage groups yourself?

DB2 Fundamentals

This section before lunch was split into two main parts, one covering a lot more than the other: a product overview and then the fundamentals. In the product overview I was surprised to see that HP-UX is still a supported platform, and that there was no mention of the mobile edition that has been tweeted about a lot by the @DB2_IBM account recently. It also raised some questions over which edition the INGEST utility is available in; according to the matrix in the notes it is only in Advanced Enterprise Server Edition, which is not the version that I work with, and that puts a spanner in the works for our future plans.

The fundamentals section is little changed from DB2 V9.7. There are a few changes to commands, and extensions to others. You can now get your db2diag log to rotate based on size, which is useful, but I would have preferred the splitting to happen based on time, like every 24 hours.
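For reference, the size-based rotation is driven by the DIAGSIZE database manager configuration parameter. A sketch of the CLP commands, with the 512 MB figure purely illustrative:

```sql
-- Run from the DB2 command line processor; DIAGSIZE is in MB.
-- 0 (the default) keeps the old single, ever-growing db2diag.log
UPDATE DBM CFG USING DIAGSIZE 512;
-- Confirm the new value (look for DIAGSIZE in the output)
GET DBM CFG;
```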

Bye Bye Control Centre, Hello data studio

This was an attempt to introduce IBM Data Studio, which replaces Control Center. I am still reserving judgement on Data Studio, as it does not work very well on my Windows VM with 4 CPUs and 3GB of RAM (Mac, VMware Fusion), but on the SUSE Linux VM running 2 CPUs and 2GB of RAM (Windows 7, VMware) it seemed to run a LOT better for our hands-on lab.

You will also need to upgrade your Data Studio to 3.1.1 to exploit all the features of DB2 V10. This means I might invest some time in setting up a SUSE Linux VM and getting this working properly. We did not go through the web console, but that might come later in the course.

Storage design and Multi-Temperature Storage

This covered some of the older concepts of bufferpools and table spaces, and how they have been spiced up with the introduction of storage groups, which can be set up to create multi-temperature data and manage where it is stored. I think this will be interesting, and will lead to many debates with your storage architect over which level (storage or application) decides where data is stored.
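For the curious, the new DDL is pleasantly small. A sketch of the V10 syntax, where the paths, rates and names are all made up for illustration:

```sql
-- Create a storage group over SSD paths, describing the device
-- characteristics so the optimiser knows this storage is fast
CREATE STOGROUP HOT_SG
    ON '/ssd/db2/path1', '/ssd/db2/path2'
    OVERHEAD 0.8
    DEVICE READ RATE 512;

-- Move a tablespace holding hot data onto the fast storage group
ALTER TABLESPACE CURRENT_SALES_TS USING STOGROUP HOT_SG;
```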

We did a hands-on lab using some of these concepts and it was quite interesting, but the small size of the SAMPLE database belies the REBALANCE and other issues around moving storage groups and adding and removing storage paths.

My main point here: it would be interesting to see the difference between DB2 set up on storage with SSDs etc. as in the IBM examples, versus hot-block swapping on the SAN with none of the multi-temperature data management, and to see which is more efficient over which types of workload; but who has the time or the resources to justify doing this (IBM?).

Basic Maintenance & Autonomic Features

Some of the concepts looked at here seemed to come as a bit of a shock to some of the delegates, but not to us; apparently needing to REBIND packages after you have carried out RUNSTATS operations was not the done thing. This covered STMM and other autonomic features, and there was not a lot here that was new.

Tomorrow

Tomorrow we will do the hands-on lab for Basic Maintenance & Autonomic Features, and carry on working through the slides and the labs. Should be good.



Posted in: Basic Maintenance & Autonomic Features, Blogging, Bootcamp, Bye Bye Control Centre, DB2 Administration, DB2 Built in commands, DB2 built in functions, DB2 Built-in Stored Procedures, DB2 Data Types, DB2 Development, DB2 Features Spotlight, DB2 Fundamentals, Fixpack, Hello data studio, IBM DB2 LUW, Storage design and Multi-Temperature Storage, V10, V9.7 / Tagged: Bootcamp, data studio, DB2, DB2 Administration, DB2 Development, IBM DB2 LUW, V10.1, V9.7

What's this blog about

July 3, 2012 10:16 pm / Leave a Comment / dangerousDBA

The main point of this blog is to share my knowledge, developments and findings in the world of databases. I make no bones about it: this is mainly going to be about IBM DB2 LUW, as that is what I work with day in and day out in my role as a DB2 DBA at Holiday Extras. I have previously blogged at my old “personal” site, but I want to take this in a different direction, and after taking inspiration from a book recommended by another DB2 blogger, @ember_crooks (db2commerce.com), I have decided to make a dedicated database site.

Future Content

I am going to re-blog some content from my old blog, so if you were an avid reader before then some of it will be familiar, but I also plan to do some new things that I have not written about before: challenges that I have plans to overcome in future. I will be looking into overcoming common and less common issues in our day-to-day development and administration lives.

I will be looking at issues on the development side, like how to get a random set of rows for a result set and what you can do with that, or how to automate some of the mundane tasks that we as DB2 DBAs have to do; but equally how to solve those annoying errors that have a non-trivial answer.
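As a taster of the random-rows trick, the simplest DB2 version orders by RAND(). A sketch with a made-up table name; note it sorts the whole table, so it is only sensible on smaller result sets:

```sql
-- Pick 10 rows at random (RAND() is evaluated per row, then sorted)
SELECT *
FROM MY_SCHEMA.MY_TABLE
ORDER BY RAND()
FETCH FIRST 10 ROWS ONLY;
```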

Call for content

Just like any good conference, I want this place to be a melting pot of ideas and resources. So if you feel you have something you would like to add to the blog then please get in touch: “admin at dangerousdba dot com”. I am looking to get guest bloggers in to bring other knowledge bases that enrich the content of the blog, like some posts from the mainframe side of DB2. I would also be happy to have articles from people who work with other RDBMSs, or even graph databases, on how you do the same things, to get comparisons.



Posted in: Blogging, IBM DB2 LUW / Tagged: Blogging, DB2, DB2 Administration, DB2 Development, IBM DB2 LUW

A DB2 ETL Tool

June 24, 2012 7:14 am / Leave a Comment / dangerousDBA

This is going to be a really short post, as it is more of a question: a request for enlightenment into what the rest of the DB2 community does to get their data, however you receive it, into your database (data warehouse). When you Google for “DB2 ETL Tools”, the first hit is on developerWorks, which by its title (ETL solutions for IBM DB2 Universal Database) makes you think great, there is going to be a list here; but instead it just points you to the EXPORT, LOAD and IMPORT commands and some very basic script examples of how to get these to work. The next few links are not that good either, and Googling around the subject you get to various products, none of which are designed to work with DB2 other than through a Java or, worse, ODBC driver, apart from IBM DataStage, which is very expensive.

After seeing this I think to myself, “How does everyone else do it?”. Having come from a SQL Server background with SSIS and DTS, I was initially shocked at how few tools there are for DB2 beyond some verbose logging to a text file that emails you the content, which leads me to think, “This cannot be the way the wider world does it?”. I have over 400 ETL jobs, of which ~85% need to run every day, so together with another developer we designed a way to load the data and record the loading of the jobs. He has now left, and supporting the tool he made is becoming harder; even when we can, it rarely goes smoothly.

So my question to the community is:

Posted in: Blogging, DB2, DB2 Administration, DB2 Ecosystem, DB2 Maintenance, IBM / Tagged: added functionality, data stage, data types, db2 community, db2 universal database, information centre, infosphere, sql statement, technology






Disclaimer:

The posts here represent my personal views and not those of my employer. Any technical advice or instructions are based on my own personal knowledge and experience, and should only be followed by an expert after a careful analysis. Please test any actions before performing them in a critical or nonrecoverable environment. Any actions taken based on my experiences should be done with extreme caution. I am not responsible for any adverse results. DB2 is a trademark of IBM. I am not an employee or representative of IBM.


© Copyright 2022 - Dangerous DBA