Dangerous DBA: a blog for those DBAs who live on the edge

Tag Archives: Db2

DB2 10.1 LUW Certification 611 notes 1 : Physical Design

June 3, 2013 1:00 pm / Leave a Comment / dangerousDBA

As part of my career development it has been on the cards for me to do some certification at my current company for about four and a half years, and now they are giving me some time for it again. As you know from a previous post, I have been trying to revise for this using something that IBM published just after DB2 10.1 LUW came out, and it is very verbose and boring (sorry). It reads more like a technical manual of all the ins and outs of the product than a revision guide. There are now some better resources, which can be found here. This blog post covers some of the things I learnt from Part 2: Physical Design, which can be found here; please take the time to read the doc for yourself, as it is full of good stuff, some of which I think I will be trying out in future.

Physical Design – Exam notes

WITH CHECK OPTION

This seems to be a crafty little three-word clause that can add a little security to your views, but at the cost of more maintenance overhead for an application. The example in the document says “WHERE dept = 10“, and goes on to say that if you tried the insert for any other dept number it would not work. This is good, as it would stop an insertion for dept 11, but it would mean that you would need one of these views for each “dept” that you needed.
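
To make that concrete, here is a minimal sketch (the schema, table, view and column names are my own invention, not from the document):

-- A view restricted to department 10; WITH CHECK OPTION enforces the predicate on writes
CREATE VIEW HR.DEPT10_EMPLOYEE AS
    SELECT EMPNO, LASTNAME, DEPT
    FROM   HR.EMPLOYEE
    WHERE  DEPT = 10
WITH CHECK OPTION;

-- Succeeds: the new row satisfies DEPT = 10
INSERT INTO HR.DEPT10_EMPLOYEE (EMPNO, LASTNAME, DEPT) VALUES (1001, 'SMITH', 10);

-- Fails (SQL0161N): DEPT = 11 does not conform to the view definition
INSERT INTO HR.DEPT10_EMPLOYEE (EMPNO, LASTNAME, DEPT) VALUES (1002, 'JONES', 11);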

Informational Constraints

I have come across these before but I am still not really sure what they are useful for, and there is not a good explanation of them in this doc. If they are ENFORCED then how is this different from a check constraint? And if it is ENABLED QUERY OPTIMIZATION then why would you want to make it harder to find the wrong data rows?
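
For reference, a minimal sketch of how one is usually declared (the table and constraint names are made up); NOT ENFORCED means DB2 will not reject rows that break the rule, while ENABLE QUERY OPTIMIZATION lets the optimizer assume the rule holds when building access plans:

-- Informational constraint: not checked on insert/update, but trusted by the optimizer
ALTER TABLE SALES.ORDERS
    ADD CONSTRAINT ORDERS_QTY_CK CHECK (QUANTITY > 0)
    NOT ENFORCED
    ENABLE QUERY OPTIMIZATION;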

Storage Groups / Multi Temperature data

This is a new feature in DB2 LUW 10.1 and I have so far found it hard to get a clear definition of how the data paths you supply at database creation, and automatic storage, relate to multi-temperature data storage groups. This doc does not make it as easy as it could, but I think I have now got it, so I will give it a go in a series of bullet points (how I understand it; a rough DDL sketch follows the list):

  • The data paths that you define when you create a database go to create the default storage group: IBMSTOGROUP.
  • You can then create additional storage groups for hot (etc.) data using the CREATE STOGROUP command.
  • The three basic table spaces that are created at database creation (USERSPACE1, TEMPSPACE1 and SYSCATSPACE) are created in the default IBMSTOGROUP, that is, on the paths that you specified to begin with.
  • Table spaces are then created USING the newly created storage groups, and rather than the data being stored on the default data paths it is stored where specified.
  • If you are lucky enough to have workload manager then storage groups can be given tags that it can use to optimize performance – bonus.
  • As I understand it though, if you only specify one path in your storage group then your data will not be striped, as only one container will be created.
  • You then create your partitioned tables (or ordinary tables) in these table spaces appropriately, so the hot part of your partitioned table will go in the table space attached to the hot STOGROUP.
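
Here is the rough DDL sketch mentioned above; the paths, storage group, table space and table names are invented for illustration:

-- New storage group on fast storage for "hot" data (two paths, so containers are striped)
CREATE STOGROUP HOT_SG ON '/db2/ssd/path1', '/db2/ssd/path2';

-- Table space that lives in the new storage group instead of the default IBMSTOGROUP
CREATE TABLESPACE HOT_TS USING STOGROUP HOT_SG;

-- Table (or the hot partition of a partitioned table) created in that table space
CREATE TABLE DW.FACT_SALES_HOT (
    SALE_ID   BIGINT        NOT NULL,
    SALE_DATE DATE          NOT NULL,
    AMOUNT    DECIMAL(12,2)
) IN HOT_TS;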

So hopefully the above makes this a little clearer; it does for me. We are currently looking at completely re-architecting our data pile, and a lot of the design decisions that this offers will be taken into consideration.

Range Clustered Tables

This seems like a most useful feature for star schema data warehouses where the surrogate or business keys are generally some form of whole number (SMALLINT, INTEGER or BIGINT). There do seem to be two massive GOTCHAs in the text though (taken from the text):

  1. DB2 pre-allocates the disk space required for the RCT at creation time. This is done by calculating the number of distinct key values and multiplying it by the table row size. This mandates that the space required for holding the entire table should be available at creation time
  2. you cannot issue an ALTER TABLE statement to alter the physical characteristics of an RCT after its creation

Now I can see this being an issue if you have the full range of a BIGINT multiplied by the size of the table fields allocated at creation time, but you can limit this by using the “STARTING FROM” and “ENDING” clauses on the key sequence.

Range clustered tables also have another useful clause, “DISALLOW OVERFLOWS”; this apparently means that reorganization operations are not required, but you will not be allowed to insert any rows that fall outside the limits that you have set, so it could be another way to stop GIGO.
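
Putting both of those together, a minimal sketch would look something like this (the names and the key range are invented, and it is worth checking the CREATE TABLE syntax diagram for the exact overflow keyword on your release):

-- Range clustered table: space for keys 1 to 1,000,000 is pre-allocated at creation time
CREATE TABLE DW.CUSTOMER_RCT (
    CUSTOMER_KEY  INTEGER      NOT NULL,
    CUSTOMER_NAME VARCHAR(100) NOT NULL
)
ORGANIZE BY KEY SEQUENCE (CUSTOMER_KEY STARTING FROM 1 ENDING AT 1000000)
DISALLOW OVERFLOW;  -- rows with keys outside the range are rejected, so no reorg is needed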

ADMIN_MOVE_TABLE

This seems a most useful stored procedure for moving tables from, say, DEV to PRODUCTION, or for when you find that your creation script has accidentally left the wrong table space in the statement and your table needs moving. Tables can be moved online, but this takes more resource in terms of processor and disk space. Most interesting is that it keeps your data online and generates a new compression dictionary (and therefore is effectively also a reorg?). The full command can be found here.
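
A hedged sketch of the one-shot form of the call (the schema, table and table space names are placeholders, and the exact parameter list is worth checking in the Information Centre for your fix pack):

CALL SYSPROC.ADMIN_MOVE_TABLE(
    'MYSCHEMA',        -- schema of the table to move
    'MYTABLE',         -- table to move
    'NEW_DATA_TS',     -- target table space for data
    'NEW_INDEX_TS',    -- target table space for indexes
    'NEW_LOB_TS',      -- target table space for LOBs
    '',                -- MDC columns (empty string keeps the existing definition)
    '',                -- partitioning key columns
    '',                -- data partition definitions
    '',                -- options
    'MOVE');           -- run all phases (INIT, COPY, REPLAY, SWAP) in one call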

Temporal Tables

These are new and will definitely feature when we re-architect the data dump that we currently have. I have had to code logic to capture versions of a subject and then other objects to find the latest version efficiently across multi-million row data sets. The new temporal tables will take care of this for you, and still allow you to access the old data with a nearly standard query. The other bonus is that these tables can also be partitioned, but you must make sure the history table can cope with the rows that will be inserted into it.
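
A rough sketch of a system-period temporal table (all names are invented), showing the base table, the history table that must be able to absorb the old row versions, and the “nearly standard” query against the past:

-- Base table with the system period columns
CREATE TABLE DW.POLICY (
    POLICY_ID   INTEGER       NOT NULL,
    PREMIUM     DECIMAL(12,2),
    SYS_START   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
    SYS_END     TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
    TRANS_START TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
    PERIOD SYSTEM_TIME (SYS_START, SYS_END)
);

-- History table that receives the old row versions
CREATE TABLE DW.POLICY_HISTORY LIKE DW.POLICY;

ALTER TABLE DW.POLICY ADD VERSIONING USE HISTORY TABLE DW.POLICY_HISTORY;

-- Ask for the row as it looked at a point in the past
SELECT POLICY_ID, PREMIUM
FROM DW.POLICY FOR SYSTEM_TIME AS OF '2013-01-01-00.00.00'
WHERE POLICY_ID = 42;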

Conclusion

Well, these are my high-level takeaways on the extra material that seems to be on this paper compared to the V9.7 exam (which I never got round to sitting). There seems to be a lot less detail in this paper compared to the one that has taken me so long to read, as that one contains so much detail, which leaves me wondering: is the exam a middle ground of the two? I would highly suggest reading the series, and I will be doing more posts in the meantime on the other papers in the set, found at the link at the top of the article.



Posted in: ADMIN_MOVE_TABLE, DB2, DB2 Administration, DB2 Built-in Stored Procedures, DB2 DBA Certification, DB2 DBA Exam 611, IBM, IBM DB2 LUW, Physical Design, Physical Design / Tagged: ADMIN_MOVE_TABLE, Certification, DB2, DB2 Administration, DB2 DBA Exam 611, DB2 Development, Exam, IBM DB2 LUW, Informational Constraints, Multi temperature data, Storage groups, Stored Procedures, Temporal tables

Getting an estimate – DB2 LUW V10.1 Compression

May 20, 2013 8:00 am / Leave a Comment / dangerousDBA

So you want to add compression to your house: you get a tradesman in to give you an estimate, then carry out the work; DB2 can do all of this. Just like building an extension, you need to make sure that you have all the appropriate permissions from the “council” (IBM) in place: you either need to buy Storage Optimisation as a “feature” or as part of the Advanced Enterprise Edition of DB2. Please be careful when trying to use compression, because as soon as you include “COMPRESS YES” DB2 will set the feature-used flag to YES for compression, and if you get audited you could face a hefty bill.

Benefits of extending to compression

At a high level there are three ways of looking at this.

No compression

  • Benefits: not having to pay the licensing fee to IBM for compression.
  • Costs: large amounts of disk space used for the data, and minimal amounts of data in your bufferpools as the pages are not made any smaller.

Classic compression

  • Benefits: data is compressed on disk and saves you space there; data is also compressed in the bufferpools, so more pages fit in them – less I/O and quicker queries. Data is also compressed in the backup images.
  • Costs: licensing fee to IBM. Slight increase in CPU usage for the compression dictionary usage. You need to reset the dictionary with a REORG from time to time to make sure that you get the most out of the compression.

Adaptive compression

  • Benefits: data is compressed on disk and in the bufferpools, so more pages fit in them – less I/O and quicker queries. Data is also compressed in the backup images. Data is continually compressed, with no need for the RESETDICTIONARY REORG in the same way as classic compression.
  • Costs: licensing fee to IBM. Increase in CPU usage for the compression dictionary usage. Only available in the latest DB2 V10.1.

Here’s what you could be saving – SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO

Handily, IBM have included a very useful table function, SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO. The full information for this can be found in the information centre here. This table function will estimate the savings that you will get with no compression, “classic” compression and adaptive compression; GOTCHAs for this are below:

SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO – GOTCHAs

  1. Tables that are partitioned will come through the table function as multiple rows. You do get a partition ID which you can either join out to or look up in the table SYSCAT.DATAPARTITIONS.
  2. If the table has one (or more) XML columns then you will get an additional row in the results returned: a “DATA” and an “XML” compression estimation row. Together with the other gotcha, you could end up with a lot of rows returned for a partitioned table with XML columns.

Getting an estimate – SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO

This table function can be used to get information on either a table or an entire schema; obviously the latter can take some time to run, from what I have found, especially when the tables are large. The simplest form of the call is:


SELECT * 
FROM TABLE(SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO('{SCHEMA NAME}', '{TABLE NAME}'))

This will get you a result a little like this:


TABSCHEMA:               SCHEMA
TABNAME:                 TABLE
DBPARTITIONNUM:          0
DATAPARTITIONID:         0
OBJECT_TYPE:             DATA
ROWCOMPMODE:             S
PCTPAGESSAVED_CURRENT:   0
AVGROWSIZE_CURRENT:      495
PCTPAGESSAVED_STATIC:    65
AVGROWSIZE_STATIC:       173
PCTPAGESSAVED_ADAPTIVE:  65
AVGROWSIZE_ADAPTIVE:     170

The example above shows that this table is currently using “classic” compression, represented by the S in ROWCOMPMODE; a blank would mean no row compression and an A would be the new adaptive compression. As you can see, it gives you an estimate of the average row size in the different compression modes; this is in bytes, and you will then need to work out what the full GB / MB size might be based on the cardinality of the table.

The table function is telling us, though, that there are potentially 65% savings to be made in both adaptive and classic compression; there is only a 3 byte difference in average row size, and adaptive compression in my opinion is far better, so I would ALTER TABLE to COMPRESS YES ADAPTIVE.
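
For the record, a sketch of what acting on that would look like (the schema and table names are placeholders); the REORG is what actually rewrites the existing rows in the new compressed format:

ALTER TABLE MYSCHEMA.MYTABLE COMPRESS YES ADAPTIVE;

-- From the command line (or via SYSPROC.ADMIN_CMD): rebuild the rows and refresh statistics
db2 REORG TABLE MYSCHEMA.MYTABLE RESETDICTIONARY
db2 RUNSTATS ON TABLE MYSCHEMA.MYTABLE WITH DISTRIBUTION AND DETAILED INDEXES ALL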

If you want to run the table function against a whole schema, leave the table part as a blank string:


SELECT * 
FROM TABLE(SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO('{SCHEMA NAME}', ''))

This will get you a row per table in the schema (plus any extra rows for XML columns / partitioned tables).

The future

In a future post I will look at using this table function to record the values for all tables; you can then look at a before and after and therefore prove that the change in compression and the associated REORGs have worked.



Posted in: DB2, DB2 Administration, DB2 Built in commands, DB2 Built-in Stored Procedures, DB2 Maintenance, db2licm, IBM, IBM DB2 LUW, SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO / Tagged: ADMIN_GET_TAB_COMPRESS_INFO, DB2, DB2 Administration, db2licm, IBM DB2 LUW, Stored Procedures, SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO, V10.1, V9.7, XML

Usage Lists – Big Brother is watching, but only where he is looking

April 22, 2013 12:00 pm / 1 Comment / dangerousDBA

From my holiday reading, and from watching the most excellent DB2 Night Show – specifically an episode done a while back by Iqbal Goralwalla of Triton Consulting (@iqbalgoralwalla) on “DB2 LUW 10.1 Cool Features No One is Talking About” – I have come across usage lists for tables and indexes in DB2.

Why use Usage Lists?

Have you got a table or an index and you never know when or how it is used (stored procedures, screens, systems or dynamic SQL)? Or do you want to monitor the SQL that runs against a table or index and what work it does? Then this could save you ploughing through a lot of code, though it won’t be an instant fix, as the code has to actually run to be captured.

Usage Lists

Usage lists in DB2 essentially allow you to monitor the SQL that runs against the tables or indexes that you have identified that you want monitoring. This does not come without costs, and a list of the GOTCHAs can be found in the “Notes” section of the page here and in “Chapter 26. Usage lists” of the “Preparation Guide for Exam 611” (not sure how much these will come up in the exam?).

GOTCHA

  • Please note in the above paragraph the words “you have identified that you want monitoring”: you will only get the stats if the table is monitored and you have set up the individual usage list!

Usage Lists – Creation

Not going to lie, there is a page of the IBM Info Centre that has a version of this information, but it is a little hard to find unless you type in the exact words; it can be found here, and as you can see from the title it is not obviously about usage lists!

First you need to set a database configuration parameter MON_OBJ_METRICS:

db2 UPDATE DATABASE CONFIGURATION USING MON_OBJ_METRICS EXTENDED

On the page mentioned above it says you need to set this so that "statistics are collected for each entry in the usage list", but on the small scale of the testing that I did I have not found any difference in the captured data.

Then for each table that you want to monitor you need to run, at a minimum:

db2 CREATE USAGE LIST {Some Memorable Name} FOR TABLE {Schema}.{Table}

There are other parts to this command that can be found here, and it has some useful options, like the ability to "turn itself off" when a certain number of different statements have been captured, by doing something like:

db2 CREATE USAGE LIST {Some Memorable Name} FOR TABLE {Schema}.{Table} LIST SIZE {Some Number} WHEN FULL DEACTIVATE

Or a rolling list, but this might create difficulties if you want repeatability:

db2 CREATE USAGE LIST {Some Memorable Name} FOR TABLE {Schema}.{Table} LIST SIZE {Some Number} WHEN FULL WRAP

From testing, unless you specify a LIST SIZE the collection will continue for as long as the list is active. Activating the list is the next statement you need to run to get it to work:

db2 SET USAGE LIST {Some Memorable Name} STATE = ACTIVE

And to disable it again:

db2 SET USAGE LIST {Some Memorable Name} STATE = INACTIVE

So the above is a quick look at how to get this to work, along with the links for a fuller description; let's move on to look at what it collects.

Usage Lists - The output

The output is quite useful, and the full output of the MON_GET_TABLE_USAGE_LIST table function can be found here. It is also a little disappointing, because this does not return the statement text, only an identifier (EXECUTABLE_ID) that you can supply to the MON_GET_PKG_CACHE_STMT table function, information for which can be found here.

You can do something like this and potentially get a lot of data on what your usage list captured, with the statements from MON_GET_PKG_CACHE_STMT joined on:


SELECT *
FROM TABLE(MON_GET_TABLE_USAGE_LIST(NULL, '{Some Memorable Name}', 0)) A 
   LEFT JOIN
     TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) B
    ON A.EXECUTABLE_ID = B.EXECUTABLE_ID

Looking at a version that yields some more focused information:


SELECT B.STMT_TEXT AS SQL_STATEMENT, 
       A.LAST_UPDATED AS LAST_RUN,
       A.NUM_REF_WITH_METRICS AS NO_TIMES_RUN,
       A.ROWS_READ,
       A.ROWS_INSERTED,
       A.ROWS_UPDATED,
       A.ROWS_DELETED,
       A.LOCK_WAIT_TIME,
       A.OBJECT_DATA_L_READS AS BUFFERPOOL_READS,
       A.OBJECT_DATA_P_READS AS NON_BUFFERPOOL_READS
FROM TABLE(MON_GET_TABLE_USAGE_LIST(NULL, '{Some Memorable Name}', 0)) A 
   INNER JOIN
     TABLE(MON_GET_PKG_CACHE_STMT(NULL, NULL, NULL, -2)) B
    ON A.EXECUTABLE_ID = B.EXECUTABLE_ID

This enables you to see how efficient the query is in terms of how often it is run, the work that it did, and how much of the data was found in the bufferpools (BUFFERPOOL_READS) versus how much had to come from disk (NON_BUFFERPOOL_READS). As you can see from my not very good test system query tracking below:


SQL_STATEMENT:         insert into {schema}.{table} ({field},{field},{field}) VALUES ({value},{value},{value})
LAST_RUN:              21/04/2013 10:10:54
NO_TIMES_RUN:          1
ROWS_READ:             0
ROWS_INSERTED:         1
ROWS_UPDATED:          0
ROWS_DELETED:          0
LOCK_WAIT_TIME:        0
BUFFERPOOL_READS:      1
NON_BUFFERPOOL_READS:  0

As you can see, this insert all happened inside the bufferpool, as it was very small and ran against a table with no data. If you find a statement that you are interested in because it has a large number of physical (non-bufferpool) data reads, you can take the captured code and pass it through db2advis to get suggestions on how to make the query better with indexes etc. Please see my blog post on db2advis if you are unfamiliar with it.
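
For example, a captured statement can be handed straight to db2advis from the command line; the database name and SQL below are placeholders (-d is the database, -s a single statement, -t the advise time limit in minutes):

db2advis -d MYDB -s "SELECT * FROM MYSCHEMA.MYTABLE WHERE SOME_FIELD = 42" -t 5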

The future

I am currently looking at automating db2advis and monitoring its suggestions, which, once you are capturing the SQL, becomes a lot easier.



Posted in: DB2, DB2 Administration, DB2 Built in commands, DB2 Development, DB2 Maintenance, DB2 Table Functions, IBM, IBM DB2 LUW, MON_GET_PKG_CACHE_STMT, MON_GET_TABLE_USAGE_LIST / Tagged: Create Usage List, DB2, DB2 Administration, DB2 Development, db2advis, IBM DB2 LUW, MON_GET_PKG_CACHE_STMT, MON_GET_TABLE_USAGE_LIST, Stored Procedures, Usage List, Usage List Status, V10.1

DB2 LUW Exam 611 – Holiday reading

April 5, 2013 1:30 am / Leave a Comment / dangerousDBA

This is a short post and I hope to get back into blogging properly once I am back from my hols. It's been a while since I last posted, but seeing as we are on holiday and it is currently too hot to move, I thought I would do a post. My holiday reading generally consists of technical manuals, papers, and generally interesting stuff about history etc. This holiday has been no different: I am currently trying to wade through the only currently IBM-published material for the new IBM 611 DB2 LUW DBA exam.

Preparation guide for DB2 10.1 LUW Exam 611

So this is the view that I currently have most days while trying to wade through the treacle that is this material.

This guide can be found here. While I don't doubt that it is all very good stuff, and I have to admit that some of it is bloggable when I get the chance, there is too much detail when compared to the past offerings from Roger Sanders.

The red, green and purple books offered the right level of detail for the exam, but also enough to go off to the info centre and find out all the nitty gritty for yourself if needed. On the other hand, IBM have gone the whole hog in this guide and included the whole info centre at 1121 pages, giving you too much detail and no indication of what might actually be on the exam!

Too much detail

There are 45 or so pages on temporal tables (p253 – p301), whereas for indexes there are fewer than 30 (p327 – p351). So does this mean that temporal tables are new, so a lot more has been devoted to them, while indexes are old and everyone knows about them? Or does it mean that there will be a lot more questions on the topics that have more pages in this guide? Any offers?

I am also sorry to say the style offers no inspiration to carry on. It took me the best part of two days to clear the temporal tables section, but on the plus side I am now fully caught up on any sleep I may have needed!

Please someone release a less verbose updated version of this guide, like the good old green and purple books.

Posted in: DB2, DB2 Administration, DB2 Temporal Data Management, Exam, IBM, IBM DB2 LUW, Uncategorized / Tagged: 611, DB2, DB2 Administration, Exam, IBM DB2 LUW, V10.1

Record the size of your DB2 tables – SYSIBMADM.ADMINTABINFO

February 21, 2013 8:00 am / 2 Comments / dangerousDBA

Don't know how your tables are growing or shrinking over time? Then this article should help you, and it uses a built-in DB2 administrative view called SYSIBMADM.ADMINTABINFO, so nothing too complicated to do here; full details about SYSIBMADM.ADMINTABINFO can be found in the IBM Information Centre.

Below I will go through the DB2 objects that I have created to record this info and how you can implement this yourself.

The view using SYSIBMADM.ADMINTABINFO

This gives me something I can query during the day after I have added quantities of data, or that I can use in a stored procedure to record the daily table sizes:


CREATE VIEW DB_MAIN.TABLE_SIZES AS (
    SELECT CURRENT_DATE AS STATS_DATE,
           TABNAME      AS TABNAME,
           TABSCHEMA    AS TABSCHEMA,
           TABTYPE      AS TABTYPE,
           TOTAL_SIZE   AS TOTAL_OBJECT_P_SIZE,
           DATA_SIZE    AS DATA_OBJECT_P_SIZE,
           DICT_SIZE    AS DICTIONARY_SIZE,
           INDEX_SIZE   AS INDEX_OBJECT_P_SIZE,
           LOB_SIZE     AS LOB_OBJECT_P_SIZE,
           LONG_SIZE    AS LONG_OBJECT_P_SIZE,
           XML_SIZE     AS XML_OBJECT_P_SIZE
    FROM TABLE(
        SELECT TABNAME,
               TABSCHEMA,
               TABTYPE,
               DECIMAL(((DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE + LONG_OBJECT_P_SIZE + LOB_OBJECT_P_SIZE + XML_OBJECT_P_SIZE) / 1024.0),10,3) AS TOTAL_SIZE,
               DECIMAL((DATA_OBJECT_P_SIZE / 1024.0),10,3) AS DATA_SIZE,
               DECIMAL((DICTIONARY_SIZE / 1024.0),10,2) AS DICT_SIZE,
               DECIMAL((INDEX_OBJECT_P_SIZE / 1024.0),10,3) AS INDEX_SIZE,
               DECIMAL((LOB_OBJECT_P_SIZE / 1024.0),10,3) AS LOB_SIZE,
               DECIMAL((LONG_OBJECT_P_SIZE / 1024.0),10,3) AS LONG_SIZE,
               DECIMAL((XML_OBJECT_P_SIZE / 1024.0),10,3) AS XML_SIZE
        FROM SYSIBMADM.ADMINTABINFO
        WHERE TABSCHEMA NOT LIKE 'SYS%'
          AND TABSCHEMA NOT LIKE 'SNAP%'
    ) AS TABLESIZE
)

The view does not include all the columns that are available in SYSIBMADM.ADMINTABINFO, just the ones that are the most useful for general day-to-day usage; there are many more here that you could use. The values are stored in KB so need dividing by 1024 to get them to MB. The other GOTCHA is that partitioned tables will appear as one row per partition.

Table sizes record table

Rubbish section title I know, but I have tried several different names. This is the meta table that will record the information from the cut-down version of the view, populated by the stored procedure below.


CREATE TABLE DB_MAIN.TABLE_SIZES_STATS  ( 
	STATS_DATE         	DATE NOT NULL,
	TABNAME            	VARCHAR(128),
	TABSCHEMA          	VARCHAR(128),
	TABTYPE            	CHARACTER(1),
	TOTAL_OBJECT_P_SIZE	DECIMAL(10,3),
	DATA_OBJECT_P_SIZE 	DECIMAL(10,3),
	DICTIONARY_SIZE    	DECIMAL(10,2),
	INDEX_OBJECT_P_SIZE	DECIMAL(10,3),
	LOB_OBJECT_P_SIZE  	DECIMAL(10,3),
	LONG_OBJECT_P_SIZE 	DECIMAL(10,3),
	XML_OBJECT_P_SIZE  	DECIMAL(10,3) 
	)
IN DB_MAIN_TS
COMPRESS YES

Please note that if you do not have the “Storage Optimisation Feature” from IBM then please do not include the line “COMPRESS YES”; otherwise, if Big Blue comes to do an audit you could be in trouble. The best way to avoid this is to set the licence enforcement policy to hard.
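
A hedged sketch of doing that from the command line (the product identifier varies by edition, so check the output of db2licm -l first):

db2licm -l                # list installed licences and the current enforcement policy
db2licm -e db2ese hard    # example only: hard enforcement for Enterprise Server Edition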

Stored procedure for recording table sizes using SYSIBMADM.ADMINTABINFO

This is the stored procedure that I use to record the table sizes at the time of running the SP:

CREATE PROCEDURE DB_MAIN.ADD_TABLE_SIZES_STATS   ()
LANGUAGE SQL
BEGIN
    INSERT INTO DB_MAIN.TABLE_SIZES_STATS
    SELECT *
    FROM DB_MAIN.TABLE_SIZES
    WITH UR;
END

What to do next

As stated earlier, you can use this to record the day-to-day table sizes, or if you are in the process of compressing your tables you can use it to record the sizes before and after. In a future article I will be using the objects created here to show how much table sizes have decreased by implementing adaptive compression.



Posted in: Blogging, DB2, DB2 Administration, DB2 Built in commands, DB2 built in Views, DB2 Data Types, DB2 Maintenance, DB2 Storage Optimisation, db2licm, Decimal, IBM, SYSIBMADM.ADMINTABINFO / Tagged: DB2, DB2 Administration, DB2 Development, db2licm, IBM DB2 LUW, Meta Data, SYSIBMADM.ADMINTABINFO, V10.1, V9.7


Disclaimer:

The posts here represent my personal views and not those of my employer. Any technical advice or instructions are based on my own personal knowledge and experience, and should only be followed by an expert after a careful analysis. Please test any actions before performing them in a critical or nonrecoverable environment. Any actions taken based on my experiences should be done with extreme caution. I am not responsible for any adverse results. DB2 is a trademark of IBM. I am not an employee or representative of IBM.
