Dangerous DBA: A blog for those DBAs who live on the edge

Category Archives: Reorg Index

Automated DB2 Reorganisation, Runstats and Rebinds – Version 2

December 11, 2011 8:56 pm / 1 Comment / dangerousDBA

A while back I wrote the first version of this code (it can be found here). I have been running that code on our production servers for some time; it started out working fine, but sometimes it would overrun and interfere with the morning batch, so a different solution was needed. In a previous article I discussed whether it was better to let DB2's included automated functionality take care of the maintenance of tables etc., or to create your own process that uses the included stored procedures to identify the tables that need reorganising.

So this new version of the script will only work between certain times and will only do offline reorganisations, but it is still possible to reorganise just a single partition of a range partitioned table. The reason for the time restriction is to take a leaf out of the included automated scripts' book by having an offline maintenance window, and to stop the scripts that I created before from overrunning into the morning batch. The previous version of the reorganisation script tried to be too "clever" and do an online reorg of non-partitioned tables and an offline reorg of the partitions of the range partitioned tables. The problem with this is capturing when the online reorgs have finished (as they are asynchronous), so that the table can have its statistics run and is not identified again by the SYSPROC.REORGCHK_TB_STATS stored procedure. Another issue is that you would have to reorganise the indexes on the tables that you have reorganised online, as they would not have been done, whereas an offline reorganisation also does the indexes at the same time.
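For anyone who has not driven it by hand, a minimal sketch of calling SYSPROC.REORGCHK_TB_STATS might look like the following (the schema and table names here are made up, and you should check the exact result-set columns on your DB2 version):

```sql
-- Evaluate a single table ('T' = table mode); names are examples only
CALL SYSPROC.REORGCHK_TB_STATS('T', 'MYSCHEMA.SALES');

-- The results land in a session table; a '*' anywhere in the REORG
-- column means one of the F1-F3 formula thresholds has been breached
SELECT TABLE_SCHEMA, TABLE_NAME, REORG
FROM SESSION.TB_STATS
WHERE REORG LIKE '%*%';
```

This is the same routine the automated scripts use to build their work list, so eyeballing its output first is a cheap way to sanity-check what they are about to do.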

So I made the decision to do all the reorganisations offline, followed by a runstats and a rebind. The main controlling stored procedure looks like:

CREATE PROCEDURE DB_MAIN.RUN_ALL_AUTOMATED_MAINTENANCE(IN MAINT_SCHEMA VARCHAR(255), IN REORG_FINISH_TIME TIME, IN RUNSTATS_FINISH_TIME TIME, IN DAY_TO_REMOVE INTEGER)
LANGUAGE SQL
BEGIN
 ----------------------------------------------------------------------------
 ----------------------------------------------------------------------------
 --This procedure is the wrapper for all the rest to tidy it up a little bit.
 --It will only run the reorgs until the time specified, then will just finish the one
 --that it is on once the time has expired.
 --Similar thing for the runstats so that it does not impact on the running of the
 --morning loads.
 --Rebind the procedures so that they get new packages based on the updated statistics
 --from the reorg and runstats.
 --All reorgs are done offline as this is what DB2 does.
 --MAINT_SCHEMA = The schema you wish to be looked at
 --REORG_FINISH_TIME = The time you wish the reorgs to run until
 --RUNSTATS_FINISH_TIME = The time you wish the runstats to run until
 --DAY_TO_REMOVE = The number of days back you wish staging tables to be emptied from
 ----------------------------------------------------------------------------
 ----------------------------------------------------------------------------

 ----------------------------------------------------------------------------
 ----------------------------------------------------------------------------
 --Reorg the tables
 CALL DB_MAIN.RUN_AUTOMATED_TABLE_REORG(MAINT_SCHEMA, REORG_FINISH_TIME, DAY_TO_REMOVE);
 ----------------------------------------------------------------------------
 ----------------------------------------------------------------------------
 --Runstat the tables that have been reorged
 CALL DB_MAIN.RUN_AUTOMATED_TABLE_RUNSTATS(MAINT_SCHEMA, RUNSTATS_FINISH_TIME,DAY_TO_REMOVE);
 ----------------------------------------------------------------------------
 ----------------------------------------------------------------------------
 --Rebind the stored procedures to take advantage of the potentially new plans
 CALL DB_MAIN.RUN_AUTOMATED_REBIND_PROCEDURES(MAINT_SCHEMA);

END

This is now a three-stage operation; the first two stages have time limits, and they will keep starting new operations until that limit is breached. What you have to realise here is that if the end time is 18:00:00 then it will start work right up until 17:59:59, which means that if it picks up a particularly large reorganisation task at that last second it will run until it has finished.
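For illustration, a hypothetical call to the wrapper might look like this (the schema name and values are examples; the times use DB2's TIME literal format):

```sql
-- Example only: run reorgs until 06:00, runstats until 07:00,
-- and empty staging tables from more than 7 days back
CALL DB_MAIN.RUN_ALL_AUTOMATED_MAINTENANCE('MYSCHEMA', '06:00:00', '07:00:00', 7);
```

Scheduled from cron or the DB2 Administrative Task Scheduler just after the evening load finishes, the two finish times give you a hard edge against the morning batch.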

Some of the code, especially the runstats part, is quite similar to the previous version, just with a change for the time. As I can't upload a single .zip file (apparently it is a security risk, and apparently a .sql file is also a risk), please find a number of .doc files at the bottom of the article. Just change the file extension and you will be able to access them. I would be very interested in feedback from anyone who uses this code to see how you get on with it.

DISCLAIMER: As stated at the top of the blog, use this code in your production systems at your own peril. I have tested it and know it works on my systems; please test and check that it works properly on yours, as reorganising tables can be potentially dangerous.

FILES WITH CODE IN:

OverallRunnerStoredProcedure

ReorganiseTablesStoredProcedures

ReorganiseTableTables

ReorganiseTableViews

RunstatsTableTables

RunstatsTableViews

RunstatsTableStoredProcedures

RebindSchemaStoredProcedure

Posted in: DB2, DB2 Administration, DB2 built in tables, DB2 built in Views, DB2 Built-in Stored Procedures, DB2 Maintenance, IBM, Rebind Stored Procedure, Reorg Index, Reorg Table, Reorganise Index, Runstats, SYSIBM.SYSDATAPARTITIONS, SYSIBM.SYSTABLES, SYSIBMADM.SNAPUTIL, SYSPROC.ADMIN_CMD, SYSPROC.REBIND_ROUTINE_PACKAGE, SYSPROC.REORGCHK_IX_STATS, SYSPROC.REORGCHK_TB_STATS

IDUG – EMEA – 17th – Final day

November 17, 2011 8:44 pm / Leave a Comment / dangerousDBA

First of all, can I apologise for the spelling and poor English in some of my posts from IDUG EMEA; I have no excuse other than that they were generally written late at night with a few beers inside me. With that out of the way, let's get on with what I did today. Surprise of the day was seeing a lady in the restaurant having cucumber, chocolate cake and scrambled egg at the same time, but hey, if she enjoyed it, fair enough.

The talks that I went to today:

IO, IO its off to disk we go – Scott Hayes

There was a bit of repetition of the index talk that Scott gave yesterday, as the two are really closely related, and seeing him dancing in a video this morning was quite entertaining too (I wonder if that was his wife, or does she know?). Between all the talks on performance monitoring, index tuning and IO tuning I have been to at IDUG EMEA, my junior and I are going to have loads to do for several weeks. From this I learnt:

  1. REORGCHK does a runstats every time it is called. I am going to have to give this a try next time I run it and check the column in the TABLES table. It's not that I don't believe you, Scott; it's just that it works so quickly.
  2. SSDs are better for random IO. Even if you cannot afford enough SSD to fit a whole database on it, why not put just the parts of the database that are used most often on it!

Database I/O in the brave new world – Aamer Sachedina

This was the second IO talk I went to in the day, but it was completely different to Scott's; Aamer looked at it from a more hardware point of view as opposed to the database point of view. It was interesting, as the hardware side is something that I have always wanted to know more about, and this gave me a good foundation, plus some questions to ask my storage manager when I get back. I learnt that:

  1. Thin provisioning does not really give you the space at all; it is more like pseudo space allocated, which can lead to a whole heap of trouble. I will be asking some questions when I get home.
  2. If you are using thin provisioning then there are some special DB2 registry variables that you need to set (db2set).
  3. Soon we will be getting Fibre Channel over copper at the low levels of the SAN stack!

Understanding and tuning page cleaning – Kelly Schlamb

Another talk on improving the IO on my DB2 databases; I am going to be investigating these things as a matter of urgency once I get back to work, if not before, as I am itching to improve and learn. This talk was mainly to do with the differences in the settings you need between having DB2_USE_ALTERNATE_PAGE_CLEANING ON or OFF.

After the conference finished I went for some cheeky sightseeing with Colin, a DB2er that I met while over here, and Iqbal from Triton Consulting, one of the DB2Geeks. We took the Prague Metro into the centre of town and got to see some of the sights. We had no idea where we were going, but I think by pure accident we saw most of the sights, or at least things that a lot of other tourists took photos of; so they must be sights, right?

Second surprise of the day was meeting @db2fred in the restaurant local to the hotel. The meeting was not the surprise; the fact that he knew who I was before I had even opened my mouth absolutely threw me. Good to put a face to a Twitter name. Have a nice journey home tomorrow morning, Fred.

Tomorrow is the last full day I have in Prague before I fly home early doors on Saturday. The money that coming to IDUG as Iqbal's student (under the mentor scheme) saved me enabled me to sign up for Scott Hayes' Rocket Science: DB2 LUW Performance Analysis and Tuning Workshop, which I am hoping will give me even more areas to work on in the database and teach me even more about the correct set-up. Again, I can't say thank you enough to Triton Consulting and Iqbal Goralwalla for doing this for me and enabling me to take part in IDUG.

Posted in: DB2, DB2 Administration, DB2 Maintenance, EMEA, IBM, IDUG, Reorg Index, Reorg Table, Triton

IDUG – EMEA – 16th – Day Three

November 16, 2011 11:04 pm / Leave a Comment / dangerousDBA

Today was another long day, but it ended with an excellent dinner put on by IBM to thank its customers, with ostrich leg and proper sushi, so now we know where all our licensing fees go!! The talks that I attended did not teach me as much as I had hoped, but I did learn something in each of them, so it was not a total waste of time.

A DBA’s guide to using TSA – Fredric Engelen

This covered the basics of HADR and then went on to cover how you set up TSA to take over the HADR, but it did not cover the TSM that I had hoped it would, which I will be implementing soon at Holiday Extras. I learnt:

  1. db2rfpen – will let you force a rollforward of the primary database.

Managing DB2 Performance in an Heterogeneous environment – Jim Wankowski

This covered the differences and similarities between DB2 LUW and DB2 z/OS. Although it was informative, I feel the title was not right for the session and should have been different. I learnt:

  1. When a sort happens on a VARCHAR column, the column is expanded to its full length – I may put this question to Scott Hayes when I do his Rocket Science seminar on Friday.

Deep Dive into DB2 LUW offline table and index reorg – Saeid Mohseni

This session was very good. If you are a frequent reader of my blog then you will know that I am trying to get a straight answer to my questions on reorganisation and runstats in DB2, and so I got things confirmed and learnt:

  1. DB2 reorgs need the current runstats on the table to be correct for the reorganisation-identifying stored procedure to give the correct results.
  2. You can run reorgs in parallel on a partitioned table's indexes, as long as the first, and subsequent, runs do not allow reads.

Data Warehousing – SIG

This was a little disappointing as it did not have an agenda and so was unstructured; I would have liked a little more information on how it was going to be run. It was still informative, and if anyone has heard of "Data Vaulting" then there is a lady from the Netherlands who would really like to know.

Back to the fifties . . . . . 50 fabulous ways for forecasting failures, flaws and finding flubber – Alexander Kopac

This was an excellent talk, and there is a lot to try out when I get back home, probably enough work to keep us going for weeks. The presenter dressed up as a wizard, and the bits of SQL he gave in the slides will hopefully make the DB2 team at HX wizards too. One main thing to remember is:

  1. KISS – Keep It Simple Stupid

Useful but widely unknown DB2 Functions – Michael Tiefenbacher

This was the second talk from this guy, and if I did not already know, use, or have blogged about all the things that he presented, it would have been extremely useful; I really should have read the agenda better before going in.
And to the final talk of the day:

DB2 LUW Index design, best practice and case studies – Scott Hayes

This was a very good talk, and used in conjunction with Alexander's information I think it will build a framework for reviewing indexes and designs at HX. I learnt that:

  1. I need to read up on CLUSTERED indexes.
  2. Single column indexes are not good, even though they are recommended by IBM.
  3. You need a good problem statement to come up with a good solution – this can be applied to everything in life.

Tomorrow is the last day of the conference, so it finishes pretty early and I might get some sightseeing done in the afternoon, but before that I plan on attending:

Thursday, November 17, 2011

08:30 AM – 09:30 AM
Session 15
1899:I/O, I/O, it’s off to Disk I go – I/O Optimization, Elimination, & SSD (Aquarius)
09:45 AM – 10:45 AM
Session 16
2194:Database I/O in the Brave New World (Aquarius)
11:15 AM – 12:15 PM
Session 17
1892:Understanding and Tuning Page Cleaning in DB2 (Aquarius)
12:30 PM – 01:30 PM
Thursday DB2 Panel

So have a good night and see you all in the morning.
Posted in: Data types, DB2, DB2 Administration, DB2 Ecosystem, DB2 Maintenance, EMEA, IBM, IDUG, Reorg Index, Reorg Table, Reorganise Index, Varchar

DB2 Automated Maintenance Vs Automated Maintenance Scripts

September 11, 2011 8:43 am / Leave a Comment / dangerousDBA

I am the first to admit when I am wrong and accept the consequences, but IBM sometimes do not make it easy to work out what you are meant to be doing, and answers of "it depends" are not entirely helpful; equally, I understand that I only know what I know about DB2 and am more than willing to learn. Those of you that have come to my blog before have probably seen the articles I have done on stored procedures that you could automate to reorganise tables and indexes, run runstats on the reorganised tables and then finally rebind the stored procedures. Recently these have started to overrun and affect production systems that they were not meant to, so a new way of working needs to be found. We are therefore back to either the automated maintenance found in the DB2 ESE product itself or editing the scripts, but from my research the automated maintenance does not really do things in the right "order".

The automated maintenance provided with DB2, from what I understand, allows two time periods, an online window and an offline window, and in essence three different work methods: "do something", "tell someone who's bothered", or "do something and tell someone who's bothered". In the online window you can carry out runstats and other activities that can be performed on tables without taking them offline. In the offline window DB2 will carry out reorgs of tables as offline classic reorgs. At no point will it rebind the stored procedures, and in no way are these activities joined up: if a table is reorganised it will not then automatically have runstats done on it unless DB2's behind-the-scenes formulas decide to. Whereas the scripts and stored procedures I created will do everything in order, but is that needed?

I was listening (live) to the excellent free webinar from DBI, "DB2 LUW Vital Statistics – What you need to know" (replay download at the bottom), and hearing guest John Hornibrook explain how and what you can set in DB2 to gather statistics was enlightening; I learned a lot, so all good. Having been researching the automated maintenance, I thought of a question: "Do you need to run runstats after a table / index reorg?". The host thought that he knew the answer, but I think John threw him a little bit of a curve ball by responding with (something like) "well, the data has not changed but the locations and distribution on disk have changed" (I would have liked to get the exact quote, but there was no sound on the replay I downloaded!), so I was even more confused. I would have loved to submit a follow-up question, but they drew it to a close in short order after that. My next question would have been "Will a table be marked for runstats after it has been reorganised?".

So, on the theme of that question, I thought IBM developerWorks might know, and it did have some very useful information: Automatic table maintenance in DB2, Part 1 and Automatic table maintenance in DB2, Part 2. These articles are very good and explain how automatic table maintenance works, but equally they left me with questions. A line in Part 2 says:

“If you reorganize the table and do not update the table statistics by issuing a RUNSTATS command, the statistics will still indicate that the table contains a high percentage of overflow rows, and REORGCHK will continue to recommend that the table be reorganized”

But in Part 1, on runstats, there is a list of decisions DB2 will make as to whether it needs to run runstats:

  1. Check if the table has been accessed by the current workload.
  2. Check if the table has statistics. If statistics have never been collected for this table, issue RUNSTATS on the table. No further checks are performed.
  3. Check whether the UDI counter is greater than 10% of the rows. If not, no action on the table.
  4. Check whether the UDI counter is greater than 50% of the rows; if so, issue RUNSTATS on the table.
  5. Check if the table is due for evaluation. No further action is performed if the table is not due for evaluation. An internal table is used to track whether tables are due for evaluation.
  6. RUNSTATS if the table is small.
  7. If the table is large (more than 4,000 pages), sample the table to decide whether or not to perform RUNSTATS.

So it seems that a table might not get runstats run on it if it did not fall into these criteria, and it would then keep being targeted for reorganisation. Another thing that intrigued me was that:

“All scheduled reorganizations (and other automatic maintenance operations, like automatic runstats) are maintained in a queue. When the corresponding maintenance window begins, reorganizations are performed one after another until the end of the window”

So if your tables are large, or the window when your tables cannot be accessed is short, then not a lot of work will be done. It is not multi-threaded like the stored procedures that I wrote, but it does have one advantage in that the reorganisation phase is tied to a window, something that is not built into my scripts. Equally, the stored procedures have their disadvantages: the reorganisation is IO heavy and the runstats is CPU heavy, so if you have multiples of these things running they could all be at different stages and become quite a load on the server.

I think the solution is that automatic maintenance is useful just to keep your runstats ticking over during the week, because as John explained this automation is very "light" and can also be set to evaluate before a query is run. For reorganisation, though, I am going to write a new version of the scripts and stored procedures that I blogged about before and build in time windows that the work will be carried out under, because it is a more joined-up way of doing things, and it will also include the rebind, which is essential for DB2 to know the best execution plan for stored procedures.
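For reference, the manual equivalent of the runstats-then-rebind step can be done with the built-in routines; a sketch (the schema, table and procedure names here are made up) might look like:

```sql
-- Refresh statistics on a reorganised table (names are examples)
CALL SYSPROC.ADMIN_CMD('RUNSTATS ON TABLE MYSCHEMA.SALES WITH DISTRIBUTION AND DETAILED INDEXES ALL');

-- Rebind a stored procedure's package so it picks up the new statistics
-- ('P' = procedure; the empty string means no extra rebind options)
CALL SYSPROC.REBIND_ROUTINE_PACKAGE('P', 'MYSCHEMA.MY_PROC', '');
```

These are the same two calls the scripted approach wraps in time windows; run by hand they are handy for spot-fixing a single table after an ad hoc reorg.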

I would love to know your experience with automatic maintenance or other methods of keeping your reorganisations and runstats up to date so please feel free to comment on this posting.

Posted in: DB2, DB2 Administration, DB2 built in Views, DB2 Maintenance, IBM, Rebind Stored Procedure, Reorg Index, Reorg Table, Reorganise Index, Runstats, SYSIBMADM.SNAPTAB_REORG

DB2 Detach Table Partitions automatically

May 15, 2011 5:36 pm / Leave a Comment / dangerousDBA

To aid querying of the large tables that DB2 will allow you to create, when you use ESE (Enterprise Server Edition) or one of the extensions for the lesser editions, DB2 will let you create range partitioned tables. A full starter explanation and examples can be found here, so there is no need to go into it in this article.

Although you can create a table with a number of partitions, detaching an old partition is an entirely manual process. It is not a complicated process, but it can be time consuming if you have several tables to do, and you need to do it in a safe way.

If you have any kind of data retention policy then eventually "old" data in your tables will need detaching from the end of the table, as it has now passed its usefulness but may be required in the future to satisfy extraordinary queries. There is a table in DB2 that holds metadata on tables with partitions: SYSIBM.SYSDATAPARTITIONS. This can be used to determine whether there are enough partitions to detach the old ones or not.
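As a quick sanity check before automating anything, you can eyeball the partition metadata yourself. A minimal query (schema and table names are examples; the trimming mirrors what the procedures below do, since the catalog values may carry trailing blanks) might be:

```sql
-- List a table's partitions, oldest first, as the detach logic sees them
SELECT DATAPARTITIONNAME, PARTITIONOBJECTID
FROM SYSIBM.SYSDATAPARTITIONS
WHERE LTRIM(RTRIM(TABSCHEMA)) = 'MYSCHEMA'
  AND LTRIM(RTRIM(TABNAME)) = 'SALES'
ORDER BY DATAPARTITIONNAME ASC;
```

The first row returned is the partition the procedures below would pick as the detach candidate.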

So, to make this process easier to manage, I have come up with a process that uses two user stored procedures, SYSPROC.ADMIN_CMD, and a table to record information in, and so it can be called from the command line, a batch script or the SQL command editor.

This first stored procedure uses a variable passed in from the main script to determine if there is a partition that could be a candidate for detachment:

CREATE PROCEDURE DB_MAIN.GET_MIN_PARTITION (IN TABLESCHEMA VARCHAR(255), IN TABLENAME VARCHAR(255),IN PARTITIONNUM INT, OUT PARTOBJID INT)
LANGUAGE SQL
BEGIN
    --Declare vars for use
        DECLARE ActualPartNo INT DEFAULT 0;
        DECLARE Task_problem condition FOR SQLSTATE '99999';

    --see if the ActualPart and the PartNum are equal or greater
        SET ActualPartNo = (SELECT COUNT(*)
                            FROM SYSIBM.SYSDATAPARTITIONS
                            WHERE LTRIM(RTRIM(TABNAME)) = TABLENAME
                                AND LTRIM(RTRIM(TABSCHEMA)) = TABLESCHEMA);

    --See if the number of partitions on the table is the same as or greater than the
    --specified number
        IF(ActualPartNo > PARTITIONNUM) THEN
            SET PARTOBJID = (SELECT PARTITIONOBJECTID
                            FROM SYSIBM.SYSDATAPARTITIONS
                            WHERE LTRIM(RTRIM(TABNAME)) = TABLENAME
                                AND LTRIM(RTRIM(TABSCHEMA)) = TABLESCHEMA
                            ORDER BY DATAPARTITIONNAME ASC
                            FETCH FIRST ROW ONLY);
        END IF;           

    --Return the partition object ID
        RETURN PARTOBJID;
END

This procedure returns the partition object ID if there is a partition that meets the criteria specified from the main procedure. If no partition meets the criteria (e.g. you pass in 10 and the table only has 8 partitions) then 0 will be returned and no ID will be passed to the outer procedure.

So onto the main procedure:

CREATE PROCEDURE DB_MAIN.DETACH_PARTITION(IN TABLESCHEMA VARCHAR(255), IN TABLENAME VARCHAR(255),IN PARTITIONNUM INT, IN EXPORTDIR VARCHAR(255))
LANGUAGE SQL
BEGIN
--Declare vars for use in SP
	DECLARE Partition_problem condition FOR SQLSTATE '99999';
	DECLARE DttActPartNo INT DEFAULT 0;
	DECLARE DttPartName varchar(150);
	DECLARE DttSQL Varchar(300) DEFAULT 'No Dont do it';
	DECLARE ReorgString VARCHAR(500);
	DECLARE ExportString VARCHAR(500);

--Find If there is a partition to detach
	CALL DB_MAIN.GET_MIN_PARTITION(TABLESCHEMA,TABLENAME,PARTITIONNUM,DttActPartNo);

	 IF(DttActPartNo <> 0) THEN
		--Get the name of the partition
		SET DttPartName = (SELECT DATAPARTITIONNAME
		       FROM SYSIBM.SYSDATAPARTITIONS
			   WHERE PARTITIONOBJECTID = DttActPartNo
			AND LTRIM(RTRIM(TABNAME)) = TABLENAME
			AND LTRIM(RTRIM(TABSCHEMA)) = TABLESCHEMA);

		--Build dynamic SQL to Detach and create a table of the partition
		SET DttSQL = 'ALTER TABLE ' || TABLESCHEMA || '.' || TABLENAME || ' DETACH PARTITION ';
		SET DttSQL = DttSQL || DttPartName || ' INTO ' || TABLESCHEMA || '.' || DttPartName;

		IF((DttSQL <> '') AND (DttSQL <> 'No Dont do it'))THEN
			--Write the table date Etc to Logging table
			INSERT INTO DB_MAIN.DETACHPARTITIONS(
				TABLESCHEMA,
				TABLENAME,
				DETACHDATE,
				DETACHTABLESCHEMA,
				DETACHTABLENAME,
				DETACHCODE
			)
			VALUES(
				TABLESCHEMA,
				TABLENAME,
				CURRENT DATE,
				TABLESCHEMA,
				DttPartName,
				DttSQL
			);			

			COMMIT;

			--Run the code
			PREPARE S1 FROM DttSQL;
			EXECUTE S1;

			--Reorg the table
			--Create the string
			SET ReorgString = 'REORG INDEXES ALL FOR TABLE ' || TABLESCHEMA || '.' || TABLENAME || ' ALLOW NO ACCESS CLEANUP ONLY';

			--Run the command
			CALL SYSPROC.ADMIN_CMD(ReorgString);

			--Create the export
			--Create the string
			SET ExportString = 'EXPORT TO ' || EXPORTDIR || '/' || TABLESCHEMA || '_' || DttPartName || '.tsv OF DEL MODIFIED BY CHARDEL"" COLDEL0x09 DATESISO SELECT * FROM ' || TABLESCHEMA || '.' || DttPartName;
			--Run the command
			CALL SYSPROC.ADMIN_CMD(ExportString);

		END IF;
ELSE
    INSERT INTO DB_MAIN.DETACHPARTITIONS(
	TABLESCHEMA,
	TABLENAME,
	DETACHDATE,
	ERRORTEXT
    )
    VALUES (TABLESCHEMA,
		TABLENAME,
		CURRENT DATE,
	    'This table does not have that many partitions. Attempted:' || CHAR(PARTITIONNUM)
	    );

END IF;
END

The stored procedure takes four parameters: the table schema (TABLESCHEMA), the table name (TABLENAME), the number of partitions you wish the table to keep (PARTITIONNUM) and the directory on the server where the export of the detached partition will be written (EXPORTDIR). The procedure works out from the parameters whether the table (TABLESCHEMA.TABLENAME) has the same number of partitions as, or more than, the PARTITIONNUM parameter; if it does then the oldest partition will be detached and a delimited export file of the partition will be created at the EXPORTDIR location.

There is one GOTCHA: if you have called your partitions all the same things across different tables in the same schema, then you will need to edit this code slightly to take account of this and differentiate both the tables that are created and the export files that are produced. The reason I mention this is that when you create a partitioned table, if you do not specify the names of the partitions then DB2 will create them like PART0, PART1, PART2. Using this code, the schema and the name would be the same and lead to conflicts.

This allows the stored procedure to detach the partition that needs to be archived and create both an export file and a detached partition table. No table or data is deleted automatically; this means you can make sure that the data you need in the export is archived in your chosen way before deleting the detached partition table. As you can see, the solution also uses a table to record what has been done for auditing purposes. If there are not enough partitions, the auditing table will record the fact and no detaching will take place. The reorg is needed because the indexes on the table that has just had the partition removed will not work properly until it is done.

The procedure can then be called as per the code below, via your favourite method for automating tasks:

CALL DB_MAIN.DETACH_PARTITION('INSURANCE', 'TRANSACTIONS',10, '/home/db2inst1/detach-archive/')

Please note the trailing slash is needed in the directory path.

DISCLAIMER: As stated at the top of the blog, use this code in your production systems at your own peril. I have tested it and know it works on my systems; please test and check that it works properly on yours, as detaching partitions can be potentially dangerous. The file is a .doc only as that's the only way I could get it uploaded onto WordPress; it should open fine like that, or knock the .doc off and it will open in your favourite text editor.

FILE WITH CODE IN: DB2_Detach_Partitions_Tables_Sps_DCP

Posted in: DB2, DB2 Administration, DB2 built in tables, DB2 Built-in Stored Procedures, Detach table Partition, Reorg Index, SYSIBM.SYSDATAPARTITIONS, SYSPROC.ADMIN_CMD
