
2012/01/05

DEALING WITH THE TRAILING DOT WHEN CASTING A DECIMAL TO CHAR

In Teradata, when a decimal is cast to char directly,
the result comes out with a trailing dot at the end of the number.

e.g. the following query returns '123.'
select cast(123. as varchar(10))

The annoying dot can be removed by casting with the Z(I) format:

select cast(cast(123. as format 'Z(I)') as varchar(10))

which gives us the desired result '123'.

Casting the decimal to an integer first does the same thing:
select cast(cast(123. as integer) as varchar(10))



But take care of the length of the decimal number,
because an INTEGER can only accommodate values from -2,147,483,648 through +2,147,483,647, and when casting with Z(I) the maximum value is 9999999999.
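
For values beyond the INTEGER range, a possible workaround (only a sketch, assuming a Teradata release that supports BIGINT) is to cast through BIGINT instead:

select cast(cast(3000000000. as bigint) as varchar(20))
-- should return '3000000000', again without the trailing dot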

USING FASTEXPORT TO EXPORT A TAB-DELIMITED TEXT FILE

We must remember one thing: FastExport is meant to export flat files from Teradata, and the output file is meant to be loaded back into tables.
So do not expect to open the exported file in an editor; that is not what FastExport is for.

But sometimes we do indeed have to use the exported file as a text file, e.g. a tab-delimited text file.

If all output columns are fixed-length CHAR, we just need to add a tab as a dummy column between the real columns (the quoted literal in front of each column below is a tab character), like this:

.EXPORT
OUTFILE E:/DAT3/CNV/DATA23/SPND040060.DAT
MODE RECORD
FORMAT TEXT;


SELECT
ADDRESS1(CHAR(20))
,' '||ADDRESS2(CHAR(20))
,' '||ADDRESS3(CHAR(20))
,' '||ADDRESS4(CHAR(20))
FROM
M_VM_KNRYO;


.END EXPORT;

But if any column is VARCHAR instead of fixed-length CHAR, we will get into trouble,
because Teradata places an indicator with information about the following VARCHAR column (e.g. its length) at the head of each VARCHAR column.

In this case we can use a trick:
concatenate all output columns into a single column, separated by tabs.

.EXPORT
OUTFILE E:/DAT3/CNV/DATA23/SPND040060.DAT
MODE RECORD
FORMAT TEXT;


SELECT
ADDRESS1
||' '||ADDRESS2
||' '||ADDRESS3
||' '||ADDRESS4
FROM
M_VM_KNRYO;


.END EXPORT;

Although the single VARCHAR column still carries a 2-byte indicator at the head of each line, we can easily delete it with external tools.



The original purpose of FastExport is to produce data for loading, not to export an editable text file.
But sometimes we have to do exactly that, for example to hand a tab-delimited text file to another system.

If every column is fixed-length, we only need to add a tab manually between each pair of columns.
But if there are VARCHAR columns, things get messy: when FastExport handles variable-length columns, it adds indicator information (such as the length of the variable-length field) in front of each one.
A text editor shows these indicators as garbage. In that case we can concatenate all the columns and tabs into one single column and output that as a whole,
so the indicator appears only at the beginning of each line and can easily be removed with an external tool.

Of course, when the data volume is small, it is easier to stick with fixed-length columns, or simply export with BTEQ.
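
For small volumes, a rough BTEQ sketch of the same export might look like the following (the output file name and the WIDTH value are assumptions; the string literal between columns contains a tab character, and REPORT mode writes a column-heading line at the top of the file that may need to be trimmed afterwards):

.SET WIDTH 200
.EXPORT REPORT FILE = E:/DAT3/CNV/DATA23/SPND040060.TXT

SELECT
ADDRESS1
||'	'||ADDRESS2
||'	'||ADDRESS3
||'	'||ADDRESS4
FROM
M_VM_KNRYO;

.EXPORT RESET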


2012/01/03

TERADATA SQL TO VIEW DATA SKEW

Usually we can use the DBC.TableSize view to see a table's perm space on each AMP and check whether any significant data skew exists:

SELECT
VPROC, SUM(CURRENTPERM)
FROM
DBC.TABLESIZE
WHERE
DATABASENAME = 'CVDTST'
AND TABLENAME = 'K_CALENDER'
GROUP BY
VPROC

Vproc Sum(CurrentPerm)
0 5,632.00
1 5,632.00
2 4,608.00
3 4,608.00
4 5,120.00
5 5,632.00
.
.
.
.

81 5,120.00
82 6,144.00
83 5,632.00
84 5,120.00
85 5,120.00
86 5,120.00
87 5,120.00


A more accurate method is to count the table's rows on each AMP instead of looking at perm size.
It is more accurate, but much more costly:

SELECT
HASHAMP (HASHBUCKET(HASHROW ( BOT_CD ,KAISYA_CD ,REKI_YMD))) AS "AMP NO", COUNT(*)
FROM
CVDTST.K_CALENDER
GROUP BY
1


AMP NO Count(*)
0 57
1 60
2 45
3 43
4 51
5 55
.
.
.
.
.
80 50
81 49
82 62
83 56
84 48
85 52
86 54
87 52


In general, we can check whether a table has significant skew by looking at the perm space it occupies on each AMP in DBC.TableSize.
A more precise method is to compute the row hash, hash bucket and AMP from the PI columns and count the rows assigned to each AMP.
That method is more accurate, but of course it also costs more.
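
As a quick variation on the DBC.TableSize approach, the per-AMP sizes can also be condensed into a single skew ratio; the following is only a sketch (the database filter is an example), where a ratio close to 1 means the table is evenly distributed:

SELECT
DATABASENAME
,TABLENAME
,CAST(MAX(CURRENTPERM) AS FLOAT) / NULLIF(AVG(CURRENTPERM), 0) AS SKEW_RATIO
FROM
DBC.TABLESIZE
WHERE
DATABASENAME = 'CVDTST'
GROUP BY 1, 2
ORDER BY 3 DESC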

2011/12/30

NULLIF Function

As we know, Teradata provides the well-known function NULLIFZERO.
Teradata also provides another function named NULLIF, which is more powerful and more flexible,
and can be used to replace an expression like
CASE WHEN COL1 = VALUE1 THEN NULL ELSE COL1 END
simply with
NULLIF(COL1, VALUE1)

http://forums.teradata.com/forum/database/coalesce


Compared with the widely known NULLIFZERO, the NULLIF function Teradata provides is not used nearly as often. In fact NULLIF is more powerful and more flexible, and in some cases it can be used to simplify and replace CASE expressions,
as shown above.
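
As a small illustration (the table and column names here are made up), NULLIF turns a sentinel value into NULL, and it is also handy for avoiding division-by-zero errors:

select
order_id
,nullif(discount_cd, '00') as discount_cd        /* '00' means "no discount"; report it as NULL */
,total_amt / nullif(item_cnt, 0) as avg_item_amt /* returns NULL instead of a divide-by-zero error when item_cnt = 0 */
from sandbox.sales_order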

USING SQL ASSISTANT TO CREATE A TABLE WITH A TIMESTAMP COLUMN, BUT THE COLUMN WAS CONVERTED TO CHAR(23)

We found that when we create a table using SQL Assistant and define a column as

HNKO_TIME TIMESTAMP(0)

the CREATE TABLE statement succeeds, but when we SHOW the created table we find that the column was defined as

HNKO_TIME CHAR(23) CHARACTER SET UNICODE NOT CASESPECIFIC,

Thanks to the Teradata forums:
http://forums.teradata.com/forum/database/cast-string-to-time
http://forums.teradata.com/forum/enterprise/timestamp-interval-not-working-in-batch-jcl-but-works-in-sql-assistant

we now know that it was caused by the "allow use of odbc extensions" option in SQL Assistant.
Uncheck it, and the problem is resolved.

It seems that if the option is checked, the ODBC driver will parse (and rewrite) the SQL query.
Anyway, the most important thing is: after creating a table, never forget to check the definition of the created table and make sure that what you created is what you wanted.


We recently found that, on one machine, DDL submitted through SQL Assistant that contained a TIMESTAMP column had that column replaced with CHAR(23), which is obviously wrong.
The culprit is a setting in old versions of SQL Assistant: "allow use of odbc extensions".
If this option is checked, the ODBC driver apparently rewrites parts of the query, for example the TIMESTAMP definition in the DDL.
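
As a minimal sanity check (the table name is hypothetical), simply SHOW the table right after creating it and confirm the data type survived:

CREATE TABLE cvdtst.chk_ts_col
(
HNKO_TIME TIMESTAMP(0)
)
PRIMARY INDEX (HNKO_TIME);

SHOW TABLE cvdtst.chk_ts_col;
-- the column should still read TIMESTAMP(0), not CHAR(23)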

2011/12/26

DUPLICATE ROW PROBLEM WHEN INSERT-SELECTING INTO A SET TABLE WITH A UPI

It seems that, when handling duplicate rows going into a SET table with a UPI,
Teradata behaves a little differently from what the manual says.

We know that when a SET table is defined with a UPI, Teradata performs the UPI
uniqueness check and skips the SET-table check (the duplicate row check).

And Teradata does do this when we INSERT a single row into a SET-UPI table: a duplicate raises a uniqueness error.

But things go differently when we do an INSERT SELECT:

Teradata will not show any error message but simply reports success.
Of course, the duplicate rows will not appear in the target table.

That really confused me.

I searched the internet and found a similar problem:

http://carlosal.wordpress.com/2009/02/16/tablas-set-y-multiset-en-teradata/



According to the textbook, when Teradata handles a SET table that has a UPI defined, it replaces the SET table's duplicate-row check with the UPI uniqueness check
(see Teradata Factory).
But in reality that is not entirely the case, at least in some situations such as INSERT SELECT.
With a direct INSERT into a SET table with a UPI, inserting a duplicate row raises a duplicate-UPI error.
But with an INSERT SELECT, sometimes no error occurs at all: Teradata silently drops the duplicate rows and inserts only the non-duplicate part,
without returning any error.
Even with the textbook in hand, it can still be confusing.


UPDATE 2012/6/26-------------------------------------------------------
I found the answer in the Teradata Application Design and Development manual:
an INSERT ... SELECT automatically discards duplicate rows without any notification, even if the target table has a UPI on it.
To trap this, create a MULTISET table with a UPI on it.
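
A minimal sketch of the behavior described above (all table names are hypothetical):

-- staging table that is allowed to hold fully duplicate rows
CREATE MULTISET TABLE cvdtst.t_stage (id INTEGER, val INTEGER) PRIMARY INDEX (id);
INSERT INTO cvdtst.t_stage VALUES (1, 10);
INSERT INTO cvdtst.t_stage VALUES (1, 10);

-- SET table with a UPI: the INSERT SELECT reports success and only one copy of the row arrives
CREATE SET TABLE cvdtst.t_set_upi (id INTEGER, val INTEGER) UNIQUE PRIMARY INDEX (id);
INSERT INTO cvdtst.t_set_upi SELECT id, val FROM cvdtst.t_stage;

-- MULTISET table with a UPI: the same INSERT SELECT fails with a duplicate unique primary key error,
-- so the duplicate data is trapped instead of silently discarded
CREATE MULTISET TABLE cvdtst.t_mset_upi (id INTEGER, val INTEGER) UNIQUE PRIMARY INDEX (id);
INSERT INTO cvdtst.t_mset_upi SELECT id, val FROM cvdtst.t_stage;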

PERFORMANCE IMPACT WHEN TERADATA DEALS WITH A "NOT IN" CLAUSE

There is a notable performance impact when Teradata handles a "NOT IN" clause
and the columns involved do not have the NOT NULL property specified.


Here is the SQL script and its EXPLAIN output in Teradata 13:

explain SELECT
       BOT_CD
      ,KAISYA_CD
      ,TKS_CD
      ,LOCA_CD
      ,COL_NO
      ,NHIN_BSHO_CD
      ,MAC_CD
      ,SHO_CD
      ,JSC_SHO_CD
      ,JSC_SHO_CD_EDA
      ,HEN_ZAIKO_SU
      ,KINAI_ZAIKO_SU
      ,HOTCOLD_FLG
FROM
    cvdtst.S_KINAIZAIKO_COL1221OU/* S_KINAIZAIKO_COL */
WHERE
    (
        BOT_CD
        ,KAISYA_CD
        ,TKS_CD
        ,LOCA_CD
        ,NHIN_BSHO_CD
        ,SHO_CD
        ,HOTCOLD_FLG
    )
    NOT IN
    (
        SELECT
            BOT_CD
            ,KAISYA_CD
            ,TKS_CD
            ,LOCA_CD
            ,NHIN_BSHO_CD
            ,SHO_CD
            ,BUNRUI_CD
        FROM
            cvdtst.S_SLIP1_SHO1221OU /* S_SLIP1_SHO */
        WHERE
            SURYOU > 0
    )
;

  1) First, we lock a distinct cvdtst."pseudo table" for read on a
     RowHash to prevent global deadlock for cvdtst.S_SLIP1_SHO1221OU.
  2) Next, we lock a distinct cvdtst."pseudo table" for read on a
     RowHash to prevent global deadlock for
     cvdtst.S_KINAIZAIKO_COL1221OU.
  3) We lock cvdtst.S_SLIP1_SHO1221OU for read, and we lock
     cvdtst.S_KINAIZAIKO_COL1221OU for read.
  4) We execute the following steps in parallel.
       1) We do an all-AMPs RETRIEVE step from cvdtst.S_SLIP1_SHO1221OU
          by way of an all-rows scan with a condition of (
          "cvdtst.S_SLIP1_SHO1221OU.SURYOU > 0.0") into Spool 7
          (all_amps), which is redistributed by the hash code of (
          cvdtst.S_SLIP1_SHO1221OU.BUNRUI_CD,
          cvdtst.S_SLIP1_SHO1221OU.SHO_CD,
          cvdtst.S_SLIP1_SHO1221OU.NHIN_BSHO_CD,
          cvdtst.S_SLIP1_SHO1221OU.LOCA_CD,
          cvdtst.S_SLIP1_SHO1221OU.TKS_CD,
          cvdtst.S_SLIP1_SHO1221OU.KAISYA_CD,
          cvdtst.S_SLIP1_SHO1221OU.BOT_CD) to all AMPs.  Then we do a
          SORT to order Spool 7 by row hash and the sort key in spool
          field1 eliminating duplicate rows.  The size of Spool 7 is
          estimated with no confidence to be 11,078,203 rows (
          1,074,585,691 bytes).  The estimated time for this step is
          7.68 seconds.
       2) We do an all-AMPs SUM step to aggregate from
          cvdtst.S_KINAIZAIKO_COL1221OU by way of an all-rows scan with
          no residual conditions.  Aggregate Intermediate Results are
          computed globally, then placed in Spool 4.
  5) We do an all-AMPs SUM step to aggregate from Spool 7 by way of an
     all-rows scan.  Aggregate Intermediate Results are computed
     globally, then placed in Spool 8.
  6) We execute the following steps in parallel.
       1) We do an all-AMPs RETRIEVE step from Spool 8 (Last Use) by
          way of an all-rows scan into Spool 2 (all_amps), which is
          duplicated on all AMPs.
       2) We do an all-AMPs RETRIEVE step from Spool 4 (Last Use) by
          way of an all-rows scan into Spool 3 (all_amps), which is
          duplicated on all AMPs.
  7) We do an all-AMPs RETRIEVE step from cvdtst.S_KINAIZAIKO_COL1221OU
     by way of an all-rows scan with no residual conditions into Spool
     6 (all_amps), which is redistributed by the hash code of (
     cvdtst.S_KINAIZAIKO_COL1221OU.BOT_CD,
     cvdtst.S_KINAIZAIKO_COL1221OU.KAISYA_CD,
     cvdtst.S_KINAIZAIKO_COL1221OU.TKS_CD,
     cvdtst.S_KINAIZAIKO_COL1221OU.LOCA_CD,
     cvdtst.S_KINAIZAIKO_COL1221OU.NHIN_BSHO_CD,
     cvdtst.S_KINAIZAIKO_COL1221OU.SHO_CD,
     cvdtst.S_KINAIZAIKO_COL1221OU.HOTCOLD_FLG) to all AMPs.  Then we
     do a SORT to order Spool 6 by row hash, and null value information
     in Spool 3 and Spool 2.  Skip this retrieve step if null exists.
     The size of Spool 6 is estimated with low confidence to be
     3,257,496 rows (218,252,232 bytes).  The estimated time for this
     step is 1.91 seconds.
  8) We execute the following steps in parallel.
       1) We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of
          an all-rows scan, which is joined to Spool 7 by way of an
          all-rows scan.  Spool 6 and Spool 7 are joined using an
          exclusion merge join, with a join condition of ("(BOT_CD =
          BOT_CD) AND ((KAISYA_CD = KAISYA_CD) AND ((TKS_CD = TKS_CD)
          AND ((LOCA_CD = LOCA_CD) AND ((NHIN_BSHO_CD = NHIN_BSHO_CD)
          AND ((SHO_CD = SHO_CD) AND (HOTCOLD_FLG = BUNRUI_CD ))))))"),
          and null value information in Spool 3 and Spool 2.  Skip this
          join step if null exists.  The result goes into Spool 1
          (group_amps), which is built locally on the AMPs.  The size
          of Spool 1 is estimated with no confidence to be 3,257,496
          rows (244,312,200 bytes).  The estimated time for this step
          is 1.02 seconds.
       2) We do an all-AMPs RETRIEVE step from
          cvdtst.S_KINAIZAIKO_COL1221OU by way of an all-rows scan with
          no residual conditions into Spool 10 (all_amps), which is
          redistributed by the hash code of (
          cvdtst.S_KINAIZAIKO_COL1221OU.BOT_CD) to all AMPs.  Then we
          do a SORT to order Spool 10 by row hash, and null value
          information in Spool 3 and Spool 2.  Skip this retrieve step
          if there is no null.  The size of Spool 10 is estimated with
          low confidence to be 3,257,496 rows (218,252,232 bytes).  The
          estimated time for this step is 1.91 seconds.
  9) We do an all-AMPs RETRIEVE step from Spool 7 (Last Use) by way of
     an all-rows scan into Spool 11 (all_amps), which is redistributed
     by the hash code of (cvdtst.S_SLIP1_SHO1221OU.BOT_CD) to all AMPs.
     Then we do a SORT to order Spool 11 by row hash, and null value
     information in Spool 3 and Spool 2.  Skip this retrieve step if
     there is no null.  The size of Spool 11 is estimated with no
     confidence to be 11,078,203 rows (1,074,585,691 bytes).  The
     estimated time for this step is 7.68 seconds.
 10) We do an all-AMPs JOIN step from Spool 10 (Last Use) by way of an
     all-rows scan, which is joined to Spool 11 (Last Use) by way of an
     all-rows scan.  Spool 10 and Spool 11 are joined using an
     exclusion merge join, with a join condition of ("(BOT_CD = BOT_CD)
     AND ((KAISYA_CD = KAISYA_CD) AND ((TKS_CD = TKS_CD) AND ((LOCA_CD
     = LOCA_CD) AND ((NHIN_BSHO_CD = NHIN_BSHO_CD) AND ((SHO_CD =
     SHO_CD) AND (HOTCOLD_FLG = BUNRUI_CD ))))))"), and null value
     information in Spool 3 (Last Use) and Spool 2 (Last Use).  Skip
     this join step if there is no null.  The result goes into Spool 1
     (group_amps), which is built locally on the AMPs.  The size of
     Spool 1 is estimated with no confidence to be 3,257,496 rows (
     244,312,200 bytes).  The estimated time for this step is 1.02
     seconds.
 11) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 1 are sent back to the user as the result of
     statement 1.

Teradata has to confirm whether a NULL value exists in every "NOT IN" column,
which is very costly and will greatly impact the system's performance.
We can avoid it by specifying the "NOT NULL" property on the "NOT IN" columns.


Here is another example, in Japanese:
http://d.hatena.ne.jp/tgk/20100913




-------------------------------------------------------------------------------
The following is the SQL and its EXPLAIN output in Teradata V2R6:


explain SELECT
       BOT_CD
      ,KAISYA_CD
      ,LOCA_CD
      ,COL_NO
      ,MAC_CD
      ,SHO_CD
      ,HEN_ZAIKO_SU
      ,KINAI_ZAIKO_SU
      ,HOTCOLD_FLG
FROM
S_KINAIZAIKO_COL/* S_KINAIZAIKO_COL */
WHERE
    (
        BOT_CD
        ,KAISYA_CD
        ,LOCA_CD
        ,MAC_CD
        ,SHO_CD
        ,HOTCOLD_FLG
    )
    NOT IN
    (
        SELECT
            BOT_CD
            ,KAISYA_CD
            ,LOCA_CD
            ,MAC_CD
            ,SHO_CD
            ,BUNRUI_CD
        FROM
       S_SLIP1_SHO/* S_SLIP1_SHO */
        WHERE
            SURYOU > 0
    )



  1) First, we lock a distinct HWKRUN34."pseudo table" for read on a
     RowHash to prevent global deadlock for HWKRUN34.S_SLIP1_SHO.
  2) Next, we lock a distinct HWKRUN34."pseudo table" for read on a
     RowHash to prevent global deadlock for HWKRUN34.S_KINAIZAIKO_COL.
  3) We lock HWKRUN34.S_SLIP1_SHO for read, and we lock
     HWKRUN34.S_KINAIZAIKO_COL for read.
  4) We do an all-AMPs SUM step to aggregate from
     HWKRUN34.S_KINAIZAIKO_COL by way of an all-rows scan with no
     residual conditions.  Aggregate Intermediate Results are computed
     globally, then placed in Spool 3.
  5) We do an all-AMPs RETRIEVE step from Spool 3 (Last Use) by way of
     an all-rows scan into Spool 2 (all_amps) (compressed columns
     allowed), which is duplicated on all AMPs.
  6) We execute the following steps in parallel.
       1) We do an all-AMPs RETRIEVE step from
          HWKRUN34.S_KINAIZAIKO_COL by way of an all-rows scan with no
          residual conditions into Spool 5 (all_amps) (compressed
          columns allowed), which is redistributed by hash code to all
          AMPs.  Then we do a SORT to order Spool 5 by row hash, and
          null value information in Spool 2.  Skip this retrieve step
          if null exists.  The size of Spool 5 is estimated with low
          confidence to be 3,663,600 rows.  The estimated time for this
          step is 4.55 seconds.
       2) We do an all-AMPs RETRIEVE step from HWKRUN34.S_SLIP1_SHO by
          way of an all-rows scan with a condition of (
          "HWKRUN34.S_SLIP1_SHO.SURYOU > 0.0") into Spool 6 (all_amps),
          which is redistributed by hash code to all AMPs.  Then we do
          a SORT to order Spool 6 by row hash and the sort key in spool
          field1 eliminating duplicate rows.  The input table will not
          be cached in memory, but it is eligible for synchronized
          scanning.  The size of Spool 6 is estimated with no
          confidence to be 11,532,920 rows.  The estimated time for
          this step is 18.35 seconds.
  7) We execute the following steps in parallel.
       1) We do an all-AMPs JOIN step from Spool 5 (Last Use) by way of
          an all-rows scan, which is joined to Spool 6 by way of an
          all-rows scan.  Spool 5 and Spool 6 are joined using an
          exclusion merge join, with a join condition of ("(BOT_CD =
          BOT_CD) AND ((KAISYA_CD = KAISYA_CD) AND ((LOCA_CD = LOCA_CD)
          AND ((MAC_CD = MAC_CD) AND ((SHO_CD = SHO_CD) AND
          (HOTCOLD_FLG = BUNRUI_CD )))))"), and null value information
          in Spool 2.  Skip this join step if null exists.  The result
          goes into Spool 1 (group_amps), which is built locally on the
          AMPs.  The size of Spool 1 is estimated with index join
          confidence to be 3,663,600 rows.  The estimated time for this
          step is 1.47 seconds.
       2) We do an all-AMPs RETRIEVE step from
          HWKRUN34.S_KINAIZAIKO_COL by way of an all-rows scan with no
          residual conditions into Spool 7 (all_amps) (compressed
          columns allowed), which is redistributed by hash code to all
          AMPs.  Then we do a SORT to order Spool 7 by row hash, and
          null value information in Spool 2.  Skip this retrieve step
          if there is no null.  The size of Spool 7 is estimated with
          low confidence to be 3,663,600 rows.  The estimated time for
          this step is 4.55 seconds.
  8) We do an all-AMPs RETRIEVE step from Spool 6 (Last Use) by way of
     an all-rows scan into Spool 8 (all_amps) (compressed columns
     allowed), which is redistributed by hash code to all AMPs.  Then
     we do a SORT to order Spool 8 by row hash, and null value
     information in Spool 2.  Skip this retrieve step if there is no
     null.  The size of Spool 8 is estimated with no confidence to be
     11,532,920 rows.  The estimated time for this step is 18.35
     seconds.
  9) We do an all-AMPs JOIN step from Spool 7 (Last Use) by way of an
     all-rows scan, which is joined to Spool 8 (Last Use) by way of an
     all-rows scan.  Spool 7 and Spool 8 are joined using an exclusion
     merge join, with a join condition of ("(BOT_CD = BOT_CD) AND
     ((KAISYA_CD = KAISYA_CD) AND ((LOCA_CD = LOCA_CD) AND ((MAC_CD =
     MAC_CD) AND ((SHO_CD = SHO_CD) AND (HOTCOLD_FLG = BUNRUI_CD )))))"),
     and null value information in Spool 2 (Last Use).  Skip this join
     step if there is no null.  The result goes into Spool 1
     (group_amps), which is built locally on the AMPs.  The size of
     Spool 1 is estimated with index join confidence to be 3,663,600
     rows.  The estimated time for this step is 1.47 seconds.
 10) Finally, we send out an END TRANSACTION step to all AMPs involved
     in processing the request.
  -> The contents of Spool 1 are sent back to the user as the result of
     statement 1.



When Teradata 13 processes a NOT IN query, if the NOT IN columns are not defined NOT NULL, Teradata has to confirm whether each of those columns contains NULL values,
which leads to several full-table scans and severely hurts performance. The fix is to define the NOT IN columns as NOT NULL, or to rewrite the NOT IN as an outer join / NOT EXISTS, as sketched below.

In older versions of Teradata (such as V2R6), however, this problem does not seem to exist. I cannot fully confirm that at the moment and will leave it for future study.
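
Here is a sketch of that rewrite for the Teradata 13 query above, using NOT EXISTS; note that NOT EXISTS ignores NULL comparisons, so it only matches the NOT IN semantics when the compared columns really contain no NULLs:

SELECT
       A.BOT_CD
      ,A.KAISYA_CD
      ,A.TKS_CD
      ,A.LOCA_CD
      ,A.COL_NO
      ,A.NHIN_BSHO_CD
      ,A.MAC_CD
      ,A.SHO_CD
      ,A.JSC_SHO_CD
      ,A.JSC_SHO_CD_EDA
      ,A.HEN_ZAIKO_SU
      ,A.KINAI_ZAIKO_SU
      ,A.HOTCOLD_FLG
FROM
    cvdtst.S_KINAIZAIKO_COL1221OU A
WHERE NOT EXISTS
    (
        SELECT 1
        FROM cvdtst.S_SLIP1_SHO1221OU B
        WHERE B.SURYOU > 0
          AND B.BOT_CD       = A.BOT_CD
          AND B.KAISYA_CD    = A.KAISYA_CD
          AND B.TKS_CD       = A.TKS_CD
          AND B.LOCA_CD      = A.LOCA_CD
          AND B.NHIN_BSHO_CD = A.NHIN_BSHO_CD
          AND B.SHO_CD       = A.SHO_CD
          AND B.BUNRUI_CD    = A.HOTCOLD_FLG
    )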

A WAY TO GET THE ROW COUNT OF ALL TABLES IN A SPECIFIED DATABASE

Run the following SQL against the DBC dictionary; the generated SQL script can then be used to get every table's row count:


SELECT ' SELECT '' ' || T.TABLENAME || ' '' AS TABLENAME , COUNT(*) AS ROWSNUM FROM ' || TRIM(BOTH FROM DATABASENAME) || '.' || TRIM(BOTH FROM TABLENAME) || ' UNION ' AS X
FROM DBC.TABLES T
WHERE T.TABLEKIND ='T' AND T.DATABASENAME = 'CVDTST'
ORDER BY T. TABLENAME

FROM
https://downloads.teradata.com/forum/analytics/row-counts-for-every-table-in-a-given-database



Sometimes we need the row count of every table under a given database; I saw this method on the Teradata forums.
First, get the list of table names from DBC and assemble them into one big UNIONed query.
Then run the generated SQL to get the row counts of all the tables (remember to drop the trailing UNION from the last generated line first).
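
For illustration, with two made-up tables TABLE_A and TABLE_B in CVDTST, the generated and trimmed script would look roughly like this:

SELECT ' TABLE_A ' AS TABLENAME , COUNT(*) AS ROWSNUM FROM CVDTST.TABLE_A UNION
SELECT ' TABLE_B ' AS TABLENAME , COUNT(*) AS ROWSNUM FROM CVDTST.TABLE_B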

2010/07/20

TRIGGER & PROCEDURE

HELP TRIGGER MAPA_INT_ORG_HST;

Name ActionTime Decimal Order Value Creation TimeStamp Event Kind Enabled Comment
Trig_insert_int_org           A 32,767 2009-03-17 16:39:21 I R Y ?


SHOW TRIGGER Trig_insert_int_org

REPLACE TRIGGER msttst.Trig_insert_int_org
AFTER INSERT ON msttst.MAPA_INT_ORG_HST
REFERENCING NEW AS NewRow
FOR EACH ROW
(
CALL msttst.Proc_insert_dummy_int_org(NewRow.id_int_org_pty,NewRow.user_crt,NewRow.dttm_crt,NewRow.user_upd,NewRow.dttm_upd);
);


SHOW PROCEDURE Proc_insert_dummy_int_org

REPLACE PROCEDURE msttst.Proc_insert_dummy_int_org(
IN Internal_Org_Party_Id  CHAR(4),
IN CREATED_BY    VARCHAR(20),
IN CREATION_DATE   TIMESTAMP,
IN UPDATED_BY    VARCHAR(20),
IN UPDATED_DATE    TIMESTAMP
)

EXIT_PROC:

BEGIN

DECLARE var_Key1    VARCHAR(30);
DECLARE var_Key2   VARCHAR(30);

DECLARE var_DmyCdPref   VARCHAR(10);
DECLARE var_DmyCd   VARCHAR(200);

--MAIO_INT_DPT_HST
DECLARE chr_Id_Int_Org_Pty  CHAR(4);
DECLARE var_Id_Int_Dpt_Pty  VARCHAR(50);
DECLARE dat_dt_pty_strt   DATE;
DECLARE dat_dt_pty_end   DATE;
DECLARE var_Nm_Int_Dpt_Eng  VARCHAR(50);
DECLARE var_Nm_Int_Dpt_Int  VARCHAR(50);
DECLARE var_Nm_Int_Dpt_Abr  VARCHAR(50);
DECLARE var_Dsc_Int_Dpt   VARCHAR(50);
DECLARE chr_Cd_Int_Dpt_Tp  CHAR(2);
DECLARE var_Id_Par_Int_Dpt_Pty  VARCHAR(50);
DECLARE var_Flg_Dmy   VARCHAR(1);
DECLARE var_User_Crt   VARCHAR(10);

DECLARE var_Dttm_Crt   TIMESTAMP;

DECLARE var_User_Upd   VARCHAR(10);

DECLARE var_Dttm_Upd   TIMESTAMP;

DECLARE num_rec    INTEGER;

SELECT COUNT(*) INTO num_rec
 FROM msttst.MAPA_INT_ORG_HST
 WHERE id_int_org_pty = Internal_Org_Party_Id;

IF num_rec > 1 THEN
 LEAVE EXIT_PROC;
END IF;

SET var_Key1 = 'ADMIN';
SET var_Key2 = 'MST_DUMMY';

--Get DUMMY CODE
SET var_DmyCdPref = NULL;
SELECT txt1 INTO var_DmyCdPref
 FROM msttst.MACM_GNL_MST
 WHERE key1 = var_Key1 AND
  key2 = var_Key2 AND
  flg_del = '0';

IF var_DmyCdPref IS NULL THEN
 SET var_DmyCdPref = '#';
END IF;

--CREATE DUMMY CODE
SET var_DmyCd = var_DmyCdPref || Internal_Org_Party_Id;

SET chr_Id_Int_Org_Pty  = Internal_Org_Party_Id;
SET var_Id_Int_Dpt_Pty  = var_DmyCd;
SET dat_dt_pty_strt  = '1900-01-01';
SET dat_dt_pty_end  = '9999-12-31';
SET var_Nm_Int_Dpt_Eng  = NULL;
SET var_Nm_Int_Dpt_Int  = NULL;
SET var_Nm_Int_Dpt_Abr  = NULL;
SET var_Dsc_Int_Dpt  = NULL;
SET chr_Cd_Int_Dpt_Tp  = NULL;
SET var_Id_Par_Int_Dpt_Pty = NULL;
SET var_Flg_Dmy   = '1';
SET var_User_Crt  = CREATED_BY;
SET var_Dttm_Crt  = CREATION_DATE;
SET var_User_Upd  = UPDATED_BY;
SET var_Dttm_Upd  = UPDATED_DATE;

INSERT INTO msttst.MAIO_INT_DPT_HST
VALUES(chr_Id_Int_Org_Pty,var_Id_Int_Dpt_Pty,
         dat_dt_pty_strt,dat_dt_pty_end,
                var_Nm_Int_Dpt_Eng,var_Nm_Int_Dpt_Int,
                       var_Nm_Int_Dpt_Abr,var_Dsc_Int_Dpt,
  chr_Cd_Int_Dpt_Tp,var_Id_Par_Int_Dpt_Pty,var_Flg_Dmy,
  var_User_Crt,var_Dttm_Crt,var_User_Upd,var_Dttm_Upd);
END;


2010/07/04

RoundHalfwayMagUp (DBS Control setting)

sel  cast(0.000005 as dec(7,5))
      ,cast(0.000015 as dec(7,5))
      ,cast(0.000025 as dec(7,5))
      ,cast(0.000035 as dec(7,5))
      ,cast(0.000045 as dec(7,5))
      ,cast(0.000055 as dec(7,5))
      ,cast(0.000065 as dec(7,5))
      ,cast(0.000075 as dec(7,5))
      ,cast(0.000085 as dec(7,5))
      ,cast(0.000095 as dec(7,5))

RoundHalfwayMagUp = FALSE (halfway values round so the last kept digit is even):
0.00000 0.00002 0.00002 0.00004 0.00004 0.00006 0.00006 0.00008 0.00008 0.00010

RoundHalfwayMagUp = TRUE (halfway values round up in magnitude):
0.00001 0.00002 0.00003 0.00004 0.00005 0.00006 0.00007 0.00008 0.00009 0.00010

2010/06/15

Import/Export Data by SQL Assistant

Export
SQL Assistant -> Export Data -> write the results to a file
Import
SQL Assistant -> Import Data
insert into dwhtst.TMSL_CUST_MAJ  values (?,?,?,'Y',NULL,NULL,NULL)
Execute the statement and choose the input source file.

It is quite slow, though.

2010/06/09

Types of Tables in Teradata

1 SET
2 MULTI SET
3 GLOBAL TEMPORARY TABLES
4 TEMPORARY TABLES
5 VOLATILE TEMPORARY TABLES

If a SET table has a UPI, the UPI guarantees that there are no duplicate records.

Defining a column:
ename CHAR(10) CHARACTER SET LATIN NOT CASESPECIFIC
select * from scott.emp where ename(CASESPECIFIC) = 'FAN'
select * from scott.emp_case where ename(not CASESPECIFIC) = 'FAN';
select * from scott.emp_case where upper(ename) = 'FAN';

CREATE GLOBAL TEMPORARY MULTISET TABLE scott.gt_deptsal
(deptno SMALLINT
,avgsal DEC(9,2)
,maxsal DEC(9,2)
,minsal DEC(9,2)
,sumsal DEC(9,2)
,empcnt SMALLINT)
PRIMARY INDEX (deptno)
ON COMMIT PRESERVE ROWS;

Query the data dictionary: select * from dbc.tables where tablename = 'gt_deptsal'
Materialize the table: insert into scott.gt_deptsal values (22,33,33,22,22,22);
Check the materialized table: after materialization a record for it exists in DBC.TempTables, and the record is deleted once the materialized instance is dropped:
select * from DBC.TempTables;

This article comes from a CSDN blog; please credit the source when reposting: http://blog.csdn.net/fanxiaoliang83/archive/2009/07/30/4395457.aspx