Showing posts with label PostgreSQL.

Thursday, 21 February 2013

Hierarchical Data

Recursive queries are typically used to deal with hierarchical or tree-structured data, such as a chart of accounts (COA). A COA lists all the accounts in the general ledger in a way that forms a hierarchical structure, for example:


acc_id  acc_parid  acc_code      acc_name                         acc_level
-----------------------------------------------------------------------------
1                  10.0000.0000  AKTIVA                           0
2       1          11.0000.0000  AKTIVA LANCAR                    1
3       2          11.1000.0000  Kas dan Setara Kas               2
4       3          11.1100.0000  Kas                              3
5       4          11.1110.0000  Kas - Program Kemitraan          4
6       4          11.1120.0000  Kas - Bina Lingkungan            4
7       3          11.1200.0000  Bank                             3
8       7          11.1210.0000  Bank - Program Kemitraan         4
....
116     97         22.0000.0000  KEWAJIBAN JANGKA PANJANG         1
117     116        22.2000.0000  Hutang Kepada BUMN Pembina Lain  2
119     116        22.9000.0000  Hutang Jangka Panjang Lainnya    2
122                30.0000.0000  AKTIVA BERSIH                    0
123     122        33.0000.0000  PENYISIHAN                       1



The list is generated by running the following SQL command:


WITH RECURSIVE tree AS (
    SELECT acc_id, acc_code, acc_parid, acc_name, 0 AS acc_level
    FROM tova.coa WHERE acc_parid IS NULL
    UNION ALL
    SELECT c.acc_id, c.acc_code, c.acc_parid, c.acc_name, t.acc_level + 1 AS acc_level
    FROM tova.coa c JOIN tree t ON t.acc_id = c.acc_parid
)
SELECT acc_id, acc_parid, tova.getakunttk(acc_code), acc_name, acc_level
FROM tree ORDER BY 3;
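
For reference, the tova.coa table that this query reads could look roughly like the sketch below; the column types and constraints are my assumptions, not the actual schema (and the tova.getakunttk() formatting function is left out).

-- A minimal sketch of the assumed structure of tova.coa (types are assumptions)
CREATE TABLE tova.coa (
    acc_id    integer PRIMARY KEY,
    acc_parid integer REFERENCES tova.coa (acc_id),  -- NULL marks a top-level account
    acc_code  character varying(20)  NOT NULL,
    acc_name  character varying(100) NOT NULL
);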


I store transactions only at the lowest-level accounts; of course, users can still see reports for any account in the hierarchy. The stored function below displays the hierarchy of accounts under a given parent.

CREATE OR REPLACE FUNCTION tova.getcoa_allchildofparentrec(IN pacc_parid integer)
RETURNS TABLE (acc_id integer, acc_code character varying, acc_name character varying) AS
$$
BEGIN
    RETURN QUERY WITH RECURSIVE tree AS (
            -- direct children of the given parent
            SELECT a.acc_id, a.acc_code, a.acc_name
               FROM tova.coa a WHERE a.acc_parid = pacc_parid
            UNION ALL
            -- then their children, level by level
            SELECT a.acc_id, a.acc_code, a.acc_name
               FROM tova.coa a JOIN tree t ON t.acc_id = a.acc_parid)
    SELECT h.acc_id, h.acc_code, h.acc_name FROM tree h;
END;
$$ LANGUAGE plpgsql;


To display the COA tree under a given account, just call the function:
select acc_code, acc_name from tova.getcoa_allchildofparentrec(34) order by 1;


acc_code acc_name
---------------------------------------------------------------------
11.3610.0000 Piutang Pinjaman Mitra Binaan - Sektor Industri
11.3610.6108 PPMB Sektor Industri - Kab. Landak
11.3610.6103 PPMB Sektor Industri - Kab. Sanggau
11.3610.6301 PPMB Sektor Industri - Kab. Tanah Laut
11.3610.6309 PPMB Sektor Industri - Kab. Tabalong
11.3610.6105 PPMB Sektor Industri - Kab. Sintang
11.3610.6303 PPMB Sektor Industri - Kab. Banjar
11.3610.6100 PPMB Sektor Industri Prov. Kalimantan Barat
11.3610.6171 PPMB Sektor Industri - Kota Pontianak
11.3620.0000 Piutang Pinjaman Mitra Binaan - Sektor Perdagangan
11.3630.0000 Piutang Pinjaman Mitra Binaan - Sektor Pertanian
11.3640.0000 Piutang Pinjaman Mitra Binaan - Sektor Peternakan
11.3650.0000 Piutang Pinjaman Mitra Binaan - Sektor Perkebunan
11.3660.0000 Piutang Pinjaman Mitra Binaan - Sektor Perikanan
11.3670.0000 Piutang Pinjaman Mitra Binaan - Sektor Jasa
11.3680.0000 Piutang Pinjaman Mitra Binaan - Sektor Lain
11.3610.6300 Sektor Industri Prov. Kalimantan Selatan
11.3610.6310 PPMB - Sektor Industri - Kab. Tanah Bumbu
...
...
11.3670.6171 Piutang Pinjaman Mitra Binaan - Sektor Jasa - Kota Pontianak
11.3660.6171 Piutang Pinjaman Mitra Binaan - Sektor Perikanan - Kota Pontianak
11.3680.6171 Piutang Pinjaman Mitra Binaan - Sektor Lain - Kota Pontianak
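
Because transactions live only on the leaf accounts, a report for a parent account can simply aggregate over the descendants returned by the function. A minimal sketch, assuming a hypothetical transaction table tova.journal(acc_id, amount) that is not part of this post:

-- Total for account 34 over every account underneath it (sketch only;
-- tova.journal is an assumed name, the function is the one defined above)
SELECT sum(j.amount) AS total_balance
FROM tova.getcoa_allchildofparentrec(34) c
JOIN tova.journal j ON j.acc_id = c.acc_id;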

Sunday, 23 September 2012

PostgreSQL: Invalid byte sequence for encoding UTF8


I have a delimited file (.csv) containing stock master data and want to import it into a PostgreSQL table. During the import, I ran into the Invalid byte sequence for encoding "UTF8" error.

C:\>psql -Umyname -dmydb -p5490
--1. do import
mydb=# \copy myshema.stocks from 'C:\temp\inv.csv' with delimiter as ';' csv
ERROR: Invalid byte sequence for encoding "UTF8": 0xa0
CONTEXT:  copy stock, line 26120

--2. check my client encoding
mydb=# show client_encoding;

client_encoding
----------------
UTF8
(1 row)

--3. modify my client encoding
mydb=# \encoding LATIN1

--3a. check current client encoding
mydb=# show client_encoding;

client_encoding
----------------
LATIN1
(1 row)
--4. redo the import
mydb=# \copy myshema.stocks from 'C:\temp\inv.csv' with delimiter as ';' csv

--5. reset the client encoding when done
mydb=# \encoding UTF8
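
The same steps can also go into a small script file and run non-interactively; a sketch reusing the names from the session above (SET client_encoding has the same effect as psql's \encoding):

-- import_stocks.sql (a sketch)
SET client_encoding TO 'LATIN1';   -- the file is actually LATIN1, not UTF8
\copy myshema.stocks from 'C:\temp\inv.csv' with delimiter as ';' csv
RESET client_encoding;

C:\>psql -Umyname -dmydb -p5490 -f import_stocks.sql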

That's it.

Tuesday, 18 September 2012

To Get Distinct Rows By specific columns

My goal today is to get rows that are unique based on the values of certain columns, including combinations of columns. As you may think, just use the DISTINCT clause. You are right! Let's try it with PostgreSQL.

First, list the data using this statement:

SELECT a.nik, a.nama, g.ket as job, h.akunno, h.alokasi
FROM ester.maskar a
   INNER JOIN  ester.setnamajob g ON a.jabid=g.idku
   INNER JOIN  ester.setnamajobakun h ON g.idku=h.jobid
   WHERE  a.thn=2012 AND  a.unitid = 16 ORDER BY a.nik, a.nama
=========================================================
nik   nama          job                   akunno  alokasi
=========================================================
100.200  ALBERT    Manajer                400     0.00
100.300  JHONI     Danru Satpam           423     1.00
100.400  ZAENAL    Anggota Satpam         423     1.00
100.500  ROBI      Anggota Satpam         423     1.00
200.400  TEDDY     Operator St. Kempa     603.07  0.40
200.400  TEDDY     Operator St. Kempa     603.08  0.40
200.400  TEDDY     Operator St. Kempa     603.09  0.20
300.500  JOKO      Operator Kolam Limbah  603.11  0.50
300.500  JOKO      Operator Kolam Limbah  603.12  0.50
300.650  SEPTIAN   Operator St. Kempa     603.09  0.20
300.650  SEPTIAN   Operator St. Kempa     603.07  0.40
300.650  SEPTIAN   Operator St. Kempa     603.08  0.40
400.600  OPA OPO   Kerani I TUK           401     1.00
----------------------------------------------------------

DISTINCT 
DISTINCT removes all duplicate rows from the result set, keeping one row from each group of duplicates. Let's try the DISTINCT clause:

SELECT DISTINCT a.nik, a.nama, g.ket as job, h.akunno, h.alokasi
FROM ester.maskar a
   INNER JOIN  ester.setnamajob g ON a.jabid=g.idku
   INNER JOIN  ester.setnamajobakun h ON g.idku=h.jobid
   WHERE  a.thn=2012 AND  a.unitid = 16 ORDER BY a.nik, a.nama

What result did you get?
I got the same result set as before! That is because DISTINCT compares whole rows, and every row here already differs in akunno or alokasi, so nothing is removed.


DISTINCT ON
Now try with this command:

 SELECT DISTINCT ON (nik,nama) a.nik, a.nama, g.ket as job,h.akunno, h.alokasi
FROM  ester.maskar a
   INNER JOIN  ester.setnamajob g ON a.jabid=g.idku
   INNER JOIN  ester.setnamajobakun h ON g.idku=h.jobid
   WHERE  a.thn=2012 AND  a.unitid = 16 ORDER BY a.nik, a.nama

=========================================================
nik   nama          job                   akunno  alokasi
=========================================================
100.200  ALBERT    Manajer                400     0.00
100.300  JHONI     Danru Satpam           423     1.00
100.400  ZAENAL    Anggota Satpam         423     1.00
100.500  ROBI      Anggota Satpam         423     1.00
200.400  TEDDY     Operator St. Kempa     603.07  0.40
300.500  JOKO      Operator Kolam Limbah  603.11  0.50
300.650  SEPTIAN   Operator St. Kempa     603.09  0.20
400.600  OPA OPO   Kerani I TUK           401     1.00
---------------------------------------------------------
This is the result set I want.

DISTINCT ON ( expression [, ...] ) keeps only the first row of each set of rows where the given expressions evaluate to equal. The DISTINCT ON expressions are interpreted using the same rules as for ORDER BY. Note that the "first row" of each set is unpredictable unless ORDER BY is used to ensure that the desired row appears first.

The DISTINCT ON expression(s) must match the leftmost ORDER BY expression(s). The ORDER BY clause will normally contain additional expression(s) that determine the desired precedence of rows within each DISTINCT ON group.
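
As a quick illustration of that last point, adding one more ORDER BY expression controls which row survives in each group; a sketch against the same query, keeping the row with the largest alokasi per employee:

SELECT DISTINCT ON (a.nik, a.nama) a.nik, a.nama, g.ket AS job, h.akunno, h.alokasi
FROM ester.maskar a
   INNER JOIN ester.setnamajob g ON a.jabid = g.idku
   INNER JOIN ester.setnamajobakun h ON g.idku = h.jobid
WHERE a.thn = 2012 AND a.unitid = 16
ORDER BY a.nik, a.nama, h.alokasi DESC;  -- DESC puts the highest alokasi first, so it is the row kept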



Saturday, 12 February 2011

Generate SQL Script File from Xbase

I have a task to write a new system for a friend of mine. His existing system runs on Xbase, an MS-DOS based application.

One of the interesting parts is transforming the data from dBASE tables into a database server such as MySQL or PostgreSQL.
The scenario I planned was:
1. Read the dBASE table structure and data
2. Generate an SQL script file that creates the table and inserts the data
3. Run the script

Okay, there are many GUI tools for this already. I tried tools such as Exportizer, a FoxPro-PostgreSQL import/export utility, and MySQL Front. Those are good, but later I arrived at dbf2sql. I compiled and tested the program, and it works.

After some testing I realized that no NULL keyword was generated for numeric, date, and character fields. I modified the source to emit NULL for every field with no value, and to generate a few extra commands to satisfy my requirements.

Here is the modified snippet in C:
...
/* copy the raw field bytes into a temporary buffer */
for(X = 0; X < DBase->Field[Y].Length; X++)
{
    temp[X] = *(data++);
}
temp[X] = '\0';

if(DBase->Field[Y].Type == 'N' || DBase->Field[Y].Type == 'n')
{
    if(!Trim) /* it's a number, trim anyway... */
        TrimSpaces(temp, DBase->Field[Y].Length);
    if(temp[0] == '.')
        fprintf(output, "0%s", temp);            /* .5  -> 0.5  */
    else if(temp[0] == '-' && temp[1] == '.')
        fprintf(output, "-0%s", &(temp[1]));     /* -.5 -> -0.5 */
    else if(temp[0] == '\0')
        fprintf(output, "null");                 /* empty value -> null keyword */
    else
        fprintf(output, "%s", temp);
}
....


The command to generate a script file:
C:\TEST>dbf2sql -i rekening.dbf -t tova.rek0 -o cmd.sql

The parameters specify the input file (rekening.dbf), the schema and table name for the generated SQL (tova.rek0), and the output file (cmd.sql). The generated cmd.sql file contains code like the following:

DROP TABLE tova.rek0;
create table tova.rek0 (I've cut the detail);
INSERT INTO tova.rek0 VALUES ('11010001', 20, 20, 20, 4069, 4069, null, null, null, null, null, null, null, null, null, null, 0, '!', 'IB', 'L', 18450, 0, '2009-01-19', '2008-12-16', 'A', 'A', 20, 20, 0, '2009-01-19', null, null, null, null, null, null, null, null, null, null, null, 20, null, null, null, null, null, null, null, null, 0, 0, 0, 0, null, null, null, null, null, null, null, null, 0, 0, 0, 1000, null, null, null, null, null, null, null, null, null, null, null, 2000, null, null, null, null, null, null, null, null, null, null, null, 0, null, null, null, null, null, null, null, null, null, null, null, 15450, null, null, null, null, null, null, null, null, null, null, null, 'A', null, null, null, null, null, null, null, null, null, null, null, 'IB', null, null, null, null, null, null, null, null, null, null, null, 'L', null, null, null, null, null, null, null, null, null, null, null, '!', null, null, null, null, null, null, null, null, null, null, null, 'L', 'IB', 0, 'ZAMAG', null, null, null, null, null, null, null, null, null, null, null, 19, null, null, null, null, null, null, null, null, null, null, null, 0, 0, 0, 'A', 'A', 'A');

I like using dbf2sql to generate the SQL script for simplicity, customizability, and speed. The next step is to run the script against the database. For PostgreSQL, try the following.

Importing an SQL script file into a PostgreSQL table

To import a table from an SQL script file, do this:
1. Generate an SQL script file.
We can get an SQL script file from many tools.

2. Run the script, specifying the user, the database, and the SQL script file, e.g.:
C:\TEST>psql -Uferrari -d mydb -f cmd.sql

If cmd.sql is small, we could also use the pgAdmin tools. A large file takes longer to process (e.g. while inserting into the table), or may even fail with ERROR: out of memory; SQL state: 53200; Detail: Failed on request of size 1048576.
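
For large scripts I prefer to stay on the command line: psql streams the file instead of loading it all into memory, and its --single-transaction switch wraps the whole script in one transaction, which is usually faster than committing every INSERT separately. A sketch using the same names as above:

C:\TEST>psql -Uferrari -d mydb --single-transaction -f cmd.sql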

Monday, 24 January 2011

Inserting or Changing Primary Key

A SELECT script is easy to obtain for a PostgreSQL table using a GUI such as pgAdmin: right-click a table and choose the SELECT script. Another way is to generate it with a bit of code.

First, create a table.
CREATE TABLE coba
(
pk character(12) NOT NULL,
nourut smallint NOT NULL,
nourutku smallint NOT NULL,
unit_id character(3) NOT NULL,
CONSTRAINT coba_pkey PRIMARY KEY (pk)
)
WITH (OIDS=FALSE);
ALTER TABLE coba OWNER TO binsar;


List the table's fields:
SELECT column_name FROM information_schema.columns WHERE table_name = 'coba'

Output:
"pk"
"nourut"
"nourutku"
"unit_id"


Another way... using code.
The SELECT script can be generated with the following statement:

SELECT ' SELECT ' || array_to_string(ARRAY(SELECT '' ||column_name
FROM information_schema.columns
WHERE table_name = 'coba'), ',') || ' FROM binsar.coba;';

Output:
SELECT pk,nourut,nourutku,unit_id FROM binsar.coba;


We may need to change a primary key, or add new rows whose primary key is derived from existing rows in the same table. Problems arise especially when there are many relationships between tables. For example, how do we simply copy some rows and change only the primary key value?

insert into binsar.coba VALUES ('1',1, 1,'1');
insert into binsar.coba SELECT '2',2, 2,'2';
insert into binsar.coba SELECT '3', nourut, nourutku, unit_id FROM binsar.coba WHERE pk='1';


The last command solves the need nicely... at least for a table with only a few fields.
If I have many tables and fields (think about field names that are new, dropped, or modified),
I am not diligent enough to type them all out. Are you?

Here is the script:

SELECT 'SELECT ''300'',' || array_to_string(ARRAY(SELECT '' || column_name
FROM information_schema.columns
WHERE table_name = 'coba'
AND column_name NOT IN('pk')
), ',') || ' FROM binsar.coba where pk=''1'';';


Output:
SELECT '300',nourut,nourutku,unit_id FROM binsar.coba where pk='1';


So, new rows can be added with ease. We do not need to write out all the field names,
only the ones we want to exclude, and without hard-coding the SELECT script.
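
One refinement worth noting (my own addition, not part of the original script): quote_literal() builds the key literal safely, so a value containing a quote cannot break the generated SQL. A sketch:

SELECT 'SELECT ' || quote_literal('300') || ',' || array_to_string(ARRAY(SELECT '' || column_name
FROM information_schema.columns
WHERE table_name = 'coba'
AND column_name NOT IN('pk')
), ',') || ' FROM binsar.coba where pk=' || quote_literal('1') || ';';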

In a stored function (PL/pgSQL), I use EXECUTE to run the generated statement, so it can work for any table in the database.

CREATE OR REPLACE FUNCTION cobadeh(pk character, newpk character, tabname character)
RETURNS character AS
$BODY$
DECLARE
t1 character varying (300);
t2 character varying (300);
BEGIN
-- the INSERT target: binsar.<tabname>
t1 = 'INSERT INTO binsar.' || tabname;
-- the SELECT part: the new key value followed by every column except pk
SELECT ' SELECT ' || '''' || newpk || '''' || ',' || array_to_string(ARRAY(SELECT '' || column_name
FROM information_schema.columns
WHERE table_name = tabname
AND column_name NOT IN('pk')
), ',') || ' FROM binsar.' || tabname || ' WHERE pk=' || '''' || pk || '''' INTO t2;
EXECUTE t1 || t2;
RETURN t1 || t2;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION cobadeh(character,character,character) OWNER TO binsar;

Then test it:

select * from binsar.cobadeh('1','300', 'coba');
select * from binsar.cobadeh('3','ABC', 'coba');

which will run these commands:
INSERT INTO binsar.coba SELECT '300',nourut,nourutku,unit_id FROM binsar.coba WHERE pk='1'
INSERT INTO binsar.coba SELECT 'ABC',nourut,nourutku,unit_id FROM binsar.coba WHERE pk='3'


The table now has new rows:
select * from binsar.coba ;
"1 ";1;1;"1 "
"2 ";2;2;"2 "
"3 ";1;1;"1 "
"300 ";1;1;"1 "
"ABC ";1;1;"1 "

That's it.

Wednesday, 30 June 2010

Waiting for PostgreSQL 9 stable version

I just read about PostgreSQL 9.0 Beta 2 and its features. There are many new features, and they make me want to use it in the future. The things I am most interested in (a couple of syntax sketches follow the lists):

Backend section:
1. Column level permissions: allows specification of granular column-level permissions in addition to table level grants
2. GRANT/REVOKE ON ALL TABLES/SEQUENCES/FUNCTIONS: simplifies permission management by allowing mass changes of GRANTS

Functions and triggers section:
1. Column level triggers: adds SQL-standard column-level trigger support, allowing triggers to fire only if specific columns are mentioned in the UPDATE's SET list
2. WHEN clause for CREATE TRIGGER: adds support for a boolean expression to be checked if a trigger should be fired or not
3. RETURNS TABLE: SQL standard RETURNS TABLE clause for CREATE FUNCTION.
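
A couple of quick sketches of what this syntax looks like; the table, role, and function names here are my own examples:

-- Mass GRANT and a column-level GRANT
GRANT SELECT ON ALL TABLES IN SCHEMA tova TO reporting;
GRANT UPDATE (acc_name) ON tova.coa TO clerk;

-- Column-level trigger with a WHEN clause: fire only when acc_name is updated
-- and its value actually changes
CREATE FUNCTION tova.log_acc_name_change() RETURNS trigger AS $$
BEGIN
    RAISE NOTICE 'acc_name changed from % to %', OLD.acc_name, NEW.acc_name;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER coa_acc_name_change
    AFTER UPDATE OF acc_name ON tova.coa
    FOR EACH ROW
    WHEN (OLD.acc_name IS DISTINCT FROM NEW.acc_name)
    EXECUTE PROCEDURE tova.log_acc_name_change();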

Cryptographic Functions on FreeBSD

I have two PostgreSQL installations, one on Windows (at home) and one on FreeBSD (somewhere out there). I use pgcrypto for encrypting sensitive data. On Windows I installed the module by executing the C:\sistemku\PostgreSQL\9.0\share\contrib\pgcrypto.sql file.

Now it's time to deploy my work to the remote server.
I made a backup, secret.db, using pg_dump, then transferred the backup file to the remote server with the FTP command. Through PuTTY over a modem connection, I restored the database. I tested the web site contents and they looked okay, except for the cryptographic functions. Yeah... the FreeBSD server has no pgcrypto installed.

#find / -name pgcrypto.sql (the file does not exist).


1st attempt: execute pgcrypto.sql (I had FTPed the file over earlier). As you might suspect... it failed.
2nd attempt: type this command in psql:

CREATE OR REPLACE FUNCTION digest(text, text)
RETURNS bytea AS '$libdir/pgcrypto', 'pg_digest'
LANGUAGE C IMMUTABLE STRICT;


Again, an error was raised: "libdir not found".

3rd attempt: Use the ports tree to install pgcrypto with the following steps:
1. Install the pgcrypto module
#cd /usr/ports/databases/postgresql-contrib
#make config
#make install clean


I could then find /usr/local/lib/postgresql/pgcrypto.so and /usr/local/share/postgresql/contrib/pgcrypto.sql.


2. To make the library functions available, load them into my secret database.
# psql -U binsar -d secretdb -f /usr/local/share/postgresql/contrib/pgcrypto.sql


3. Check that the cryptographic functions are loaded.

secretdb=# \df digest
                          List of functions
 Schema |  Name  | Result data type | Argument data types
--------+--------+------------------+---------------------
 public | digest | bytea            | bytea, text
 public | digest | bytea            | text, text



4. It works. Time to get my coffee.
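
As a final sanity check, a quick call of digest(); this is just a sketch, and sha256 is one of the algorithms pgcrypto supports:

secretdb=# SELECT encode(digest('my secret', 'sha256'), 'hex');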

Tuesday, 18 May 2010

Export PostgreSQL table into CSV file

1. Change directory to the PostgreSQL binary folder, or just type the command below if that folder is already in your PATH environment variable.

C:\pgsql83\bin>psql -dmydb -Uferry

mydb# COPY customer TO 'C:/myfolder/allcust.csv' DELIMITER ';';
COPY 5076

mydb# COPY (SELECT * FROM customer WHERE city ='PNK') TO E'C:\\myfolder\\cityPNK.csv' DELIMITER ';' CSV;
COPY 502

mydb# \q
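
Note that server-side COPY writes the file on the database server and requires superuser rights; the client-side \copy meta-command writes the file on my own machine instead. A sketch, with the optional HEADER keyword to include column names:

mydb# \copy (SELECT * FROM customer WHERE city = 'PNK') TO 'C:/myfolder/cityPNK.csv' DELIMITER ';' CSV HEADER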

See: Import from CSV files into PostgreSQL Table

Importing a CSV file into PostgreSQL table

1. Export your data into CSV format. Remember the field delimiter you specify here (I prefer the semicolon); you will need it later, or will have to change it, if the import fails. Also uncheck the "first line as header" option so that the CSV file contains only data.

2. If it does not exist yet, create a table in PostgreSQL with data types that can accommodate the data to be imported.

3. Change directory to the PostgreSQL binary folder, or just type the command below if that folder is already in your PATH environment variable.

C:\pgsql83\bin>psql -dmydb -Uferry
mydb# COPY dest_table FROM 'C:/myfolder/myfile.csv' USING DELIMITERS ';';
COPY 10537

mydb# COPY another_dest_table FROM 'C:/myfolder/myfile22.csv' USING DELIMITERS ';';
COPY 300024

mydb# \q

See: Export from PostgreSQL to CSV files