# PostgreSQL
## Table of contents
- Create new user
- Create new database
- Change user password
- Give privileges to database for user
- Export data to CSV
- Queries
- Improve insertion speed
## Create new user

Log into a shell as the postgres user and run the following command:

```shell
$ createuser -P user_name
```

It will prompt you for a password.
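The same can be done from inside psql; user_name and the password here are placeholders:

```sql
create user user_name with password 'password_here';
```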
## Create new database

Log into a shell as the postgres user and run the following command:

```shell
$ createdb db_name
```
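Or from inside psql, optionally assigning the new database to a specific owner (db_name and user_name are placeholders):

```sql
create database db_name owner user_name;
```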
## Change user password

Log into the PostgreSQL database and run the following query:

```sql
alter user user_name with password 'new password here';
```
## Give privileges to database for user

To grant all possible privileges on a database to a user, run the following query:

```sql
grant all privileges on database db_name to user_name;
```
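On PostgreSQL 15 and newer this may not be enough to let the user create tables, because the CREATE privilege on the public schema is no longer granted to everyone by default. While connected to db_name, you may also need:

```sql
grant all on schema public to user_name;
```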
## Export data to CSV

From psql, use the \copy meta-command (it writes the file client-side, so it does not require superuser):

```sql
\copy (select * from table_name) to '/tmp/file.csv' with csv;
```
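\copy also accepts a header option that writes the column names as the first line of the file (table_name is a placeholder):

```sql
\copy (select * from table_name) to '/tmp/file.csv' with csv header;
```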
## Queries
### Get active queries

```sql
SELECT * FROM pg_stat_activity WHERE state = 'active';
```

To also see how long each query has been running:

```sql
SELECT
    pid, state, age(clock_timestamp(), query_start), usename, query
FROM
    pg_stat_activity;
```
Cancel a running query
The PID of the process can be retrieved from table pg_stat_activity
.
SELECT pg_cancel_backend(<pid of the process>)
### Force termination of a running query

The PID of the backend can be retrieved from the pg_stat_activity view:

```sql
SELECT pg_terminate_backend(<pid of the process>);
```
### Estimate number of rows of a table

This reads the planner's statistics instead of scanning the table, so it is fast but only approximate:

```sql
SELECT
    reltuples::bigint AS estimate
FROM
    pg_class
WHERE
    relname = '<table name>';
```
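Because relname is not unique across schemas, a schema-qualified variant via a regclass cast is safer; schema_name.table_name below is a placeholder. Run ANALYZE on the table first if the estimate looks stale, since reltuples is only refreshed by VACUUM, ANALYZE and a few other commands:

```sql
SELECT reltuples::bigint AS estimate
FROM pg_class
WHERE oid = 'schema_name.table_name'::regclass;
```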
### Get size of a table's indexes

```sql
select pg_size_pretty(pg_indexes_size('table_name'));
```
### Get size of a partitioned table

This returns the size of the whole table:

```sql
SELECT
    pi.inhparent::regclass AS parent_table_name,
    pg_size_pretty(sum(pg_total_relation_size(psu.relid))) AS total,
    pg_size_pretty(sum(pg_relation_size(psu.relid))) AS internal,
    pg_size_pretty(sum(pg_table_size(psu.relid) - pg_relation_size(psu.relid))) AS external, -- toast
    pg_size_pretty(sum(pg_indexes_size(psu.relid))) AS indexes
FROM pg_catalog.pg_statio_user_tables psu
JOIN pg_class pc ON pc.oid = psu.relid
JOIN pg_inherits pi ON pi.inhrelid = pc.oid
WHERE psu.relname like 'part_name_%' -- replace partition name here
GROUP BY pi.inhparent
ORDER BY sum(pg_total_relation_size(psu.relid)) DESC;
```
This returns the size per partition:

```sql
SELECT
    pi.inhparent::regclass AS parent_table_name,
    pc.relname AS table_name,
    pg_size_pretty(pg_total_relation_size(psu.relid)) AS total,
    pg_size_pretty(pg_relation_size(psu.relid)) AS internal,
    pg_relation_size(psu.relid) AS internal_raw,
    pg_size_pretty(pg_table_size(psu.relid) - pg_relation_size(psu.relid)) AS external, -- toast
    pg_size_pretty(pg_indexes_size(psu.relid)) AS indexes
FROM pg_catalog.pg_statio_user_tables psu
JOIN pg_class pc ON pc.oid = psu.relid
JOIN pg_inherits pi ON pi.inhrelid = pc.oid
WHERE psu.relname like 'part_name_%' -- replace partition name here
ORDER BY pg_total_relation_size(psu.relid) DESC;
```
### Drop empty partitions

Change the name of the partition in the psu.relname parameter. Note that this block is a dry run: it only prints the partitions that would be dropped.

```sql
DO $$
DECLARE
    r record;
    cnt int;
BEGIN
    FOR r IN
        SELECT
            pi.inhparent::regclass AS parent_table_name,
            pc.relname AS table_name
        FROM pg_catalog.pg_statio_user_tables psu
        JOIN pg_class pc ON pc.oid = psu.relid
        JOIN pg_inherits pi ON pi.inhrelid = pc.oid
        WHERE psu.relname like 'part_name_%' -- replace partition name here
          AND pg_relation_size(psu.relid) = 0
        ORDER BY pg_total_relation_size(psu.relid) DESC
    LOOP
        EXECUTE 'select count(*) from ' || quote_ident(r.table_name) INTO cnt;
        IF cnt = 0 THEN
            RAISE NOTICE 'Will drop [%] with cnt [%]', r.table_name, cnt;
        END IF;
    END LOOP;
END$$;
```
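A sketch of a variant that actually drops the empty partitions, under the same naming pattern; verify the NOTICE output of the dry run first, since DROP TABLE is irreversible:

```sql
DO $$
DECLARE
    r record;
    cnt int;
BEGIN
    FOR r IN
        SELECT pc.relname AS table_name
        FROM pg_catalog.pg_statio_user_tables psu
        JOIN pg_class pc ON pc.oid = psu.relid
        JOIN pg_inherits pi ON pi.inhrelid = pc.oid
        WHERE psu.relname like 'part_name_%' -- replace partition name here
          AND pg_relation_size(psu.relid) = 0
    LOOP
        -- double-check that the partition really is empty before dropping it
        EXECUTE 'select count(*) from ' || quote_ident(r.table_name) INTO cnt;
        IF cnt = 0 THEN
            RAISE NOTICE 'Dropping [%]', r.table_name;
            EXECUTE format('drop table %I', r.table_name);
        END IF;
    END LOOP;
END$$;
```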
### Kill idle connections

```sql
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN
        SELECT pid FROM pg_stat_activity WHERE state = 'idle'
    LOOP
        EXECUTE 'select pg_terminate_backend(' || r.pid || ');';
    END LOOP;
END$$;
```
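The loop can also be collapsed into a single statement, since pg_terminate_backend can be called directly from a query; the pg_backend_pid() guard keeps the current session alive:

```sql
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND pid <> pg_backend_pid();
```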
## Improve insertion speed

If you need to improve insertion speed in a table that has indexes and will receive a lot of insertions, lower the fillfactor of those indexes. I had great success doing this (around 100k inserts per second):

```sql
alter index <index_name> set (fillfactor=30);
reindex (verbose) index <index_name>;
```
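Once the bulk load is finished, it is worth restoring a more typical fillfactor (90 is the default for B-tree indexes) and rebuilding:

```sql
alter index <index_name> set (fillfactor=90);
reindex (verbose) index <index_name>;
```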
### Disable autovacuum

Disable autovacuum for the table where data is being inserted:

```sql
alter table tablename set (autovacuum_enabled=false, toast.autovacuum_enabled=false);
```
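Remember to re-enable autovacuum once the bulk insert is finished; resetting the storage parameters restores the default behaviour, and a manual VACUUM ANALYZE lets the table catch up:

```sql
alter table tablename reset (autovacuum_enabled, toast.autovacuum_enabled);
vacuum analyze tablename;
```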
### Configuration tuning

Change the following values in postgresql.conf:

- maintenance_work_mem
- shared_buffers
- effective_cache_size
- max_stack_depth

For a machine with 256GB of RAM, the values I am using are:

- maintenance_work_mem = 12GB
- shared_buffers = 8GB
- effective_cache_size = 96GB
- max_stack_depth = 4MB
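These can also be changed without editing the file, via ALTER SYSTEM, which writes to postgresql.auto.conf. Note that shared_buffers only takes effect after a full server restart; the others can be applied with a reload:

```sql
alter system set maintenance_work_mem = '12GB';
alter system set shared_buffers = '8GB'; -- requires a restart
alter system set effective_cache_size = '96GB';
alter system set max_stack_depth = '4MB';
select pg_reload_conf();
```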