Go Faster With Native Compilation
PGDay Asia 2016, 17th March 2016
KUMAR RAJEEV RASTOGI (rajeevrastogi@huawei.com)
PRASANNA VENKATESH (prasanna.venkatesh@huawei.com)
 KUMAR RAJEEV RASTOGI (in love with PostgreSQL..)
 Senior Technical Leader at Huawei Technologies for almost 8 years
 Active PostgreSQL community member; has contributed many patches
 Has presented papers at various PostgreSQL conferences (e.g. PGCon Canada, PGDay Asia, PGDay India)
 One of the organizers of the PGDay Asia conference
 Program committee member for PGDay India
 Holds around 14 patents in various DB technologies
Blog - rajeevrastogi.blogspot.in
LinkedIn - http://in.linkedin.com/in/kumarrajeevrastogi
Who Am I?
1. Background
2. Current Business Trend
3. Native Compilation
4. Cost Model
5. What to Compile
6. Schema Binding
7. Schema Binding Solution
8. Performance Scenario
9. Procedure Compilation
Agenda
Traditional database executors are built on the assumption that "I/O cost dominates execution". These executor models are inefficient in terms of CPU instructions.
Today most workloads fit into main memory, a consequence of two broad trends:
1. Growth in the amount of memory (RAM) per node/machine
2. Prevalence of high-speed SSDs
Background
So now the biggest bottleneck is CPU efficiency, not I/O. Our problem statement is to make the database more efficient in terms of CPU instructions, thereby leveraging the larger memory.
Source: ICDE Conference
The database industry is slowly reaching a point where throughput gains have become very limited. Quoting from a paper on Hekaton:
"The only real hope to increase throughput is to reduce the number of instructions executed, but the reduction needs to be dramatic. To go 10X faster, the engine must execute 90% fewer instructions and yet still get the work done. To go 100X faster, it must execute 99% fewer instructions."
Such a drastic reduction in instructions without disturbing functionality is possible only through code specialization (a.k.a. native compilation, often implemented with LLVM), i.e. generating code specific to an object/query.
Current Business Trend
Many DBs are moving to compilation technology to improve performance by reducing CPU instructions; some of them are:
 Hekaton (SQL Server 2014)
 Oracle
 MemSQL
Current Business Trend Contd…
Hekaton: Comparison of CPU efficiency for lookups
Source: Hekaton Paper
Native compilation is a methodology to reduce CPU instructions by executing only the instructions specific to a given query/object, unlike interpreted execution. The steps are:
1. Generate C code specific to the object/query.
2. Compile the C code into a DLL and load it with the server executable.
3. Call the specialized function instead of the generalized function.
Native Compilation
e.g. Expression: Col1 + 100
A traditional executor requires hundreds of instructions to walk all combinations of the expression before final execution, whereas vanilla C code can execute it directly in 2-3 instructions.
Source: ICDE Conference
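To make the contrast concrete, here is a minimal, standalone C sketch (illustrative only, not PostgreSQL code) of a generic tree-walking evaluator versus code specialized for Col1 + 100:

#include <stdio.h>

typedef enum { EXPR_CONST, EXPR_COLUMN, EXPR_ADD } ExprTag;

typedef struct Expr
{
    ExprTag tag;
    long constval;              /* used when tag == EXPR_CONST  */
    int colno;                  /* used when tag == EXPR_COLUMN */
    struct Expr *left, *right;  /* used when tag == EXPR_ADD    */
} Expr;

/* Generic executor: branches on the node tag for every node, every row. */
static long eval_generic(const Expr *e, const long *row)
{
    switch (e->tag)
    {
        case EXPR_CONST:  return e->constval;
        case EXPR_COLUMN: return row[e->colno];
        default:          return eval_generic(e->left, row) +
                                 eval_generic(e->right, row);
    }
}

/* Specialized code generated once for "Col1 + 100": no tags, no branching. */
static long eval_col1_plus_100(const long *row)
{
    return row[0] + 100;
}

int main(void)
{
    long row[1] = {42};
    Expr c   = {EXPR_CONST, 100, 0, NULL, NULL};
    Expr col = {EXPR_COLUMN, 0, 0, NULL, NULL};
    Expr add = {EXPR_ADD, 0, 0, &col, &c};

    printf("generic: %ld, specialized: %ld\n",
           eval_generic(&add, row), eval_col1_plus_100(row));
    return 0;
}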
The cost model of specialized code can be expressed as:
cost of execution = cost of generating specialized code
                  + cost of compilation
                  + cost of executing the compiled code
Execution of compiled code is very efficient, but generating the specialized code and compiling it can be an expensive affair. So in order to drive down this cost:
1. Generate and compile the code once and use it many times; this amortizes the constant cost.
2. Improve the performance of generation and compilation significantly.
Cost model
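For example (illustrative numbers only): if generating and compiling the specialized code costs 50 ms once, and each execution of the compiled code saves 0.5 ms over the interpreted path, the one-time cost is amortized after 100 executions, and every execution beyond that is pure gain.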
Any CPU-intensive entity of a database can be natively compiled if it follows a similar pattern across executions. Some of the most popular ones are:
 Schema (Relation)
 Procedure
 Query
 Algebraic expression
Note: We will target only Schema for this presentation.
What to Native Compile?
Properties of each relation:
1. The number of attributes and their lengths and data types are fixed.
2. Irrespective of the data, it is always stored in the same pattern.
3. Each attribute is accessed in the same pattern.
Disadvantages of the current approach for each tuple access:
1. Loops over each attribute.
2. The properties of all attributes are checked to take many decisions.
3. Executes many unwanted instructions.
Schema binding
So we can overcome these disadvantages by natively compiling the relation based on its properties to generate specialized code for each schema-access function.
Schema Binding = Native Compilation of Relation
Benefits:
1. Each attribute access gets flattened.
2. All attribute-property decisions are taken during code generation.
3. No decision making at run time.
4. Reduced CPU instructions.
Schema binding Contd…
Schema binding Contd…
CREATE TABLE → Automatic code generation → C → DLL → Load all functions
SQL QUERY → Compiled functions
Once a CREATE TABLE command is issued, a C file with all the specialized access functions is generated, which in turn gets compiled and loaded as a DLL. These loaded functions are then used by all SQL queries accessing the compiled table.
Schema binding Contd…
This shows the overall interaction with schema binding: any query issued from a client can use either the schema-bound functions or the normal functions, depending on the underlying table.
Schema:
create table tbl (id1 int, id2 float, id3 varchar(10), id4 bool);
Schema binding: Example
Fields id1 and id2 are always stored at the same offset and with the same alignment; nothing changes at run time. Only a variable-length attribute, and any attributes following it, have variable offsets.
Using current approach (each line here is a macro which performs multiple condition checks to decide the action):
if (thisatt->attlen != -1)
{
    offset = att_align_nominal(off, thisatt->attalign);
    values[1] = fetchatt(thisatt, tp + offset);
    offset = att_addlength_pointer(off, thisatt->attlen, tp + off);
}
Access using specialized code:
method-1 (see details in further slides):
values[1] = ((struct tbl_xxx*)tp)->id2;
method-2:
offset = DOUBLEALIGN(offset);
values[1] = *((Datum *)(tp + offset));
offset += 8;
Conclusion: Specialized code uses a smaller number of instructions compared to generalized code, and hence gives better performance.
Schema binding: Example
The solution can be categorized as:
1. Opting for schema binding.
2. Functions to be customized.
3. Customized function generation.
4. Loading of customized functions.
5. Invocation of customized functions.
6. How to generate the dynamic library.
Schema Binding Solution
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table …[ TABLESPACE tablespace_name ] [ SCHEMA_BOUND ]
SCHEMA_BOUND is a new option with CREATE TABLE to opt in to code specialization.
Solution: Opting for schema bind tuple
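A minimal usage sketch of the proposed opt-in (the SCHEMA_BOUND option is part of this proposal, not stock PostgreSQL):
create table tbl (id1 int, id2 float, id3 varchar(10), id4 bool) SCHEMA_BOUND;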
Function Name (xxx = relname_relid)  |  Purpose
heap_compute_data_size_xxx  |  Calculate the size of the data part of the tuple
heap_fill_tuple_xxx  |  Fill the tuple with the data
heap_deform_tuple_xxx  |  Deform the heap tuple
slot_deform_tuple_xxx  |  Deform the tuple at the end of a scan to project attributes
nocachegetattr_xxx  |  Get one attribute value from the tuple (for the vacuum case)
Solution: Functions to be customized
The customized functions for tuple access of a table can be generated using three approaches:
Method-1: change the tuple format.
Method-2: keep the tuple format unchanged.
Method-3: re-organize the table columns internally so that all fixed-length attributes come first, followed by all variable-length attributes.
Solution: Function Generation
A structure corresponding to the relation is created in such a way that each attribute's value/offset can be referenced directly by typecasting the data buffer to the structure.
e.g. Consider our earlier example table:
Solution: Function Generation-Method-1
create table tbl (id1 int, id2 float, id3 varchar(10), id4 bool);
typedef struct SchemaBindTbl_xxx
{
    int   id1;
    float id2;
    short id3_offset;
    bool  id4;
    /* Actual data for variable-size columns follows */
} SchemaBindTbl_xxx;
Structure members id1, id2 and id4 hold the actual column values, whereas id3_offset stores the offset of column id3, because at CREATE TABLE time the size of the value that will be stored is not known. The end of this structure's buffer holds the data for the variable-size columns, which are accessed through the corresponding stored offsets.
Solution: Function Generation-Method-1 Contd…
Existing tuple format: all attribute values are stored in sequence.
New tuple format: the values of fixed-length attributes and the offsets of variable-length attributes are stored in sequence, so a structure typecast yields either the value itself or the offset of the value.
Using this structure, tuple data can be stored as follows.
Fixed-size data-type storage:
((SchemaBindTbl_xxx*)data)->id1 = DatumGetXXX(values[attno]);
Variable-size data-type storage:
((SchemaBindTbl_xxx*)data)->id3_offset = data_offset;
data_length = SIZE((char*)values[attno]);
SET_VARSIZE_SHORT(data + data_offset, data_length);
memcpy(data + data_offset + 1, VARDATA((char*)values[attno]), data_length - 1);
data_offset += data_length;
Using this approach, heap_fill_tuple can be generated during CREATE TABLE.
Solution: Function Generation-Method-1 Contd…
Similarly, each attribute value can be accessed from the tuple as follows.
Fixed-size data-type access:
values[attno] = ((SchemaBindTbl_xxx*)data)->id1;
Variable-size data-type access:
data_offset = ((SchemaBindTbl_xxx*)data)->id3_offset;
values[attno] = PointerGetDatum((char*)tp + data_offset);
Using this approach, all functions related to tuple deformation (i.e. heap_deform_tuple, slot_deform_tuple and nocachegetattr) can be generated during CREATE TABLE.
Solution: Function Generation-Method-1 Contd…
Advantages:
1. No dependency on previous attributes.
2. Any attribute value can be accessed directly.
3. Attribute access is very efficient, taking very few instructions.
Disadvantage:
1. The size of the tuple increases, leading to more memory consumption.
Solution: Function Generation-Method-1 Contd…
This method generates the customized functions without changing the tuple format.
The approach uses slight variations of the existing macros:
 fetch_att
 att_addlength_pointer
 att_align_nominal
 att_align_pointer
These macros make many decisions based on each attribute's data type and size, which are fixed for a given relation.
So instead of evaluating these macros for every tuple of a relation at run time, they are evaluated once, during table schema definition, to generate all the customized functions.
Solution: Function Generation-Method-2
As per this mechanism, the code for accessing a float attribute will be as below:
offset = DOUBLEALIGN(offset);           /* alignment check skipped        */
values[1] = *((Datum *)(tp + offset));  /* datum size check skipped       */
offset += 8;                            /* attribute length check skipped */
Similarly, access code for attributes of all other data types can be generated. Using combinations of the other macros, customized code for all the other tuple-access functions can be generated as well.
Solution: Function Generation-Method-2 Contd…
Advantages:
1. Existing, tested macros are used, so it is very safe.
2. No change in tuple format or size.
3. Reduces the overall number of instructions by a huge margin.
Disadvantage:
1. Dependency on the previous attribute in case the previous attribute is variable-length.
Solution: Function Generation-Method-2 Contd…
This method combines the advantages of the previous methods, i.e.:
 Minimal dependency between attributes
All fixed-length attributes are grouped together at the start of the column list, followed by all variable-length columns, so every fixed-length attribute can be accessed directly. The change in column order is done during creation of the table itself.
 No change in tuple size, so tuple access remains very efficient
To achieve this, Method-2 is used to generate the specialized code.
Solution: Function Generation-Method-3
E.g. Consider our earlier example:
create table tbl (id1 int, id2 float, id3 varchar(10), id4 bool);
Solution: Function Generation-Method-3 Contd…
In this case, while creating the table, id1, id2 and id4 become the first 3 columns, followed by id3.
Access code can therefore be generated directly during schema definition, without any dependency on run-time parameters, because every attribute offset is fixed except those of the variable-length attributes. If there are more variable-length attributes, they are stored after id3, and finding their exact offsets requires knowing the lengths of the preceding variable-length columns. A sketch of the generated code follows.
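A purely illustrative sketch of the deform code Method-3 could emit for this reordered layout (the byte offsets assume typical alignment and are for illustration only):
values[0] = Int32GetDatum(*((int32 *)(tp + 0)));    /* id1: fixed offset  */
values[1] = Float8GetDatum(*((float8 *)(tp + 8)));  /* id2: fixed offset  */
values[3] = BoolGetDatum(*((bool *)(tp + 16)));     /* id4: fixed offset  */
/* id3 (varchar) starts right after the fixed-length part, so its offset
 * is also known at code-generation time; only a second variable-length
 * column would need the length of id3 to locate itself. */
values[2] = PointerGetDatum(tp + 17);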
Advantages:
1. Existing, tested macros are used, so it is very safe.
2. No change in tuple format or size.
3. Reduces the overall number of instructions by a huge margin.
Disadvantage:
1. There is still a dependency among multiple variable-length attributes (if any).
Solution: Function Generation-Method-3 Contd…
Once we generate the code corresponding to each access function, it is written into a C file, which in turn gets compiled into a dynamic linked library, and this dynamic library is then loaded with the server executable. Any function of the library can now be invoked directly from the server executable.
Solution: Loading of customized functions
The generated C file has to be compiled into a dynamic library, which can be done using:
1. LLVM
Compilation using LLVM is very fast.
2. GCC
GCC is the standard way of compiling a C file, but it is slow compared to LLVM.
Solution: How to generate dynamic library
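For example, with GCC the generated C file can be compiled into a shared library in a single step (the file names here are hypothetical, keyed to the relation name and OID):
gcc -O2 -shared -fPIC -o tbl_16384.so tbl_16384.c
The resulting .so can then be loaded into the server process with dlopen().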
While forming the tuple, the relation option schema_bound is checked to decide whether to call the customized function for this relation or the standard generalized function. In addition, the flag HEAP_SCHEMA_BIND_TUPLE (value 0x1800) is set in the tuple's t_infomask2 to mark it as a schema-bound tuple.
Solution: Invocation of Storage Customized Function
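A rough sketch of the write-path dispatch (the specialized function name and the RelationIsSchemaBound() helper are hypothetical; only the flag value and the generic heap_form_tuple come from the slides):
if (RelationIsSchemaBound(rel))     /* assumed reloption test */
{
    tuple = heap_form_tuple_tbl_16384(values, isnull);     /* specialized, from the DLL */
    tuple->t_data->t_infomask2 |= HEAP_SCHEMA_BIND_TUPLE;  /* 0x1800 */
}
else
    tuple = heap_form_tuple(tupleDesc, values, isnull);    /* generalized */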
The tuple header's t_infomask2 flag is checked to see if HEAP_SCHEMA_BIND_TUPLE is set, to decide whether to call the customized function for this relation or the standard generalized function.
Solution: Invocation of access customized function
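And the corresponding read-path check, in the same sketch style (the specialized name is hypothetical):
if (tuple->t_data->t_infomask2 & HEAP_SCHEMA_BIND_TUPLE)
    slot_deform_tuple_tbl_16384(slot, natts);  /* specialized, from the loaded DLL */
else
    slot_deform_tuple(slot, natts);            /* generalized */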
Performance (TPC-H):
The system configuration is as below:
SUSE Linux Enterprise Server 11 (x86_64), 2 sockets, 10 cores per socket
TPC-H Configuration: Default
Queries 1, 2 and 17 are omitted from the chart to keep it readable.
[Chart: TPC-H Performance: per-query execution time in ms, Original vs SchemaBind, for Query-3 through Query-19]
TPC-H Query  |  Improvement (%)
Query-1 2%
Query-2 36%
Query-3 14%
Query-4 13%
Query-5 2%
Query-6 21%
Query-7 16%
Query-8 5%
Query-9 6%
Query-10 9%
Query-11 3%
Query-12 17%
Query-13 3%
Query-14 20%
Query-15 20%
Query-16 4%
Query-17 25%
Query-18 9%
Query-19 24%
Performance (Hash Join):
[Charts: instruction-count reduction for slot_deform_tuple and overall, and latency in ms, SchemaBind vs Original]
Latency improvement: 23%
Overall instruction reduction: 30%
Access-method instruction reduction: 89%
Outer table: 10 columns, cardinality 1M
Inner table: 2 columns, cardinality 1K
Query: select sum(tbl.id10) from tbl, tbl2 where tbl.id10 = tbl2.id2 group by tbl.id9;
Schema binding depends mainly on code specialization of a table's access functions. The number of instructions per call of slot_deform_tuple is reduced by more than 70%, so wherever this function accounts for a good percentage of total instructions, e.g. in:
 Aggregate queries
 Grouping
 Joins
 Queries projecting many attributes
combined with a large table size, the overall instruction reduction is also huge, and hence performance is much better.
Performance Scenario:
Procedure Compilation
The first diagram highlights at what step, and how, a procedure is compiled: once parsing of the procedure is done, all information about it is available through the PLpgSQL_function pointer, which we can traverse, as the planner does, generating corresponding C code for each statement.
The second diagram explains how the compiled function is invoked.
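As a purely illustrative example (hypothetical names, not the actual generator): for a PL/pgSQL assignment such as
    x := x + 1;
walking the PLpgSQL_function statement tree could emit C along the lines of
    /* generated for the PLpgSQL_stmt_assign on variable x */
    var_x = Int32GetDatum(DatumGetInt32(var_x) + 1);
so that at call time the compiled function runs straight-line C instead of going through the generic statement-dispatch loop in pl_exec.c.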
Performance (TPC-C):
The system configuration is as below:
SUSE Linux Enterprise Server 11 (x86_64), 2 sockets, 10 cores per socket
TPC-C Configuration: runMins=0, runTxnsPerTerminal=200000, checkpoint_segments = 100
[Chart: tpmC for New_order and All Procedures, Original vs Compiled]
Reading in tpmC:
Reading      |  New_order  |  All Procedures
Original     |  22334      |  49606
Compiled     |  28973      |  64580
Improvement  |  23%        |  23%
With some basic compilation of procedures, we are able to get around a 23% performance improvement.
Seeing the industry trend, we have implemented two kinds of specialization, which resulted in up to 36% and 23% performance improvement on the standard benchmarks TPC-H and TPC-C respectively. This technology aligns us with the current business trend of tackling the CPU bottleneck, and could also become one of the hot areas of work on PostgreSQL.
Conclusion
1. Zhang, Rui, Saumya Debray, and Richard T. Snodgrass. "Micro-specialization: dynamic code specialization of database management systems." Proceedings of the Tenth International Symposium on Code Generation and Optimization. ACM, 2012. http://dl.acm.org/citation.cfm?id=2259025
2. Freedman, Craig, Erik Ismert, and Per-Åke Larson. "Compilation in the Microsoft SQL Server Hekaton Engine." IEEE Data Eng. Bull. 37.1 (2014): 22-30. ftp://131.107.65.22/pub/debull/A14mar/p22.pdf
Reference
[Diagram: storage hierarchy shift: yesterday's tape/disk/DRAM becomes today's disk/DRAM/cache]
"Disk is the new tape; Memory is the new disk."
-- Jim Gray
PostgreSQL on Big RAM
Source: ICDE Conference
Go Faster With Native Compilation
Go Faster With Native Compilation

More Related Content

What's hot

Postgrtesql as a NoSQL Document Store - The JSON/JSONB data type
Postgrtesql as a NoSQL Document Store - The JSON/JSONB data typePostgrtesql as a NoSQL Document Store - The JSON/JSONB data type
Postgrtesql as a NoSQL Document Store - The JSON/JSONB data typeJumping Bean
 
Migration From Oracle to PostgreSQL
Migration From Oracle to PostgreSQLMigration From Oracle to PostgreSQL
Migration From Oracle to PostgreSQLPGConf APAC
 
PostgreSQL 9.4, 9.5 and Beyond @ COSCUP 2015 Taipei
PostgreSQL 9.4, 9.5 and Beyond @ COSCUP 2015 TaipeiPostgreSQL 9.4, 9.5 and Beyond @ COSCUP 2015 Taipei
PostgreSQL 9.4, 9.5 and Beyond @ COSCUP 2015 TaipeiSatoshi Nagayasu
 
Major features postgres 11
Major features postgres 11Major features postgres 11
Major features postgres 11EDB
 
[EPPG] Oracle to PostgreSQL, Challenges to Opportunity
[EPPG] Oracle to PostgreSQL, Challenges to Opportunity[EPPG] Oracle to PostgreSQL, Challenges to Opportunity
[EPPG] Oracle to PostgreSQL, Challenges to OpportunityEqunix Business Solutions
 
Connecting Hadoop and Oracle
Connecting Hadoop and OracleConnecting Hadoop and Oracle
Connecting Hadoop and OracleTanel Poder
 
Inside sql server in memory oltp sql sat nyc 2017
Inside sql server in memory oltp sql sat nyc 2017Inside sql server in memory oltp sql sat nyc 2017
Inside sql server in memory oltp sql sat nyc 2017Bob Ward
 
Presto Testing Tools: Benchto & Tempto (Presto Boston Meetup 10062015)
Presto Testing Tools: Benchto & Tempto (Presto Boston Meetup 10062015)Presto Testing Tools: Benchto & Tempto (Presto Boston Meetup 10062015)
Presto Testing Tools: Benchto & Tempto (Presto Boston Meetup 10062015)Matt Fuller
 
Stream processing from single node to a cluster
Stream processing from single node to a clusterStream processing from single node to a cluster
Stream processing from single node to a clusterGal Marder
 
In Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry OsborneIn Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry OsborneEnkitec
 
Tanel Poder - Performance stories from Exadata Migrations
Tanel Poder - Performance stories from Exadata MigrationsTanel Poder - Performance stories from Exadata Migrations
Tanel Poder - Performance stories from Exadata MigrationsTanel Poder
 
Dive into spark2
Dive into spark2Dive into spark2
Dive into spark2Gal Marder
 
PostgreSQL and Redis - talk at pgcon 2013
PostgreSQL and Redis - talk at pgcon 2013PostgreSQL and Redis - talk at pgcon 2013
PostgreSQL and Redis - talk at pgcon 2013Andrew Dunstan
 
Low Level CPU Performance Profiling Examples
Low Level CPU Performance Profiling ExamplesLow Level CPU Performance Profiling Examples
Low Level CPU Performance Profiling ExamplesTanel Poder
 
PostgreSQL Rocks Indonesia
PostgreSQL Rocks IndonesiaPostgreSQL Rocks Indonesia
PostgreSQL Rocks IndonesiaPGConf APAC
 
Fine Tuning and Enhancing Performance of Apache Spark Jobs
Fine Tuning and Enhancing Performance of Apache Spark JobsFine Tuning and Enhancing Performance of Apache Spark Jobs
Fine Tuning and Enhancing Performance of Apache Spark JobsDatabricks
 
Meet Spilo, Zalando’s HIGH-AVAILABLE POSTGRESQL CLUSTER - Feike Steenbergen
Meet Spilo, Zalando’s HIGH-AVAILABLE POSTGRESQL CLUSTER - Feike SteenbergenMeet Spilo, Zalando’s HIGH-AVAILABLE POSTGRESQL CLUSTER - Feike Steenbergen
Meet Spilo, Zalando’s HIGH-AVAILABLE POSTGRESQL CLUSTER - Feike Steenbergendistributed matters
 
Technical Introduction to PostgreSQL and PPAS
Technical Introduction to PostgreSQL and PPASTechnical Introduction to PostgreSQL and PPAS
Technical Introduction to PostgreSQL and PPASAshnikbiz
 
Non-Relational Postgres
Non-Relational PostgresNon-Relational Postgres
Non-Relational PostgresEDB
 

What's hot (20)

Postgrtesql as a NoSQL Document Store - The JSON/JSONB data type
Postgrtesql as a NoSQL Document Store - The JSON/JSONB data typePostgrtesql as a NoSQL Document Store - The JSON/JSONB data type
Postgrtesql as a NoSQL Document Store - The JSON/JSONB data type
 
Migration From Oracle to PostgreSQL
Migration From Oracle to PostgreSQLMigration From Oracle to PostgreSQL
Migration From Oracle to PostgreSQL
 
PostgreSQL 9.4, 9.5 and Beyond @ COSCUP 2015 Taipei
PostgreSQL 9.4, 9.5 and Beyond @ COSCUP 2015 TaipeiPostgreSQL 9.4, 9.5 and Beyond @ COSCUP 2015 Taipei
PostgreSQL 9.4, 9.5 and Beyond @ COSCUP 2015 Taipei
 
Major features postgres 11
Major features postgres 11Major features postgres 11
Major features postgres 11
 
[EPPG] Oracle to PostgreSQL, Challenges to Opportunity
[EPPG] Oracle to PostgreSQL, Challenges to Opportunity[EPPG] Oracle to PostgreSQL, Challenges to Opportunity
[EPPG] Oracle to PostgreSQL, Challenges to Opportunity
 
Connecting Hadoop and Oracle
Connecting Hadoop and OracleConnecting Hadoop and Oracle
Connecting Hadoop and Oracle
 
Learning postgresql
Learning postgresqlLearning postgresql
Learning postgresql
 
Inside sql server in memory oltp sql sat nyc 2017
Inside sql server in memory oltp sql sat nyc 2017Inside sql server in memory oltp sql sat nyc 2017
Inside sql server in memory oltp sql sat nyc 2017
 
Presto Testing Tools: Benchto & Tempto (Presto Boston Meetup 10062015)
Presto Testing Tools: Benchto & Tempto (Presto Boston Meetup 10062015)Presto Testing Tools: Benchto & Tempto (Presto Boston Meetup 10062015)
Presto Testing Tools: Benchto & Tempto (Presto Boston Meetup 10062015)
 
Stream processing from single node to a cluster
Stream processing from single node to a clusterStream processing from single node to a cluster
Stream processing from single node to a cluster
 
In Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry OsborneIn Memory Database In Action by Tanel Poder and Kerry Osborne
In Memory Database In Action by Tanel Poder and Kerry Osborne
 
Tanel Poder - Performance stories from Exadata Migrations
Tanel Poder - Performance stories from Exadata MigrationsTanel Poder - Performance stories from Exadata Migrations
Tanel Poder - Performance stories from Exadata Migrations
 
Dive into spark2
Dive into spark2Dive into spark2
Dive into spark2
 
PostgreSQL and Redis - talk at pgcon 2013
PostgreSQL and Redis - talk at pgcon 2013PostgreSQL and Redis - talk at pgcon 2013
PostgreSQL and Redis - talk at pgcon 2013
 
Low Level CPU Performance Profiling Examples
Low Level CPU Performance Profiling ExamplesLow Level CPU Performance Profiling Examples
Low Level CPU Performance Profiling Examples
 
PostgreSQL Rocks Indonesia
PostgreSQL Rocks IndonesiaPostgreSQL Rocks Indonesia
PostgreSQL Rocks Indonesia
 
Fine Tuning and Enhancing Performance of Apache Spark Jobs
Fine Tuning and Enhancing Performance of Apache Spark JobsFine Tuning and Enhancing Performance of Apache Spark Jobs
Fine Tuning and Enhancing Performance of Apache Spark Jobs
 
Meet Spilo, Zalando’s HIGH-AVAILABLE POSTGRESQL CLUSTER - Feike Steenbergen
Meet Spilo, Zalando’s HIGH-AVAILABLE POSTGRESQL CLUSTER - Feike SteenbergenMeet Spilo, Zalando’s HIGH-AVAILABLE POSTGRESQL CLUSTER - Feike Steenbergen
Meet Spilo, Zalando’s HIGH-AVAILABLE POSTGRESQL CLUSTER - Feike Steenbergen
 
Technical Introduction to PostgreSQL and PPAS
Technical Introduction to PostgreSQL and PPASTechnical Introduction to PostgreSQL and PPAS
Technical Introduction to PostgreSQL and PPAS
 
Non-Relational Postgres
Non-Relational PostgresNon-Relational Postgres
Non-Relational Postgres
 

Viewers also liked

(Ab)using 4d Indexing
(Ab)using 4d Indexing(Ab)using 4d Indexing
(Ab)using 4d IndexingPGConf APAC
 
PostgreSQL on Amazon RDS
PostgreSQL on Amazon RDSPostgreSQL on Amazon RDS
PostgreSQL on Amazon RDSPGConf APAC
 
There is Javascript in my SQL
There is Javascript in my SQLThere is Javascript in my SQL
There is Javascript in my SQLPGConf APAC
 
Introduction to Vacuum Freezing and XID
Introduction to Vacuum Freezing and XIDIntroduction to Vacuum Freezing and XID
Introduction to Vacuum Freezing and XIDPGConf APAC
 
PostgreSQL: Past present Future
PostgreSQL: Past present FuturePostgreSQL: Past present Future
PostgreSQL: Past present FuturePGConf APAC
 
Swapping Pacemaker Corosync with repmgr
Swapping Pacemaker Corosync with repmgrSwapping Pacemaker Corosync with repmgr
Swapping Pacemaker Corosync with repmgrPGConf APAC
 
Lightening Talk - PostgreSQL Worst Practices
Lightening Talk - PostgreSQL Worst PracticesLightening Talk - PostgreSQL Worst Practices
Lightening Talk - PostgreSQL Worst PracticesPGConf APAC
 
Lessons PostgreSQL learned from commercial databases, and didn’t
Lessons PostgreSQL learned from commercial databases, and didn’tLessons PostgreSQL learned from commercial databases, and didn’t
Lessons PostgreSQL learned from commercial databases, and didn’tPGConf APAC
 
Use Case: PostGIS and Agribotics
Use Case: PostGIS and AgriboticsUse Case: PostGIS and Agribotics
Use Case: PostGIS and AgriboticsPGConf APAC
 
How to teach an elephant to rock'n'roll
How to teach an elephant to rock'n'rollHow to teach an elephant to rock'n'roll
How to teach an elephant to rock'n'rollPGConf APAC
 
Why we love pgpool-II and why we hate it!
Why we love pgpool-II and why we hate it!Why we love pgpool-II and why we hate it!
Why we love pgpool-II and why we hate it!PGConf APAC
 
Security Best Practices for your Postgres Deployment
Security Best Practices for your Postgres DeploymentSecurity Best Practices for your Postgres Deployment
Security Best Practices for your Postgres DeploymentPGConf APAC
 
Best Practices for Becoming an Exceptional Postgres DBA
Best Practices for Becoming an Exceptional Postgres DBA Best Practices for Becoming an Exceptional Postgres DBA
Best Practices for Becoming an Exceptional Postgres DBA EDB
 
Postgresql database administration volume 1
Postgresql database administration volume 1Postgresql database administration volume 1
Postgresql database administration volume 1Federico Campoli
 
What's New In PostgreSQL 9.4
What's New In PostgreSQL 9.4What's New In PostgreSQL 9.4
What's New In PostgreSQL 9.4Pavan Deolasee
 

Viewers also liked (20)

(Ab)using 4d Indexing
(Ab)using 4d Indexing(Ab)using 4d Indexing
(Ab)using 4d Indexing
 
PostgreSQL on Amazon RDS
PostgreSQL on Amazon RDSPostgreSQL on Amazon RDS
PostgreSQL on Amazon RDS
 
There is Javascript in my SQL
There is Javascript in my SQLThere is Javascript in my SQL
There is Javascript in my SQL
 
Introduction to Vacuum Freezing and XID
Introduction to Vacuum Freezing and XIDIntroduction to Vacuum Freezing and XID
Introduction to Vacuum Freezing and XID
 
PostgreSQL: Past present Future
PostgreSQL: Past present FuturePostgreSQL: Past present Future
PostgreSQL: Past present Future
 
Swapping Pacemaker Corosync with repmgr
Swapping Pacemaker Corosync with repmgrSwapping Pacemaker Corosync with repmgr
Swapping Pacemaker Corosync with repmgr
 
Lightening Talk - PostgreSQL Worst Practices
Lightening Talk - PostgreSQL Worst PracticesLightening Talk - PostgreSQL Worst Practices
Lightening Talk - PostgreSQL Worst Practices
 
Lessons PostgreSQL learned from commercial databases, and didn’t
Lessons PostgreSQL learned from commercial databases, and didn’tLessons PostgreSQL learned from commercial databases, and didn’t
Lessons PostgreSQL learned from commercial databases, and didn’t
 
Use Case: PostGIS and Agribotics
Use Case: PostGIS and AgriboticsUse Case: PostGIS and Agribotics
Use Case: PostGIS and Agribotics
 
How to teach an elephant to rock'n'roll
How to teach an elephant to rock'n'rollHow to teach an elephant to rock'n'roll
How to teach an elephant to rock'n'roll
 
Why we love pgpool-II and why we hate it!
Why we love pgpool-II and why we hate it!Why we love pgpool-II and why we hate it!
Why we love pgpool-II and why we hate it!
 
Security Best Practices for your Postgres Deployment
Security Best Practices for your Postgres DeploymentSecurity Best Practices for your Postgres Deployment
Security Best Practices for your Postgres Deployment
 
Best Practices for Becoming an Exceptional Postgres DBA
Best Practices for Becoming an Exceptional Postgres DBA Best Practices for Becoming an Exceptional Postgres DBA
Best Practices for Becoming an Exceptional Postgres DBA
 
Postgresql database administration volume 1
Postgresql database administration volume 1Postgresql database administration volume 1
Postgresql database administration volume 1
 
PostgreSQL DBA Neler Yapar?
PostgreSQL DBA Neler Yapar?PostgreSQL DBA Neler Yapar?
PostgreSQL DBA Neler Yapar?
 
Founding a LLC in Turkey
Founding a LLC in TurkeyFounding a LLC in Turkey
Founding a LLC in Turkey
 
TTÜ Geeky Weekly
TTÜ Geeky WeeklyTTÜ Geeky Weekly
TTÜ Geeky Weekly
 
PostgreSQL Hem Güçlü Hem Güzel!
PostgreSQL Hem Güçlü Hem Güzel!PostgreSQL Hem Güçlü Hem Güzel!
PostgreSQL Hem Güçlü Hem Güzel!
 
PostgreSQL'i öğrenmek ve yönetmek
PostgreSQL'i öğrenmek ve yönetmekPostgreSQL'i öğrenmek ve yönetmek
PostgreSQL'i öğrenmek ve yönetmek
 
What's New In PostgreSQL 9.4
What's New In PostgreSQL 9.4What's New In PostgreSQL 9.4
What's New In PostgreSQL 9.4
 

Similar to Go Faster With Native Compilation

AI與大數據數據處理 Spark實戰(20171216)
AI與大數據數據處理 Spark實戰(20171216)AI與大數據數據處理 Spark實戰(20171216)
AI與大數據數據處理 Spark實戰(20171216)Paul Chao
 
Migration To Multi Core - Parallel Programming Models
Migration To Multi Core - Parallel Programming ModelsMigration To Multi Core - Parallel Programming Models
Migration To Multi Core - Parallel Programming ModelsZvi Avraham
 
distage: Purely Functional Staged Dependency Injection; bonus: Faking Kind Po...
distage: Purely Functional Staged Dependency Injection; bonus: Faking Kind Po...distage: Purely Functional Staged Dependency Injection; bonus: Faking Kind Po...
distage: Purely Functional Staged Dependency Injection; bonus: Faking Kind Po...7mind
 
PofEAA and SQLAlchemy
PofEAA and SQLAlchemyPofEAA and SQLAlchemy
PofEAA and SQLAlchemyInada Naoki
 
Addressing Scenario
Addressing ScenarioAddressing Scenario
Addressing ScenarioTara Hardin
 
Kerberizing Spark: Spark Summit East talk by Abel Rincon and Jorge Lopez-Malla
Kerberizing Spark: Spark Summit East talk by Abel Rincon and Jorge Lopez-MallaKerberizing Spark: Spark Summit East talk by Abel Rincon and Jorge Lopez-Malla
Kerberizing Spark: Spark Summit East talk by Abel Rincon and Jorge Lopez-MallaSpark Summit
 
Azure machine learning service
Azure machine learning serviceAzure machine learning service
Azure machine learning serviceRuth Yakubu
 
Terraform modules restructured
Terraform modules restructuredTerraform modules restructured
Terraform modules restructuredAmi Mahloof
 
Terraform Modules Restructured
Terraform Modules RestructuredTerraform Modules Restructured
Terraform Modules RestructuredDoiT International
 
Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Gr...
Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Gr...Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Gr...
Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Gr...Raffi Khatchadourian
 
Ejb3 Struts Tutorial En
Ejb3 Struts Tutorial EnEjb3 Struts Tutorial En
Ejb3 Struts Tutorial EnAnkur Dongre
 
Ejb3 Struts Tutorial En
Ejb3 Struts Tutorial EnEjb3 Struts Tutorial En
Ejb3 Struts Tutorial EnAnkur Dongre
 

Similar to Go Faster With Native Compilation (20)

Go Faster With Native Compilation
Go Faster With Native CompilationGo Faster With Native Compilation
Go Faster With Native Compilation
 
Database programming
Database programmingDatabase programming
Database programming
 
Data herding
Data herdingData herding
Data herding
 
Data herding
Data herdingData herding
Data herding
 
AI與大數據數據處理 Spark實戰(20171216)
AI與大數據數據處理 Spark實戰(20171216)AI與大數據數據處理 Spark實戰(20171216)
AI與大數據數據處理 Spark實戰(20171216)
 
Migration To Multi Core - Parallel Programming Models
Migration To Multi Core - Parallel Programming ModelsMigration To Multi Core - Parallel Programming Models
Migration To Multi Core - Parallel Programming Models
 
distage: Purely Functional Staged Dependency Injection; bonus: Faking Kind Po...
distage: Purely Functional Staged Dependency Injection; bonus: Faking Kind Po...distage: Purely Functional Staged Dependency Injection; bonus: Faking Kind Po...
distage: Purely Functional Staged Dependency Injection; bonus: Faking Kind Po...
 
Compose in Theory
Compose in TheoryCompose in Theory
Compose in Theory
 
PofEAA and SQLAlchemy
PofEAA and SQLAlchemyPofEAA and SQLAlchemy
PofEAA and SQLAlchemy
 
Addressing Scenario
Addressing ScenarioAddressing Scenario
Addressing Scenario
 
Kerberizing Spark: Spark Summit East talk by Abel Rincon and Jorge Lopez-Malla
Kerberizing Spark: Spark Summit East talk by Abel Rincon and Jorge Lopez-MallaKerberizing Spark: Spark Summit East talk by Abel Rincon and Jorge Lopez-Malla
Kerberizing Spark: Spark Summit East talk by Abel Rincon and Jorge Lopez-Malla
 
Readme
ReadmeReadme
Readme
 
Sql optimize
Sql optimizeSql optimize
Sql optimize
 
Azure machine learning service
Azure machine learning serviceAzure machine learning service
Azure machine learning service
 
Terraform modules restructured
Terraform modules restructuredTerraform modules restructured
Terraform modules restructured
 
Terraform Modules Restructured
Terraform Modules RestructuredTerraform Modules Restructured
Terraform Modules Restructured
 
Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Gr...
Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Gr...Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Gr...
Towards Safe Automated Refactoring of Imperative Deep Learning Programs to Gr...
 
Ejb3 Struts Tutorial En
Ejb3 Struts Tutorial EnEjb3 Struts Tutorial En
Ejb3 Struts Tutorial En
 
Ejb3 Struts Tutorial En
Ejb3 Struts Tutorial EnEjb3 Struts Tutorial En
Ejb3 Struts Tutorial En
 
Ruby On Rails
Ruby On RailsRuby On Rails
Ruby On Rails
 

More from PGConf APAC

PGConf APAC 2018: Sponsored Talk by Fujitsu - The growing mandatory requireme...
PGConf APAC 2018: Sponsored Talk by Fujitsu - The growing mandatory requireme...PGConf APAC 2018: Sponsored Talk by Fujitsu - The growing mandatory requireme...
PGConf APAC 2018: Sponsored Talk by Fujitsu - The growing mandatory requireme...PGConf APAC
 
PGConf APAC 2018: PostgreSQL 10 - Replication goes Logical
PGConf APAC 2018: PostgreSQL 10 - Replication goes LogicalPGConf APAC 2018: PostgreSQL 10 - Replication goes Logical
PGConf APAC 2018: PostgreSQL 10 - Replication goes LogicalPGConf APAC
 
PGConf APAC 2018 - Lightening Talk #3: How To Contribute to PostgreSQL
PGConf APAC 2018 - Lightening Talk #3: How To Contribute to PostgreSQLPGConf APAC 2018 - Lightening Talk #3: How To Contribute to PostgreSQL
PGConf APAC 2018 - Lightening Talk #3: How To Contribute to PostgreSQLPGConf APAC
 
PGConf APAC 2018 - Lightening Talk #2 - Centralizing Authorization in PostgreSQL
PGConf APAC 2018 - Lightening Talk #2 - Centralizing Authorization in PostgreSQLPGConf APAC 2018 - Lightening Talk #2 - Centralizing Authorization in PostgreSQL
PGConf APAC 2018 - Lightening Talk #2 - Centralizing Authorization in PostgreSQLPGConf APAC
 
Sponsored Talk @ PGConf APAC 2018 - Choosing the right partner in your Postgr...
Sponsored Talk @ PGConf APAC 2018 - Choosing the right partner in your Postgr...Sponsored Talk @ PGConf APAC 2018 - Choosing the right partner in your Postgr...
Sponsored Talk @ PGConf APAC 2018 - Choosing the right partner in your Postgr...PGConf APAC
 
PGConf APAC 2018 - A PostgreSQL DBAs Toolbelt for 2018
PGConf APAC 2018 - A PostgreSQL DBAs Toolbelt for 2018PGConf APAC 2018 - A PostgreSQL DBAs Toolbelt for 2018
PGConf APAC 2018 - A PostgreSQL DBAs Toolbelt for 2018PGConf APAC
 
PGConf APAC 2018 - Patroni: Kubernetes-native PostgreSQL companion
PGConf APAC 2018 - Patroni: Kubernetes-native PostgreSQL companionPGConf APAC 2018 - Patroni: Kubernetes-native PostgreSQL companion
PGConf APAC 2018 - Patroni: Kubernetes-native PostgreSQL companionPGConf APAC
 
PGConf APAC 2018 - High performance json postgre-sql vs. mongodb
PGConf APAC 2018 - High performance json  postgre-sql vs. mongodbPGConf APAC 2018 - High performance json  postgre-sql vs. mongodb
PGConf APAC 2018 - High performance json postgre-sql vs. mongodbPGConf APAC
 
PGConf APAC 2018 - Monitoring PostgreSQL at Scale
PGConf APAC 2018 - Monitoring PostgreSQL at ScalePGConf APAC 2018 - Monitoring PostgreSQL at Scale
PGConf APAC 2018 - Monitoring PostgreSQL at ScalePGConf APAC
 
PGConf APAC 2018 - Where's Waldo - Text Search and Pattern in PostgreSQL
PGConf APAC 2018 - Where's Waldo - Text Search and Pattern in PostgreSQLPGConf APAC 2018 - Where's Waldo - Text Search and Pattern in PostgreSQL
PGConf APAC 2018 - Where's Waldo - Text Search and Pattern in PostgreSQLPGConf APAC
 
PGConf APAC 2018 - Managing replication clusters with repmgr, Barman and PgBo...
PGConf APAC 2018 - Managing replication clusters with repmgr, Barman and PgBo...PGConf APAC 2018 - Managing replication clusters with repmgr, Barman and PgBo...
PGConf APAC 2018 - Managing replication clusters with repmgr, Barman and PgBo...PGConf APAC
 
PGConf APAC 2018 - PostgreSQL HA with Pgpool-II and whats been happening in P...
PGConf APAC 2018 - PostgreSQL HA with Pgpool-II and whats been happening in P...PGConf APAC 2018 - PostgreSQL HA with Pgpool-II and whats been happening in P...
PGConf APAC 2018 - PostgreSQL HA with Pgpool-II and whats been happening in P...PGConf APAC
 
PGConf APAC 2018 - PostgreSQL performance comparison in various clouds
PGConf APAC 2018 - PostgreSQL performance comparison in various cloudsPGConf APAC 2018 - PostgreSQL performance comparison in various clouds
PGConf APAC 2018 - PostgreSQL performance comparison in various cloudsPGConf APAC
 
Sponsored Talk @ PGConf APAC 2018 - Migrating Oracle to EDB Postgres Approach...
Sponsored Talk @ PGConf APAC 2018 - Migrating Oracle to EDB Postgres Approach...Sponsored Talk @ PGConf APAC 2018 - Migrating Oracle to EDB Postgres Approach...
Sponsored Talk @ PGConf APAC 2018 - Migrating Oracle to EDB Postgres Approach...PGConf APAC
 
PGConf APAC 2018 - Tale from Trenches
PGConf APAC 2018 - Tale from TrenchesPGConf APAC 2018 - Tale from Trenches
PGConf APAC 2018 - Tale from TrenchesPGConf APAC
 
PGConf APAC 2018 Keynote: PostgreSQL goes eleven
PGConf APAC 2018 Keynote: PostgreSQL goes elevenPGConf APAC 2018 Keynote: PostgreSQL goes eleven
PGConf APAC 2018 Keynote: PostgreSQL goes elevenPGConf APAC
 
Amazon (AWS) Aurora
Amazon (AWS) AuroraAmazon (AWS) Aurora
Amazon (AWS) AuroraPGConf APAC
 

More from PGConf APAC (17)

PGConf APAC 2018: Sponsored Talk by Fujitsu - The growing mandatory requireme...
PGConf APAC 2018: Sponsored Talk by Fujitsu - The growing mandatory requireme...PGConf APAC 2018: Sponsored Talk by Fujitsu - The growing mandatory requireme...
PGConf APAC 2018: Sponsored Talk by Fujitsu - The growing mandatory requireme...
 
PGConf APAC 2018: PostgreSQL 10 - Replication goes Logical
PGConf APAC 2018: PostgreSQL 10 - Replication goes LogicalPGConf APAC 2018: PostgreSQL 10 - Replication goes Logical
PGConf APAC 2018: PostgreSQL 10 - Replication goes Logical
 
PGConf APAC 2018 - Lightening Talk #3: How To Contribute to PostgreSQL
PGConf APAC 2018 - Lightening Talk #3: How To Contribute to PostgreSQLPGConf APAC 2018 - Lightening Talk #3: How To Contribute to PostgreSQL
PGConf APAC 2018 - Lightening Talk #3: How To Contribute to PostgreSQL
 
PGConf APAC 2018 - Lightening Talk #2 - Centralizing Authorization in PostgreSQL
PGConf APAC 2018 - Lightening Talk #2 - Centralizing Authorization in PostgreSQLPGConf APAC 2018 - Lightening Talk #2 - Centralizing Authorization in PostgreSQL
PGConf APAC 2018 - Lightening Talk #2 - Centralizing Authorization in PostgreSQL
 
Sponsored Talk @ PGConf APAC 2018 - Choosing the right partner in your Postgr...
Sponsored Talk @ PGConf APAC 2018 - Choosing the right partner in your Postgr...Sponsored Talk @ PGConf APAC 2018 - Choosing the right partner in your Postgr...
Sponsored Talk @ PGConf APAC 2018 - Choosing the right partner in your Postgr...
 
PGConf APAC 2018 - A PostgreSQL DBAs Toolbelt for 2018
PGConf APAC 2018 - A PostgreSQL DBAs Toolbelt for 2018PGConf APAC 2018 - A PostgreSQL DBAs Toolbelt for 2018
PGConf APAC 2018 - A PostgreSQL DBAs Toolbelt for 2018
 
PGConf APAC 2018 - Patroni: Kubernetes-native PostgreSQL companion
PGConf APAC 2018 - Patroni: Kubernetes-native PostgreSQL companionPGConf APAC 2018 - Patroni: Kubernetes-native PostgreSQL companion
PGConf APAC 2018 - Patroni: Kubernetes-native PostgreSQL companion
 
PGConf APAC 2018 - High performance json postgre-sql vs. mongodb
PGConf APAC 2018 - High performance json  postgre-sql vs. mongodbPGConf APAC 2018 - High performance json  postgre-sql vs. mongodb
PGConf APAC 2018 - High performance json postgre-sql vs. mongodb
 
PGConf APAC 2018 - Monitoring PostgreSQL at Scale
PGConf APAC 2018 - Monitoring PostgreSQL at ScalePGConf APAC 2018 - Monitoring PostgreSQL at Scale
PGConf APAC 2018 - Monitoring PostgreSQL at Scale
 
PGConf APAC 2018 - Where's Waldo - Text Search and Pattern in PostgreSQL
PGConf APAC 2018 - Where's Waldo - Text Search and Pattern in PostgreSQLPGConf APAC 2018 - Where's Waldo - Text Search and Pattern in PostgreSQL
PGConf APAC 2018 - Where's Waldo - Text Search and Pattern in PostgreSQL
 
PGConf APAC 2018 - Managing replication clusters with repmgr, Barman and PgBo...
PGConf APAC 2018 - Managing replication clusters with repmgr, Barman and PgBo...PGConf APAC 2018 - Managing replication clusters with repmgr, Barman and PgBo...
PGConf APAC 2018 - Managing replication clusters with repmgr, Barman and PgBo...
 
PGConf APAC 2018 - PostgreSQL HA with Pgpool-II and whats been happening in P...
PGConf APAC 2018 - PostgreSQL HA with Pgpool-II and whats been happening in P...PGConf APAC 2018 - PostgreSQL HA with Pgpool-II and whats been happening in P...
PGConf APAC 2018 - PostgreSQL HA with Pgpool-II and whats been happening in P...
 
PGConf APAC 2018 - PostgreSQL performance comparison in various clouds
PGConf APAC 2018 - PostgreSQL performance comparison in various cloudsPGConf APAC 2018 - PostgreSQL performance comparison in various clouds
PGConf APAC 2018 - PostgreSQL performance comparison in various clouds
 
Sponsored Talk @ PGConf APAC 2018 - Migrating Oracle to EDB Postgres Approach...
Sponsored Talk @ PGConf APAC 2018 - Migrating Oracle to EDB Postgres Approach...Sponsored Talk @ PGConf APAC 2018 - Migrating Oracle to EDB Postgres Approach...
Sponsored Talk @ PGConf APAC 2018 - Migrating Oracle to EDB Postgres Approach...
 
PGConf APAC 2018 - Tale from Trenches
PGConf APAC 2018 - Tale from TrenchesPGConf APAC 2018 - Tale from Trenches
PGConf APAC 2018 - Tale from Trenches
 
PGConf APAC 2018 Keynote: PostgreSQL goes eleven
PGConf APAC 2018 Keynote: PostgreSQL goes elevenPGConf APAC 2018 Keynote: PostgreSQL goes eleven
PGConf APAC 2018 Keynote: PostgreSQL goes eleven
 
Amazon (AWS) Aurora
Amazon (AWS) AuroraAmazon (AWS) Aurora
Amazon (AWS) Aurora
 

Recently uploaded

Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024BookNet Canada
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Strongerpanagenda
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...Wes McKinney
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsNathaniel Shimoni
 
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesAssure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesThousandEyes
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality AssuranceInflectra
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Mark Goldstein
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfMounikaPolabathina
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersNicole Novielli
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxLoriGlavin3
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPathCommunity
 
Potential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsPotential of AI (Generative AI) in Business: Learnings and Insights
Potential of AI (Generative AI) in Business: Learnings and InsightsRavi Sanghani
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024TeamStation AI System Report LATAM IT Salaries 2024
TeamStation AI System Report LATAM IT Salaries 2024Lonnie McRorey
 
2024 April Patch Tuesday
2024 April Patch Tuesday2024 April Patch Tuesday
2024 April Patch TuesdayIvanti
 
How to write a Business Continuity Plan
How to write a Business Continuity PlanHow to write a Business Continuity Plan
How to write a Business Continuity PlanDatabarracks
 
Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Long journey of Ruby standard library at RubyConf AU 2024
Long journey of Ruby standard library at RubyConf AU 2024Hiroshi SHIBATA
 
Decarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityDecarbonising Buildings: Making a net-zero built environment a reality
Decarbonising Buildings: Making a net-zero built environment a realityIES VE
 

Recently uploaded (20)

Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
Transcript: New from BookNet Canada for 2024: Loan Stars - Tech Forum 2024
 
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better StrongerModern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
Modern Roaming for Notes and Nomad – Cheaper Faster Better Stronger
 
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
The Future Roadmap for the Composable Data Stack - Wes McKinney - Data Counci...
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
Time Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directionsTime Series Foundation Models - current state and future directions
Time Series Foundation Models - current state and future directions
 
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyesAssure Ecommerce and Retail Operations Uptime with ThousandEyes
Assure Ecommerce and Retail Operations Uptime with ThousandEyes
 
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance[Webinar] SpiraTest - Setting New Standards in Quality Assurance
[Webinar] SpiraTest - Setting New Standards in Quality Assurance
 
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
Arizona Broadband Policy Past, Present, and Future Presentation 3/25/24
 
What is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdfWhat is DBT - The Ultimate Data Build Tool.pdf
What is DBT - The Ultimate Data Build Tool.pdf
 
A Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software DevelopersA Journey Into the Emotions of Software Developers
A Journey Into the Emotions of Software Developers
 
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptxUse of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
Use of FIDO in the Payments and Identity Landscape: FIDO Paris Seminar.pptx
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
UiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to HeroUiPath Community: Communication Mining from Zero to Hero
UiPath Community: Communication Mining from Zero to Hero
 
Potential of AI (Generative AI) in Business: Learnings and Insights
a similar pattern across different executions. Some of the most popular candidates are:
 Schema (Relation)
 Procedure
 Query
 Algebraic expression
Note: We will target only Schema in this presentation.
What to Native Compile?
Properties of each relation:
1. The number of attributes, their lengths and data-types are fixed.
2. Whatever the data, it is stored in the same pattern.
3. Each attribute is accessed in the same pattern.
Disadvantages of the current approach, for each tuple access:
1. Loops over each attribute.
2. The properties of all attributes are checked to take many decisions.
3. Executes many unwanted instructions.
Schema binding
We can overcome these disadvantages by natively compiling the relation, based on its properties, to generate specialized code for every schema access function.
Schema Binding = Native Compilation of Relation
Benefits:
1. Each attribute access gets flattened.
2. All attribute-property decisions are taken during code generation.
3. No decision making at run time.
4. Reduced CPU instructions.
Schema binding Contd…
Schema binding Contd…
CREATE TABLE  Automatic code generation  C  DLL  Load all functions; SQL QUERY  Compiled functions
Once a CREATE TABLE command is issued, a C file with all the specialized access functions is generated, which in turn gets compiled and loaded as a DLL. These loaded functions are then used by every SQL query accessing the compiled table.
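One way to picture the CREATE TABLE hook is sketched below; generate_schema_bound_file and the emitted file name are hypothetical, and a real generator would emit complete function bodies rather than the placeholder comments shown here:
    /* Hypothetical generator, called from CREATE TABLE processing
     * (assumes the usual backend headers, e.g. postgres.h, utils/rel.h). */
    static void
    generate_schema_bound_file(Relation rel)
    {
        char  path[MAXPGPATH];
        FILE *f;
        int   attno;

        snprintf(path, sizeof(path), "schemabind_%s_%u.c",
                 RelationGetRelationName(rel), RelationGetRelid(rel));
        f = fopen(path, "w");

        /* One flat access statement per attribute: type, length and
         * alignment are all known here, so no run-time checks remain. */
        for (attno = 0; attno < rel->rd_att->natts; attno++)
        {
            Form_pg_attribute att = rel->rd_att->attrs[attno];

            if (att->attlen > 0)    /* fixed-length column */
                fprintf(f, "values[%d] = /* direct fetch at fixed offset */;\n", attno);
            else                    /* variable-length column */
                fprintf(f, "values[%d] = /* fetch through stored offset */;\n", attno);
        }
        fclose(f);
    }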
Schema binding Contd…
This shows the overall interaction with a schema-bound table. Any query issued from a client can use either the schema-bound functions or the normal functions, depending on the underlying table.
Schema: create table tbl (id1 int, id2 float, id3 varchar(10), id4 bool);
Schema binding: Example
Fields id1 and id2 are always stored at the same offset and with the same alignment; nothing changes at run time. Only a variable-length attribute, and the attributes following it, have variable offsets.
Schema binding: Example
Access using the current approach (each line here is a macro that performs multiple condition checks to decide the action):
    if (thisatt->attlen != -1)
    {
        off = att_align_nominal(off, thisatt->attalign);
        values[1] = fetchatt(thisatt, tp + off);
        off = att_addlength_pointer(off, thisatt->attlen, tp + off);
    }
Access using specialized code (see details in the further slides):
Method-1:
    values[1] = ((struct tbl_xxx *) tp)->id2;
Method-2:
    offset = DOUBLEALIGN(offset);
    values[1] = *((Datum *) (tp + offset));
    offset += 8;
Conclusion: the specialized code uses far fewer instructions than the generalized code, and hence performs better.
The solution can be categorized as:
1  Opting for schema binding.
2  Functions to be customized.
3  Customized function generation.
4  Loading of customized functions.
5  Invocation of customized functions.
6  How to generate the dynamic library.
Schema Binding Solution
CREATE [ [ GLOBAL | LOCAL ] { TEMPORARY | TEMP } | UNLOGGED ] TABLE [ IF NOT EXISTS ] table …[ TABLESPACE tablespace_name ] [SCHEMA_BOUNDED]
SCHEMA_BOUNDED is a new option to CREATE TABLE to opt in to code specialization.
Solution: Opting for schema bind
Function Name (xxx  relname_relid) | Purpose
heap_compute_data_size_xxx | Calculate the size of the data part of the tuple
heap_fill_tuple_xxx | Fill the tuple with the data
heap_deform_tuple_xxx | Deform the heap tuple
slot_deform_tuple_xxx | Deform the tuple at the end of a scan to project attributes
nocachegetattr_xxx | Get one attribute value from the tuple (vacuum case)
Solution: Functions to be customized
The customized functions for tuple access of a table can be generated in three ways:
Method-1  With tuple format change.
Method-2  Without changing the tuple format.
Method-3  Re-organize table columns internally so that all fixed-length and all variable-length attributes are in sequence.
Solution: Function Generation
A structure corresponding to the relation is created in such a way that each attribute's value/offset can be referenced directly by typecasting the data buffer to the structure. E.g. consider our earlier example table:
create table tbl (id1 int, id2 float, id3 varchar(10), id4 bool);
    typedef struct schemaBindTbl_xxx
    {
        int   id1;
        float id2;
        short id3_offset;
        bool  id4;
        /* Actual data for variable-size columns follows */
    } SchemaBindTbl_xxxx;
Structure members id1, id2 and id4 hold the actual column values, whereas id3_offset stores the offset of column id3, since at CREATE TABLE time the size of the value to be stored is not known. The end of this structure's buffer holds the data for the variable-size columns, which are accessed through their stored offsets.
Solution: Function Generation-Method-1
Solution: Function Generation-Method-1 Contd…
Existing tuple format: all attribute values are stored in sequence.
New tuple format: the values of fixed-length attributes, but only the offsets of variable-length attributes, are stored in sequence. A structure typecast therefore yields either the value itself or the offset of the value.
Using this structure, tuple data can be stored as follows.
Fixed-size data-type storage:
    ((SchemaBindTbl_xxxx *) data)->id1 = DatumGetXXX(values[attno]);
Variable-size data-type storage:
    ((SchemaBindTbl_xxxx *) data)->id3_offset = data_offset;
    data_length = SIZE((char *) values[attno]);
    SET_VARSIZE_SHORT(data + data_offset, data_length);
    memcpy(data + data_offset + 1, VARDATA((char *) values[attno]),
           data_length - 1);
    data_offset += data_length;
Using this approach, the heap_fill_tuple function can be generated during CREATE TABLE.
Solution: Function Generation-Method-1 Contd…
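Put together for the example table, the generated fill function might look like the sketch below (the name heap_fill_tuple_tbl_xxx follows the slide's naming convention; SIZE() is the slide's shorthand for the stored length, and null handling is omitted):
    /* Sketch of the code generated at CREATE TABLE time for
     * tbl (id1 int, id2 float, id3 varchar(10), id4 bool):
     * straight-line stores, no per-attribute loop, no type checks. */
    static void
    heap_fill_tuple_tbl_xxx(char *data, Datum *values)
    {
        Size data_offset = sizeof(SchemaBindTbl_xxxx); /* varlena area */
        Size data_length;

        /* Fixed-length columns: direct stores into the struct */
        ((SchemaBindTbl_xxxx *) data)->id1 = DatumGetInt32(values[0]);
        ((SchemaBindTbl_xxxx *) data)->id2 = DatumGetFloat8(values[1]);
        ((SchemaBindTbl_xxxx *) data)->id4 = DatumGetBool(values[3]);

        /* id3: offset recorded in the fixed part, bytes appended after */
        ((SchemaBindTbl_xxxx *) data)->id3_offset = data_offset;
        data_length = SIZE((char *) values[2]);
        SET_VARSIZE_SHORT(data + data_offset, data_length);
        memcpy(data + data_offset + 1, VARDATA((char *) values[2]),
               data_length - 1);
    }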
Similarly, each attribute value can be fetched from the tuple as follows.
Fixed-size data-type access:
    values[attno] = ((SchemaBindTbl_xxxx *) data)->id1;
Variable-size data-type access:
    data_offset = ((SchemaBindTbl_xxxx *) data)->id3_offset;
    values[attno] = PointerGetDatum((char *) (tp + data_offset));
Using this approach, all functions related to tuple deformation (i.e. heap_deform_tuple, slot_deform_tuple and nocachegetattr) can be generated during CREATE TABLE.
Solution: Function Generation-Method-1 Contd…
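And the matching generated deform function, under the same assumptions (hypothetical name, null handling omitted):
    /* Sketch of the generated specialization of slot_deform_tuple
     * for the same table: every column is one flat load. */
    static void
    slot_deform_tuple_tbl_xxx(char *tp, Datum *values)
    {
        Size data_offset;

        values[0] = Int32GetDatum(((SchemaBindTbl_xxxx *) tp)->id1);
        values[1] = Float8GetDatum(((SchemaBindTbl_xxxx *) tp)->id2);
        values[3] = BoolGetDatum(((SchemaBindTbl_xxxx *) tp)->id4);

        /* id3: indirect load through the stored offset */
        data_offset = ((SchemaBindTbl_xxxx *) tp)->id3_offset;
        values[2] = PointerGetDatum((char *) (tp + data_offset));
    }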
Advantages:
1. No dependency on previous attributes.
2. Any attribute value can be accessed directly.
3. Attribute access is very efficient, taking very few instructions.
Disadvantage:
1. The tuple size increases, leading to more memory consumption.
Solution: Function Generation-Method-1 Contd…
This method generates the customized functions without changing the tuple format. The approach uses slight variations of the existing macros:
 fetch_att
 att_addlength_pointer
 att_align_nominal
 att_align_pointer
These macros take many decisions based on the data-type and size of each attribute, all of which are fixed for a given relation. So instead of evaluating these macros for every tuple of a relation at run time, they are evaluated once, during table schema definition, to generate all the customized functions.
Solution: Function Generation-Method-2
With this mechanism, the generated code for accessing a float attribute looks like:
    offset = DOUBLEALIGN(offset);            /* alignment check skipped */
    values[1] = *((Datum *) (tp + offset));  /* datum size check skipped */
    offset += 8;                             /* attribute length check skipped */
Access code for attributes of all other data-types can be generated similarly. Using combinations of the other macros, customized code for all the other tuple access functions can be generated.
Solution: Function Generation-Method-2 Contd…
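Extended to the whole example table, the Method-2 generated sequence could look like this sketch (only the variable-length column still needs a run-time length read; the attribute numbering is hypothetical):
    /* Sketch of Method-2 generated code for
     * tbl (id1 int, id2 float, id3 varchar(10), id4 bool). */
    Size offset = 0;

    values[0] = Int32GetDatum(*((int32 *) (tp + offset)));  /* id1 */
    offset += 4;                        /* length known: int is 4 bytes */

    offset = DOUBLEALIGN(offset);       /* id2: float8, 8-byte aligned */
    values[1] = *((Datum *) (tp + offset));
    offset += 8;

    values[2] = PointerGetDatum(tp + offset);  /* id3: varlena */
    offset += VARSIZE_ANY(tp + offset); /* the only run-time length read */

    values[3] = BoolGetDatum(*((bool *) (tp + offset)));    /* id4 */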
Advantages:
1. Existing, tested macros are used, so it is very safe.
2. No change in tuple format or size.
3. Reduces the overall number of instructions by a huge margin.
Disadvantage:
1. Dependency on the previous attribute in case it is variable length.
Solution: Function Generation-Method-2 Contd…
This method is intended to combine the advantages of the previous methods, i.e.:
 Minimal attribute dependencies: all fixed-length attributes are grouped together at the front of the column list, followed by all variable-length columns, so every fixed-length attribute can be accessed directly. The change in column order is done during creation of the table itself.
 No change in tuple size, so tuple access remains very efficient: to achieve this, Method-2 is used to generate the specialized code.
Solution: Function Generation-Method-3
E.g. consider our earlier example:
create table tbl (id1 int, id2 float, id3 varchar(10), id4 bool);
Solution: Function Generation-Method-3 Contd…
While creating the table, id1, id2 and id4 become the first three columns, followed by id3. The access code can therefore be generated directly during schema definition, without depending on any run-time parameter, because every attribute offset is fixed except those of the variable-length attributes. If there are more variable-length attributes, they are stored after id3, and finding their exact offsets requires knowing the lengths of the preceding variable-length columns.
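With the reordered column list (id1, id2, id4, id3), the fixed-length offsets become constants known at generation time; a sketch, with the offsets worked out by hand for this assumed layout:
    /* Assumed reordered layout: id1 at offset 0 (4 bytes), id2 at 8
     * (after 8-byte alignment), id4 at 16 (1 byte), varlena id3 from 17. */
    values[0] = Int32GetDatum(*((int32 *) (tp + 0)));   /* id1 */
    values[1] = *((Datum *) (tp + 8));                  /* id2 */
    values[3] = BoolGetDatum(*((bool *) (tp + 16)));    /* id4 */
    values[2] = PointerGetDatum(tp + 17);               /* id3 */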
Advantages:
1. Existing, tested macros are used, so it is very safe.
2. No change in tuple format or size.
3. Reduces the overall number of instructions by a huge margin.
Disadvantage:
1. There is still a dependency among multiple variable-length attributes (if any).
Solution: Function Generation-Method-3 Contd…
Once the code corresponding to each access function is generated, it is written into a C file, which in turn is compiled into a dynamic linked library, and this library is loaded with the server executable. Any function of the library can then be invoked directly from the server executable.
Solution: Loading of customized functions
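A minimal sketch of the compile-and-load step using plain dlopen/dlsym; the file naming and the helper are hypothetical, and a real implementation could equally use PostgreSQL's own dynamic-loader machinery:
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>

    typedef void (*deform_fn) (char *tp, Datum *values);

    /* Compile the generated C file into a shared library and resolve
     * one specialized function out of it (error handling abbreviated). */
    static deform_fn
    load_schema_bound_deform(const char *relname)
    {
        char  cmd[1024], lib[256], sym[256];
        void *handle;

        snprintf(lib, sizeof(lib), "./schemabind_%s.so", relname);
        snprintf(cmd, sizeof(cmd),
                 "gcc -O2 -shared -fPIC -o %s schemabind_%s.c",
                 lib, relname);
        if (system(cmd) != 0)      /* an LLVM-based JIT avoids this step */
            return NULL;

        handle = dlopen(lib, RTLD_NOW); /* stays loaded with the server */
        if (handle == NULL)
            return NULL;

        snprintf(sym, sizeof(sym), "slot_deform_tuple_%s", relname);
        return (deform_fn) dlsym(handle, sym);
    }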
The generated C file has to be compiled into a dynamic library, which can be done using:
1. LLVM: compilation using LLVM is very fast.
2. GCC: GCC is the standard way of compiling a C file, but it is slow compared to LLVM.
Solution: How to generate dynamic library
While forming a tuple, the relation option schema_bound is checked to decide whether to call the customized functions for this relation or the standard generalized functions. Also, in the tuple flag t_infomask2, HEAP_SCHEMA_BIND_TUPLE (with value 0x1800) is set to mark the tuple as schema bound.
Solution: Invocation of storage customized functions
While accessing a tuple, the header's t_infomask2 flag is checked: if HEAP_SCHEMA_BIND_TUPLE is set, the customized functions for this relation are called; otherwise the standard generalized functions are used.
Solution: Invocation of access customized functions
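Sketched together, the two checks look like this (the flag value is from the slide; rel_is_schema_bound and the schema_bound_* lookups are hypothetical helpers standing in for the generated-function dispatch):
    #define HEAP_SCHEMA_BIND_TUPLE  0x1800  /* t_infomask2 bit(s), per slide */

    /* Storage side: mark the tuple and use the generated fill function. */
    if (rel_is_schema_bound(rel))
    {
        tuple->t_data->t_infomask2 |= HEAP_SCHEMA_BIND_TUPLE;
        schema_bound_fill(rel, data, values);        /* generated */
    }
    else
        heap_fill_tuple(tupleDesc, values, isnull, data,
                        data_len, &infomask, bits);  /* generic */

    /* Access side: dispatch on the tuple header flag. */
    if (tuple->t_data->t_infomask2 & HEAP_SCHEMA_BIND_TUPLE)
        schema_bound_deform(rel, tp, values);        /* generated */
    else
        slot_deform_tuple(slot, natts);              /* generic */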
Performance (TPC-H):
System configuration: SUSE Linux Enterprise Server 11 (x86_64), 2 Cores, 10 sockets per core.
TPC-H configuration: default. Queries 1, 2 and 17 are not shown in the chart to keep it readable.
[Chart: TPC-H execution time (ms), Original vs SchemaBind, Query-3 through Query-19]
TPC-H Query | Improvement (%)
Query-1 | 2%
Query-2 | 36%
Query-3 | 14%
Query-4 | 13%
Query-5 | 2%
Query-6 | 21%
Query-7 | 16%
Query-8 | 5%
Query-9 | 6%
Query-10 | 9%
Query-11 | 3%
Query-12 | 17%
Query-13 | 3%
Query-14 | 20%
Query-15 | 20%
Query-16 | 4%
Query-17 | 25%
Query-18 | 9%
Query-19 | 24%
Performance (Hash Join):
[Charts: instruction reduction for slot_deform_tuple and overall, and latency in ms, SchemaBind vs Original]
Latency improvement: 23%
Overall instruction reduction: 30%
Access method instruction reduction: 89%
Outer table: 10 columns, cardinality 1M. Inner table: 2 columns, cardinality 1K.
Query: select sum(tbl.id10) from tbl, tbl2 where tbl.id10 = tbl2.id2 group by tbl.id9;
Schema binding mainly depends on code specialization of the table access functions. The number of instructions per call of slot_deform_tuple is reduced by more than 70%; hence, whenever this function accounts for a good percentage of the total instructions, e.g. in:
 aggregate queries,
 grouping,
 joins,
 queries projecting multiple attributes,
all on huge tables, the overall instruction reduction is also huge, and performance improves correspondingly.
Performance Scenario:
Procedure Compilation
The first diagram highlights at which step, and how, a procedure is compiled. Once parsing of the procedure is done, all information about it is available through the PLpgSQL_function pointer, which we can traverse just as the planner does, generating the corresponding C code for each statement. The second diagram explains how the compiled function is invoked.
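A sketch of such a traversal over the parsed procedure, using the real PLpgSQL_function/PLpgSQL_stmt structures but a hypothetical emitter that writes placeholder comments instead of full C bodies:
    #include "plpgsql.h"

    /* Hypothetical generator: walk the parsed statement tree the same
     * way the executor does, emitting equivalent C per statement type. */
    static void
    emit_block(PLpgSQL_stmt_block *block, FILE *out)
    {
        ListCell *lc;

        foreach(lc, block->body)
        {
            PLpgSQL_stmt *stmt = (PLpgSQL_stmt *) lfirst(lc);

            switch (stmt->cmd_type)
            {
                case PLPGSQL_STMT_ASSIGN:
                    fprintf(out, "/* emit C for an assignment */\n");
                    break;
                case PLPGSQL_STMT_IF:
                    fprintf(out, "/* emit C if/else, recurse into arms */\n");
                    break;
                default:
                    fprintf(out, "/* fall back to interpreted execution */\n");
                    break;
            }
        }
    }

    static void
    compile_procedure(PLpgSQL_function *func, FILE *out)
    {
        emit_block(func->action, out);  /* top-level block of the procedure */
    }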
Performance (TPC-C):
System configuration: SUSE Linux Enterprise Server 11 (x86_64), 2 Cores, 10 sockets per core.
TPC-C configuration: runMins=0, runTxnsPerTerminal=200000, checkpoint_segments=100.
[Chart: tpmC, Original vs Compiled, for New_order and all procedures]
Reading (tpmC) | New_order | All procedures
Original | 22334 | 49606
Compiled | 28973 | 64580
Improvement | 23% | 23%
With some basic compilation of procedures, we are able to get around 23% performance improvement.
Following the industry trend, we have implemented two kinds of specialization, which resulted in up to 36% and 23% performance improvement on the standard benchmarks TPC-H and TPC-C respectively. This technology aligns us with the current business trend of tackling the CPU bottleneck, and could be one of the hot areas of work for PostgreSQL.
Conclusion
1. Zhang, Rui, Saumya Debray, and Richard T. Snodgrass. "Micro-specialization: dynamic code specialization of database management systems." Proceedings of the Tenth International Symposium on Code Generation and Optimization. ACM, 2012. http://dl.acm.org/citation.cfm?id=2259025
2. Freedman, Craig, Erik Ismert, and Per-Åke Larson. "Compilation in the Microsoft SQL Server Hekaton Engine." IEEE Data Eng. Bull. 37.1 (2014): 22-30. ftp://131.107.65.22/pub/debull/A14mar/p22.pdf
Reference
[Diagram: storage hierarchy shift, from tape/disk/DRAM to disk/DRAM/cache]
"Disk is the new tape; Memory is the new disk." -- Jim Gray
PostgreSQL on Big RAM
Source: ICDE Conference