Load Data Fast!
BILL KARWIN
PERCONA LIVE OPEN SOURCE DATABASE CONFERENCE 2017
Bill Karwin
Software developer, consultant, trainer
Using MySQL since 2000
Senior Database Architect at SchoolMessenger
SQL Antipatterns: Avoiding the Pitfalls of Database
Programming
https://pragprog.com/titles/bksqla/sql-antipatterns
Oracle ACE Director
Load Data Fast!
Common chores
§ Dump and restore
§ Import third-party data
§ Extract, Transform, Load (ETL)
§ Test data that needs to be reloaded
repeatedly
https://commons.wikimedia.org/wiki/File:Kitten_with_laptop_-_278017185.jpg
Is it done yet?
How to Speed This Up?
1. Query Solutions
2. Schema Solutions
3. Configuration Solutions
4. Parallel Execution Solutions
Example Table
CREATE TABLE TestTable (
id INT UNSIGNED NOT NULL PRIMARY KEY,
intCol INT UNSIGNED DEFAULT NULL,
stringCol VARCHAR(100) DEFAULT NULL,
textCol TEXT
) ENGINE=InnoDB;
Let’s load 1 million rows!
Best Case Performance
Run a test script that loops over 1 million rows without inserting into a database.
$ php test-bulk-insert.php --total-rows 1000000 --noop
Its speed is the upper bound for any subsequent test.
Time: 2 seconds (00:00:02)
1000000 rows = 432435.24 rows/sec
1000000 stmt = 432435.24 stmt/sec
1000000 txns = 432435.24 txns/sec
1000000 conn = 432435.24 conn/sec
Worst Case Performance
INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES
(?, ?, ?, ?);
Run a test script that connects, executes one INSERT, commits, and disconnects for every row.
$ php test-bulk-insert.php --total-rows 10000
Time: 34 seconds (00:00:34)
10000 rows = 290.29 rows/sec
10000 stmt = 290.29 stmt/sec
10000 txns = 290.29 txns/sec
10000 conn = 290.29 conn/sec
Inserting One Row: Overhead
https://dev.mysql.com/doc/refman/8.0/en/insert-optimization.html
[Chart: relative cost per one-row INSERT, per the MySQL manual: connecting (3), sending query (2), parsing (2), inserting row (1 × row size), closing query (1)]
Query Solutions
Inserting One Row at a Time
INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES
(?, ?, ?, ?);
Run a test script that executes one INSERT and one commit per row, reusing a single connection.
$ php test-bulk-insert.php --total-rows 1000000 \
  --txns-per-conn 1000000
Time: 527 seconds (00:08:47)
1000000 rows = 1894.67 rows/sec
1000000 stmt = 1894.67 stmt/sec
1000000 txns = 1894.67 txns/sec
1 conn = 0.00 conn/sec
Inserting One Row: Overhead
[Chart: with the connection reused, the remaining per-row cost: sending query, parsing, inserting row, closing query]
Inserting Multiple Rows
INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?),
(?, ?, ?, ?);
Q: How many rows can you insert in one statement?
A: As many as fit in max_allowed_packet bytes.
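A sketch of checking and raising that limit (the 64M value is an example, not a recommendation; clients must reconnect to pick up a new global value):
SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;  -- 64 MB; requires admin privileges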
Inserting Multiple Rows: Overhead
[Chart: a multi-row INSERT pays sending/parsing/closing once per statement, while the inserting-row cost scales with the number of rows]
Inserting Multiple Rows: Results
$ php test-bulk-insert.php --total-rows 1000000 \
  --rows-per-stmt 100 --txns-per-conn 10000
Time: 85 seconds (00:01:25)
1000000 rows = 11680.98 rows/sec
10000 stmt = 116.81 stmt/sec
10000 txns = 116.81 txns/sec
1 conn = 0.01 conn/sec
Transactions
START TRANSACTION;
INSERT INTO TestTable …
INSERT INTO TestTable …
INSERT INTO TestTable …
INSERT INTO TestTable …
INSERT INTO TestTable …
INSERT INTO TestTable …
COMMIT;
Q: How many statements can you do in one transaction?
A: In theory this is constrained by undo log segments, but it's a lot.
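A minimal sketch of the same batching using the autocommit flag, assuming autocommit is on by default:
SET autocommit = 0;
INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES (1, 1, 'a', 'b');
INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES (2, 2, 'c', 'd');
-- ... more INSERTs ...
COMMIT;
SET autocommit = 1;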
Transactions: Results
$ php test-bulk-insert.php --total-rows 1000000 \
  --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100
Time: 63 seconds (00:01:03)
1000000 rows = 15744.53 rows/sec
10000 stmt = 157.45 stmt/sec
100 txns = 1.57 txns/sec
1 conn = 0.02 conn/sec
Inserting with Prepared Queries
START TRANSACTION;
PREPARE ins FROM 'INSERT INTO TestTable …';
EXECUTE ins …;
EXECUTE ins …;
EXECUTE ins …;
EXECUTE ins …;
COMMIT;
Q: How many times can you execute a given prepared statement?
A: There is no limit, as far as I can tell.
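For reference, a sketch of MySQL's server-side prepared statement syntax (client libraries such as PDO expose the same mechanism through their own APIs):
PREPARE ins FROM 'INSERT INTO TestTable (id, intCol, stringCol, textCol) VALUES (?, ?, ?, ?)';
SET @id = 1, @int = 42, @str = 'abc', @txt = 'xyz';
EXECUTE ins USING @id, @int, @str, @txt;
-- ... repeat EXECUTE with new variable values ...
DEALLOCATE PREPARE ins;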
Prepared Queries: Overhead
[Chart: a prepared statement is parsed once; each EXECUTE repeats only the inserting-row cost before the query is closed]
Prepared Queries: Results
$ php test-bulk-insert.php --total-rows 1000000 \
  --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100
$ php test-bulk-insert.php --total-rows 1000000 \
  --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100 \
  --emulate-prepares
With --emulate-prepares:
Time: 95 seconds (00:01:35)
1000000 rows = 10518.97 rows/sec
With server-side prepares:
Time: 63 seconds (00:01:03)
1000000 rows = 15744.53 rows/sec
Load Data Infile
mysql> LOAD DATA LOCAL INFILE 'TestTable.csv'
INTO TABLE TestTable;
https://dev.mysql.com/doc/refman/8.0/en/load-data.html
Flat-file data load in a single transaction.
Works with replication.
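The bare statement above relies on MySQL's default tab-separated format; for a comma-separated file you typically spell out the delimiters and column list (a sketch):
LOAD DATA LOCAL INFILE 'TestTable.csv'
INTO TABLE TestTable
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
(id, intCol, stringCol, textCol);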
Overhead: Load Data Infile
[Chart: LOAD DATA INFILE is a single statement, so nearly all the time goes to the load itself; query send/parse/close overhead is negligible]
Load Data in File: Results
$ php test-bulk-insert.php --total-rows 1000000 --load-data
Time: 25 seconds (00:00:25)
1000000 rows = 39563.53 rows/sec
1 stmt = 0.04 stmt/sec
1 txns = 0.04 txns/sec
1 conn = 0.04 conn/sec
Load XML in File: Results
LOAD XML LOCAL INFILE 'TestTable.xml'
INTO TABLE TestTable;
https://dev.mysql.com/doc/refman/8.0/en/load-xml.html
$ php test-bulk-insert.php --total-rows 1000000 --load-xml
Time: 77 seconds (00:01:17)
1000000 rows = 12858.16 rows/sec
1 stmt = 0.01 stmt/sec
1 txns = 0.01 txns/sec
1 conn = 0.01 conn/sec
What about Load JSON in File?
Sorry, the hypothetical LOAD JSON INFILE is not supported by MySQL yet.
😭
But it has been proposed as a feature request:
https://bugs.mysql.com/bug.php?id=79209
Go vote for it!
Or better yet, implement it and contribute a patch!
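One workaround today, sketched under the assumption of newline-delimited JSON and MySQL 5.7+: stage each line into a JSON column, then extract fields with JSON operators.
CREATE TABLE JsonStage (doc JSON);
LOAD DATA LOCAL INFILE 'TestTable.json'
INTO TABLE JsonStage
LINES TERMINATED BY '\n' (doc);
INSERT INTO TestTable (id, intCol, stringCol, textCol)
SELECT doc->>'$.id', doc->>'$.intCol', doc->>'$.stringCol', doc->>'$.textCol'
FROM JsonStage;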
Schema Solutions
Indexes
How much overhead for one index? Two indexes?
1. mysql> ALTER TABLE TestTable ADD INDEX (intCol);
2. mysql> ALTER TABLE TestTable ADD INDEX (stringCol);
Indexes: Overhead
[Chart: each secondary index adds an inserting-indexes cost on top of sending query, parsing, inserting row, closing query]
Indexes: Results
$ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 \
  --stmts-per-txn 100 --txns-per-conn 100
$ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 \
  --stmts-per-txn 100 --txns-per-conn 100 --indexes 1
$ php test-bulk-insert.php --total-rows 1000000 --rows-per-stmt 100 \
  --stmts-per-txn 100 --txns-per-conn 100 --indexes 2
No secondary indexes: Time: 63 seconds (00:01:03)
1000000 rows = 15744.53 rows/sec
One index: Time: 71 seconds (00:01:11)
1000000 rows = 13993.81 rows/sec
Two indexes: Time: 95 seconds (00:01:35)
1000000 rows = 10473.64 rows/sec
Index Deferral
What if we insert with no indexes, and build the indexes at the end?
§ This is what Percona Server's mysqldump --innodb-optimize-keys does.
§ Load time is the same as with no indexes:
Time: 63 seconds (00:01:03)
1000000 rows = 15744.53 rows/sec
Then create the indexes after the data load. This reduces the effective rate of rows/second:
mysql> ALTER TABLE TestTable ADD INDEX (intCol);
Query OK, 0 rows affected (7.02 sec)
mysql> ALTER TABLE TestTable ADD INDEX (stringCol);
Query OK, 0 rows affected (8.54 sec)
Time: 63 + 7 + 8.5 seconds (00:01:19)
1000000 rows = 12738.85 rows/sec (effective data load rate)
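If the indexes already exist, the same idea works in reverse: drop them before the load and rebuild afterward (a sketch; MySQL names an unnamed index after its first column, and combining both ADD INDEX clauses in one ALTER lets InnoDB build them together):
ALTER TABLE TestTable DROP INDEX intCol, DROP INDEX stringCol;
-- ... bulk load ...
ALTER TABLE TestTable ADD INDEX (intCol), ADD INDEX (stringCol);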
Triggers
How much overhead for a trigger?
mysql> CREATE TRIGGER TestTrigger
BEFORE INSERT ON TestTable
FOR EACH ROW
SET NEW.stringCol = UPPER(NEW.stringCol);
This is a very simple trigger. If you have more complex code, like subordinate
INSERT statements, the cost will be higher.
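If the trigger's effect can be reproduced in bulk, one hedged alternative is to drop it for the load and backfill afterward:
DROP TRIGGER IF EXISTS TestTrigger;
-- ... bulk load ...
UPDATE TestTable SET stringCol = UPPER(stringCol);
-- then re-run the CREATE TRIGGER statement for normal operation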
Triggers: Results
$ php test-bulk-insert.php --total-rows 1000000 \
  --rows-per-stmt 100 --stmts-per-txn 100 --txns-per-conn 100 \
  --trigger
Time: 69 seconds (00:01:09)
1000000 rows = 14296.91 rows/sec
10000 stmt = 142.97 stmt/sec
100 txns = 1.43 txns/sec
1 conn = 0.01 conn/sec
CSV Storage Engine
mysql> CREATE TABLE TestTable (
id INT UNSIGNED NOT NULL,
intCol INT UNSIGNED NOT NULL,
stringCol VARCHAR(100) NOT NULL,
textCol TEXT NOT NULL
) ENGINE=CSV;
# ls -l /usr/local/mysql/data/test
total 24
-rw-r----- 1 _mysql _mysql 5824 Apr 22 20:10 TestTable_429.SDI
-rw-r----- 1 _mysql _mysql 35 Apr 22 20:10 testtable.CSM
-rw-r----- 1 _mysql _mysql 0 Apr 22 20:10 testtable.CSV
CSV Storage Engine
Copy the CSV file into the datadir:
# time cp data.csv /usr/local/mysql/data/test/testtable.CSV
real 0m8.359s
# ls -l /usr/local/mysql/data/test/
total 6350872
-rw-r----- 1 _mysql _mysql 5824 Apr 22 20:18 TestTable_431.SDI
-rw-r----- 1 _mysql _mysql 35 Apr 22 20:18 testtable.CSM
-rw-r----- 1 _mysql _mysql 3251630334 Apr 22 20:19 testtable.CSV
Time: 8.359 seconds (00:00:08)
1000000 rows = 119631.53 rows/sec
CSV into InnoDB Storage Engine
Use the CSV storage engine for the load, then alter the table to InnoDB (and add a primary key):
ALTER TABLE TestTable ADD PRIMARY KEY (id), ENGINE=InnoDB;
Query OK, 1000000 rows affected (1 min 37.73 sec)
Time: 8.359 + 97.73 seconds (00:01:46)
1000000 rows = 9426.05 rows/sec (effective data load rate)
Partitioning
Transportable Tablespaces
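Transportable tablespaces copy a ready-made .ibd file between servers instead of re-inserting rows; a minimal sketch of the workflow (MySQL 5.6+):
-- On the source server:
FLUSH TABLES TestTable FOR EXPORT;
-- copy TestTable.ibd and TestTable.cfg out of the datadir, then:
UNLOCK TABLES;
-- On the destination server, with an identical empty table:
ALTER TABLE TestTable DISCARD TABLESPACE;
-- copy the files into the destination datadir, then:
ALTER TABLE TestTable IMPORT TABLESPACE;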
Configuration Solutions
Increase Buffering, Decrease Durability
innodb_buffer_pool_size = 4G
(default 128M)
innodb_log_buffer_size = 1G
(default 16M)
innodb_log_file_size = 4G
(default 48M)
innodb_flush_log_at_trx_commit = 0
(default 1)
# log-bin = mysql-bin (binary logging disabled)
Time: 56 seconds (00:00:56)
1000000 rows = 17697.29 rows/sec
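Several of these can be applied at runtime instead of in my.cnf (a sketch; values are examples, and innodb_log_file_size is not dynamic, so it needs a restart):
SET GLOBAL innodb_flush_log_at_trx_commit = 0;  -- dynamic
SET GLOBAL innodb_buffer_pool_size = 4 * 1024 * 1024 * 1024;  -- dynamic in MySQL 5.7+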
Increase Buffering, Decrease Durability
Same, but at least flush the log buffer:
innodb_flush_log_at_trx_commit = 2
(default 1)
Time: 60 seconds (00:01:00)
1000000 rows = 16564.26 rows/sec
Tuning + Load Data
$ php test-bulk-insert.php --total-rows 1000000 --load-data
Time: 22 seconds (00:00:22)
1000000 rows = 43873.50 rows/sec
Config for More Buffering
innodb_buffer_pool_size = 4G (default 128M)
Time: 82 seconds (00:01:22)
1000000 rows = 12161.69 rows/sec
innodb_change_buffering = none (default all)
innodb_log_buffer_size = 1G (default 16M)
Time: 81 seconds (00:01:21)
1000000 rows = 12291.17 rows/sec
binlog_cache_size = 256K (default 32K)
Config for Greater Throughput
innodb_log_file_size = 4G (default 48M)
Time: 80 seconds (00:01:20)
1000000 rows = 12488.30 rows/sec
innodb_io_capacity = 2000 (default 200)
Time: 80 seconds (00:01:20)
1000000 rows = 12432.38 rows/sec
innodb_lru_scan_depth = 8192 (default 1024)
Time: 81 seconds (00:01:21)
1000000 rows = 12269.61 rows/sec
Config for Lower Durability
innodb_doublewrite = OFF (default ON)
Time: 85 seconds (00:01:25)
1000000 rows = 11740.06 rows/sec
innodb_flush_log_at_trx_commit = 0 (default 1)
Time: 84 seconds (00:01:24)
1000000 rows = 11768.51 rows/sec
# log_bin (binary logging disabled)
Time: 82 seconds (00:01:22)
1000000 rows = 12087.97 rows/sec
sync_binlog = 0 (default 1)
Time: 83 seconds (00:01:23)
1000000 rows = 11906.84 rows/sec
Config for Fewer Checks
innodb_checksum_algorithm = none (default crc32)
Time: 84 seconds (00:01:24)
1000000 rows = 11807.99 rows/sec
innodb_log_checksums = OFF (default ON)
Time: 84 seconds (00:01:24)
1000000 rows = 11893.64 rows/sec
foreign_key_checks = 0 (default 1)
unique_checks = 0 (default 1)
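foreign_key_checks and unique_checks are session-scoped, so a sketch of disabling them only around a bulk load of pre-validated data:
SET SESSION foreign_key_checks = 0;
SET SESSION unique_checks = 0;
-- ... LOAD DATA or bulk INSERTs ...
SET SESSION unique_checks = 1;
SET SESSION foreign_key_checks = 1;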
Parallel Execution Solutions
Parallel Import
Like LOAD DATA INFILE but supports multi-threaded import:
$ mysqlimport --local --use-threads 4 \
  dbname table1 table2 table3 table4
Runs a fixed number of threads, imports one table per thread.
If an import finishes and more tables remain, the first available thread takes the next one.
https://dev.mysql.com/doc/refman/8.0/en/mysqlimport.html
Parallel Import
Connecting to localhost
Connecting to localhost
Connecting to localhost
Connecting to localhost
Selecting database test
Selecting database test
Selecting database test
Selecting database test
Loading data from LOCAL file: TestTable2.csv into TestTable2
Loading data from LOCAL file: TestTable3.csv into TestTable3
Loading data from LOCAL file: TestTable1.csv into TestTable1
Loading data from LOCAL file: TestTable4.csv into TestTable4
test.TestTable3: Records: 250000 Deleted: 0 Skipped: 0 Warnings: 0
Disconnecting from localhost
test.TestTable1: Records: 250000 Deleted: 0 Skipped: 0 Warnings: 0
Disconnecting from localhost
test.TestTable2: Records: 250000 Deleted: 0 Skipped: 0 Warnings: 0
Disconnecting from localhost
test.TestTable4: Records: 250000 Deleted: 0 Skipped: 0 Warnings: 0
Disconnecting from localhost
mysqlimport: Results
$ php test-bulk-insert.php --total-rows 1000000 --load-data \
  --use-threads 4
Time: 31 seconds (00:00:31)
1000000 rows = 32205.28 rows/sec
4 stmt = 0.13 stmt/sec
4 txns = 0.13 txns/sec
4 conn = 0.13 conn/sec
Conclusions
[Chart: rows per second for each technique, from a few hundred (one row per connection) up to ~44,000 (tuned LOAD DATA INFILE); the slowest bars are annotated "why are you still doing this?"]
Want to Try The Tests Yourself?
The test-bulk-insert.php script is available here:
https://github.com/billkarwin/bk-tools
One Last Thing…
What Was Our Solution?
We cheated:
§ Load database once.
§ Take a filesystem snapshot.
§ Run tests.
§ Restore from snapshot.
§ Re-run tests.
§ etc.
This is not a good solution for everyone. It worked for one specific use case.
License and Copyright
Copyright 2017 Bill Karwin
http://www.slideshare.net/billkarwin
Released under a Creative Commons 3.0 License:
http://creativecommons.org/licenses/by-nc-nd/3.0/
You are free to share—to copy, distribute,
and transmit this work, under the following conditions:
Attribution. You must attribute this work to Bill Karwin.
Noncommercial. You may not use this work for commercial purposes.
No Derivative Works. You may not alter, transform, or build upon this work.
