
Python: What's Wrong With My Code of Multi-Process Inserting to MySQL?

I've written a Python script to insert some data (300 million rows) into a MySQL table:

#!/usr/bin/python
import os
import MySQLdb
from multiprocessing import Pool

class DB(object): d
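The question's code is cut off above, so the following is only a hypothetical sketch of the kind of multi-process insert script being described; the table name, column layout, and connection details are assumptions, not the asker's actual code.

#!/usr/bin/python
# Hypothetical reconstruction of a multi-process bulk insert.
# Table name, columns, and credentials are placeholders.
import MySQLdb
from multiprocessing import Pool

def insert_chunk(rows):
    # Each worker process opens its own connection and inserts its share of rows.
    conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
    cur = conn.cursor()
    cur.executemany("INSERT INTO mytable (col1, col2) VALUES (%s, %s)", rows)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    # Pretend `chunks` is the 300 million rows split into pieces.
    chunks = [[("a", 1), ("b", 2)], [("c", 3), ("d", 4)]]
    pool = Pool(processes=4)
    pool.map(insert_chunk, chunks)
    pool.close()
    pool.join()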

Solution 1:

Yes, if you are bulk-inserting 300 million rows into the same table, then you shouldn't try to parallelize this insertion. All inserts must go through the same bottlenecks: updating the index, and writing into the physical file on the hard disk. These operations require exclusive access to the underlying resources (the index, or the disk head).

Parallelizing only adds useless overhead on the database, which now has to handle several concurrent transactions. This consumes memory, forces context switching, makes the disk read head jump around all the time, and so on.

Insert everything in the same thread.
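As a minimal sketch of that single-process alternative, the rows can be streamed through one connection in batches. The batch size, table, and column names below are assumptions for illustration.

# A minimal single-process bulk insert: one connection, batched executemany calls.
import MySQLdb

BATCH_SIZE = 10000  # assumed batch size; tune for your workload

def insert_all(rows):
    conn = MySQLdb.connect(host="localhost", user="user", passwd="secret", db="mydb")
    cur = conn.cursor()
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) >= BATCH_SIZE:
            cur.executemany("INSERT INTO mytable (col1, col2) VALUES (%s, %s)", batch)
            conn.commit()
            batch = []
    if batch:
        # Flush the final partial batch.
        cur.executemany("INSERT INTO mytable (col1, col2) VALUES (%s, %s)", batch)
        conn.commit()
    conn.close()

Committing per batch rather than per row keeps transaction overhead low while avoiding one enormous transaction.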


It looks like you are actually importing data from a CSV-like file. You may want to use the built-in LOAD DATA INFILE MySQL command, which is designed for exactly this purpose. Please describe your source file if you need help tuning this command.
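For illustration, here is a rough sketch of driving LOAD DATA INFILE from Python with MySQLdb. The file path, field delimiters, and table layout are assumptions about the source CSV, and LOCAL loading must be enabled on both the client and the server.

# Sketch: load a CSV directly with LOAD DATA LOCAL INFILE.
# '/tmp/data.csv', 'mytable', and the column list are placeholders.
import MySQLdb

conn = MySQLdb.connect(host="localhost", user="user", passwd="secret",
                       db="mydb", local_infile=1)
cur = conn.cursor()
cur.execute("""
    LOAD DATA LOCAL INFILE '/tmp/data.csv'
    INTO TABLE mytable
    FIELDS TERMINATED BY ','
    LINES TERMINATED BY '\\n'
    (col1, col2)
""")
conn.commit()
conn.close()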
