Use COPY to load all the rows in one command, instead of using a series of INSERT commands. The COPY command is optimized for loading large numbers of rows; it is less flexible than INSERT, but incurs significantly less overhead for large data loads. Since COPY is a single command, there is no need to disable autocommit if you use this method to populate a table.

If you cannot use COPY, it might help to use PREPARE to create a prepared INSERT statement, and then use EXECUTE as many times as required. This avoids some of the overhead of repeatedly parsing and planning INSERT. Different interfaces provide this facility in different ways; look for “prepared statements” in the interface documentation.

Note that loading a large number of rows using COPY is almost always faster than using INSERT, even if PREPARE is used and multiple insertions are batched into a single transaction.

COPY is fastest when used within the same transaction as an earlier CREATE TABLE or TRUNCATE command. In such cases no WAL needs to be written, because in case of an error, the files containing the newly loaded data will be removed anyway. However, this consideration only applies when wal_level is minimal, as all commands must write WAL otherwise.

Just as with indexes, a foreign key constraint can be checked “in bulk” more efficiently than row-by-row. So it might be useful to drop foreign key constraints, load data, and re-create the constraints. Again, there is a trade-off between data load speed and loss of error checking while the constraint is missing.

What's more, when you load data into a table with existing foreign key constraints, each new row requires an entry in the server's list of pending trigger events (since it is the firing of a trigger that checks the row's foreign key constraint). Loading many millions of rows can cause the trigger event queue to overflow available memory, leading to intolerable swapping or even outright failure of the command. Therefore it may be necessary, not just desirable, to drop and re-apply foreign keys when loading large amounts of data. If temporarily removing the constraint isn't acceptable, the only other recourse may be to split up the load operation into smaller transactions.

14.4.7. Disable WAL Archival and Streaming Replication

When loading large amounts of data into an installation that uses WAL archiving or streaming replication, it might be faster to take a new base backup after the load has completed than to process a large amount of incremental WAL data. To prevent incremental WAL logging while loading, disable archiving and streaming replication by setting wal_level to minimal, archive_mode to off, and max_wal_senders to zero.
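A minimal sketch of the difference between row-by-row and bulk loading, assuming a hypothetical measurements table and a server-readable CSV file at /tmp/measurements.csv:

```sql
-- Hypothetical table and file path, for illustration only.
-- Row-by-row loading (slow for large volumes):
INSERT INTO measurements (city, temp_lo, temp_hi) VALUES ('San Francisco', 46, 50);
INSERT INTO measurements (city, temp_lo, temp_hi) VALUES ('Hayward', 37, 54);

-- Bulk loading in a single command (much faster):
COPY measurements (city, temp_lo, temp_hi) FROM '/tmp/measurements.csv' WITH (FORMAT csv);
```

Note that COPY ... FROM with a file path reads the file on the database server; from a client machine, psql's \copy meta-command provides the same bulk-loading path over the connection.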
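When COPY is not an option, the PREPARE/EXECUTE pattern looks roughly like this (the table and statement names are assumed for illustration):

```sql
-- Parse and plan the INSERT once...
PREPARE bulk_ins (text, int, int) AS
    INSERT INTO measurements (city, temp_lo, temp_hi) VALUES ($1, $2, $3);

-- ...then execute it as many times as required.
EXECUTE bulk_ins('San Francisco', 46, 50);
EXECUTE bulk_ins('Hayward', 37, 54);

DEALLOCATE bulk_ins;  -- optional; prepared statements are per-session in any case
```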
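The WAL-skipping case — COPY in the same transaction as an earlier CREATE TABLE or TRUNCATE — can be sketched as follows; remember it only helps when wal_level is minimal:

```sql
BEGIN;
TRUNCATE measurements;   -- or CREATE TABLE within this same transaction
COPY measurements FROM '/tmp/measurements.csv' WITH (FORMAT csv);
COMMIT;                  -- on error, the newly created data files are simply removed
```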
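Dropping a foreign key before a large load and re-creating it afterwards might look like this; the table, column, and constraint names here are hypothetical:

```sql
-- Hypothetical schema: orders.customer_id references customers.id.
ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;

-- ... perform the bulk load into orders here ...

-- Re-creating the constraint validates all rows "in bulk":
ALTER TABLE orders
    ADD CONSTRAINT orders_customer_id_fkey
    FOREIGN KEY (customer_id) REFERENCES customers (id);
```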
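In postgresql.conf, the settings named for disabling WAL archival and streaming replication would be:

```
wal_level = minimal        # no WAL needed for archiving or replication
archive_mode = off         # stop WAL archiving
max_wal_senders = 0        # disallow streaming-replication connections
```

All three settings require a server restart to take effect.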