
Python: Iterate Over Many Large Files Simultaneously, Get Every K-th Line

As in the title: I have many very large text files (>10 GB) that share the same repetitive structure. I would like to filter some information out, so I want to yield every k-th line from each file while iterating over them in parallel.

Solution 1:

Given that you talk about using itertools.izip(), I'm going to assume you are using Python 2 here.

Use itertools.islice() to skip lines within each file, and itertools.izip_longest() to lazily combine the reads in parallel while also handling files that are shorter than the others:

from itertools import islice, izip_longest

filenames = [fname1, fname2, fname3]
open_files = [open(fname) for fname in filenames]
# One lazy iterator per file, yielding only every k-th line
kth_slice_files = (islice(f, None, None, k) for f in open_files)
try:
    for kth_lines in izip_longest(*kth_slice_files, fillvalue=''):
        # do something with those combined lines
        pass
finally:
    # Close all the file handles, even if processing raises
    for f in open_files:
        f.close()

islice(fileobj, None, None, k) starts at the first line, then skips k - 1 lines each time, giving you lines 1, 1 + k, 1 + 2*k, and so on. If you need to start at a later line, replace the first None with that starting position (as a zero-based index).
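
As a quick, standalone illustration of that stepping (not part of the original answer; the list of strings here simply stands in for a file's lines), islice picks out the same positions from any iterable:

from itertools import islice

lines = ['line %d' % i for i in range(1, 13)]  # stand-in for a file's lines
k = 4

# islice(iterable, None, None, k): element 1, then 1 + k, then 1 + 2*k, ...
print(list(islice(iter(lines), None, None, k)))  # ['line 1', 'line 5', 'line 9']

# Replacing the first None with 2 starts at the third element instead
print(list(islice(iter(lines), 2, None, k)))     # ['line 3', 'line 7', 'line 11']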
