This article comes from the Python Cookbook. The translation was made purely for personal study and is not intended for any commercial use.
Description
Counting Lines in a File
Problem
You need to count the number of lines in a file.
Solution
The simplest approach, for reasonably sized files, is to read the file as a list of lines, so that the count of lines is the length of the list.
If the file's path is in a string bound to the thefilepath variable, that's just:
count = len(open(thefilepath).readlines())
For a truly huge file, this may be very slow or even fail to work. If you have to worry about humongous files, a loop using the xreadlines method always works:
count = 0
for line in open(thefilepath).xreadlines():
    count += 1
Here's a slightly tricky alternative, if the line terminator is '\n' (or has '\n' as a substring, as happens on Windows):
count = 0
thefile = open(thefilepath, 'rb')
while 1:
    buffer = thefile.read(8192*1024)
    if not buffer:
        break
    count += buffer.count('\n')   # count here is the string method
thefile.close()
Without the 'rb' argument to open, this will work anywhere, but performance may suffer greatly on Windows or Macintosh platforms.
Discussion
If you have an external program that counts a file's lines, such as wc -l on Unix-like platforms, you can of course choose to use it (e.g., via os.popen()). However, it's generally simpler, faster, and more portable to do the line-counting in your program. You can rely on almost all text files having a reasonable size, so that reading the whole file into memory at once is feasible. For all such normal files, the len of the result of readlines gives you the count of lines in the simplest way.
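For reference, a minimal sketch of the external-program route might look like the following; it assumes a Unix-like platform where wc is available on the PATH, and the helper name linecount_external (along with the naive, unquoted path handling) is purely illustrative rather than part of the original recipe:

import os

def linecount_external(thefilepath):
    # Ask 'wc -l' for the line count and parse the first whitespace-separated
    # field of its output. Unix-like systems only; a path containing spaces
    # or shell metacharacters would need proper quoting.
    return int(os.popen('wc -l ' + thefilepath).read().split()[0])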
If the file is larger than available memory (say, a few hundred megabytes on a typical PC today), the simplest solution can become slow, as the operating system struggles to fit the file's contents into virtual memory. It may even fail, when swap space is exhausted and virtual memory can't help any more.
On a typical PC, with 256 MB of RAM and virtually unlimited disk space, you should still expect serious problems when you try to read into memory files of, say, 1 or 2 GB, depending on your operating system (some operating systems are much more fragile than others in handling virtual-memory issues under such overstressed load conditions).
In this case, the xreadlines method of file objects, introduced in Python 2.1, is generally a good way to process text files line by line. In Python 2.2, you can do even better, in terms of both clarity and speed, by looping directly on the file object:
count = 0
for line in open(thefilepath):
    count += 1
However, xreadlines does not return a sequence, and neither does a loop directly on the file object, so you can't just use len in these cases to get the number of lines. Rather, you have to loop and count line by line, as shown in the solution.
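As a side note not found in the original recipe, on Python versions that support generator expressions (2.4 and later, so newer than the releases discussed here) the same loop-and-count idea can be written as a single expression; this is only a sketch of the equivalent idiom:

# Lazily read the file line by line and add 1 for each line seen.
count = sum(1 for line in open(thefilepath))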
Counting line-terminator characters while reading the file by bytes, in reasonably sized chunks, is the key idea in the third approach. It's probably the least immediately intuitive, and it's not perfectly cross-platform, but you might hope that it's fastest (for example, by analogy with Recipe 8.2 in the Perl Cookbook).
However, remember that, in most cases, performance doesn't really matter all that much. When it does matter, the time sink might not be what your intuition tells you it is, so you should never trust your intuition in this matter; instead, always benchmark and measure. For example, I took a typical Unix syslog file of middling size, a bit over 18 MB of text in 230,000 lines:
[situ@tioni nuc]$ wc nuc
 231581 2312730 18508908 nuc
and I set up the following benchmark framework script, bench.py:
import time
def timeo(fun, n=10):
    start = time.clock()
    for i in range(n): fun()
    stend = time.clock()
    thetime = stend - start
    return fun.__name__, thetime

import os
def linecount_wc():
    return int(os.popen('wc -l nuc').read().split()[0])
def linecount_1():
    return len(open('nuc').readlines())
def linecount_2():
    count = 0
    for line in open('nuc').xreadlines(): count += 1
    return count
def linecount_3():
    count = 0
    thefile = open('nuc')
    while 1:
        buffer = thefile.read(65536)
        if not buffer: break
        count += buffer.count('\n')
    return count

for f in linecount_wc, linecount_1, linecount_2, linecount_3:
    print f.__name__, f()
for f in linecount_1, linecount_2, linecount_3:
    print "%s: %.2f" % timeo(f)
First, I print the line counts obtained by all methods, thus ensuring that there is no anomaly or error (counting tasks are notoriously prone to off-by-one errors). Then, I run each alternative 10 times, under the control of the timing function timeo, and look at the results. Here they are:
[situ@tioni nuc]$ python -O bench.py
linecount_wc 231581
linecount_1 231581
linecount_2 231581
linecount_3 231581
linecount_1: 4.84
linecount_2: 4.54
linecount_3: 5.02

As you can see, the performance differences hardly matter: a difference of 10% or so in one auxiliary task is something that your users will never even notice. However, the fastest approach (for my particular circumstances, a cheap but very recent PC running a popular Linux distribution, as well as this specific benchmark) is the humble loop-on-every-line technique, while the slowest one is the ambitious technique that counts line terminators by chunks. In practice, unless I had to worry about files of many hundreds of megabytes, I'd always use the simplest approach (i.e., the first one presented in this recipe).