
/po/tools/pygettext.py

http://txt2tags.googlecode.com/
Possible License(s): GPL-2.0, GPL-3.0, WTFPL
#! /usr/bin/env python
# -*- coding: iso-8859-1 -*-
# Originally written by Barry Warsaw <barry@zope.com>
#
# Minimally patched to make it even more xgettext compatible
# by Peter Funk <pf@artcom-gmbh.de>
#
# 2002-11-22 Jürgen Hermann <jh@web.de>
# Added checks that _() only contains string literals, and
# command line args are resolved to module lists, i.e. you
# can now pass a filename, a module or package name, or a
# directory (including globbing chars, important for Win32).
# Made docstring fit in 80 chars wide displays using pydoc.
#
# for selftesting
try:
    import fintl
    _ = fintl.gettext
except ImportError:
    _ = lambda s: s

__doc__ = _("""pygettext -- Python equivalent of xgettext(1)

Many systems (Solaris, Linux, Gnu) provide extensive tools that ease the
internationalization of C programs. Most of these tools are independent of
the programming language and can be used from within Python programs.
Martin von Loewis' work[1] helps considerably in this regard.

There's one problem though; xgettext is the program that scans source code
looking for message strings, but it groks only C (or C++). Python
introduces a few wrinkles, such as dual quoting characters, triple quoted
strings, and raw strings. xgettext understands none of this.

Enter pygettext, which uses Python's standard tokenize module to scan
Python source code, generating .pot files identical to what GNU xgettext[2]
generates for C and C++ code. From there, the standard GNU tools can be
used.

A word about marking Python strings as candidates for translation. GNU
xgettext recognizes the following keywords: gettext, dgettext, dcgettext,
and gettext_noop. But those can be a lot of text to include all over your
code. C and C++ have a trick: they use the C preprocessor. Most
internationalized C source includes a #define for gettext() to _() so that
what has to be written in the source is much less. Thus these are both
translatable strings:

    gettext("Translatable String")
    _("Translatable String")

Python of course has no preprocessor so this doesn't work so well. Thus,
pygettext searches only for _() by default, but see the -k/--keyword flag
below for how to augment this.

 [1] http://www.python.org/workshops/1997-10/proceedings/loewis.html
 [2] http://www.gnu.org/software/gettext/gettext.html

NOTE: pygettext attempts to be option and feature compatible with GNU
xgettext wherever possible. However, some options are still missing or are
not fully implemented. Also, xgettext's use of command line switches with
option arguments is broken, and in these cases, pygettext just defines
additional switches.

Usage: pygettext [options] inputfile ...

Options:

    -a
    --extract-all
        Extract all strings.

    -d name
    --default-domain=name
        Rename the default output file from messages.pot to name.pot.

    -E
    --escape
        Replace non-ASCII characters with octal escape sequences.

    -D
    --docstrings
        Extract module, class, method, and function docstrings. These do
        not need to be wrapped in _() markers, and in fact cannot be for
        Python to consider them docstrings. (See also the -X option).

    -h
    --help
        Print this help message and exit.

    -k word
    --keyword=word
        Keywords to look for in addition to the default set, which are:
        %(DEFAULTKEYWORDS)s

        You can have multiple -k flags on the command line.

    -K
    --no-default-keywords
        Disable the default set of keywords (see above). Any keywords
        explicitly added with the -k/--keyword option are still recognized.

    --no-location
        Do not write filename/lineno location comments.

    -n
    --add-location
        Write filename/lineno location comments indicating where each
        extracted string is found in the source. These lines appear before
        each msgid. The style of comments is controlled by the -S/--style
        option. This is the default.

    -o filename
    --output=filename
        Rename the default output file from messages.pot to filename. If
        filename is `-' then the output is sent to standard out.

    -p dir
    --output-dir=dir
        Output files will be placed in directory dir.

    -S stylename
    --style stylename
        Specify which style to use for location comments. Two styles are
        supported:

        Solaris  # File: filename, line: line-number
        GNU      #: filename:line

        The style name is case insensitive. GNU style is the default.

    -v
    --verbose
        Print the names of the files being processed.

    -V
    --version
        Print the version of pygettext and exit.

    -w columns
    --width=columns
        Set width of output to columns.

    -x filename
    --exclude-file=filename
        Specify a file that contains a list of strings that are not to be
        extracted from the input files. Each string to be excluded must
        appear on a line by itself in the file.

    -X filename
    --no-docstrings=filename
        Specify a file that contains a list of files (one per line) that
        should not have their docstrings extracted. This is only useful in
        conjunction with the -D option above.

If `inputfile' is -, standard input is read.
""")
import os
import imp
import sys
import glob
import time
import getopt
import token
import tokenize
import operator

__version__ = '1.5'

default_keywords = ['_']
DEFAULTKEYWORDS = ', '.join(default_keywords)

EMPTYSTRING = ''

# The normal pot-file header. msgmerge and Emacs's po-mode work better if it's
# there.
pot_header = _('''\
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR ORGANIZATION
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
msgid ""
msgstr ""
"Project-Id-Version: PACKAGE VERSION\\n"
"POT-Creation-Date: %(time)s\\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\\n"
"Language-Team: LANGUAGE <LL@li.org>\\n"
"MIME-Version: 1.0\\n"
"Content-Type: text/plain; charset=CHARSET\\n"
"Content-Transfer-Encoding: ENCODING\\n"
"Generated-By: pygettext.py %(version)s\\n"
''')

def usage(code, msg=''):
    print >> sys.stderr, __doc__ % globals()
    if msg:
        print >> sys.stderr, msg
    sys.exit(code)

escapes = []

def make_escapes(pass_iso8859):
    global escapes
    if pass_iso8859:
        # Allow iso-8859 characters to pass through so that e.g. 'msgid
        # "Höhe"' would not result in 'msgid "H\366he"'. Otherwise we
        # escape any character outside the 32..126 range.
        mod = 128
    else:
        mod = 256
    for i in range(256):
        if 32 <= (i % mod) <= 126:
            escapes.append(chr(i))
        else:
            escapes.append("\\%03o" % i)
    escapes[ord('\\')] = '\\\\'
    escapes[ord('\t')] = '\\t'
    escapes[ord('\r')] = '\\r'
    escapes[ord('\n')] = '\\n'
    escapes[ord('\"')] = '\\"'

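# escape() maps every character of a message through the table built above,
# e.g. escape('a\tb') returns 'a\\tb' (the tab becomes a literal backslash-t).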
def escape(s):
    global escapes
    s = list(s)
    for i in range(len(s)):
        s[i] = escapes[ord(s[i])]
    return EMPTYSTRING.join(s)

def safe_eval(s):
    # unwrap quotes, safely
    return eval(s, {'__builtins__':{}}, {})
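# e.g. safe_eval('"""Hello"""') == 'Hello'; the empty __builtins__ mapping
# keeps the eval from reaching any builtins.
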
def normalize(s):
    # This converts the various Python string types into a format that is
    # appropriate for .po files, namely much closer to C style.
    lines = s.split('\n')
    if len(lines) == 1:
        s = '"' + escape(s) + '"'
    else:
        if not lines[-1]:
            del lines[-1]
            lines[-1] = lines[-1] + '\n'
        for i in range(len(lines)):
            lines[i] = escape(lines[i])
        lineterm = '\\n"\n"'
        s = '""\n"' + lineterm.join(lines) + '"'
    return s
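# For example, normalize('Hello\nWorld\n') yields the multi-line .po form:
#
#   ""
#   "Hello\n"
#   "World\n"
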
def containsAny(str, set):
    """Check whether 'str' contains ANY of the chars in 'set'"""
    return 1 in [c in str for c in set]

def _visit_pyfiles(list, dirname, names):
    """Helper for getFilesForName()."""
    # get extension for python source files
    if not globals().has_key('_py_ext'):
        global _py_ext
        _py_ext = [triple[0] for triple in imp.get_suffixes()
                   if triple[2] == imp.PY_SOURCE][0]

    # don't recurse into CVS directories
    if 'CVS' in names:
        names.remove('CVS')

    # add all *.py files to list
    list.extend(
        [os.path.join(dirname, file) for file in names
         if os.path.splitext(file)[1] == _py_ext]
        )

def _get_modpkg_path(dotted_name, pathlist=None):
    """Get the filesystem path for a module or a package.

    Return the file system path to a file for a module, and to a directory for
    a package. Return None if the name is not found, or is a builtin or
    extension module.
    """
    # split off top-most name
    parts = dotted_name.split('.', 1)

    if len(parts) > 1:
        # we have a dotted path, import top-level package
        try:
            file, pathname, description = imp.find_module(parts[0], pathlist)
            if file: file.close()
        except ImportError:
            return None

        # check if it's indeed a package
        if description[2] == imp.PKG_DIRECTORY:
            # recursively handle the remaining name parts
            pathname = _get_modpkg_path(parts[1], [pathname])
        else:
            pathname = None
    else:
        # plain name
        try:
            file, pathname, description = imp.find_module(
                dotted_name, pathlist)
            if file:
                file.close()
            if description[2] not in [imp.PY_SOURCE, imp.PKG_DIRECTORY]:
                pathname = None
        except ImportError:
            pathname = None

    return pathname

def getFilesForName(name):
    """Get a list of module files for a filename, a module or package name,
    or a directory.
    """
    if not os.path.exists(name):
        # check for glob chars
        if containsAny(name, "*?[]"):
            files = glob.glob(name)
            list = []
            for file in files:
                list.extend(getFilesForName(file))
            return list

        # try to find module or package
        name = _get_modpkg_path(name)
        if not name:
            return []

    if os.path.isdir(name):
        # find all python files in directory
        list = []
        os.path.walk(name, _visit_pyfiles, list)
        return list
    elif os.path.exists(name):
        # a single file
        return [name]

    return []

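# TokenEater is the callback handed to tokenize.tokenize(); it receives every
# token of a source file and runs a small state machine:
#
#   __waiting         look for a keyword (default: _) or, with -D, a docstring
#   __keywordseen     expect the '(' that must follow the keyword
#   __openseen        collect adjacent string literals until the closing ')'
#   __suiteseen /
#   __suitedocstring  grab the docstring that follows a 'class' or 'def' header
#
# Extracted messages are stored in __messages, keyed by msgid, with the set of
# (filename, lineno) locations where each one was seen.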
class TokenEater:
    def __init__(self, options):
        self.__options = options
        self.__messages = {}
        self.__state = self.__waiting
        self.__data = []
        self.__lineno = -1
        self.__freshmodule = 1
        self.__curfile = None

    def __call__(self, ttype, tstring, stup, etup, line):
        # dispatch
##        import token
##        print >> sys.stderr, 'ttype:', token.tok_name[ttype], \
##              'tstring:', tstring
        self.__state(ttype, tstring, stup[0])

    def __waiting(self, ttype, tstring, lineno):
        opts = self.__options
        # Do docstring extractions, if enabled
        if opts.docstrings and not opts.nodocstrings.get(self.__curfile):
            # module docstring?
            if self.__freshmodule:
                if ttype == tokenize.STRING:
                    self.__addentry(safe_eval(tstring), lineno, isdocstring=1)
                    self.__freshmodule = 0
                elif ttype not in (tokenize.COMMENT, tokenize.NL):
                    self.__freshmodule = 0
                return
            # class docstring?
            if ttype == tokenize.NAME and tstring in ('class', 'def'):
                self.__state = self.__suiteseen
                return
        if ttype == tokenize.NAME and tstring in opts.keywords:
            self.__state = self.__keywordseen

    def __suiteseen(self, ttype, tstring, lineno):
        # ignore anything until we see the colon
        if ttype == tokenize.OP and tstring == ':':
            self.__state = self.__suitedocstring

    def __suitedocstring(self, ttype, tstring, lineno):
        # ignore any intervening noise
        if ttype == tokenize.STRING:
            self.__addentry(safe_eval(tstring), lineno, isdocstring=1)
            self.__state = self.__waiting
        elif ttype not in (tokenize.NEWLINE, tokenize.INDENT,
                           tokenize.COMMENT):
            # there was no class docstring
            self.__state = self.__waiting

    def __keywordseen(self, ttype, tstring, lineno):
        if ttype == tokenize.OP and tstring == '(':
            self.__data = []
            self.__lineno = lineno
            self.__state = self.__openseen
        else:
            self.__state = self.__waiting

    def __openseen(self, ttype, tstring, lineno):
        if ttype == tokenize.OP and tstring == ')':
            # We've seen the last of the translatable strings. Record the
            # line number of the first line of the strings and update the list
            # of messages seen. Reset state for the next batch. If there
            # were no strings inside _(), then just ignore this entry.
            if self.__data:
                self.__addentry(EMPTYSTRING.join(self.__data))
            self.__state = self.__waiting
        elif ttype == tokenize.STRING:
            self.__data.append(safe_eval(tstring))
        elif ttype not in [tokenize.COMMENT, token.INDENT, token.DEDENT,
                           token.NEWLINE, tokenize.NL]:
            # warn if we see anything other than STRING or whitespace
            print >> sys.stderr, _(
                '*** %(file)s:%(lineno)s: Seen unexpected token "%(token)s"'
                ) % {
                'token': tstring,
                'file': self.__curfile,
                'lineno': self.__lineno
                }
            self.__state = self.__waiting

    def __addentry(self, msg, lineno=None, isdocstring=0):
        if lineno is None:
            lineno = self.__lineno
        if not msg in self.__options.toexclude:
            entry = (self.__curfile, lineno)
            self.__messages.setdefault(msg, {})[entry] = isdocstring

    def set_filename(self, filename):
        self.__curfile = filename
        self.__freshmodule = 1

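    # write() groups msgids that share the same set of source locations, then
    # emits each entry: optional location comments (GNU "#: file:line" or
    # Solaris "# File: ..., line: ..."), a "#, docstring" flag for entries
    # gleaned from docstrings, the normalized msgid, and an empty msgstr.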
    def write(self, fp):
        options = self.__options
        timestamp = time.strftime('%Y-%m-%d %H:%M+%Z')
        # The time stamp in the header doesn't have the same format as that
        # generated by xgettext...
        print >> fp, pot_header % {'time': timestamp, 'version': __version__}
        # Sort the entries. First sort each particular entry's keys, then
        # sort all the entries by their first item.
        reverse = {}
        for k, v in self.__messages.items():
            keys = v.keys()
            keys.sort()
            reverse.setdefault(tuple(keys), []).append((k, v))
        rkeys = reverse.keys()
        rkeys.sort()
        for rkey in rkeys:
            rentries = reverse[rkey]
            rentries.sort()
            for k, v in rentries:
                isdocstring = 0
                # If the entry was gleaned out of a docstring, then add a
                # comment stating so. This is to aid translators who may wish
                # to skip translating some unimportant docstrings.
                if reduce(operator.__add__, v.values()):
                    isdocstring = 1
                # k is the message string, v is a dictionary-set of (filename,
                # lineno) tuples. We want to sort the entries in v first by
                # file name and then by line number.
                v = v.keys()
                v.sort()
                if not options.writelocations:
                    pass
                # location comments are different b/w Solaris and GNU:
                elif options.locationstyle == options.SOLARIS:
                    for filename, lineno in v:
                        d = {'filename': filename, 'lineno': lineno}
                        print >>fp, _(
                            '# File: %(filename)s, line: %(lineno)d') % d
                elif options.locationstyle == options.GNU:
                    # fit as many locations on one line, as long as the
                    # resulting line length doesn't exceed 'options.width'
                    locline = '#:'
                    for filename, lineno in v:
                        d = {'filename': filename, 'lineno': lineno}
                        s = _(' %(filename)s:%(lineno)d') % d
                        if len(locline) + len(s) <= options.width:
                            locline = locline + s
                        else:
                            print >> fp, locline
                            locline = "#:" + s
                    if len(locline) > 2:
                        print >> fp, locline
                if isdocstring:
                    print >> fp, '#, docstring'
                print >> fp, 'msgid', normalize(k)
                print >> fp, 'msgstr ""\n'

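# main() flow: parse the command line into an Options instance, merge any -k
# keywords with the defaults, expand the arguments into a list of Python
# files, run each file through a TokenEater, and finally write the .pot
# output to stdout or to the chosen file.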
def main():
    global default_keywords
    try:
        opts, args = getopt.getopt(
            sys.argv[1:],
            'ad:DEhk:Kno:p:S:Vvw:x:X:',
            ['extract-all', 'default-domain=', 'escape', 'help',
             'keyword=', 'no-default-keywords',
             'add-location', 'no-location', 'output=', 'output-dir=',
             'style=', 'verbose', 'version', 'width=', 'exclude-file=',
             'docstrings', 'no-docstrings',
             ])
    except getopt.error, msg:
        usage(1, msg)

    # for holding option values
    class Options:
        # constants
        GNU = 1
        SOLARIS = 2
        # defaults
        extractall = 0 # FIXME: currently this option has no effect at all.
        escape = 0
        keywords = []
        outpath = ''
        outfile = 'messages.pot'
        writelocations = 1
        locationstyle = GNU
        verbose = 0
        width = 78
        excludefilename = ''
        docstrings = 0
        nodocstrings = {}

    options = Options()
    locations = {'gnu' : options.GNU,
                 'solaris' : options.SOLARIS,
                 }

    # parse options
    for opt, arg in opts:
        if opt in ('-h', '--help'):
            usage(0)
        elif opt in ('-a', '--extract-all'):
            options.extractall = 1
        elif opt in ('-d', '--default-domain'):
            options.outfile = arg + '.pot'
        elif opt in ('-E', '--escape'):
            options.escape = 1
        elif opt in ('-D', '--docstrings'):
            options.docstrings = 1
        elif opt in ('-k', '--keyword'):
            options.keywords.append(arg)
        elif opt in ('-K', '--no-default-keywords'):
            default_keywords = []
        elif opt in ('-n', '--add-location'):
            options.writelocations = 1
        elif opt in ('--no-location',):
            options.writelocations = 0
        elif opt in ('-S', '--style'):
            options.locationstyle = locations.get(arg.lower())
            if options.locationstyle is None:
                usage(1, _('Invalid value for --style: %s') % arg)
        elif opt in ('-o', '--output'):
            options.outfile = arg
        elif opt in ('-p', '--output-dir'):
            options.outpath = arg
        elif opt in ('-v', '--verbose'):
            options.verbose = 1
        elif opt in ('-V', '--version'):
            print _('pygettext.py (xgettext for Python) %s') % __version__
            sys.exit(0)
        elif opt in ('-w', '--width'):
            try:
                options.width = int(arg)
            except ValueError:
                usage(1, _('--width argument must be an integer: %s') % arg)
        elif opt in ('-x', '--exclude-file'):
            options.excludefilename = arg
        elif opt in ('-X', '--no-docstrings'):
            fp = open(arg)
            try:
                while 1:
                    line = fp.readline()
                    if not line:
                        break
                    options.nodocstrings[line[:-1]] = 1
            finally:
                fp.close()

    # calculate escapes; pass_iso8859 is the inverse of --escape
    make_escapes(not options.escape)

    # calculate all keywords
    options.keywords.extend(default_keywords)

    # initialize list of strings to exclude
    if options.excludefilename:
        try:
            fp = open(options.excludefilename)
            options.toexclude = fp.readlines()
            fp.close()
        except IOError:
            print >> sys.stderr, _(
                "Can't read --exclude-file: %s") % options.excludefilename
            sys.exit(1)
    else:
        options.toexclude = []

    # resolve args to module lists
    expanded = []
    for arg in args:
        if arg == '-':
            expanded.append(arg)
        else:
            expanded.extend(getFilesForName(arg))
    args = expanded

    # slurp through all the files
    eater = TokenEater(options)
    for filename in args:
        if filename == '-':
            if options.verbose:
                print _('Reading standard input')
            fp = sys.stdin
            closep = 0
        else:
            if options.verbose:
                print _('Working on %s') % filename
            fp = open(filename)
            closep = 1
        try:
            eater.set_filename(filename)
            try:
                tokenize.tokenize(fp.readline, eater)
            except tokenize.TokenError, e:
                print >> sys.stderr, '%s: %s, line %d, column %d' % (
                    e[0], filename, e[1][0], e[1][1])
        finally:
            if closep:
                fp.close()

    # write the output
    if options.outfile == '-':
        fp = sys.stdout
        closep = 0
    else:
        if options.outpath:
            options.outfile = os.path.join(options.outpath, options.outfile)
        fp = open(options.outfile, 'w')
        closep = 1
    try:
        eater.write(fp)
    finally:
        if closep:
            fp.close()


if __name__ == '__main__':
    main()
    # some more test strings
    _(u'a unicode string')
    # this one creates a warning
    _('*** Seen unexpected token "%(token)s"') % {'token': 'test'}
    _('more' 'than' 'one' 'string')
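# For reference, the last test string above is made of adjacent literals that
# the extractor concatenates, so the generated .pot entry would look roughly
# like this (line number illustrative):
#
#   #: pygettext.py:575
#   msgid "morethanonestring"
#   msgstr ""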