TXR: an Original, New
Programming Language for
Convenient Data Munging

Kaz Kylheku <kaz@kylheku.com>

Quick Links

Help Needed

The TXR project is looking for hackers to develop features, such as:

TXR has clean, easy to understand and maintain internals that are a pleasure to work with. Be sure to read the HACKING guide.

What is it?

TXR is a pragmatic, convenient tool ready to take on your daily hacking challenges with its dual personality: its whole-document pattern matching and extraction language for scraping information from arbitrary text sources, and its powerful data-processing language to slice through problems like a hot knife through butter. Many tasks can be accomplished with TXR "one liners" directly from your system prompt.

TXR is a fusion of many different ideas, a few of which are original, and it is influenced by many languages, such as Common Lisp, Scheme, Awk, M4, POSIX Shell, Prolog, Ruby, Python, Arc, Clojure, S-Lang and others. It is relatively new: the project started in 2009.

Similarly to some other data processing tools, it has certain convenient implicit behavior with regard to input handling, via its pattern-based text extraction language. A comparison to the Awk language may be drawn here: whereas Awk implicitly reads a file, breaking it into records and fields which are accessible as positional variables, TXR has quite a different way of making input handling implicit: namely via a nested, recursive pattern matching notation which binds variables. This approach still handles delimited fields with relative convenience, but generalizes into handling messy, loosely structured data, or data which exhibits different regularities in different sections, etc. Constructs in TXR (the pattern language) aren't imperative statements, but rather pattern-matching directives: each construct terminates by matching, failing, or throwing an exception. Searching and backtracking behaviors are implicit. It has features like structured named blocks with nonlocal exits, structured exception handling, named pattern matching functions, and numerous other features.  TXR's pattern language is powerful enough to parse grammars, yet simple to use in an ad-hoc way on trivial tasks.

TXR also has the "brains" that the designers of other pragmatic, convenient data munging languages have neglected to put in: a built-in, powerful functional and imperative language, with lots of features, such as:

This embedded language, TXR Lisp, maintains strong ties to the Lisp family of languages, while its design also pays attention to newer scripting languages which have emerged in the last ten to twenty years, and takes cues from functional languages.


Here is a collection of TXR Solutions to a number of problems from Rosetta Code.

Rudimentary Concepts

A file containing UTF-8 text is almost already a TXR query which matches itself. Care has to be taken to escape the meta-character @, which introduces all special syntax. This is done by writing it twice: @@ stands for a single literal @. Thus, a text file which contains no @ signs, or whose @ signs are properly escaped by being doubled, is a TXR query that matches itself. So for instance:

Four score and
seven years ago
our fathers brought forth,

is a TXR query which matches the text itself. Actually, it matches more than just itself: it matches any text which begins with those three lines. Thus it also matches this text:

Four score and
seven years ago
our fathers brought forth,
upon this continent

Furthermore, spaces have a special meaning in TXR: a single space denotes a match for one or more spaces. So our query also matches the following text, which is convenient behavior:

Four   score   and
seven years ago
our fathers brought forth,
upon this continent

We can tighten the query so that it matches exactly three lines, and only single spaces in the first line.

Four@\ score@\ and
seven years ago
our fathers brought forth,
@(eof)

Here the @ character comes into play. The @\space syntax encodes a literal space which doesn't have the "match one or more spaces" meaning. The @(eof) directive means "match the empty data set, consisting of no lines", so nothing may follow the third line.

Variables are denoted as identifiers preceded by @, and match pieces of text in mostly intuitive ways (and sometimes not so intuitive). Suppose we change the above to this:

Four@\ score@\ and
seven @units ago
our @relatives brought forth,

Now if this query is matched against the original file, the variable units will capture the character string "years" and relatives will capture "fathers". Of course, it matches texts which have words other than these, such as seven months ago, or our mothers brought forth.

As you can see, the basic concept in simple patterns like this very much resembles a "here document": it's a template of text with variables. But of course, this "here document" runs backwards! Rather than generating text by substituting variables, it does the opposite: it matches text and extracts variables. The need for a "here document run backwards" was what prompted the initial development of TXR!
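For readers coming from mainstream scripting languages, the effect of this two-variable template can be approximated, far less conveniently, with a regular expression. The following Python sketch is only a rough analogue: the named groups mirror the TXR variables, and \S+ merely approximates how @units and @relatives match up to the surrounding literal text.

```python
import re

text = """Four score and
seven years ago
our fathers brought forth,
"""

# A crude regex analogue of the TXR template: each @var becomes a named group.
pattern = re.compile(r"Four score and\n"
                     r"seven (?P<units>\S+) ago\n"
                     r"our (?P<relatives>\S+) brought forth,\n")

m = pattern.match(text)
print(m.group("units"))      # the text captured where @units appears
print(m.group("relatives"))  # the text captured where @relatives appears
```

Note how the regex version forces every literal character to be spelled out and escaped inside the pattern, whereas the TXR template is the literal text itself.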

From this departure point, things get rapidly complicated. The pattern language has numerous directives expressing parallel matching and iteration. Many of the directives work in vertical (line oriented) and horizontal (character oriented) modes. Pattern functions can be defined (horizontal and vertical) and those can be recursive, allowing grammars to be parsed.

Simple Collection/Generation Example

The following query reads a stream of comma-separated pairs and generates an HTML table. A complete version with sample data is given here.

@(collect)
@name,@value
@(end)
@(output :filter :to_html)
<table>
@  (repeat)
<tr><td>@name</td><td>@value</td></tr>
@  (end)
</table>
@(end)

Grammar Parsing Example

Here is a TXR query which matches an arithmetic expression grammar, consisting of numbers, identifiers, basic arithmetic operators (+ - * /) and parentheses. The expression is supplied as a command line argument (this is done by @(next :args) which redirects the pattern matching to the argument vector).

Note that most of this code is not literal text. All of the pieces shown in color are special syntax. The @; os -> optional space text is a comment.

@(next :args)
@(define os)@/ */@(end)@; os -> optional space
@(define mulop)@(os)@/[*\/]/@(os)@(end)
@(define addop)@(os)@/[+\-]/@(os)@(end)
@(define number)@(os)@/[0-9]+/@(os)@(end)
@(define ident)@(os)@/[A-Za-z]+/@(os)@(end)
@(define factor)@(cases)(@(expr))@(or)@(number)@(or)@(ident)@(end)@(end)
@(define term)@(some)@(factor)@(or)@(factor)@(mulop)@(term)@(or)@(addop)@(factor)@(end)@(end)
@(define expr)@(some)@(term)@(or)@(term)@(addop)@(expr)@(end)@(end)
@(cases)
@  (expr)
@  (output)
@  (end)
@(or)
@  (expr)@bad
@  (output)
error starting at "@bad"
@  (end)
@(end)

The grammar productions above are represented by horizontal pattern functions. Horizontal pattern functions are denoted visually by a horizontal syntax: their elements are written side by side on a single logical line. Horizontal function definitions can be broken into multiple physical lines and indented, with the help of the @\ continuation sequence, which consumes all leading whitespace from the following line, like this:

@(define term)@\
   @(some)@\
      @(factor)@\
   @(or)@\
      @(factor)@(mulop)@(term)@\
   @(or)@\
      @(addop)@(factor)@\
   @(end)@\
@(end)

Sample runs from Unix command line:

$ txr expr.txr 'a + (3 * b/(c + 4))'
$ txr expr.txr 'a + (3 * b/(c + 4)))'
error starting at ")"
$ txr expr.txr 'a + (3 * b/(c + 4)'
error starting at "+ (3 * b/(c + 4)"

As you can see, this program matches the longest prefix of the input which is a well-formed expression. The expression is recognized using the simple function call @(expr) which could be placed into the middle of a text template as easily as a variable.  The @(cases) directive is used to recognize two situations: either the argument completely parses, or there is stray material that is not recognized, which can be captured into a variable called bad. The grammar itself is straightforward.

Look at the grammar production for factor. It contains two literal characters: the parentheses around @(expr). The syntax coloring reveals them to be what they are: they stand for themselves.

The ability to parse grammars happened in TXR by accident. It's a consequence of combining pattern matching and functions. In creating TXR, I independently discovered a concept known as PEGs: Parsing Expression Grammars.

Note how the program easily deals with lexical analysis and higher-level parsing in one grammar: there is no need to divide the task into "tokenizing" and "parsing".  Tokenizing is necessary with classic parsers, like LALR(1) machines, because these parsers normally have only one token of lookahead and avoid backtracking. If they are fed characters instead of tokens, they cannot do very much, because they run into ambiguities arising from complex tokens. By itself, a classic parser cannot decide whether "i" is the beginning of the C "int" keyword, or just the start of an identifier like "input". It needs a tokenizer to scan these (using a regular language based on regular expressions) and do the classification, so that the parser sees a KEYWORD or IDENT token.
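The behavior of these pattern functions is essentially that of a backtracking recursive-descent parser over characters. As a rough illustration of the technique (a sketch only, not TXR's actual implementation), here is the same grammar in Python; the alternatives are reordered longest-first, since ordered choice commits to the first alternative that succeeds. Each function takes a position and returns the position after its match, or None on failure.

```python
import re

def os_(s, i):
    # optional space, like the os production
    while i < len(s) and s[i] == ' ':
        i += 1
    return i

def lexeme(regex):
    # build a matcher: optional space, the regex, optional space
    pat = re.compile(regex)
    def match(s, i):
        m = pat.match(s, os_(s, i))
        return os_(s, m.end()) if m else None
    return match

number = lexeme(r'[0-9]+')
ident = lexeme(r'[A-Za-z]+')
mulop = lexeme(r'[*/]')
addop = lexeme(r'[+\-]')

def factor(s, i):
    # factor -> ( expr ) | number | ident
    j = os_(s, i)
    if j < len(s) and s[j] == '(':
        k = expr(s, j + 1)
        if k is not None:
            k = os_(s, k)
            if k < len(s) and s[k] == ')':
                return os_(s, k + 1)
        return None
    return number(s, i) or ident(s, i)

def term(s, i):
    # term -> factor mulop term | factor | addop factor
    j = factor(s, i)
    if j is not None:
        k = mulop(s, j)
        if k is not None:
            k2 = term(s, k)
            if k2 is not None:
                return k2
        return j          # backtrack: plain factor
    j = addop(s, i)       # unary +/- as in the TXR grammar
    return factor(s, j) if j is not None else None

def expr(s, i):
    # expr -> term addop expr | term
    j = term(s, i)
    if j is None:
        return None
    k = addop(s, j)
    if k is not None:
        k2 = expr(s, k)
        if k2 is not None:
            return k2
    return j              # backtrack: plain term

s = 'a + (3 * b/(c + 4))'
end = expr(s, 0)
print('parsed prefix:', repr(s[:end]))
```

Run on 'a + (3 * b/(c + 4)', it stops after the prefix "a ", mirroring the sample runs above: the longest well-formed prefix is matched, and the rest would be the stray material captured by @bad.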

Embedded Lisp

Just as the TXR pattern matching primitives are embedded in plain text, within the pattern matching language there is an embedded Lisp dialect. Here is one way to tabulate a frequency histogram of the letters A-Z, using the pattern language to extract the letters from the input, and TXR Lisp to tabulate:

@(do (defvar h (hash :equal-based)))
@(collect :vars ())
@(coll :vars ())@\
  @{letter /[A-Za-z]/}@(filter :upcase letter)@\
  @(do (inc [h letter 0]))@\
@(end)
@(end)
@(do (dohash (key value h)
       (format t "~a: ~a\n" key value)))

Here is an approach using purely TXR Lisp. Some aspects of this may appear to Lisp programmers, if not entirely familiar, then at least clear. For instance, it is probably obvious that open-file opens a file and returns a stream, and that the let construct binds that stream to the variable s. Note the gun operator. Its name stands for "generate until nil": it returns a lazy list, possibly infinite, whose elements are formed by repeated calls to the enclosed expression, in this case (get-char s). This lazy list of characters can then be conveniently processed using the each operator. The square bracket expression (inc [h (chr-toupper ch) 0]) is a shorthand equivalent of (inc (gethash h (chr-toupper ch) 0)), which means: increment the value in hash table h corresponding to the key (chr-toupper ch) (the character ch converted to upper case). If the entry does not exist, it is created and initialized with 0, then incremented.

@(do (let ((h (hash))
           (s (open-file "/usr/share/dict/words" "r")))
       (each ((ch (gun (get-char s))))
         (if (chr-isalpha ch)
           (inc [h (chr-toupper ch) 0])))
       (let ((sorted [sort (hash-pairs h) > second]))
         (each ((pair sorted))
           (tree-bind (key value) pair
              (put-line `@key: @value`))))))
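For comparison, the same tabulation can be sketched in Python. Python's built-in iter(callable, sentinel) is loosely analogous to gun: it keeps calling the function until a sentinel value appears ('' at end of stream). A short literal string stands in for the dictionary file here, so the sketch is self-contained.

```python
import io

text = io.StringIO("Four score and seven years ago")

# iter(f, sentinel): call f repeatedly until the sentinel ('' at EOF)
# is produced -- a rough analogue of (gun (get-char s)).
h = {}
for ch in iter(lambda: text.read(1), ''):
    if ch.isalpha():
        key = ch.upper()
        h[key] = h.get(key, 0) + 1   # like (inc [h (chr-toupper ch) 0])

# sort pairs by descending count, like [sort (hash-pairs h) > second]
for key, value in sorted(h.items(), key=lambda p: p[1], reverse=True):
    print(f"{key}: {value}")
```

The dictionary-with-default idiom h.get(key, 0) + 1 plays the role of the default argument 0 in the gethash shorthand.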

Source Code

Releases and snapshots can be pulled directly from the git repository.

To build the program, you need a C compiler, a yacc utility (I've never tried anything but GNU Bison and Berkeley Yacc) and GNU flex. (Flex extensions are used, in particular start conditions.) A few POSIX features are required from the host platform, like the popen function and <dirent.h>. These are available on Windows through the MinGW compiler and environment.

The configure script and Makefile are geared toward a gcc and glibc environment, and rely on some GNU make features. Building for Windows therefore requires a GNU environment: MinGW. There is an issue with GNU flex on MinGW, requiring the following argument to the configure script: libflex="-L/usr/lib -lfl".

If you have porting issues, contact the TXR mailing list!


TXR is truly free software because it is distributed under a variation of the two-clause BSD license which allows pretty much every kind of free use.

Make a Donation

If you find TXR to be a valuable tool in your arsenal, here is one way to show your appreciation and support! Developing stuff like this takes countless hours.

Binary Downloads

Compiled builds of TXR are available in the file download area at Bintray. Older builds of TXR (109 and older) are available in the file download area at SourceForge.