module: token_support

Applies Python tokenize analysis to each line of a text file.

class iocdoc.token_support.TokenLog[source]

Applies the Python tokenize analysis to each line of a file. This allows a lexical analysis of the file, line by line. This is powerful and simplifies some complex analyses, but it assumes the file resembles Python source code.
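
For illustration, the underlying analysis can be reproduced directly with the standard library. This sketch (not part of iocdoc) tokenizes one line the way this class does for every line of a file:

import io
import tokenize

# tokenize one line of text, as this class does for each line of a file
line = 'record(ao, "demo:DAC")'
for tok in tokenize.generate_tokens(io.StringIO(line).readline):
    print(tokenize.tok_name[tok.type], repr(tok.string))
# NAME 'record', OP '(', NAME 'ao', OP ',', STRING '"demo:DAC"', OP ')', ...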

Note: The tokenize analysis is not robust; some files will cause exceptions for various reasons.

See also: http://docs.python.org/library/tokenize.html
See also: http://docs.python.org/library/token.html

get(index)[source]

retrieve the indexed token from the list

getCrossReferences()[source]
Returns: dictionary of token cross-references
getFullWord()[source]

parse the token stream for a contiguous word and return it as str

Some words in template files might not be enclosed in quotes, so tokenize breaks the whole word into several tokens. This method rebuilds the word, keeping any quotation marks that are present.
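
A minimal standalone sketch of the idea (not the class's actual implementation): contiguous tokens are joined back together by comparing the end column of one token with the start column of the next.

import io
import tokenize

def full_word(source):
    # rebuild a contiguous word that tokenize split into several tokens
    word, end_col = "", None
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type in (tokenize.NEWLINE, tokenize.ENDMARKER):
            break
        if end_col is not None and tok.start[1] != end_col:
            break  # a gap in the text means the word has ended
        word += tok.string
        end_col = tok.end[1]
    return word

print(full_word("scan1.VAL  etc"))  # --> scan1.VAL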

getKeyValueSet()[source]

parse a token sequence describing a list of macro definitions into a dictionary

examples:

{ P=12ida1:,SCANREC=12ida1:scan1 }
{P=12ida1:,SCANREC=12ida1:scan1,Q=m1,POS="$(Q).VAL",RDBK="$(Q).RBV"}
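
For the first input above, the expected result is a dictionary such as {'P': '12ida1:', 'SCANREC': '12ida1:scan1'}. A naive standalone sketch of the same idea (not the class's tokenize-based implementation, and ignoring any commas inside quoted values):

def key_value_set(text):
    # parse '{A=1,B=2}' style macro definitions into a dictionary
    body = text.strip().lstrip("{").rstrip("}")
    pairs = (item.split("=", 1) for item in body.split(",") if "=" in item)
    return {k.strip(): v.strip().strip('"') for k, v in pairs}

print(key_value_set("{ P=12ida1:,SCANREC=12ida1:scan1 }"))
# --> {'P': '12ida1:', 'SCANREC': '12ida1:scan1'}
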
getTokenList()[source]
Returns: list of token dictionaries
lineAnalysis()[source]

analyze the tokens by line

Returns: dictionary with all the lines, including tokenized form

next()[source]

return the next token or raise a StopIteration exception upon reaching the end of the sequence

Returns: token object
Raises: StopIteration – reached the end of the sequence
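
Typical usage, as a hedged sketch (it assumes TokenLog() needs no constructor arguments; the file name is hypothetical):

from iocdoc.token_support import TokenLog

log = TokenLog()                     # assumption: no constructor arguments
log.processFile("example.template")  # hypothetical file name
while True:
    try:
        tok = log.next()
    except StopIteration:
        break                        # reached the end of the token sequence
    # ... examine tok here ...
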
nextActionable(skip_list=None)[source]

walk through the tokens and find the next actionable token

Parameters: skip_list ((str)) – list of token names to ignore;
default list: ('COMMENT', 'NEWLINE', 'ENDMARKER', 'ERRORTOKEN', 'INDENT', 'DEDENT')
Returns: token object, or None if no more tokens
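
Continuing the sketch above, a loop that collects only the actionable tokens (comments, whitespace, and error tokens are skipped by the default skip_list):

actionable = []
while True:
    tok = log.nextActionable()  # or: log.nextActionable(skip_list=('COMMENT',))
    if tok is None:
        break                   # no more tokens
    actionable.append(tok)
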
previous()[source]

return the previous token

Returns: token object
Raises: StopIteration – reached the beginning of the sequence
processFile(filename)[source]

process just one file

report()[source]

prints (to stdout) the results contained in the token list and the cross-reference (xref) dictionary

setTokenPointer(position=None)[source]

set the token pointer to the given position

Parameters: position – index position within the list of tokens
Raises: Exception – if the position is not a valid index into the token list
summary(alsoPrint=False)[source]

Summarizes the xref dictionary contents, reporting the number of occurrences of each token name (type).

Parameters: alsoPrint – boolean to enable printing to stdout
Returns: dictionary of token frequencies
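
Continuing the same sketch, the frequencies can be inspected directly or printed by the method itself:

freq = log.summary()  # or: log.summary(alsoPrint=True)
for name, count in sorted(freq.items()):
    print(name, count)
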
tokenName(tokType)[source]

convert token number to a useful string

tokenReceiver(tokType, tokStr, start, end, tokLine)[source]

called by the tokenize package; logs each token as it is reported

tokens_to_list()[source]

parse an enclosed list of tokens into a list

Assumes the token pointer is positioned at the opening delimiter

examples:

(DESC, "motor $(P)$(M)") --> ['DESC', 'motor $(P)$(M)']
{P,      S, BL,    T1, T2, A}  --> ['P', 'S', 'BL', 'T1', 'T2', 'A']
{12ida1: A  "##ID" 1   2   1}  --> ['12ida1:', 'A', '##ID', '1', '2', '1']

TODO: alias($(IOC):IOC_CPU_LOAD,"$(IOC):load")
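
A hedged usage sketch, continuing from the first example above; it assumes the pointer is first positioned on the token holding the opening delimiter:

log.setTokenPointer(0)        # assumption: token 0 is the opening delimiter
items = log.tokens_to_list()  # e.g. --> ['DESC', 'motor $(P)$(M)']
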
iocdoc.token_support.parse_bracketed_macro_definitions(tokenLog)[source]

walk through a bracketed string, keeping track of delimiters; verifies that the walk starts on an opening delimiter
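
A hedged usage sketch (the file name is hypothetical, and the token pointer must already rest on the opening delimiter):

from iocdoc import token_support

log = token_support.TokenLog()            # assumption: no constructor arguments
log.processFile("example.substitutions")  # hypothetical file name
# position the pointer on the opening delimiter before the call
macros = token_support.parse_bracketed_macro_definitions(log)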

iocdoc.token_support.reconstruct_line(tokens=[], firstIndex=1)[source]

reconstruct the line from the list of tokens presented

Parameters:
  • tokens – as used throughout this module
  • firstIndex – first index in the tokens list to use
Returns: reconstructed line
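
A hedged sketch of the intended call pattern, reusing the log object from the sketches above:

tokens = log.getTokenList()  # list of token dictionaries, as used in this module
line = token_support.reconstruct_line(tokens, firstIndex=1)
print(line)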

iocdoc.token_support.token_key(tkn)[source]

for developer use: a short string identifying the type and text of this token