docs from David Van Horn; Slideshow tweaks
svn: r8661
parent e3d8a676cb
commit ca1a9dc8bf
@ -52,6 +52,7 @@
"Francisco Solsona, "
"Sam Tobin-Hochstadt, "
"Neil W. Van Dyke, "
"David Van Horn, "
"Anton van Straaten, "
"Dale Vaillancourt, "
"Stephanie Weirich, "
@ -1,429 +0,0 @@
_parser-tools_

This documentation assumes familiarity with lex and yacc style lexer
and parser generators.

_lex.ss_
A _regular expression_ is one of the following:

identifier  expands to the named lex abbreviation
string      matches the sequence of characters
character   matches the character
> (repetition lo hi re) matches re repeated between lo and hi times,
  inclusive. hi = +inf.0 for unbounded repetitions
> (union re ...) matches if any of the sub-expressions match
> (intersection re ...) matches if all of the sub-expressions match
> (complement re) matches anything that the sub-expression does not
> (concatenation re ...) matches each sub-expression in succession
> (char-range char char) matches any character between the two (inclusive)
  (a single-character string can be used here)
> (char-complement re) matches any character not matched by the sub-re.
  The sub-expression must be a set of characters re.
> (char-set string) matches any character in the string
(op form ...) expands the re macro named op

(Both (concatenation) and "" match the empty string.
(union) matches nothing.
(intersection) matches any string.
(char-complement) matches any single character.)
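
For illustration, here are a few regexps written with these core
forms; alphabetic is one of the named abbreviations from lex.ss,
described below, and the combinations themselves are just sketches:

(union "if" "then" "else") matches any one of the three keywords.

(repetition 1 +inf.0 (char-range "0" "9")) matches a non-empty
sequence of digits.

(concatenation alphabetic
               (repetition 0 +inf.0 (union alphabetic "_")))
matches a letter followed by any number of letters or underscores.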

This regular expression language is not designed to be used directly,
but rather as a basis for a user-friendly notation written with
regular expression macros. For example, _lex-sre.ss_ supplies
operators from Olin Shivers's SREs, and _lex-plt-v200.ss_ supplies
operators from the previous version of this library. The old PLT
syntax is deprecated. To use one of these syntaxes, use one of the
following commands:

(require (prefix : (lib "lex-sre.ss" "parser-tools")))
(require (prefix : (lib "lex-plt-v200.ss" "parser-tools")))

The regular expression operators occupy the same namespace as other
Scheme identifiers, so common operator names, such as * and +,
conflict with built-in identifiers. The suggested prefix form of
require prefixes each imported operator with :, giving :* and :+, for
example. Of course, a prefix other than : (such as re-) works too.
Unfortunately, this means that the regexps in v20x lexers need some
translation, even with the lex-plt-v200 operators. The exports of
each of these libraries are explained in their sections below.

Since negation is not a common operator on regular expressions, here
are a few examples, using the :-prefixed SRE syntax:

(complement "1") matches all strings except the string "1",
including "11", "111", "0", "01", "", and so on.

(complement (:* "1")) matches all strings that are not
sequences of "1", including "0", "00", "11110", "0111", "11001010",
and so on.

(:& (:: any-string "111" any-string)
    (complement (:or (:: any-string "01") (:+ "1"))))
matches all strings that have 3 consecutive ones, but not those that
end in "01" and not those that consist of ones only. These include
"1110", "0001000111" and "0111" but not "", "11", "11101", "111" and
"11111".

(:: "/*" (complement (:: any-string "*/" any-string)) "*/")
matches Java/C block comments: "/**/", "/******/", "/*////*/",
"/*asg4*/" and so on. It does not match "/**/*/", "/* */ */" and so
on. (:: any-string "*/" any-string) matches any string that has a
"*/" in it, so (complement (:: any-string "*/" any-string)) matches
any string without a "*/" in it.

(:: "/*" (:* (complement "*/")) "*/")
matches any string that starts with "/*" and ends with "*/",
including "/* */ */ */". (complement "*/") matches any string
except "*/". This includes "*" and "/" separately. Thus
(:* (complement "*/")) matches "*/" by first matching "*" and then
matching "/". Any other string is matched directly by
(complement "*/"). In other words, (:* (complement "xx")) =
any-string. It is usually not correct to place a :* around a
complement.


The following require imports the _lexer generator_.

(require (lib "lex.ss" "parser-tools"))


"lex.ss" exports the following named regular expressions:

> any-char: matches any character
> any-string: matches any string
> nothing: matches no string
> alphabetic: see the mzscheme manual section 3.4
> lower-case: see the mzscheme manual section 3.4
> upper-case: see the mzscheme manual section 3.4
> title-case: see the mzscheme manual section 3.4
> numeric: see the mzscheme manual section 3.4
> symbolic: see the mzscheme manual section 3.4
> punctuation: see the mzscheme manual section 3.4
> graphic: see the mzscheme manual section 3.4
> whitespace: see the mzscheme manual section 3.4
> blank: see the mzscheme manual section 3.4
> iso-control: see the mzscheme manual section 3.4


"lex.ss" exports the following variables:

> start-pos
> end-pos
> lexeme
> input-port
> return-without-pos

Use of these names outside of a lexer action is a syntax error.
See the lexer form below for their meaning when used inside a
lexer action.

"lex.ss" exports the following syntactic forms:

> (define-lex-abbrev name re) associates a regular expression with a
  name, to be used in other regular expressions with the identifier
  form. The definition of name has the same scoping properties as a
  normal mzscheme macro definition.

> (define-lex-abbrevs (name re) ...) defines several lex-abbrevs at
  once, as sketched below.
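
  For example, here is a sketch of some abbreviation definitions
  (the names are illustrative, not exported by the library):

    (define-lex-abbrev digit (char-range "0" "9"))

    (define-lex-abbrevs
     (lower-letter (char-range "a" "z"))
     (upper-letter (char-range "A" "Z"))
     (letter (union lower-letter upper-letter)))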

> (define-lex-trans name trans) defines a regular expression macro.
  When the name appears as an operator in a regular expression, the
  expression is replaced with the result of the transformer. The
  definition of name has the same scoping properties as a normal
  mzscheme macro definition. For examples, see the file
  ${PLTHOME}/collects/parser-tools/lex-sre.ss, which contains simple,
  but useful, regular expression macro definitions.


> (lexer (re action) ...) expands into a function that takes an
  input-port, matches the re's against the buffer, and returns the
  result of executing the corresponding action. Each action is
  scheme code that has the same scope as the enclosing lexer
  definition. The following variables have special meaning inside a
  lexer action:

    start-pos - a position struct for the first character matched
    end-pos - a position struct for the character after the last
      character in the match
    lexeme - the matched string
    input-port - the input-port being processed (this is useful for
      matching input with multiple lexers)
    (return-without-pos x) is a function (continuation) that
      immediately returns the value of x from the lexer. This is
      useful in a src-pos lexer to prevent the lexer from adding
      source information. For example:

        (define get-token
          (lexer-src-pos
           ...
           ((comment) (get-token input-port))
           ...))

      would wrap the source location information for the comment
      around the value of the recursive call. Using
        ((comment) (return-without-pos (get-token input-port)))
      causes the value of the recursive call to be returned without
      wrapping position around it.

  The lexer raises an exception (exn:read) if none of the regular
  expressions match the input.
  Hint: If (any-char custom-error-behavior) is the last rule, then
  there will always be a match, and custom-error-behavior will be
  executed to handle the error situation as desired, consuming only
  the first character from the input buffer.
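
  To illustrate, here is a sketch of a small lexer for arithmetic
  input; the values it returns (numbers and symbols) are hypothetical
  choices, and the catch-all any-char rule comes last, as the hint
  suggests:

    (require (lib "lex.ss" "parser-tools"))

    (define calc-lexer
      (lexer
       ;; lexeme is bound to the matched string:
       ((repetition 1 +inf.0 (char-range "0" "9"))
        (string->number lexeme))
       ;; skip whitespace by recurring on the same port:
       ((union " " "\t" "\n") (calc-lexer input-port))
       ("+" '+)
       ("*" '*)
       ((eof) 'eof)
       (any-char (error 'calc-lexer "unexpected character: ~a" lexeme))))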

  The lexer treats the rules ((eof) action), ((special) action), and
  ((special-comment) action) specially. In addition to returning
  characters, input ports can return eof-objects. Custom input ports
  can also return a special-comment value to indicate a non-textual
  comment, or return another arbitrary value (a special).

  The eof rule is matched when the input port returns an eof value.
  If no eof rule is present, the lexer returns the symbol 'eof when
  the port returns an eof value.

  The special-comment rule is matched when the input port returns a
  special-comment structure. If no special-comment rule is present,
  the lexer automatically tries to return the next token from the
  input port.

  The special rule is matched when the input port returns a value
  other than a character, eof-object, or special-comment structure.
  If no special rule is present, the lexer returns void.

  Eofs, specials, special-comments and special-errors can never be
  part of a lexeme with surrounding characters.

  When peeking from the input port raises an exception (such as by an
  embedded XML editor with malformed syntax), the exception can be
  raised before all tokens preceding the exception have been
  returned.


> (lexer-src-pos (re action) ...) is a lexer that returns
  (make-position-token action-result start-pos end-pos) instead of
  simply action-result.

> (define-tokens group-name (token-name ...)) binds group-name to the
  group of tokens being defined. For each token-name, t, a function
  (token-t expr) is created. It constructs a token structure with
  name t and stores the value of expr in it. The definition of
  group-name has the same scoping properties as a normal mzscheme
  macro definition. A token cannot be named error, since it has
  special use in the parser.

> (define-empty-tokens group-name (token-name ...)) is like
  define-tokens, except that a resulting token constructor takes no
  arguments and returns the name as a symbol. (token-t) returns 't.


HINT: Token definitions are usually necessary for inter-operating with
a generated parser, but may not be the right choice when using the
lexer in other situations.
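
For example (a sketch; the group and token names are illustrative):

  (define-tokens value-tokens (NUM ID))
  (define-empty-tokens op-tokens (PLUS TIMES EOF))

  ;; (token-NUM 7) constructs a NUM token carrying the value 7;
  ;; (token-PLUS) simply returns the symbol 'PLUS.
  (token-name (token-NUM 7))   ; => NUM
  (token-value (token-NUM 7))  ; => 7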

"lex.ss" exports the following token helper functions:

> (token-name token) returns the name of a token that is represented
  either by a symbol or a token structure.

> (token-value token) returns the value of a token that is
  represented either by a symbol or a token structure. It returns #f
  for a symbol token.

> (token? val) returns #t if val is a token structure and #f otherwise.


"lex.ss" exports the following structures:

> (struct position (offset line col))
  These structures are bound to start-pos and end-pos.
  Offset is the offset of the character in the input.
  Line is the line number of the character.
  Col is the offset in the current line.

> (struct position-token (token start-pos end-pos))
  src-pos-lexers return these.

"lex.ss" exports the following parameter for tracking the source file:

> (file-path string) - sets the parameter file-path, which the lexer
  will use as the source location if it raises a read error. This
  allows DrScheme to open the file containing the error.


Each time the scheme code for a lexer is compiled (e.g. when a .ss
file containing a (lex ...) is loaded/required), the lexer generator
is run. To avoid this overhead, place the lexer into a module and
compile the module to a .zo with 'mzc --zo --auto-dir filename'. This
should create a .zo file in the 'compiled' subdirectory.

Compiling the lex.ss file to an extension can produce a good speedup
in generated lexers, since the lex.ss file contains the interpreter
for the generated lex tables. If mzscheme is able to compile
extensions (a C compiler must be available), run the commands:
  cd ${PLTHOME}/collects/parser-tools
  mzc --auto-dir lex.ss

NOTE: Since the lexer gets its source information from the port, use
port-count-lines! to enable the tracking of line and column
information. Otherwise, the line and column information will be #f.

The file ${PLTHOME}/collects/syntax-color/scheme-lexer.ss contains a
lexer for Scheme, as extended by mzscheme. The files in
${PLTHOME}/collects/parser-tools/examples contain simpler example
lexers.


_lex-sre.ss_
The lex-sre.ss module exports the following regular expression
operators:

> (* re ...) repetition of regexps 0 or more times
> (+ re ...) repetition of regexps 1 or more times
> (? re ...) 0 or 1 occurrence of regexps
> (= natural-number re ...)
  exactly n occurrences of regexps
> (>= natural-number re ...)
  at least n occurrences of regexps
> (** natural-number natural-number-or-#f-or-+inf.0 re ...)
  between m and n occurrences of regexps, inclusive;
  #f and +inf.0 both indicate no upper limit
> (or re ...) union
> (: re ...) concatenation
> (seq re ...) concatenation
> (& re ...) intersection
> (- re re ...) set difference
> (~ re ...) character set complement; each given regexp must match
  exactly one character
> (/ char-or-string ...)
  character ranges; matches characters between successive
  pairs of chars.
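
For example, with the suggested : prefix, the identifier regexp
sketched earlier can be written more compactly (again just a sketch):

  (require (lib "lex.ss" "parser-tools")
           (prefix : (lib "lex-sre.ss" "parser-tools")))

  (define-lex-abbrev identifier
    (:: alphabetic (:* (:or alphabetic (:/ "09") "_"))))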

_yacc.ss_

To use the _parser generator_, (require (lib "yacc.ss" "parser-tools")).
This module provides the following syntactic form:

> (parser args ...) where the possible args may come in any order (as
  long as there are no duplicates and all non-optional arguments are
  present) and are:

> (debug filename) OPTIONAL causes the parser generator to write the
  LALR table to the file named filename (unless the file exists).
  filename must be a string. Additionally, if a debug file is
  specified, when a running generated parser encounters a parse error
  on some input file, after the user-specified error expression
  returns, the complete parse stack is printed to assist in debugging
  the grammar of that particular parser. The numbers in the stack
  printout correspond to the state numbers in the LALR table file.

> (yacc-output filename) OPTIONAL causes the parser generator to
  write a grammar file in the syntax of YACC/Bison. The file might
  not be a valid YACC file, because the scheme grammar can use
  symbols that are invalid in C.

> (suppress) OPTIONAL causes the parser generator not to report
  shift/reduce or reduce/reduce conflicts.

> (src-pos) OPTIONAL causes the generated parser to expect input in
  the form (make-position-token token position position) instead of
  simply token. Include this option when using the parser with a
  lexer generated with lexer-src-pos.

> (error expression) expression should evaluate to a function which
  will be executed for its side-effect whenever the parser encounters
  an error. If the src-pos option is present, the function should
  accept 5 arguments,
    (lambda (token-ok token-name token-value start-pos end-pos) ...).
  Otherwise it should accept 3,
    (lambda (token-ok token-name token-value) ...).
  The first argument will be #f iff the error is that an invalid
  token was received. The second and third arguments will be the name
  and the value of the token at which the error was detected. The
  fourth and fifth arguments, if present, provide the source
  positions of that token.

> (tokens group-name ...) declares that all of the tokens defined in
  the groups can be handled by this parser.

> (start non-terminal-name ...) declares a list of starting
  non-terminals for the grammar.

> (end token-name ...) specifies a set of tokens from which some
  member must follow any valid parse. For example, an EOF token would
  be specified for a parser that parses entire files, and a NEWLINE
  token for a parser that parses entire lines individually.

> (precs (assoc token-name ...) ...) OPTIONAL precedence declarations
  to resolve shift/reduce and reduce/reduce conflicts, as in
  YACC/BISON. assoc must be one of left, right or nonassoc. States
  with multiple shift/reduce or reduce/reduce conflicts (or some
  combination thereof) are not resolved with precedence.

> (grammar (non-terminal ((grammar-symbol ...) (prec token-name) expression)
           ...)
          ...)

  declares the grammar to be parsed. Each grammar-symbol must be a
  token-name or non-terminal. The prec declaration is optional.
  expression is a semantic action which will be evaluated when the
  input is found to match its corresponding production. Each action
  is scheme code that has the same scope as its parser's definition,
  except that the variables $1, ..., $n are bound in the expression
  and may hide outside bindings of $1, ..., $n. $x is bound to the
  result of the action for the xth grammar symbol on the right of the
  production, if that grammar symbol is a non-terminal, or to the
  value stored in the token if the grammar symbol is a terminal. Here
  n is the number of grammar-symbols on the right of the production.
  If the src-pos option is present in the parser, variables
  $1-start-pos, ..., $n-start-pos and $1-end-pos, ..., $n-end-pos are
  also available; they refer to the position structures corresponding
  to the start and end of the corresponding grammar-symbol. Grammar
  symbols defined as empty-tokens have no $n associated, but do have
  $n-start-pos and $n-end-pos. All of the productions for a given
  non-terminal must be grouped with it, i.e., no non-terminal may
  appear twice on the left hand side in a parser.

The result of a parser expression with one start non-terminal is a
function, f, that takes one argument. This argument must be a
zero-argument function, t, that produces successive tokens of the
input each time it is called. If desired, t may return symbols
instead of tokens. The parser will treat symbols as tokens of the
corresponding name (with #f as a value, so it is usual to return
symbols only in the case of empty tokens). f returns the value
associated with the parse tree by the semantic actions. If the
parser encounters an error, after invoking the supplied error
function, it tries to use error productions to continue parsing. If
it cannot, it raises a read error.

If multiple start non-terminals are provided, the parser expression
results in a list of parsing functions (each one individually behaves
as if it were the result of a parser expression with only one
start non-terminal), one for each start non-terminal, in the same order.
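
Putting the pieces together, here is a sketch of a minimal expression
parser; it assumes a lexer that produces NUM, PLUS, TIMES, and EOF
tokens, and all names are illustrative:

  (require (lib "lex.ss" "parser-tools")
           (lib "yacc.ss" "parser-tools"))

  (define-tokens value-tokens (NUM))
  (define-empty-tokens op-tokens (PLUS TIMES EOF))

  (define calc-parser
    (parser
     (tokens value-tokens op-tokens)
     (start expr)
     (end EOF)
     (error (lambda (ok? name value)
              (error 'calc-parser "syntax error near token ~a" name)))
     (precs (left PLUS)
            (left TIMES))
     (grammar
      (expr ((NUM) $1)
            ((expr PLUS expr) (+ $1 $3))
            ((expr TIMES expr) (* $1 $3))))))

  ;; Given a lexer `lex' producing these tokens from a port `in':
  ;; (calc-parser (lambda () (lex in)))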

Each time the scheme code for a parser is compiled (e.g. when a .ss
file containing a (parser ...) is loaded/required), the parser
generator is run. To avoid this overhead, place the parser into a
module and compile the module to a .zo with 'mzc --zo --auto-dir
filename'. This should create a .zo file in the 'compiled'
subdirectory.

Compiling the yacc.ss file to an extension can produce a good speedup
in generated parsers, since the yacc.ss file contains the interpreter
for the generated parse tables. If mzscheme is able to compile
extensions (a C compiler must be available), run the commands:
  cd ${PLTHOME}/collects/parser-tools
  mzc --auto-dir yacc.ss


_yacc-to-scheme.ss_
This library provides one function:

> (trans filename) - reads a C YACC/BISON grammar from filename and
  produces an s-expression that represents a scheme parser for use
  with the yacc.ss module.

This library is intended to assist in the manual conversion of
grammars for use with yacc.ss, not as a fully automatic conversion
tool. It is not entirely robust. For example, if the C actions in
the original grammar have nested blocks, the tool will fail.


Annotated examples are in the examples subdirectory of the parser-tools
collection directory.
@ -1,3 +1,3 @@
#lang setup/infotab

(define scribblings '(("parser-tools.scrbl" ())))
(define scribblings '(("parser-tools.scrbl" (multi-page))))
@ -3,14 +3,19 @@
scribble/struct
scribble/xref
scribble/bnf
(for-label parser-tools/lex
           (prefix-in : parser-tools/lex-sre)))
(for-label scheme/base
           scheme/contract
           parser-tools/lex
           (prefix-in : parser-tools/lex-sre)
           parser-tools/yacc))

@title{@bold{Parser Tools}: @exec{lex} and @exec{yacc}-style Parsing}

This documentation assumes familiarity with @exec{lex} and @exec{yacc}
style lexer and parser generators.

@table-of-contents[]

@; ----------------------------------------------------------------------

@section{Lexers}
@ -22,7 +27,7 @@ style lexer and parser generators.
@subsection{Creating a Lexer}

@defform/subs[#:literals (repetition union intersection complement concatenation
                          char-range char-complement char-set
                          char-range char-complement
                          eof special special-comment)
              (lexer [trigger action-expr] ...)
              ([trigger re
@ -39,7 +44,6 @@ style lexer and parser generators.
                  (concatenation re ...)
                  (char-range char char)
                  (char-complement re)
                  (char-set string)
                  (id datum ...)])]{

Produces a function that takes an input-port, matches the
@ -70,7 +74,6 @@ style lexer and parser generators.
      a single character string can be used as a @scheme[char].}
@item{@scheme[(char-complement re)] --- matches any character not matched by @scheme[re].
      The sub-expression must be a set of characters @scheme[re].}
@item{@scheme[(char-set string)] --- matches any character in the string.}
@item{@scheme[(id datum ...)] --- expands the @deftech{lexer macro} named @scheme[id]; macros
      are defined via @scheme[define-lex-trans].}
}
@ -276,6 +279,10 @@ error.}
@subsection{Lexer Abbreviations and Macros}

@defform[(char-set string)]{

A @tech{lexer macro} that matches any character in @scheme[string].}

@defidform[any-char]{A @tech{lexer abbreviation} that matches any character.}

@defidform[any-string]{A @tech{lexer abbreviation} that matches any string.}
@ -321,54 +328,7 @@ characters, @scheme[char-lower-case?] characters, etc.}
@; ----------------------------------------

@subsection{Tokens}

Each @scheme[_action-expr] in a @scheme[lexer] form can produce any
kind of value, but for many purposes, producing a @deftech{token}
value is useful. Tokens are usually necessary for inter-operating with
a parser generated by @scheme[parser-tools/yacc], but tokens may not
be the right choice when using @scheme[lexer] in other situations.

@defform[(define-tokens group-id (token-id ...))]{

Binds @scheme[group-id] to the group of tokens being defined. For
each @scheme[token-id], a function
@schemeidfont{token-}@scheme[token-id] is created that takes any
value and puts it in a token record specific to @scheme[token-id].
The token value is inspected using @scheme[token-name] and
@scheme[token-value].

A token cannot be named @schemeidfont{error}, since
@schemeidfont{error} has special use in the parser.}

@defform[(define-empty-tokens group-id (token-id ...))]{

Like @scheme[define-tokens], except that each token constructor
@schemeidfont{token-}@scheme[token-id] takes no arguments and returns
@scheme['token-id].}


@defproc[(token-name [t (or/c token? symbol?)]) symbol?]{

Returns the name of a token that is represented either by a symbol
or a token structure.}


@defproc[(token-value [t (or/c token? symbol?)]) any/c]{

Returns the value of a token that is represented either by a symbol
or a token structure, returning @scheme[#f] for a symbol token.}


@defproc[(token? [v any/c]) boolean?]{

Returns @scheme[#t] if @scheme[v] is a token structure,
@scheme[#f] otherwise.}

@; ----------------------------------------------------------------------

@section{Lex SRE Operators}
@subsection{Lexer SRE Operators}

@defmodule[parser-tools/lex-sre]
@ -442,26 +402,218 @@ characters.}
@(lex-sre-doc)

@; ----------------------------------------

@subsection{Lexer Legacy Operators}

@defmodule[parser-tools/lex-plt-v200]

@(define-syntax-rule (lex-v200-doc)
  (...
   (begin
     (require (for-label parser-tools/lex-plt-v200))

@t{The @schememodname[parser-tools/lex-plt-v200] module re-exports
@scheme[*], @scheme[+], @scheme[?], and @scheme[&] from
@schememodname[parser-tools/lex-sre]. It also re-exports
@scheme[:or] as @scheme[:], @scheme[::] as @scheme[|@|], @scheme[:~]
as @scheme[^], and @scheme[:/] as @scheme[-].}

@defform[(epsilon)]{

A @tech{lexer macro} that matches an empty sequence.}

@defform[(~ re ...)]{

The same as @scheme[(complement re ...)].})))

@(lex-v200-doc)

@; ----------------------------------------

@subsection{Tokens}

Each @scheme[_action-expr] in a @scheme[lexer] form can produce any
kind of value, but for many purposes, producing a @deftech{token}
value is useful. Tokens are usually necessary for inter-operating with
a parser generated by @scheme[parser-tools/yacc], but tokens may not
be the right choice when using @scheme[lexer] in other situations.

@defform[(define-tokens group-id (token-id ...))]{

Binds @scheme[group-id] to the group of tokens being defined. For
each @scheme[token-id], a function
@schemeidfont{token-}@scheme[token-id] is created that takes any
value and puts it in a token record specific to @scheme[token-id].
The token value is inspected using @scheme[token-name] and
@scheme[token-value].

A token cannot be named @schemeidfont{error}, since
@schemeidfont{error} has special use in the parser.}

@defform[(define-empty-tokens group-id (token-id ...))]{

Like @scheme[define-tokens], except that each token constructor
@schemeidfont{token-}@scheme[token-id] takes no arguments and returns
@scheme['token-id].}


@defproc[(token-name [t (or/c token? symbol?)]) symbol?]{

Returns the name of a token that is represented either by a symbol
or a token structure.}


@defproc[(token-value [t (or/c token? symbol?)]) any/c]{

Returns the value of a token that is represented either by a symbol
or a token structure, returning @scheme[#f] for a symbol token.}


@defproc[(token? [v any/c]) boolean?]{

Returns @scheme[#t] if @scheme[v] is a token structure,
@scheme[#f] otherwise.}

@; ----------------------------------------------------------------------

@section{Parsers}

@defmodule[parser-tools/yacc]

@defform/subs[(parser clause ...)
              ([clause ....])]{
@defform/subs[#:literals (grammar tokens start end precs error src-pos
                          suppress debug yacc-output prec)
              (parser clause ...)
              ([clause (grammar (non-terminal-id
                                 ((grammar-id ...) maybe-prec expr)
                                 ...)
                                ...)
                       (tokens group-id ...)
                       (start non-terminal-id ...)
                       (end token-id ...)
                       (error expr)
                       (precs (assoc token-id ...) ...)
                       (src-pos)
                       (suppress)
                       (debug filename)
                       (yacc-output filename)]
               [maybe-prec code:blank
                           (prec token-id)]
               [assoc left right nonassoc])]{

Creates a parser. The clauses may be in any order (as
long as there are no duplicates and all non-optional arguments are
present).
Creates a parser. The clauses may be in any order, as long as there
are no duplicates and all non-@italic{OPTIONAL} declarations are
present:

@itemize{

 @item{@schemeblock0[(grammar (non-terminal-id
                               ((grammar-id ...) maybe-prec expr)
                               ...)
                              ...)]

       Declares the grammar to be parsed. Each @scheme[grammar-id]
       can be a @scheme[token-id] from a @scheme[group-id] named in a
       @scheme[tokens] declaration, or it can be a
       @scheme[non-terminal-id] declared in the @scheme[grammar]
       declaration. The optional @scheme[prec] declaration works with
       the @scheme[precs] declaration. The @scheme[expr] is a
       ``semantic action,'' which is evaluated when the input is found
       to match its corresponding production.

       Each action is scheme code that has the same scope as its
       parser's definition, except that the variables @scheme[$1], ...,
       @schemeidfont{$}@math{n} are bound, where @math{n} is the number
       of @scheme[grammar-id]s in the corresponding production. Each
       @schemeidfont{$}@math{i} is bound to the result of the action
       for the @math{i}@superscript{th} grammar symbol on the right of
       the production, if that grammar symbol is a non-terminal, or the
       value stored in the token if the grammar symbol is a terminal.
       If the @scheme[src-pos] option is present in the parser, then
       variables @scheme[$1-start-pos], ...,
       @schemeidfont{$}@math{n}@schemeidfont{-start-pos} and
       @scheme[$1-end-pos], ...,
       @schemeidfont{$}@math{n}@schemeidfont{-end-pos} are also
       available, and they refer to the position structures
       corresponding to the start and end of the corresponding
       @scheme[grammar-symbol]. Grammar symbols defined as empty-tokens
       have no @schemeidfont{$}@math{i} associated, but do have
       @schemeidfont{$}@math{i}@schemeidfont{-start-pos} and
       @schemeidfont{$}@math{i}@schemeidfont{-end-pos}.

       All of the productions for a given non-terminal must be grouped
       with it. That is, no @scheme[non-terminal-id] may appear twice
       on the left hand side in a parser.}


 @item{@scheme[(tokens group-id ...)]

       Declares that all of the tokens defined in each
       @scheme[group-id] can be used by the parser in the
       @scheme[grammar] declaration.}


 @item{@scheme[(start non-terminal-id ...)]

       Declares a list of starting non-terminals for the grammar.}


 @item{@scheme[(end token-id ...)]

       Specifies a set of tokens from which some member must follow
       any valid parse. For example, an EOF token would be specified
       for a parser that parses entire files and a newline token for
       a parser that parses entire lines individually.}


 @item{@scheme[(error expr)]

       The @scheme[expr] should evaluate to a function which will be
       executed for its side-effect whenever the parser encounters an
       error.

       If the @scheme[src-pos] declaration is present, the function
       should accept 5 arguments:

       @schemeblock[(lambda (tok-ok? tok-name tok-value _start-pos _end-pos)
                      ....)]

       Otherwise it should accept 3:

       @schemeblock[(lambda (tok-ok? tok-name tok-value)
                      ....)]

       The first argument will be @scheme[#f] if and only if the error
       is that an invalid token was received. The second and third
       arguments will be the name and the value of the token at which
       the error was detected. The fourth and fifth arguments, if
       present, provide the source positions of that token.}


 @item{@scheme[(precs (assoc token-id ...) ...)]
       @italic{OPTIONAL}

       Precedence declarations to resolve shift/reduce and
       reduce/reduce conflicts as in @exec{yacc}/@exec{bison}. An
       @scheme[assoc] must be one of @scheme[left], @scheme[right] or
       @scheme[nonassoc]. States with multiple shift/reduce or
       reduce/reduce conflicts (or some combination thereof) are not
       resolved with precedence.}

 @item{@scheme[(src-pos)] @italic{OPTIONAL}

       Causes the generated parser to expect input in the form
       @scheme[(make-position-token _token _start-pos _end-pos)] instead
       of simply @scheme[_token]. Include this option when using the
       parser with a lexer generated with @scheme[lexer-src-pos].}


 @item{@scheme[(debug filename)] @italic{OPTIONAL}

       causes the parser generator to write the LALR table to the file
       named @filepath{filename} (unless the file exists).
       @filepath{filename} must be a string. Additionally, if a debug
       Causes the parser generator to write the LALR table to the file
       named @scheme[filename] (unless the file exists), where
       @scheme[filename] is a literal string. Additionally, if a debug
       file is specified, when a running generated parser encounters a
       parse error on some input file, after the user-specified error
       expression returns, the complete parse stack is printed to
@ -469,117 +621,47 @@ characters.}
       numbers in the stack printout correspond to the state numbers in
       the LALR table file.}


 @item{@scheme[(yacc-output filename)] @italic{OPTIONAL}

       causes the parser generator to write a grammar file in the
       syntax of YACC/Bison. The file might not be a valid YACC file
       because the scheme grammar can use symbols that are invalid in
       C.}
       Causes the parser generator to write a grammar file in
       approximately the syntax of @exec{yacc}/@exec{bison}. The file
       might not be a valid @exec{yacc} file, because the scheme
       grammar can use symbols that are invalid in C.}


 @item{@scheme[(suppress)] @italic{OPTIONAL}

       causes the parser generator not to report shift/reduce or
       Causes the parser generator not to report shift/reduce or
       reduce/reduce conflicts.}

 @item{@scheme[(src-pos)] @italic{OPTIONAL}

       causes the generated parser to expect input in the form
       @scheme[(make-position-token token position position)] instead
       of simply @scheme[token]. Include this option when using the
       parser with a lexer generated with @scheme[lexer-src-pos].}

 @item{@scheme[(error expression)]

       expression should evaluate to a function which will be executed
       for its side-effect whenever the parser encounters an error. If
       the @scheme[src-pos] option is present, the function should
       accept 5 arguments, @schemeblock[(lambda (token-ok token-name
       token-value start-pos end-pos) ...)]. Otherwise it should
       accept 3, @schemeblock[(lambda (token-ok token-name token-value)
       ...)]. The first argument will be @scheme[#f] iff the error is
       that an invalid token was received. The second and third
       arguments will be the name and the value of the token at which
       the error was detected. The fourth and fifth arguments, if
       present, provide the source positions of that token.}

 @item{@scheme[(tokens group-name ...)]

       declares that all of the tokens defined in the groups can be
       handled by this parser.}

 @item{@scheme[(start non-terminal-name ...)]

       declares a list of starting non-terminals for the grammar.}

 @item{@scheme[(end token-name ...)]

       specifies a set of tokens from which some member must follow any
       valid parse. For example an EOF token would be specified for a
       parser that parses entire files and a @nonterm{newline} token
       for a parser that parses entire lines individually.}

 @item{@scheme[(precs (assoc token-name ...) ...)]
       @italic{OPTIONAL}

       precedence declarations to resolve shift/reduce and
       reduce/reduce conflicts as in YACC/BISON. @scheme[assoc] must
       be one of @scheme[left], @scheme[right] or @scheme[nonassoc].
       States with multiple shift/reduce or reduce/reduce conflicts or
       some combination thereof are not resolved with precedence.}

 @item{@schemeblock0[(grammar (non-terminal ((grammar-symbol ...) (prec token-name) expression)
                              ...)
                     ...)]

       declares the @scheme[grammar] to be parsed. Each
       @scheme[grammar-symbol] must be a @scheme[token-name] or
       @scheme[non-terminal]. The @scheme[prec] declaration is
       optional. @scheme[expression] is a semantic action which will
       be evaluated when the input is found to match its corresponding
       production. Each action is scheme code that has the same scope
       as its parser's definition, except that the variables
       @scheme[$1], ..., @scheme[$n] are bound in the expression and
       may hide outside bindings of @scheme[$1], ..., @scheme[$n].
       @scheme[$x] is bound to the result of the action for the
       @scheme[x]@superscript{th} grammar symbol on the right of the
       production, if that grammar symbol is a non-terminal, or the
       value stored in the token if the grammar symbol is a terminal.
       Here @scheme[n] is the number of @scheme[grammar-symbol]s on the
       right of the production. If the @scheme[src-pos] option is
       present in the parser, variables @scheme[$1-start-pos], ...,
       @scheme[$n-start-pos] and @scheme[$1-end-pos], ...,
       @scheme[$n-end-pos] are also available and refer to the position
       structures corresponding to the start and end of the
       corresponding @scheme[grammar-symbol]. Grammar symbols defined
       as empty-tokens have no @scheme[$n] associated, but do have
       @scheme[$n-start-pos] and @scheme[$n-end-pos]. All of the
       productions for a given non-terminal must be grouped with it,
       i.e., no non-terminal may appear twice on the left hand side in
       a parser.}

}

The result of a parser expression with one start non-terminal is a
function, @scheme[f], that takes one argument. This argument must be
a zero-argument function, @scheme[t], that produces successive tokens
of the input each time it is called. If desired, @scheme[t] may
return symbols instead of tokens. The parser will treat symbols as
tokens of the corresponding name (with @scheme[#f] as a value, so it
is usual to return symbols only in the case of empty tokens).
@scheme[f] returns the value associated with the parse tree by the
semantic actions. If the parser encounters an error, after invoking
the supplied error function, it will try to use error productions to
continue parsing. If it cannot, it raises a read error.
The result of a @scheme[parser] expression with one @scheme[start]
non-terminal is a function, @scheme[_parse], that takes one
argument. This argument must be a zero-argument function,
@scheme[_gen], that produces successive tokens of the input each
time it is called. If desired, @scheme[_gen] may return
symbols instead of tokens, and the parser will treat symbols as
tokens of the corresponding name (with @scheme[#f] as a value, so
it is usual to return symbols only in the case of empty tokens).
The @scheme[_parse] function returns the value associated with the
parse tree by the semantic actions. If the parser encounters an
error, after invoking the supplied error function, it will try to
use error productions to continue parsing. If it cannot, it
raises @scheme[exn:fail:read].

If multiple start non-terminals are provided, the parser expression
will result in a list of parsing functions (each one will individually
behave as if it were the result of a parser expression with only one
start non-terminal), one for each start non-terminal, in the same order.
If multiple non-terminals are provided in @scheme[start], the
@scheme[parser] expression produces a list of parsing functions,
one for each non-terminal in the same order. Each parsing function
is like the result of a parser expression with only one
@scheme[start] non-terminal.

Each time the scheme code for a lexer is compiled (e.g. when a
@filepath{.ss} file containing a @scheme[parser] form is loaded), the
parser generator is run. To avoid this overhead place the parser into
a module and compile the module to a @filepath{.zo} bytecode file.}
Each time the scheme code for a @scheme[parser] is compiled
(e.g. when a @filepath{.ss} file containing a @scheme[parser] form
is loaded), the parser generator is run. To avoid this overhead,
place the parser into a module and compile the module to a
@filepath{.zo} bytecode file.}

@; ----------------------------------------------------------------------
@ -601,3 +683,6 @@ actions in the original grammar have nested blocks, the tool will fail.
Annotated examples are in the @filepath{examples} subdirectory of the
@filepath{parser-tools} collection.}

@; ----------------------------------------------------------------------

@index-section[]
@ -172,7 +172,7 @@ header, and then write a ``Hello, world!'' web page as the result:
@schemeblock[
(define (handle in out)
  (code:comment #, @t{Discard the request header (up to blank line):})
  (regexp-match #rx#"(\r\n|^)\r\n" in)
  (regexp-match #rx"(\r\n|^)\r\n" in)
  (code:comment #, @t{Send reply:})
  (display "HTTP/1.0 200 Okay\r\n" out)
  (display "Server: k\r\nContent-Type: text/html\r\n\r\n" out)
@ -445,7 +445,7 @@ takes a requested URL and produces a result value suitable to use with
      (read-line in)))
  (when req
    (code:comment #, @t{Discard the rest of the header (up to blank line):})
    (regexp-match #rx#"(\r\n|^)\r\n" in)
    (regexp-match #rx"(\r\n|^)\r\n" in)
    (code:comment #, @t{Dispatch:})
    (let ([xexpr (dispatch (list-ref req 1))])
      (code:comment #, @t{Send reply:})
@ -15,7 +15,7 @@
(define (handle in out)
  ;; Discard the request header (up to blank line):
  (regexp-match #rx#"(\r\n|^)\r\n" in)
  (regexp-match #rx"(\r\n|^)\r\n" in)
  ;; Send reply:
  (display "HTTP/1.0 200 Okay\r\n" out)
  (display "Server: k\r\nContent-Type: text/html\r\n\r\n" out)
@ -24,7 +24,7 @@
(define (handle in out)
  ;; Discard the request header (up to blank line):
  (regexp-match #rx#"(\r\n|^)\r\n" in)
  (regexp-match #rx"(\r\n|^)\r\n" in)
  ;; Send reply:
  (display "HTTP/1.0 200 Okay\r\n" out)
  (display "Server: k\r\nContent-Type: text/html\r\n\r\n" out)
@ -24,7 +24,7 @@
(define (handle in out)
  ;; Discard the request header (up to blank line):
  (regexp-match #rx#"(\r\n|^)\r\n" in)
  (regexp-match #rx"(\r\n|^)\r\n" in)
  ;; Send reply:
  (display "HTTP/1.0 200 Okay\r\n" out)
  (display "Server: k\r\nContent-Type: text/html\r\n\r\n" out)
@ -29,7 +29,7 @@
(define (handle in out)
  ;; Discard the request header (up to blank line):
  (regexp-match #rx#"(\r\n|^)\r\n" in)
  (regexp-match #rx"(\r\n|^)\r\n" in)
  ;; Send reply:
  (display "HTTP/1.0 200 Okay\r\n" out)
  (display "Server: k\r\nContent-Type: text/html\r\n\r\n" out)
@ -40,7 +40,7 @@
      (read-line in)))
  (when req
    ;; Discard the rest of the header (up to blank line):
    (regexp-match #rx#"(\r\n|^)\r\n" in)
    (regexp-match #rx"(\r\n|^)\r\n" in)
    ;; Dispatch:
    (let ([xexpr (dispatch (list-ref req 1))])
      ;; Send reply:
@ -36,7 +36,7 @@
      (read-line in)))
  (when req
    ;; Discard the rest of the header (up to blank line):
    (regexp-match #rx#"(\r\n|^)\r\n" in)
    (regexp-match #rx"(\r\n|^)\r\n" in)
    ;; Dispatch:
    (let ([xexpr (dispatch (list-ref req 1))])
      ;; Send reply:
@ -37,7 +37,7 @@
      (read-line in)))
  (when req
    ;; Discard the rest of the header (up to blank line):
    (regexp-match #rx#"(\r\n|^)\r\n" in)
    (regexp-match #rx"(\r\n|^)\r\n" in)
    ;; Dispatch:
    (let ([xexpr (dispatch (list-ref req 1))])
      ;; Send reply:
@ -37,7 +37,7 @@
      (read-line in)))
  (when req
    ;; Discard the rest of the header (up to blank line):
    (regexp-match #rx#"(\r\n|^)\r\n" in)
    (regexp-match #rx"(\r\n|^)\r\n" in)
    ;; Dispatch:
    (let ([xexpr (dispatch (list-ref req 1))])
      ;; Send reply:
@ -38,7 +38,7 @@
      (read-line in)))
  (when req
    ;; Discard the rest of the header (up to blank line):
    (regexp-match #rx#"(\r\n|^)\r\n" in)
    (regexp-match #rx"(\r\n|^)\r\n" in)
    ;; Dispatch:
    (let ([xexpr (prompt (dispatch (list-ref req 1)))]) ;; <<< changed
      ;; Send reply:
@ -631,14 +631,19 @@ which case true means @scheme["gray"] and false means
@defproc[(standard-fish [w real?]
                        [h real?]
                        [direction (one-of/c 'left 'right)]
                        [color (or/c string? (is-a?/c color%)) "blue"]
                        [eye-color (or/c string? (is-a?/c color%)) "black"]
                        [open-mouth? any/c #t])
                        [#:direction direction (one-of/c 'left 'right) 'left]
                        [#:color color (or/c string? (is-a?/c color%)) "blue"]
                        [#:eye-color eye-color (or/c string? (is-a?/c color%) false/c) "black"]
                        [#:open-mouth open-mouth (or/c boolean? real?) #f])
         pict?]{

Creates a fish, swimming either @scheme['left] or @scheme['right].}
Creates a fish swimming either @scheme['left] or @scheme['right].
If @scheme[eye-color] is @scheme[#f], no eye is drawn.

The @scheme[open-mouth] argument can be either @scheme[#f] (mouth
closed), @scheme[#t] (mouth fully open), or a number: @scheme[0.0] is
closed, @scheme[1.0] is fully open, and numbers in between are
partially open.}

@defproc[(jack-o-lantern [size real?]
                         [pumpkin-color (or/c string? (is-a?/c color%)) "orange"]
@ -164,7 +164,9 @@
(if i
    (hash-table-put! deps i #t)
    (unless (or (memq 'depends-all (doc-flags (info-doc info)))
                (and (doc-under-main? (info-doc i))
                (and (if (info? d)
                         (doc-under-main? (info-doc d))
                         (not (path? (path->main-collects-relative d))))
                     (memq 'depends-all-main (doc-flags (info-doc info)))))
      (set! added? #t)
      (when (verbose)
@ -82,6 +82,16 @@
        (cc-superimpose l p)
        (cc-superimpose p l))))

(define fish
  (let ([standard-fish
         (lambda (w h
                  #:direction [direction 'left]
                  #:color [color "blue"]
                  #:eye-color [eye-color "black"]
                  #:open-mouth [open-mouth #f])
           (standard-fish w h direction color eye-color open-mouth))])
    standard-fish))

(provide hline vline
         frame
         pict-path?
@ -136,4 +146,7 @@
         explode-star

         find-pen find-brush)))
         standard-fish

         find-pen find-brush)
(rename-out [fish standard-fish])))
collects/slideshow/play.ss (new file, 158 lines)
@ -0,0 +1,158 @@
#lang scheme/base
(require slideshow/base
         slideshow/pict)

(provide play play-n
         fade-pict
         slide-pict
         sequence-animations
         reverse-animations)

(define (fail-gracefully t)
  (with-handlers ([exn:fail? (lambda (x) (values 0 0))])
    (t)))

;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Animation player

;; Create a slide sequence where `mid' takes a number from 0.0 to 1.0.
;; The 0.0 slide will wait until you advance, but the remaining ones
;; time out automatically to create the animation.
(define (play #:title [title #f] mid)
  (slide #:title title (mid 0))
  (if condense?
      (skip-slides 10)
      (map (lambda (n)
             (slide #:title title #:timeout 0.05 (mid n)))
           (let ([cnt 10])
             (let loop ([n cnt])
               (if (zero? n)
                   null
                   (cons (/ (- cnt -1 n) 1.0 cnt)
                         (loop (sub1 n)))))))))

;; Create a sequence of N `play' sequences, where `mid' takes
;; N arguments, each a number between 0.0 and 1.0. Initially, all
;; arguments will be 0.0. The first argument goes from 0.0 to 1.0
;; for the first `play' sequence, and then it stays at 1.0 while
;; the second goes from 0.0 to 1.0 for the second sequence, etc.
(define (play-n #:title [title #f] mid)
  (let ([n (procedure-arity mid)])
    (let loop ([post (vector->list (make-vector n))]
               [pre null])
      (if (null? post)
          (slide #:title title (apply mid pre))
          (begin
            (play #:title title
                  (lambda (n)
                    (apply mid (append pre (list n) (cdr post)))))
            (loop (cdr post) (cons 1.0 pre)))))))


;; ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
;; Animation combinators

;; "Morph" from one pict to another. Use `combine' to align
;; the picts relative to one another. Only the bounding box is
;; actually morphed; the drawing part transitions by fading
;; the original `a' out and the new `b' in. The `n' argument
;; ranges from 0.0 (= `a') to 1.0 (= `b').
(define (fade-pict #:combine [combine cc-superimpose] n a b)
  ;; Combine ghosts of scaled pictures:
  (let ([orig (combine (cellophane a (- 1.0 n))
                       (cellophane b n))])
    (cond
     [(zero? n) (refocus orig a)]
     [(= n 1.0) (refocus orig b)]
     [else
      (let-values ([(atx aty) (ltl-find orig a)]
                   [(abx aby) (rbl-find orig a)]
                   [(btx bty) (ltl-find orig b)]
                   [(bbx bby) (rbl-find orig b)])
        (let ([da (+ aty (* (- bty aty) n))]
              [dd (- (pict-height orig)
                     (+ aby (* (- bby aby) n)))]
              [orig
               ;; Generate intermediate last-pict
               (let ([ae (or (pict-last a) a)]
                     [be (or (pict-last b) b)])
                 (let-values ([(al at) (lt-find orig ae)]
                              [(bl bt) (lt-find orig be)])
                   (let ([ar (+ al (pict-width ae))]
                         [ab (+ at (pict-height ae))]
                         [br (+ bl (pict-width be))]
                         [bb (+ bt (pict-height be))])
                     (let ([atl (+ at (pict-ascent ae))]
                           [abl (- ab (pict-descent ae))]
                           [btl (+ bt (pict-ascent be))]
                           [bbl (- bb (pict-descent be))]
                           [btw (lambda (a b)
                                  (+ a (* (- b a) n)))])
                       (let ([t (btw at bt)]
                             [l (btw al bl)])
                         (let ([b (max t (btw ab bb))]
                               [r (max l (btw ar br))])
                           (let ([tl (max t (min (btw atl btl) b))]
                                 [bl (max t (min (btw abl bbl) b))])
                             (let ([p (blank (- r l) (- b t)
                                             (- tl t) (- b bl))])
                               (use-last (pin-over orig l t p) p)))))))))])
          (let ([p (make-pict (pict-draw orig)
                              (pict-width orig)
                              (pict-height orig)
                              da
                              dd
                              (list (make-child orig 0 0 1 1))
                              #f
                              (pict-last orig))])
            (let ([left (+ atx (* (- btx atx) n))]
                  [right (+ abx (* (- bbx abx) n))])
              (let ([hp (inset p
                               (- left)
                               0
                               (- right (pict-width p))
                               0)])
                (let-values ([(atx aty) (lt-find hp a)]
                             [(abx aby) (lb-find hp a)]
                             [(btx bty) (lt-find hp b)]
                             [(bbx bby) (lb-find hp b)])
                  (let ([top (+ aty (* (- bty aty) n))]
                        [bottom (+ aby (* (- bby aby) n))])
                    (inset hp
                           0
                           (- top)
                           0
                           (- bottom (pict-height hp))))))))))])))

;; Pin `p' into `base', sliding from `p-from' to `p-to'
;; (which are picts within `base') as `n' goes from 0.0 to 1.0.
;; The `p-from' and `p-to' picts are typically ghosts of
;; `p' within `base', but they can be any picts within
;; `base'. The top-left locations of `p-from' and `p-to'
;; determine the placement of the top-left of `p'.
(define (slide-pict base p p-from p-to n)
  (let-values ([(x1 y1) (fail-gracefully (lambda () (lt-find base p-from)))]
               [(x2 y2) (fail-gracefully (lambda () (lt-find base p-to)))])
    (pin-over base
              (+ x1 (* (- x2 x1) n))
              (+ y1 (* (- y2 y1) n))
              p)))

;; Concatenate a sequence of animations
(define (sequence-animations . l)
  (let ([len (length l)])
    (lambda (n)
      (cond
       [(zero? n)
        ((car l) 0.0)]
       [(= n 1.0)
        ((list-ref l (sub1 len)) n)]
       [else
        (let ([pos (inexact->exact (floor (* n len)))])
          ((list-ref l pos) (* len (- n (* pos (/ len))))))]))))

;; Reverse a sequence of animations
(define (reverse-animations . l)
  (let ([s (apply sequence-animations l)])
    (lambda (n)
      (s (- 1 n)))))
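
As a usage sketch (not part of the file): in a slideshow program,
these combinators might be used along the following lines, assuming
the module path slideshow/play for this new file and `t' from
slideshow/base:

  #lang scheme/base
  (require slideshow/base
           slideshow/pict
           slideshow/play)

  (define a (t "Hello"))
  (define b (t "Goodbye"))

  ;; One `play' sequence that cross-fades from `a' to `b'
  ;; as the argument goes from 0.0 to 1.0:
  (play-n #:title "Fade"
          (lambda (n) (fade-pict n a b)))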
@ -279,8 +279,8 @@ pict snip :
    (error 'pict-snip "expected a pict to be the result of each embedded snip, got ~e"
           pict))
(let* ([bm (make-object bitmap%
             (max 1 (inexact->exact (ceiling w)))
             (max 1 (inexact->exact (ceiling h))))]
             (max 1 (add1 (inexact->exact (ceiling w))))
             (max 1 (add1 (inexact->exact (ceiling h)))))]
       [bdc (make-object bitmap-dc% bm)])
  (send bdc clear)
  (send bdc set-smoothing 'aligned)
@ -891,8 +891,8 @@ pict snip :
[pict-height (dynamic-require '(lib "texpict/mrpict.ss") 'pict-height)]
[draw-pict (dynamic-require '(lib "texpict/mrpict.ss") 'draw-pict)]
[bm (make-object bitmap%
      (max 1 (inexact->exact (ceiling (pict-width p))))
      (max 1 (inexact->exact (ceiling (pict-height p)))))]
      (max 1 (add1 (inexact->exact (ceiling (pict-width p)))))
      (max 1 (add1 (inexact->exact (ceiling (pict-height p))))))]
[bdc (make-object bitmap-dc% bm)])
(send bdc clear)
(send bdc set-smoothing 'aligned)
@ -502,7 +502,10 @@
        (linewidth 3 (colorize (pip-arrow-line 50 50 gap-size) "orange"))))
      "without changing the layout"))

(define blue-fish (standard-fish (* 3 gap-size) (* 2 gap-size) 'right "blue" "white"))
(define blue-fish (standard-fish (* 3 gap-size) (* 2 gap-size)
                                 #:direction 'right
                                 #:color "blue"
                                 #:eye-color "white"))
(define plain-file (file-icon (* 2 gap-size) (* 3 gap-size) #t))
(define fish-file-scene (bound-frame
                         (inset (ht-append (* 4 gap-size)
@ -1,78 +0,0 @@
_String Constants_

This library provides the facility for multiple languages in
DrScheme's GUI. The exported syntactic forms and procedures from

  (lib "string-constant.ss" "string-constants")

are:

> (string-constant name) : string

This form returns the string constant named `name'.

> (string-constants name) : (listof string)

This form returns a list of string constants, one for each
language that DrScheme's GUI supports.

> (this-language) : symbol

This form returns the name of the current language.

> (all-languages) : (listof symbol)

This form returns a list of symbols (in the same order as
those returned from string-constants) naming each
language.

> (set-language-pref lang) : void

Sets the language for the next run of DrScheme to
lang, which must be a symbol returned from `all-languages'.
Does not affect the running DrScheme.

============================================================

To add string constants to DrScheme, see the files:

  _english-string-constants.ss_
  _french-string-constants.ss_
  _spanish-string-constants.ss_
  _german-string-constants.ss_
  _danish-string-constants.ss_
  _italian-string-constants.ss_

Each file has the same format. They are each modules
in the "string-constant-lang.ss" language. The body of each
module is a finite mapping table that gives the mapping
from the symbolic name of a string constant to its
translation in the appropriate language.

The english-string-constants.ss file is considered the master
file -- string constants will be set there and translated
into each of the other language files. In addition, the
english-string-constants.ss file should contain hints about
the context of the strings whose symbol name might not be
clear.

============================================================

_PLTSTRINGCONSTANTS_ environment variable
_STRINGCONSTANTS_ environment variable

If either of these environment variables is set, DrScheme
shows you, during startup, which string constants are not
yet defined for each language.

You can also specify which languages you are interested
in. If either environment variable is bound to a symbol (as
interpreted by `read') you see only the corresponding
language's messages. If either one is bound to a list of
symbols (again, as interpreted by `read') you see the
messages for all the languages in the list. If either is
bound to anything else, you see all of the languages.

The PLTSTRINGCONSTANTS environment variable takes precedence
over the STRINGCONSTANTS environment variable.
@ -1 +1,3 @@
 #lang setup/infotab
+
+(define scribblings '(("string-constant.scrbl")))
collects/string-constants/string-constant.scrbl (new file, 88 lines)
@ -0,0 +1,88 @@
#lang scribble/doc
@(require scribble/manual
          (for-label string-constants/string-constant
                     scheme))

@title{@bold{String Constants}: GUI Internationalization}

This library provides the facility for multiple languages in
DrScheme's GUI.

@; ----------------------------------------------------------------------

@section{Using String Constants}
@defmodule[string-constants/string-constant]

@defform[(string-constant name)]{

This form returns the string constant named @scheme[name].}

@defform[(string-constants name)]{

This form returns a list of string constants, one for each language
that DrScheme's GUI supports.}

@defform[(this-language)]{

This form returns the name of the current language as a symbol.}

@defform[(all-languages)]{

This form returns a list of symbols (in the same order as those
returned from @scheme[string-constants]) naming each language.}

@defproc[(set-language-pref [lang symbol?]) void?]{

Sets the language for the next run of DrScheme to @scheme[lang], which
must be a symbol returned from @scheme[all-languages]. Does not affect
the running DrScheme.}

@; ----------------------------------------------------------------------

@section{Adding String Constants}
@defmodule[string-constants/string-constant-lang]

To add string constants to DrScheme, see the files:

@itemize{
  @item{@filepath{english-string-constants.ss}}
  @item{@filepath{french-string-constants.ss}}
  @item{@filepath{spanish-string-constants.ss}}
  @item{@filepath{german-string-constants.ss}}
  @item{@filepath{danish-string-constants.ss}}
  @item{@filepath{italian-string-constants.ss}}}

Each file has the same format. They are each modules in the
@schememodname[string-constants/string-constant-lang] language. The
body of each module is a finite mapping table that gives the mapping
from the symbolic name of a string constant to its translation in the
appropriate language.

The @filepath{english-string-constants.ss} file is considered the
master file; string constants will be set there and translated into
each of the other language files. In addition, the
@filepath{english-string-constants.ss} file should contain hints about
the context of the strings whose symbol name might not be clear.

@; ----------------------------------------------------------------------

@section{Language Environment Variables}

@itemize{
  @item{@indexed-envvar{PLTSTRINGCONSTANTS}}
  @item{@indexed-envvar{STRINGCONSTANTS}}}

If either of these environment variables is set, DrScheme
shows you, during startup, which string constants are not
yet defined for each language.

You can also specify which languages you are interested
in. If either environment variable is bound to a symbol (as
interpreted by @scheme[read]) you see only the corresponding
language's messages. If either one is bound to a list of
symbols (again, as interpreted by @scheme[read]) you see the
messages for all the languages in the list. If either is
bound to anything else, you see all of the languages.

The @envvar{PLTSTRINGCONSTANTS} environment variable takes precedence
over the @envvar{STRINGCONSTANTS} environment variable.
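
For orientation, a minimal usage sketch of the API documented above
(not from the commit; it assumes `cancel' is one of the defined
constant names):

  (require (lib "string-constant.ss" "string-constants"))

  ;; Look up one constant in the currently selected language:
  (string-constant cancel)      ; e.g. "Cancel" when the language is English

  ;; Look up the same constant across every supported language:
  (string-constants cancel)     ; one string per language

  ;; Inspect and change the language preference:
  (this-language)               ; e.g. 'english
  (set-language-pref 'french)   ; takes effect on the next DrScheme run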
@ -1,52 +0,0 @@
_syntax-color_

The syntax-color collection provides the underlying data structures
and some helpful utilities for the color:text% class of the framework
(collects/framework/private/color.ss).

_token-tree.ss_
A splay-tree class specifically geared for the task of on-the-fly
tokenization.

_paren-tree.ss_
Parenthesis matching code built on top of token-tree.

_scheme-lexer.ss_
A lexer for Scheme (including mzscheme extensions) built
specifically for color:text%.

scheme-lexer returns 5 values:

- A string containing the matching text. Block comments and specials
  currently return an empty string. This may change in the future to
  other string or non-string data.

- A symbol in '(error comment sexp-comment white-space constant string
  no-color parenthesis other symbol eof)

- A symbol in '(|(| |)| |[| |]| |{| |}|) or #f

- A number representing the starting position of the match.

- A number representing the ending position of the match.

_default-lexer.ss_
A lexer that only identifies the following: ( ) [ ] { }
It is also built specifically for color:text%.

default-lexer returns 5 values:

- A string containing the matching text. Block specials currently
  return an empty string. This may change in the future to other
  string or non-string data.

- A symbol in '(comment white-space no-color eof)

- A symbol in '(|(| |)| |[| |]| |{| |}|) or #f

- A number representing the starting position of the match.

- A number representing the ending position of the match.
@ -1 +1,3 @@
 #lang setup/infotab
+
+(define scribblings '(("syntax-color.scrbl")))
collects/syntax-color/syntax-color.scrbl (new file, 153 lines)
@ -0,0 +1,153 @@
#lang scribble/doc
@(require scribble/manual
          (for-label syntax-color/token-tree
                     syntax-color/paren-tree
                     syntax-color/scheme-lexer
                     syntax-color/default-lexer
                     framework/framework
                     framework/private/color
                     scheme))

@title{@bold{Syntax Color}: Utilities}

The @filepath{syntax-color} collection provides the underlying data
structures and some helpful utilities for the @scheme[color:text%]
class of the @other-manual['(lib
"scribblings/framework/framework.scrbl")].

@; ----------------------------------------------------------------------

@section{Parenthesis Matching}

@defmodule[syntax-color/paren-tree]

@defclass[paren-tree% object% ()]

Parenthesis matching code built on top of @scheme[token-tree%].

@; ----------------------------------------------------------------------

@section{Scheme Lexer}

@defmodule[syntax-color/scheme-lexer]

@defproc[(scheme-lexer [in input-port?])
         (values (or/c string? eof-object?)
                 symbol?
                 (or/c symbol? false/c)
                 (or/c number? false/c)
                 (or/c number? false/c))]

A lexer for Scheme, including reader extensions (@secref[#:doc '(lib
"scribblings/reference/reference.scrbl")]{Reader_Extension}), built
specifically for @scheme[color:text%].

The @scheme[scheme-lexer] function returns 5 values:

@itemize{
  @item{Either a string containing the matching text or the eof object.
   Block comments and specials currently return an empty string.
   This may change in the future to other string or non-string data.}

  @item{A symbol in @scheme['(error comment sexp-comment
   white-space constant string no-color parenthesis other symbol eof)].}

  @item{A symbol in @scheme['(|(| |)| |[| |]| |{| |}|)] or @scheme[#f].}

  @item{A number representing the starting position of the match (or @scheme[#f] if eof).}

  @item{A number representing the ending position of the match (or @scheme[#f] if eof).}}

@section{Default lexer}
@defmodule[syntax-color/default-lexer]

@defproc[(default-lexer [in input-port?])
         (values (or/c string? eof-object?)
                 symbol?
                 (or/c symbol? false/c)
                 (or/c number? false/c)
                 (or/c number? false/c))]

A lexer that only identifies @litchar{(}, @litchar{)}, @litchar{[},
@litchar{]}, @litchar["{"], and @litchar["}"], built specifically for
@scheme[color:text%].

@scheme[default-lexer] returns 5 values:

@itemize{
  @item{Either a string containing the matching text or the eof object.
   Block specials currently return an empty string.
   This may change in the future to other string or non-string data.}

  @item{A symbol in @scheme['(comment white-space no-color eof)].}

  @item{A symbol in @scheme['(|(| |)| |[| |]| |{| |}|)] or @scheme[#f].}

  @item{A number representing the starting position of the match (or @scheme[#f] if eof).}

  @item{A number representing the ending position of the match (or @scheme[#f] if eof).}}

@; ----------------------------------------------------------------------

@section{Splay Tree for Tokenization}
@defmodule[syntax-color/token-tree]

@defclass[token-tree% object% ()]{

A splay-tree class specifically geared for the task of on-the-fly
tokenization. Instead of keying nodes on values, each node has a
length, and they are found by finding a node that follows a certain
total length of preceding nodes.

FIXME: many methods are not yet documented.

@defconstructor[([len (or/c exact-nonnegative-integer? false/c)]
                 [data any/c])]{
  Creates a token tree with a single element.
}

@defmethod[(get-root) (or/c node? false/c)]{
  Returns the root node in the tree.
}

@defmethod[(search! [key-position natural-number/c]) void?]{
  Splays, setting the root node to be the closest node to
  offset @scheme[key-position] (i.e., making the total length of
  the left tree at least @scheme[key-position], if possible).
}

}

@deftogether[(
@defproc[(node? [v any/c]) boolean?]
@defproc[(node-token-length [n node?]) natural-number/c]
@defproc[(node-token-data [n node?]) any/c]
@defproc[(node-left-subtree-length [n node?]) natural-number/c]
@defproc[(node-left [n node?]) (or/c node? false/c)]
@defproc[(node-right [n node?]) (or/c node? false/c)]
)]{

Functions for working with nodes in a @scheme[token-tree%].}


@defproc[(insert-first! [tree1 (is-a?/c token-tree%)]
                        [tree2 (is-a?/c token-tree%)])
         void?]{

Inserts @scheme[tree1] into @scheme[tree2] as the first thing, setting
@scheme[tree2]'s root to @scheme[#f].}


@defproc[(insert-last! [tree1 (is-a?/c token-tree%)]
                       [tree2 (is-a?/c token-tree%)])
         void?]{

Inserts @scheme[tree1] into @scheme[tree2] as the last thing, setting
@scheme[tree2]'s root to @scheme[#f].}


@defproc[(insert-last-spec! [tree (is-a?/c token-tree%)] [n natural-number/c] [v any/c]) void?]{

Same as @scheme[(insert-last! tree (new token-tree% [length n] [data
v]))]. This optimization is important for the colorer.}
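
As a quick orientation for the lexer protocol documented above, here
is a sketch (not part of the commit) that drains a port with
scheme-lexer and collects (type start end) triples:

  (require syntax-color/scheme-lexer)

  ;; Collect one (type start end) triple per token until eof.
  (define (lex-all str)
    (let ([in (open-input-string str)])
      (let loop ()
        (let-values ([(text type paren start end) (scheme-lexer in)])
          (if (eq? type 'eof)
              '()
              (cons (list type start end) (loop)))))))

  ;; (lex-all "(define x 10)")
  ;; e.g. '((parenthesis 1 2) (symbol 2 8) (white-space 8 9) ...)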
@ -2,7 +2,7 @@
 (require (lib "class.ss"))

 (provide token-tree% insert-first! insert-last! insert-last-spec!
-         node-token-length node-token-data node-left-subtree-length node-left node-right)
+         node? node-token-length node-token-data node-left-subtree-length node-left node-right)

 ;; A tree is
 ;; - #f
@ -1,17 +0,0 @@

Test Box Recovery Tool
----------------------

This tool allows DrScheme v370 and later to read programs created
using v360 and earlier that include test-case boxes.

When opened using this tool, test-case boxes are turned into
`check-expect' forms that work with the "testing.ss" teachpack.

Test boxes' plain-text tests and expected results are converted to
plain-text `check-expect' forms.

If either the test or expected-result expression contains non-text
(e.g., an image), the converted form is a comment box containing a
`check-expect' form. The box should be easy to remove using the
"Uncomment" menu item in DrScheme.
@ -4,3 +4,6 @@
 (define required-core-version "370")
 (define tools (list '("tool.ss")))
 (define tool-names (list "Test Box Recovery"))
+
+(define scribblings '(("test-box-recovery.scrbl")))
+(define doc-categories '(other))
collects/test-box-recovery/test-box-recovery.scrbl (new file, 20 lines)
@ -0,0 +1,20 @@
#lang scribble/doc
@(require scribble/manual
          (for-label htdp/testing))

@title{Test Box Recovery Tool}

The test-box recovery tool allows DrScheme v370 and later to read
programs created using v360 and earlier that include test-case boxes.

When opened using this tool, test-case boxes are turned into
@scheme[check-expect] forms that work with the
@schememodname[htdp/testing] teachpack.

Test boxes' plain-text tests and expected results are converted to
plain-text @scheme[check-expect] forms.

If either the test or expected-result expression contains non-text
(e.g., an image), the converted form is a comment box containing a
@scheme[check-expect] form. The box should be easy to remove using the
@menuitem["Scheme" "Uncomment"] menu item in DrScheme.
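
To illustrate the conversion (an assumed example, not from the
commit): a test box whose test expression is (+ 1 2) and whose
expected result is 3 is recovered as the plain-text form

  (check-expect (+ 1 2) 3)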
@ -643,12 +643,18 @@
        size (* 1.1 size))))

 (define standard-fish
-  (opt-lambda (w h [direction 'left] [c "blue"] [ec #f] [mouth-open? #f])
+  (opt-lambda (w h [direction 'left] [c "blue"] [ec #f] [mouth-open #f])
     (define no-pen (send the-pen-list find-or-create-pen "black" 1 'transparent))
     (define color (if (string? c) (make-object color% c) c))
     (define dark-color (scale-color 0.8 color))
     (define eye-color (and ec (if (string? ec) (make-object color% ec) ec)))
     (define dark-eye-color color)
+    (define mouth-open? (and mouth-open
+                             (or (not (number? mouth-open))
+                                 (not (zero? mouth-open)))))
+    (define mouth-open-amt (if (number? mouth-open)
+                               mouth-open
+                               (if mouth-open 1.0 0.0)))
     (dc (lambda (dc x y)
           (let ([rgn (make-object region% dc)]
                 [old-rgn (send dc get-clipping-region)]
@ -671,8 +677,9 @@
                    (make-object point% w (- (* 1/2 h) dy))
                    (make-object point% (* 1/6 w) (- (* 1/2 h) dy))
                    (make-object point% 0 (if flip?
-                                              (* 1/6 h)
-                                              (* 1/3 h))))
+                                              (* 1/6 mouth-open-amt h)
+                                              (+ (* 1/3 h)
+                                                 (* 1/6 (- 1 mouth-open-amt) h)))))
                   x (+ y dy))
             (send rgn set-rectangle
                   x (+ y dy)
@ -697,6 +704,7 @@
                    (make-object point% (flip-rel (- w i)) (- (* 9/10 h) i)))
                   x y))
             #f #t)

           (set-rgn rgn #f)
           (send dc set-clipping-region rgn)
           (color-series
@ -707,6 +715,7 @@
                  (- (* 6/4 w) (* 2 i)) (- (* 4 h) (* 2 i))))
             #f #t)
           (send dc set-clipping-region old-rgn)

           (set-rgn rgn #t)
           (send dc set-clipping-region rgn)
           (color-series
@ -717,6 +726,16 @@
                  (- (* 6/4 w) (* 2 i)) (- (* 4 h) (* 2 i))))
             #f #t)
           (send dc set-clipping-region old-rgn)
+
+          (when mouth-open?
+            ;; Repaint border, just in case round-off does weird things
+            (send dc set-pen color 1 'solid)
+            (let ([y (+ y (/ h 2))])
+              (send dc draw-line
+                    (+ x (* 1/6 w)) y
+                    (+ x w -6) y))
+            (send dc set-pen no-pen))
+
           (color-series
            dc 4 1
            dark-color color
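
With this change, `standard-fish' accepts an optional `mouth-open'
argument that may be a boolean or a number from 0.0 to 1.0, so the
mouth can be animated. A hypothetical sketch against the positional
signature shown above:

  ;; A fish whose mouth opens as n goes from 0.0 to 1.0.
  (define (talking-fish n)
    (standard-fish 90 60 'right "blue" "white" n))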