More precisely, do this for nested flows with the "refcontent" style.
For instance this Scribble:
@margin-note{Note: This is a note. Let's make it long enough that the
markdown output will have to line-wrap, to make sure the > mark starts
each line properly.}
Will render as this Markdown:
> Note: This is a note. Let's make it long enough that the markdown output
> will have to line-wrap, to make sure the > mark starts each line
> properly.
A site like GitHub.com will render this in a block-quote style
suitable for notes:
> Note: This is a note. Let's make it long enough that the markdown output
> will have to line-wrap, to make sure the > mark starts each line
> properly.
A phantom byte string is a small object that the memory
manager treats as an arbitrary-sized object, where the
size is specified when the phantom byte string is created
or when the size is changed via `set-phantom-bytes!'.
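A minimal sketch of the intended use (sizes arbitrary):

  (define pb (make-phantom-bytes (* 64 1024 1024))) ; tell the GC about ~64 MB held outside its heap
  (set-phantom-bytes! pb 0)                         ; ...and later report that the space was released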
Keep track of whether any Racket-managed subprocesses are pending,
and use waitpid(0, ...) only if there is one, to better cooperate
with an embedding environment.
Also, add a chapter to the "Inside" manual to explain the issues.
Also allow `#:break' and `#:final' in all the `for:' macros.
Unfortunately, the expansion of `#:break' and `#:final' cannot be
typechecked at the moment.
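For reference, the clause forms in question, shown here with the untyped
`for' (the `for:' variants accept the same keywords):

  (for ([i (in-range 10)]
        #:break (= i 5))   ; stop the whole iteration when i reaches 5
    (displayln i))

  (for ([i (in-range 10)]
        #:final (= i 5))   ; run the i = 5 iteration as the last one
    (displayln i))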
Changing `current-url-encode-mode' from 'recommended to 'unreserved
causes `url->string' to encode !, *, ', (, and ) using %, which
can avoid confusing some parsers.
See also https://github.com/plt/racket/pull/198
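A small sketch of the effect (assuming the parameter is reachable via
`net/url'/`net/uri-codec'):

  (require net/url net/uri-codec)
  (parameterize ([current-url-encode-mode 'unreserved])
    (url->string (string->url "http://example.com/a(1)!*")))
  ; the (, ), !, and * in the path come back %-encoded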
The revised protocol for a progress procedure doesn't create
the thread automatically, and it provides an event to indicate
when the progress count changes.
Render Scribble like
@hyperlink["url" "content"]
as Markdown like
[content](url)
Note that this only works for `@hyperlink`. The motivation is to
preserve content the author has explicitly written. (Previously,
`markdown-render.rkt` was discarding this; `text-render.rkt` still
does so.)
This does _not_ attempt to handle everything that `html-render.rkt`
would automatically generate and render as `<a>`. It simply can't --
things like hotlinked Racket keywords in code blocks simply won't work
in Markdown.
Fix how ".." elements in the URL map were handled.
Previously, only a ".." at the beginning of the URL was checked; now it
looks at the entire URL for a path that ultimately leaves the base.
Uses "Github flavored markdown". Specifically, code blocks are opened
using ```scheme so that Github will lex and format them as Scheme code
rather than generic monospace.
Note: I would have used ```racket, but we are still waiting for the
pygments.rb project to pull again from pygments-main -- to which I
contributed a Racket lexer back in August. After pygments.rb pulls,
can update this to use ```racket instead.
This should have been like this all along; I think it can lead to
race conditions with high-priority events. In particular, something
might be pending in the event queue and then the test suite might
queue a high-priority event to check for it, which could happen before
the event that actually does the work that's being checked for!
Allowing them would require support for immutable fxvectors and
flvectors, interning, and more. Since the motivation for reader
support is to make marshaling and unmarshaling easier, allow
them only in `read' mode. Change printing to make them unquotable.
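For example, in `read' mode:

  (read (open-input-string "#fl(1.0 2.0 3.0)")) ; => (flvector 1.0 2.0 3.0)
  (read (open-input-string "#fx(1 2 3)"))       ; => (fxvector 1 2 3)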
More generally, a `splicing-syntax-parameterize' wrapping immediate
compile-time code effectively parameterizes the compile-time code as
well as any macro-triggered compile-time code. This is implemented by
using a compile-time parameter that complements each syntax binding.
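A minimal sketch of the form in question (names hypothetical):

  (require racket/splicing racket/stxparam)

  (define-syntax-parameter verbose?
    (lambda (stx) #'#f))

  (splicing-syntax-parameterize ([verbose? (lambda (stx) #'#t)])
    ;; definitions spliced here -- including compile-time code they
    ;; trigger -- see the adjusted syntax parameter
    (define flag (verbose?)))

  flag ; => #t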
Use `raise-user-error' for `raco pkg ...' errors, so that stack
traces don't print out for external errors. Reformat error messages
generally to match current conventions. Use logging for debugging
output.
The default `raco pkg' mode should work right for a
multiple-version installation (because everything in
Racket should work in a multiple-version installation).
Along the same lines, `raco pkg' should work if the
installation directory is unwriteable. So, the default
mode is user-specific and version-specific.
Use `--shared' or `-s' for user-specific, all-version
installs.
By default, `raco pkg show' now shows packages installed
in all three modes (installation-wide, user- and version-
specific, and user-specific all-version). Use `-i', `-u',
or `-s' to show just one of them.
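For example (package name hypothetical):

  raco pkg install my-pkg      installs user- and version-specific (the default)
  raco pkg install -s my-pkg   installs user-specific, all-version
  raco pkg show -i             shows only installation-wide packages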
For now, "METADATA.rktd" is still recognized as a fallback.
Also, rewrite package source type and name inference,
make ".zip" the default format for `raco pkg create',
and many doc edits.
Now works with the handler argument omitted, in which case
the default handler is used. Note that the default handler
cannot be used in conjunction with the default prompt tag
because it is unsound to do so.
Change `bit-vector-count' to `bit-vector-length', add arguments
to `bit-vector-copy', use `racket/private/vector-wraps' (which
should be moved to a public place) to implement things like
`for/bit-vector'.
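A small sketch of the resulting API (assuming the new `bit-vector-copy'
arguments are a start/end range):

  (require data/bit-vector)
  (define bv (for/bit-vector ([i (in-range 8)]) (even? i)))
  (bit-vector-length bv)   ; => 8 (formerly bit-vector-count)
  (bit-vector-copy bv 0 4) ; copy of just the first four bits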
Handle close parentheses in a smarter way while in
auto-parens mode, and be a little smarter about
inserting brace pairs in general.
In summary:
- Add some "smart-skip" behavior to insert-close-paren,
described in the documentation.
- When auto-parens mode is enabled,
the existing "balance-parens" keybinding invokes
insert-close-paren with a smart-skip argument of
'adjacent
- A new "balance-parens-forward" keybinding invokes
insert-close-paren with a smart-skip argument of
'forward (whether or not auto-parens mode is
enabled)
- Enable basic smart-skip behavior for
strings ("...") and |...| pairs, specifically, typing
a double-quote or bar character when the cursor
immediately precedes one causes the cursor to simply
skip over the existing one
- Tweak auto-insertion of block comment pairs; i.e.
typing hash and a bar results in a properly balanced
#||# pair. Also, when you type a bar character when
the cursor immediately precedes a closing bar and
hash of a comment, then the cursor skips over both
characters (this seems better than having it just
skip over the bar, and then having to introduce a
new keybinding to detect when a hash is typed while
the cursor is between a bar and a hash)
- In strings and line/block comments, auto-parens mode
no longer has any effect (you can still use the M+..
keybindings to force insertion of a particular brace
pair)
- Detect when a character constant is being typed, and
don't insert brace pairs if so; i.e. if the cursor
is immediately after #\ , then typing any open parens,
double quote, or bar, does _not_ result in the
insertion of an open/close pair even in auto-parens
mode
- Add a bunch of tests related to auto-parens, matching
pairs of braces, strings, comments, etc. to
collects/tests/framework/racket.rkt
Changes the implementation of highlight-range so that it
recomputes all of the new locations from the positions only
when on-reflow is called (otherwise computing only the
relevant ones), and makes the on-reflow callback chop itself
up in case there are lots of highlighted ranges, to avoid
tying up the event loop.
Changes searching so that it doesn't necessarily compute
the entire search results in a single event callback
(but also makes it start the computation more aggressively).
Overall, this changes the strategy from one that, for any potentially
long-running callback, just tried to push it off into the future, into
a strategy that tries to avoid long-running callbacks by breaking the
work up into chunks, but starting the first chunk immediately (in a
low-priority callback).
Also, misc other changes to make this work better and generally clean
things up.
For example, the cross-reference information for the
Reference is now broken into about 16 pieces, so that
resolving a cross-reference into the Reference doesn't
require loading all cross-reference information for
the Reference.
Every document is split into two pieces, so that the title
of a document is roughly in its own piece. That way,
re-building the page of all installed documentation can be more
scalable (after some further changes).
- add enqueue-front!
- add queue-filter!
- use the predicates instead of the /c contracts
- make queue-length take constant time
- add some random tests
- note the running times of all of the operations in the docs
- make queues be sequences directly (and use make-do-sequence
to implement in-queue instead of building a list)
- added non-empty-queue? (note the extra hyphen compared
  to the past; this seems better since the function
  wasn't exported before and we already have other
  functions named "non-empty-<something>" but none
  named "nonempty-<something>")
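A small sketch exercising several of these additions:

  (require data/queue)
  (define q (make-queue))
  (enqueue! q 2)
  (enqueue! q 3)
  (enqueue-front! q 1)
  (queue-filter! q odd?)
  (queue-length q)                ; => 2, now constant time
  (non-empty-queue? q)            ; => #t
  (for/list ([x (in-queue q)]) x) ; => '(1 3)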
check.rkt:
Added the actual check-match macro.
test.rkt:
Just a provide statement
check-test.rkt:
7 additional tests for check-match, and a macro to help create tests
check.scrbl:
Documentation and examples for check-match
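For example:

  (require rackunit)
  (check-match (list 1 2 3) (list _ _ 3))
  (check-match (list 1 2 3) (list a b c) (= (+ a b c) 6))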
It was pulling from `scheme/gui/base', instead. The one from `scheme/gui/base'
is now different and still pulls from `scheme/gui/base'.
This could break some programs that accidentally depended on `scheme/gui/base'
exports from `gui-dynamic-require', but it's more likely to fix problems.
The `scheme/base' module had become unreachable from the `mred' module.
While that normally would be a good thing, it led to troublesome
multiple instantiations of `scheme/base' that caused problems for
attaching further modules to the namespace.
inside the same collection so this file can (when other
things aren't too different) be used in a version of Racket
that doesn't generally have the tests
Track fixnum results in the same way as flonum results to enable
unboxing, if that turns out to be useful. The intent of the change,
though, is to support other types in the future, such as "extnums".
The output of `raco decompile' no longer includes `#%in', `#%flonum',
etc., annotations, which are mostly obvious and difficult to
keep in sync with the implementation. A local-binding name now
reflects a known type, however.
The change includes a bug repair for the bytecode compiler that
is independent of the generalization (i.e., the new test case
triggered the old problem using flonums).
The `lazy-require' form expands to `define-runtime-module-path-index',
which doesn't work right at phase levels other than 0. Work around the
problem by generating a submodule to hold the
`define-runtime-module-path-index' form.
This repair fixes `raco exe' on certain uses of `match', which in turn
uses `lazy-require' at compile time.
Also, use `register-external-module' to generate appropriate
dependencies on lazily loaded modules.
The JIT was pessimistically using 64-bit jumps for long branches
or any jump between code that is allocated at different times.
Normally, though, code allocation stays within the same 32-bit
range of the heap, so stick to 32-bit jumps until forced by
allocation addresses to use 64-bit jump targets.
Commit ffe45ecce had introduced a regression with some
polymorphic functions imported between typed modules due to
miscommunicated variance information.
correct-xexpr?. Inverted the logic and replaced the
continuation-passing style with simpler test-for-error logic. Also
corrected a typo in the attribute symbol checker that could otherwise lead
to a contract error (taking the cadr of a non-cadrable value).
I started with tabs that are not at the beginning of lines, and in
several places I did further cleanups.
If you're worried about knowing who wrote some code (for example, if you
get to this commit in "git blame"), note that you can use the "-w"
flag in many git commands to ignore whitespace. For example, to see
per-line authors, use "git blame -w <file>". Another example: to see
the (*much* smaller) non-whitespace changes in this (or any other)
commit, use "git log -p -w -1 <sha1>".
In `(if (pair? x) E1 E2)', convert `(car x)' in E1 to
`(unsafe-car x)', and similarly for `(cdr x)'. Also,
`(begin (car x) (cdr x))' converts to `(begin (car x)
(unsafe-cdr x))' since `(car x)' implies a `pair?' test
on `x'.
typing them in, in the module language test suite.
This speeds it up, going from 140 to 105 seconds on my (Mac)
machine. (DrDr was taking 240 or so seconds, though.)
Lift the restriction that an editor cannot change its revision number during reading.
This restriction was enforced only for editors that have non-string-snip%
snips. The restriction was in place because the
implementation strategy was to chain through the snips in the editor
using (send snip next), and that isn't safe if the revision number
changes.
The lifting of the restriction is implemented by tracking the position
in the editor where the last snip ended and, if the revision number
changes, starting over trying to get a snip from that position. This
has the effect that, if the revision number never changes, the code
should behave the same as it did before (so hopefully any new
bugs I've introduced in this commit will only show up if the old
implementation would have raised an error).
Also, exploit the lifting of this restriction in the colorer so it
doesn't have to restart the port during the coloring that happens
along with the parsing.
Shape information allows the linker to check the importing
module's compile-time expectation against the run-time
value of its imports. The JIT, in turn, can rely on that
checking to better inline structure-type predicates, etc.,
and to more directly call JIT-generated code across
module boundaries.
In addition to checking the "shape" of an import, the import's
JITted vs. non-JITted state must be consistent. To prevent shifts
in JIT state, the `eval-jit-enabled' parameter is now restricted
in its effect to top-level bindings.
so that it waits until online check syntax actually
finishes (otherwise, there actually is a leak;
the link is broken when the message comes back from the
other place)
This tracking allows the compiler to treat structure sub-type
declarations as generating constant results, and it also allows
the compiler to recognize applications of a constructor or
predicate as functional.
The JIT takes advantage of known-constant bindings to avoid the
check that a variable is still bound to a structure predicate,
selector, or mutator; that makes the code short enough to really
inline. The inlined version takes about half the time of the
indirect version.
The compiler does not yet track bindings precisely enough to
recognize constants for sub-type declarations.
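A sketch of the kind of code that benefits (struct and function names
hypothetical):

  (struct posn (x y))   ; module-level declaration: its bindings are recognized as constants
  (define (norm p)
    (if (posn? p)       ; predicate and selector calls can now be inlined without
        (sqrt (+ (* (posn-x p) (posn-x p))     ; re-checking that the binding still
                 (* (posn-y p) (posn-y p))))   ; refers to a structure operation
        0))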
This fixes several issues:
- `Parameter` generates impersonator contracts correctly
- `Any` handling now copies immutable data when possible
- `Any` now recognizes more atomic base types
Merge to 5.3.1.
This reverts commit d39780a130.
Matthew says this test is really about TCP, so it should not be
changed. Although perhaps we can use a more basic TCP test to check if
this should be done.
Turn use of a finalized ffi callout into a reported error,
instead of a crash. Clarify the existence of the finalizer
in the docs. Fix error logging of the finalizer thread.
Merge to v5.3.1
Bytecode changes in two small ways to help the validator:
* a cross-module variable reference preserves the compiler's
annotation on whether the reference is constant, fixed, or other
* lifted procedures now appear in the module body just before the
definitions that use them, instead of at the beginning of the
module body
The new argument gets to chaperone/impersonate a guard at
the prompt, and it is applied when the continuation is applied ---
based on a wrapper on the prompt tag of the continuation (as opposed to
the prompt tag of the prompt).
The new argument gets to filter results that come from a
non-composable continuation that replaces one delimited
by a prompt using the chaperoned/impersonated prompt tag.
When the JIT guesses that an identifier is bound to a
structure predicate, getter, setter, etc., but that guess
turns out to be wrong, and the call is in a tail position,
then preserve tail-call behavior.
(Changes include some setup to inline structure constructors.)
The contract now has two major differences:
- It raises an error when it would have to wrap.
- It uses chaperones to delay errors as long as possible
In general, using `Any` as a type when exporting to untyped
code will now just work, unless the untyped code tries to
communicate values back to the typed side, in which case an
immediate error will be raised.
Much of the implementation comes from the membrane design
from [Strickland et al, OOPSLA 2012].
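A sketch of the behavior (module and binding names hypothetical):

  (module producer typed/racket
    (: b Any)
    (define b (box 1))
    (provide b))

  (require 'producer)
  (unbox b)      ; reading the value works
  (set-box! b 2) ; error: untyped code sending a value back through `Any'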
raw-module-path inside of a phaseless-spec (see
the #%require docs for the description of these).
Also, Rackety
in conjunction with commit 9047427 (and an earlier
commit in those files/dirs), this commit:
closes PR 7815
closes PR 10455
closes PR 10788
(the way things currently stand, check syntax needs more information
from the fully expanded form, but at least now it has a better chance
to actually use that information, if it were there ...)
related to PR 7815
related to PR 10455
related to PR 10788
An attempt to detect a submodule could trigger the original module
name resolver when the would-be enclosing module would be handled
by the embedding-specific resolver. When a submodule is not found
but its would-be enclosing module is embedded, then assume that
the default resolver wouldn't find the submodule, either --- and
therefore avoid a potential "collection not found" error.
Also, add 'lsquo as allowed content.
Omitting the ` conversion in the first place was over-conservative.
There's a backward-compatibility issue with this addition (i.e., a
document might contain a backquote in a decoded context that is
meant to be rendered as a backquote), but the potential problems
seem minor.
The `slideshow/code-pict' library is the same as `slideshow/code', but
it works in non-GUI settings. Only the `slideshow/code' library connects
the code font size to `current-font-size', though.
The `code' macro, `define-code', etc., now support "code transformers",
which are syntax bindings that trigger otherwise-unescaped transformations
in the code to typeset (which can make the code easier to read and
friendlier to auto-indentation).
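For example, something like the following should now work without a GUI
connection:

  (require slideshow/code-pict)
  (code (define (f x) (* x x))) ; produces a pict of typeset code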
Other major changes:
- pg code now uses only binary format
- pg timestamptz now always UTC (tz = 0), added doc section
- added contracts to most pg "can't-convert" errors
Support for break clauses complicates expansion to `for/fold/derived';
a new `syntax/for-body' library provides a helper for macros that need
to split a `for'-style body into a prefix part and wrappable part.
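A sketch of how a macro might use the helper (the `for/max' form is
hypothetical):

  (require (for-syntax racket/base syntax/for-body))

  (define-syntax (for/max stx)
    (syntax-case stx ()
      [(_ clauses body ...)
       (with-syntax ([original stx]
                     [((pre-body ...) (post-body ...))
                      (split-for-body stx #'(body ...))])
         #'(for/fold/derived original ([best -inf.0]) clauses
             pre-body ...
             (max best (let () post-body ...))))]))

  (for/max ([i '(3 1 4 1 5 9 2 6)]) i) ; => 9.0 (the -inf.0 seed makes it inexact)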
Allows the use of `in-generator' to produce multiple values in a
position other than immediately within `for' (where the arity
can be inferred).
Closes PR 11662
The new parameter (and supporting environment variables and
command-line flags) can redirect bytecode lookup to a tree other than
the one where a source file resides, so that sources and generated
compiled files can be kept separate. It also supports storing
bytecode files in a version-specific location (either with
the source or elsewhere).
The `make-log-receiver' function now includes a logger-name
filter. This filter is implemented at a low enough level that
it affects `log-level?' tests that check whether a log message
needs to be constructed at all.
The -W and -L flags and PLTSTDERR and PLTSYSLOG environment variables
support filters of the form "<level> <level>@<name> ...", where
<level>@<name> specializes filtering of events for a logger whose
name matches <name> to show <level> and higher.
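A sketch of the new filtering on the Racket side (logger name hypothetical;
the level/name pairing shown is my reading of the API):

  (define lr
    (make-log-receiver (current-logger)
                       'debug 'my-app ; 'debug and higher for the 'my-app name
                       'error))       ; 'error and higher for everything else
  (sync lr) ; blocks until a matching event arrives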
The old `cast' didn't work right for a mismatch between
a pointer's GCableness and the source or target types, and
it didn't work right for a GCable pointer with a non-zero
offset. While those pitfalls were documented, the first
of them definitely has been a source of bugs in code that
I wrote.
Also added `cpointer-gcable?'
Add `file-position*', which can return #f instead of raising
an exception when a port's position is unknown. Change
`make-input-port' and `make-output-port' to accept more
kinds of values as the initial position.
These changes make it possible to synchronize a port's
position with a `port-commit-peeked' action. It's ugly,
which I think reflects something broken about position
tracking in the port protocol (which seems difficult to fix
without breaking compatibility).
Providing a port instead of a reading or writing procedure
redirects the read/write to the specified port. This shortcut
is kind of a hack, but the run-time system can easily streamline
the redirection when it's exposed this way.
Using the new redirection feature reduces overhead in
`with-output-to-bytes' and `pretty-print'.
We can't disallow the creation of bad mutators without breaking
old code, but we can prevent the JIT from treating them like
good ones.
Closes PR 13062
Stream generic operations stopped working for lists
since the operations used only the generic dispatcher
instead of the real generic functions.
(Moral of this story: write more tests)
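For example, these should work again:

  (require racket/stream)
  (stream-first (list 1 2 3))               ; => 1
  (stream->list (stream-rest (list 1 2 3))) ; => '(2 3)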
that it drops from the expansion (like define/public) by
adding them to the origin syntax property (and sometimes
to disappeared-use; see the add-decl-props function
for details on those that aren't in the origin property).
This means that Check Syntax will now pick them up,
so they'll show up in the blue boxes in DrRacket.
Thanks, Matthew, for some helpful advice and
comments on an initial version of this commit.
Generalize splitting of `(let-values ([(x ...) (values e ...)]) ....)'
to `(let ([x e] ...) ....)' for any `e', since it's always equivalent.
Right?
(The old requirements on the `e's seem to be needed only for
`letrec-values' splitting and maybe mutable variables.)
Treat unsafe functional operations (which never raise an
exception) as omitable, which means that simple `let-values'
combinations can be split into `let' bindings, etc.
Basically, Racklog (and all versions of schelog) implement ! by
causing the failure continuation of the entire relation to be
returned. They did not also undo the unification performed by the
relation.
However, it is not easy to separate out un-doing the local changes,
because the unification just returns a failure continuation too. I had
to call that failure continuation but use state to communicate to its
target that the next clause should not be visited.
I don't know if this is correct. My test suite contains a lot of cut
tests that still pass. Erik's test passes too. But I'm not confident
that this really works.
Internally, there's a `prop:method-arity-error' property that is
used for keyword-accepting methods. The same thing could be
accomplished with `procedure->method', but the new property avoids
a wrapper. It might be nice to expose the property from `racket/base',
but that creates trouble for generating arity errors for keyword-
requiring procedures (i.e., when such a procedure is wrapped), so
keep it private for now.
Closes PR 12982
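A sketch of the internal pattern being described (struct name hypothetical):

  (struct wrapped-method (proc)
    #:property prop:procedure (struct-field-index proc)
    #:property prop:method-arity-error #t)
  ;; arity errors for instances omit the leading "self" argument,
  ;; like `procedure->method', but without an extra wrapper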