Allowing them would require support for immutable fxvectors and
flvectors, interning, and more. Since the motivation for reader
support is to make marshaling and unmarshaling easier, allow
them only in `read' mode. Change printing to make them unquotable.
More generally, a `splicing-syntax-parameterize' wrapping immediate
compile-time code effectively parameterizes the compile-time code as
well as any macro-triggered compile-time code. This is implemented by
using a compile-time parameter that complements each syntax binding.
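For example (a minimal sketch of the new behavior; the syntax parameter
`verbose-mode?' is hypothetical), compile-time code written directly
inside the splicing form now sees the parameterization:

  #lang racket
  (require racket/splicing racket/stxparam
           (for-syntax racket/base racket/stxparam))

  ;; a compile-time flag, defaulting to #f
  (define-syntax-parameter verbose-mode? #f)

  (splicing-syntax-parameterize ([verbose-mode? #t])
    ;; immediate compile-time code, not just macro-triggered
    ;; compile-time code, observes the adjusted value:
    (begin-for-syntax
      (printf "verbose? ~a\n" (syntax-parameter-value #'verbose-mode?))))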
Use `raise-user-error' for `raco pkg ...' errors, so that stack
traces don't print out for external errors. Reformat error messages
generally to match current conventions. Use logging for debugging
output.
The default `raco pkg' mode should work right for a
multiple-version installation (because everything in
Racket should work in a multiple-version installation).
Along the same lines, `raco pkg' should work if the
installation directory is unwritable. So, the default
mode is user-specific and version-specific.
Use `--shared' or `-s' for user-specific, all-version
installs.
By default, `raco pkg show' now shows packages installed
in all three modes (installation-wide, user- and version-
specific, and user-specific all-version). Use `-i', `-u',
or `-s' to show just one of them.
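For example (a sketch using the flags described above; the package
name is made up):

  raco pkg install --shared my-pkg  # user-specific, all-version install
  raco pkg show -s                  # show only user-specific, all-version packages
  raco pkg show                     # show packages from all three modes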
For now, "METADATA.rktd" is still recognized as a fallback.
Also, rewrite package source type and name inference,
make ".zip" the default format for `raco pkg create',
and many doc edits.
Now works with the handler argument omitted, in which case
the default handler is used. Note that the default handler
cannot be used in conjunction with the default prompt tag
because it is unsound to do so.
Change `bit-vector-count' to `bit-vector-length', add arguments
to `bit-vector-copy', use `racket/private/vector-wraps' (which
should be moved to a public place) to implement things like
`for/bit-vector'.
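A small sketch of the resulting API (the start/end arguments to
`bit-vector-copy' are an assumption about what was added):

  (require data/bit-vector)
  (define bv (for/bit-vector ([i (in-range 8)]) (even? i)))
  (bit-vector-length bv)   ; was `bit-vector-count'
  (bit-vector-ref bv 2)
  (bit-vector-copy bv 2 6) ; copy a sub-range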
Handle close parentheses in a smarter way while in
auto-parens mode and be a little smarter about
inserting brace pairs in general.
In summary:
- Add some "smart-skip" behavior to insert-close-paren,
described in the documentation.
- When auto-parens mode is enabled,
the existing "balance-parens" keybinding invokes
insert-close-paren with a smart-skip argument of
'adjacent
- A new "balance-parens-forward" keybinding invokes
insert-close-paren with a smart-skip argument of
'forward (whether or not auto-parens mode is
enabled)
- Enable basic smart-skip behavior for
strings ("...") and |...| pairs, specifically, typing
a double-quote or bar character when the cursor
immediately precedes one causes the cursor to simply
skip over the existing one
- Tweak auto-insertion of block comment pairs; i.e.
typing hash and a bar results in a properly balanced
#||# pair. Also, when you type a bar character when
the cursor immediately precedes a closing bar and
hash of a comment, then the cursor skips over both
characters (this seems better than having it just
skip over the bar, and then having to introduce a
new keybinding to detect when a hash is typed while
the cursor is between a bar and a hash)
- In strings and line/block comments, auto-parens mode
no longer has any effect (you can still use the M+..
keybindings to force insertion of a particular brace
pair)
- Detect when a character constant is being typed, and
don't insert brace pairs if so; i.e. if the cursor
is immediately after #\ , then typing any open parens,
double quote, or bar, does _not_ result in the
insertion of an open/close pair even in auto-parens
mode
- Add a bunch of tests related to auto-parens, matching
pairs of braces, strings, comments, etc. to
collects/tests/framework/racket.rkt
Changes the implementation of highlight-range so that it
recomputes all of the new locations from the positions only
when on-reflow is called (otherwise it computes only the
relevant ones), and makes the on-reflow callback chop itself
up into pieces when there are lots of highlighted ranges, to
avoid tying up the event loop.
Changes searching so that it doesn't necessarily compute
all of the search results in a single event callback
(but also makes it start the computation more aggressively)
Overall, this changes the strategy from one that, for any potentially
long-running callback, just tried to push it off into the future, into
a strategy that tries to avoid long-running callbacks by breaking the
work up into chunks, but starting the first chunk immediately (in a
low-priority callback).
Also, misc other changes to make this work better and generally clean
things up.
For example, the cross-reference information for the
Reference is now broken into about 16 pieces, so that
resolving a cross-reference into the Reference doesn't
require loading all cross-reference information for
the Reference.
Every document is split into two pieces, so that the title
of a document is roughly in its own piece. That way,
re-building the page of all installed documentation can be more
scalable (after some further changes).
- add enqueue-front!
- add queue-filter!
- use the predicates instead of the /c contracts
- make queue-length take constant time
- add some random tests
- note the running times of all of the operations in the docs
- make queues be sequences directly (and use make-do-sequence
to implement in-queue instead of building a list)
- added non-empty-queue? (note the extra hyphen as compared
to the past; this seems better since the function
wasn't exported before and we already have other
functions named "non-empty-<something>" but no
others named "nonempty-<something>")
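A quick sketch of the additions (assuming `queue-filter!' behaves like
`filter', keeping the matching elements):

  (require data/queue)
  (define q (make-queue))
  (enqueue! q 1)
  (enqueue! q 2)
  (enqueue-front! q 0)
  (queue-filter! q even?)
  (queue-length q)          ; now constant time
  (non-empty-queue? q)
  (for/list ([x q]) x)      ; queues are sequences directly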
check.rkt:
Added the actual check-match macro.
test.rkt:
Just a provide statement
check-test.rkt:
7 additional tests for check-match, and a macro to help create tests
check.scrbl:
Documentation and examples for check-match
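For example, the new check can be used like this:

  (require rackunit)
  (check-match (list 1 2 3) (list _ _ 3))
  (check-match (list 1 (vector 'a))
               (list x (vector y))
               (and (odd? x) (symbol? y)))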
It was pulling from `scheme/gui/base', instead. The one from `scheme/gui/base'
is now different and still pulls from `scheme/gui/base'.
This could break some programs that accidentally depended on `scheme/gui/base'
exports from `gui-dynamic-require', but it's more likely to fix problems.
The `scheme/base' module had become unreachable from the `mred' module.
While that normally would be a good thing, it led to troublesome
multiple instantiations of `scheme/base' that caused problems for
attaching further modules to the namespace.
inside the same collection so this file can (when other
things aren't too different) be used in a version of racket
that doesn't generally have the tests
Track fixnum results in the same way as flonum results to enable
unboxing, if that turns out to be useful. The intent of the change,
though, is to support other types in the future, such as "extnums".
The output `raco decompile' no longer includes `#%in', `#%flonum',
etc., annotations, which are mostly obvious and difficult to
keep in sync with the implementation. A local-binding name now
reflects a known type, however.
The change includes a bug repair for the bytecode compiler that
is independent of the generalization (i.e., the new test case
triggered the old problem using flonums).
The `lazy-require' form expands to `define-runtime-module-path-index',
which doesn't work right at phase levels other than 0. Work around the
problem by generating a submodule to hold the
`define-runtime-module-path-index' form.
This repair fixes `raco exe' on certain uses of `match', which in turn
uses `lazy-require' at compile time.
Also, use `register-external-module' to generate appropriate
dependencies on lazily loaded modules.
The JIT was pessimistically using 64-bit jumps for long branches
or any jump between code that is allocated at different times.
Normally, though, code allocation stays within the same 32-bit
range of the heap, so stick to 32-bit jumps until forced by
allocation addresses to use 64-bit jump targets.
Commit ffe45ecce had introduced a regression with some
polymorphic functions imported between typed modules due to
miscommunicated variance information.
correct-xexpr?. Inverted the logic and replaced the
continuation-passing style with simpler test-for-error logic. Also
corrected a typo in the attribute symbol checker that could otherwise
lead to a contract error (taking the cadr of a non-cadrable value).
I started from tabs that are not at the beginning of lines, and in
several places I did further cleanups.
If you're worried about knowing who wrote some code, for example, if you
get to this commit in "git blame", then note that you can use the "-w"
flag in many git commands to ignore whitespace. For example, to see
per-line authors, use "git blame -w <file>". Another example: to see
the (*much* smaller) non-whitespace changes in this (or any other)
commit, use "git log -p -w -1 <sha1>".
In `(if (pair? x) E1 E2)', convert `(car x)' in E1 to
`(unsafe-car x)', and similarly for `(cdr x)'. Also,
`(begin (car x) (cdr x))' converts to `(begin (car x)
(unsafe-cdr x))' since `(car x)' implies a `pair?' test
on `x'.
typing them in, in the module language test suite.
This speeds it up, going from 140 to 105 seconds on my (Mac)
machine. (DrDr was taking 240 or so seconds, though.)
cannot change its revision number during reading
This restriction was enforced only for editors that have
non-string-snip% snips. The restriction was in place because the
implementation strategy was to chain through the snips in the editor
using (send snip next), and that isn't safe if the revision number
changes.
The lifting of the restriction is implemented by tracking the position
in the editor where the last snip ended and, if the revision number
changes, starting over trying to get a snip from that position. This
has the effect that, if the revision number never changes, the code
should behave the same as it was doing before (so hopefully any new
bugs I've introduced in this commit will only show up if the old
implementation would have raised an error)
Also, exploit the lifting of this restriction in the colorer so it
doesn't have to restart the port during the coloring that happens
along with the parsing.
Shape information allows the linker to check the importing
module's compile-time expectation against the run-time
value of its imports. The JIT, in turn, can rely on that
checking to better inline structure-type predicates, etc.,
and to more directly call JIT-generated code across
module boundaries.
In addition to checking the "shape" of an import, the import's
JITted vs. non-JITted state must be consistent. To prevent shifts
in JIT state, the `eval-jit-enabled' parameter is now restricted
in its effect to top-level bindings.
so that it waits until online check syntax actually
finishes (otherwise, there actually is a leak;
the link is broken when the message comes back from the
other place)
This tracking allows the compiler to treat structure sub-type
declarations as generating constant results, and it also allows
the compiler to recognize an application of a constructor or
predicate as functional.
The JIT takes advantage of known-constant bindings to avoid the
check that a variable is still bound to a structure predicate,
selector, or mutator; that makes the code short enough to really
inline. The inlined version takes about half the time of the
indirect version.
The compiler does not yet track bindings precisely enough to
recognize constants for sub-type declarations.
This fixes several issues:
- `Parameter` generates impersonator contracts correctly
- `Any` handling now copies immutable data when possible
- `Any` now recognizes more atomic base types
Merge to 5.3.1.
This reverts commit d39780a130.
Matthew says this test is really about TCP, so it should not be
changed. Although perhaps we can use a more basic TCP test to check if
this should be done.
Turn use of a finalized ffi callout into a reported error,
instead of a crash. Clarify the existence of the finalizer
in the docs. Fix error logging of the finalizer thread.
Merge to v5.3.1
Bytecode changes in two small ways to help the validator:
* a cross-module variable reference preserves the compiler's
annotation on whether the reference is constant, fixed, or other
* lifted procedures now appear in the module body just before the
definitions that use them, instead of at the beginning of the
module body
The new argument gets to chaperone/impersonate a guard at
the prompt, and it is applied when the continuation is applied ---
based on a wrapper on the prompt tag of the continuation (as opposed to
the prompt tag of the prompt).
The new argument gets to filter results that come from a
non-composable continuation that replaces one delimited
by a prompt using the chaperoned/impersonated prompt tag.
When the JIT guesses that an identifier is bound to a
structure predicate, getter, setter, etc., but that guess
turns out to be wrong, and the call is in a tail position,
then preserve tail-call behavior.
(Changes include some setup to inline structure constructors.)
The contract now has two major differences:
- It raises an error when it would have to wrap.
- It uses chaperones to delay errors as long as possible
In general, using `Any` as a type when exporting to untyped
code will now just work, unless the untyped code tries to
communicate values back to the typed side, in which case an
immediate error will be raised.
Much of the implementation comes from the membrane design
from [Strickland et al, OOPSLA 2012].
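A rough sketch of the described behavior (module and binding names
are made up, and the exact error points depend on the contract):

  #lang racket
  (module typed-side typed/racket
    (provide b)
    (: b Any)
    (define b (box 1)))
  (require 'typed-side)
  (unbox b)      ; reading through `Any' just works
  (set-box! b 2) ; communicating a value back should raise an error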
raw-module-path inside of a phaseless-spec (see
the #%require docs for the description of these).
Also, Rackety
in conjunction with commit 9047427 (and an earlier
commit in those files/dirs), this commit:
closes PR 7815
closes PR 10455
closes PR 10788
(the way things currently stand, check syntax needs more information
from the fully expanded form, but at least now it has a better chance
to actually use that information, if it were there ...)
related to PR 7815
related to PR 10455
related to PR 10788
An attempt to detect a submodule could trigger the original module
name resolver when the would-be enclosing module would be handled
by the embedding-specific resolver. When a submodule is not found
but its would-be enclosing module is embedded, then assume that
the default resolver wouldn't find the submodule, either --- and
therefore avoid a potential "collection not found" error.
Also, add 'lsquo as allowed content.
Omitting the ` conversion in the first place was over-conservative.
There's a backward-compatibility issue with this addition (i.e., a
document might contain a backquote in a decoded context that is
meant to be rendered as a backquote), but the potential problems
seem minor.
The `slideshow/code-pict' library is the same as `slideshow/code', but
it works in non-GUI settings. Only the `slideshow/code' library connects
the code font size to `current-font-size', though.
The `code' macro, `define-code', etc., now support "code transformers",
which are syntax bindings that trigger otherwise-unescaped transformations
in the code to typeset (which can make the code easier to read and
friendlier to auto-indentation).
Other major changes:
- pg code now uses only binary format
- pg timestamptz now always UTC (tz = 0), added doc section
- added contracts to most pg "can't-convert" errors
Support for break clauses complicates expansion to `for/fold/derived';
a new `syntax/for-body' library provides a helper for macros that need
to split a `for'-style body into a prefix part and wrappable part.
Allows the use of `in-generator' to produce multiple values in a
position other than immediately within `for' (where the arity
can be inferred).
Closes PR 11662
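A macro can use the new helper roughly like this (the `for/count'
macro is hypothetical):

  (require (for-syntax racket/base syntax/for-body))

  (define-syntax (for/count stx)
    (syntax-case stx ()
      [(_ clauses body ... tail-expr)
       (with-syntax ([original stx]
                     [((pre-body ...) (post-body ...))
                      (split-for-body stx #'(body ... tail-expr))])
         #'(for/fold/derived original ([n 0]) clauses
             pre-body ...
             (if (let () post-body ...) (add1 n) n)))]))

  (for/count ([i (in-range 10)]) (even? i)) ; => 5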
The new parameter (and supporting environment variables and
command-line flags) can direct bytecode lookup to a tree other than
where a source file resides, so that sources and generated
compiled files can be kept separate. It also supports storing
bytecode files in a version-specific location (either with
the source or elsewhere).
The `make-log-receiver' function now includes a logger-name
filter. This filter is implemented at a low enough level that
it affects `log-level?' tests to check whether a log message
needs to be constructed at all.
The -W and -L flags and PLTSTDERR and PLTSYSLOG environment variables
support filters of the form "<level> <level>@<name> ...", where
<level>@<name> specializes filtering of events for a logger whose
name matches <name> to show <level> and higher.
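For example (a sketch; the logger name 'my-app is made up):

  (define my-app-logger (make-logger 'my-app (current-logger)))
  (define r (make-log-receiver (current-logger) 'info 'my-app))
  (log-message my-app-logger 'info "something happened" #f)
  (sync r) ; => a vector of level, message, data, and logger name

and, from the command line, something like

  PLTSTDERR="error info@my-app" racket program.rkt

shows 'info-and-higher events only for loggers named 'my-app (plus
'error and higher for everything else).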
The old `cast' didn't work right for a mismatch between
a pointer's GCableness and the source or target types, and
it didn't work right for a GCable pointer with a non-zero
offset. While those pitfalls were documented, the first
of them definitely has been a source of bugs in code that
I wrote.
Also added `cpointer-gcable?'
Add `file-position*', which can return #f instead of raising
an exception when a port's position is unknown. Change
`make-input-port' and `make-output-port' to accept more
kinds of values as the initial position.
These changes make it possible to synchronize a port's
position with a `port-commit-peeked' action. It's ugly,
which I think reflects something broken about position
tracking in the port protocol (which seems difficult to fix
without breaking compatibility).
Providing a port instead of a reading or writing procedure
redirects the read/write to the specified port. This shortcut
is kind of a hack, but the run-time system can easily streamline
the redirection when it's exposed this way.
Using the new redirection feature reduces overhead in
`with-output-to-bytes' and `pretty-print'.
We can't disallow the creation of bad mutators without breaking
old code, but we can prevent the JIT from treating them like
good ones.
Closes PR 13062
Stream generic operations stopped working for lists
since the operations used only the generic dispatcher
instead of the real generic functions.
(Moral of this story: write more tests)
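For example, these work again:

  (require racket/stream)
  (stream-first '(1 2 3))               ; => 1
  (stream->list (stream-rest '(1 2 3))) ; => '(2 3)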
that it drops from the expansion (like define/public) by
adding them to the origin syntax property (and sometimes
to disappeared-use; see the add-decl-props function
for details on those that aren't in the origin property)
this means that check syntax will now pick them up,
so they'll show up in the blue boxes in DrRacket.
Thanks Matthew, for some helpful advice and
comments on an initial version of the commit.
Generalize splitting of `(let-values ([(x ...) (values e ...)]) ....)'
to `(let ([x e] ...) ....)' for any `e', since it's always equivalent.
Right?
(The old requirements on the `e's seem to be needed only for
`letrec-values' splitting and maybe mutable variables.)
Treat unsafe functional operations (which never raise an
exception) as omitable, which means that simple `let-values'
combinations can be split into `let' bindings, etc.
Basically, Racklog (and all versions of schelog) implement ! by
causing the failure continuation of the entire relation to be
returned. They did not also undo the unification performed by the
relation.
However, it is not easy to separate un-doing the local changes because
the unification just returns a failure continuation too. I had to call
that fail continuation but use state to communicate to its target that
the next clause should not be visited.
I don't know if this is correct. My test suite contains a lot of cut
tests that still pass. Erik's test passes too. But I'm not confident
that this really works.
Internally, there's a `prop:method-arity-error' property that is
used for keyword-accepting methods. The same thing could be
accomplished with `procedure->method', but the new property avoids
a wrapper. It might be nice to expose the property from `racket/base',
but that creates trouble for generating arity errors for keyword-
requiring procedures (i.e., when such a procedure is wrapped), so
keep it private for now.
Closes PR 12982
Split out base-abbrev.rkt so that subtype is not dependent on abbrev.rkt.
Remove unused code in numeric-tower.rkt so that it is now a dependent of
abbrev.rkt, which allows the body of convenience.rkt to be merged back in.
Remove special casing for union.rkt and extraneous subtyping checks.
Remove union-maker.
conventions in 9.2.1 of the reference (although the messages do
not yet do the extra level of indenting when a field is too
long, nor are there any field names ending in ...)
Also, fix the docs for the #:stronger argument to
make-contract, make-chaperone-contract, and make-flat-contract
Specifically, it seems like about 20% of the time (in drdr),
running the program
(let l () (l))
in DrRacket and then clicking the break button results in a state
where DrRacket's focus is not in the definitions window. I can't seem
to make this happen on my own machine and I'm not sure if this is a
race condition in the test suite or a real bug in DrRacket, but it
seems minor enough (given all of the other focus-based testing that is
happening in this (and related) test suites) that I'm just going to
give up on this particular test.
A progress evt from a closed input port must be initially ready,
and the primitive `peek-bytes-avail!' checks a progress evt
before checking whether the port is closed.
These changes resolve a race in `read-bytes-evt' and related evt
constructors.
This is a follow-up to commit ec6f3fd610. We're still
seeing crashes while rendering the "plot" documentation, and this
change seems to make things work on my machine.
Fix checking for a rest argument to a function that
is lifted by closure conversion so that one of its
arguments is a mutable local variable's location.
Also reject bytecode that would pass too many arguments
to a lifted function, since that would trigger an arity
error that might try to use a location as a value.
Merge to v5.3
When a `port-commit-peeked' succeeds, position information should
(appear to) be updated. This patch synchronizes commits and
position information for primitive ports, but synchronizing
them for user ports remains a problem.
Convert
(hash-ref <hash> <key-expr> (lambda () <literal>))
to
(hash-ref <hash> <key-expr> <literal>)
which is useful for making the `case' expansion fit
Typed Racket.
appear in saved wxme format files
also, improve the testing support for testing snip loading
(before this, the testing infrastructure could let one test
"leak" into another one in a way that could mask failures)
please include in release branch
Leave it working in splicing mode. I prefer doing that over always
splicing them, since that would make a less uniform interface, so I
rather keep all options open. There is no longer a `#:nothing' keyword,
which is the main point of this downgrade.
(See mailing list discussion on "no-argument" for the reason.)
and the change to the racket/gui load-handler
(unfortunately, there is still another problem that keeps
the test suite from passing)
please merge to the release branch
When a module is loaded from bytecode and then the value of
`use-compiled-file-paths' changes, an attempt to load a submodule
would fail, because source isn't used if the main module is
already declared, and the bytecode code is not used according to
`use-compiled-file-paths'. Make the bytecode path stick when it
is used once, so that submodule loads succeed, and make it work
even with `namespace-module-attach'.
The module-attach part of this protocol requires a change to the
API of a module name resolver: the notification mode gets two
arguments, instead of one, where the second argument is an
environment.
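A resolver sketch under the new API (wrapping whatever resolver is
currently installed; argument names are illustrative):

  (define orig (current-module-name-resolver))
  (current-module-name-resolver
   (case-lambda
     [(resolved-path namespace)    ; notification: now two arguments
      (orig resolved-path namespace)]
     [(mod-path rel-to stx load?)  ; resolution request (unchanged)
      (orig mod-path rel-to stx load?)]))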
The lambda-lifting transformation needs to iterate to a fixpoint
where each lambda's added arguments and order are known. The
check for whether something changed was formerly just the number
of added arguments, but that's not good enough, because a binding
might get lifted away while another one acquires an extra argument.
The right test is to check the count and original bindings for the
added arguments.
Closes PR 12910
- Allow indexing into a VectorTop, with result `Any` (see the sketch below).
- Don't use special typing rules for applications when the operator
has an annotation or instantiation.
Closes PR 12887.
Closes PR 12888.
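For example, the VectorTop change allows a sketch like this:

  #lang typed/racket
  (: first-of (-> VectorTop Any))
  (define (first-of v)
    (vector-ref v 0))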
The optional arguments for `call-as-current' for `gl-context<%>'
were not implemented, and the locking implementation didn't match
the documentation in other ways.
The libraries moved were:
- mzlib/control => racket/control
- mzlib/date => racket/date
- mzlib/deflate => file/gzip
- mzlib/inflate => file/gunzip
- mzlib/port => racket/port
- mzlib/process => racket/system
- mzlib/runtime-path => racket/runtime-path
- mzlib/shared => racket/shared
- mzlib/unit => racket/unit
- mzlib/unit-exptime => racket/unit-exptime
- mzlib/zip => file/zip
The old modules in mzlib are now pointers to the
new modules. These are all modules that were already
redirected in the documentation.
Added empty-sequence type (prints funny but works polymorphically; will submit bug report)
Loosened type of sequence-andmap (can't mimic andmap's predicate type)
Paths are left as paths, instead of trying to convert them to strings
or byte strings. Submodule path elements should be unquoted -- in the
same form as a `submod' form. All extra parts are submodule path elements,
never module paths or ".".
Each typed module now defines a submodule named `type-decl`.
This module performs the type environment initialization (along
with other environment updates) when invoked. Additionally,
every typed module, when invoked, performs a for-syntax addition
to a list specifying the submodules that need invocation.
This invocation is then performed by the `#%module-begin` from
Typed Racket.
The `type-decl` module always goes at the beginning of the
expanded module, so that it's available at syntax-time for all
the other submodules. This involved adding pre- and post-
syntaxes for the results of typechecking.
This allows significant runtime dependency reduction from the
main `typed/racket` and `typed/racket/base` languages (not yet
complete).
Fixed problems related to sorting, more than two references for
one citation, and "specific" additions like page numbers.
Also, removed a set of parentheses around disambiguated dates
in the bibliography, because I don't think they belong there.
The doc format was confused; for example, square brackets don't mean
optional in the documentation of a syntactic form, but instead mean
literal square brackets.