The `from` string argument is converted to a regexp and cached. When `from` is
a mutable string, this caching can produce wrong results in subsequent calls
to `string-replace` if the string is mutated. So the string is first converted
to an immutable string, which is used as the key for the cache.
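A minimal sketch of the idea (the cache and helper names here are
illustrative, not the actual implementation):

  ;; hypothetical helper: compiled patterns are cached under an
  ;; immutable copy of `from`, so later mutation of `from` cannot
  ;; cause a stale pattern to be reused
  (define pattern-cache (make-weak-hash))
  (define (cached-pattern from)
    (define key (string->immutable-string from))
    (hash-ref! pattern-cache key
               (lambda () (regexp (regexp-quote key)))))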
When `place` expands, the body of the `place` form is placed into a
`(module* place-body-<n> #f ....)` submodule.
The `place` form previously placed its body in a lifted function,
where the function's exported name was based on
`(current-inexact-milliseconds)`. The generated submodules have
deterministic names, so that compilation is deterministic, and
submodule names don't collide (unlike exported function names) when
multiple `place`-using modules are imported into some other module.
Also, using a submodule avoids the problem that the clock doesn't
change fast enough on Windows.
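For illustration, here is a small module whose `place` body ends up in
such a submodule (the module and function names are just examples):

  ;; the body of this `place` form is lifted into a
  ;; (module* place-body-<n> #f ....) submodule of `worker`
  (module worker racket
    (require racket/place)
    (provide start)
    (define (start)
      (place ch
        (place-channel-put ch 'ready))))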
Those aliases were moved out of `#%kernel` as part of the
deterministic-bytecode changes, but putting them in
`racket/private/pre-base` meant that they weren't included in
`mzscheme` or Pretty Big. The new location is with `let/cc`, which
makes more sense and lets them be picked up by `mzscheme` and Pretty
Big.
`net/url` provides functions for converting both strings
and paths to and from URLs.
`net/url` also includes functions for creating (pure and impure)
network ports. That functionality requires the HTTP client stack,
which is unnecessary when URLs simply need to be parsed for their
"bits".
New library: `net/url-strings` handles `url->string` and `string->url`
(and also the related `path->url` and `url->path` functions). This is
required by `net/url` for compatibility.
`net/url-exception.rkt` is factored out for use by both libraries.
- See also racket/net changes for T&D
url-string.rkt changes requested by mflatt:
- url-strings.rkt is now called url-string.rkt
- identifiers from url-string.rkt are reprovided by url.rkt
  using (all-from-out "url-string.rkt") instead of explicit
  exports
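A small usage sketch of the conversion functions, which remain
available through `net/url`:

  (require net/url)
  ;; string <-> URL structure
  (define u (string->url "http://racket-lang.org/download/"))
  (url->string u)   ; => "http://racket-lang.org/download/"
  ;; path <-> URL (file URLs)
  (url->path (path->url (string->path "/tmp/demo.rkt")))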
Changes:
- Allow unit contracts to import and export the same signature.
- Add "invoke" contracts that will wrap the result of invoking a unit contract,
no wrapping occurs when a body contract is not specified
- Improve error messages
- Support for init-depend clauses in unit contracts.
- Fix documentation to reflect the above
- Overhaul unit-related tests
Handling init-depend clauses in unit contracts is a rather large and somewhat
non-backwards-compatible change. Unit contracts must now
specify at least as many initialization dependencies as the unit value being
contracted, but may include more. These new dependencies are now actually
specified in the unit wrapper so that they will be checked by compound-unit
expressions.
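A minimal sketch of a `unit/c` contract with an `init-depend` clause
(the signatures and unit here are invented for the example):

  (require racket/unit racket/contract)
  (define-signature a^ (a-val))
  (define-signature b^ (b-val))
  ;; the contract must declare at least the dependencies of the unit itself
  (define/contract u
    (unit/c (import (a^ [a-val integer?]))
            (export (b^ [b-val integer?]))
            (init-depend a^))
    (unit (import a^) (export b^)
      (init-depend a^)
      (define b-val (add1 a-val))))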
This commit also adds more information to the first-order-check
error messages. If a unit imports tagged signatures, previously the error
message did not specify which tag was missing from the unit contract. Now
the tag is printed along with the signature name.
Documentation has been edited to reflect the changes to unit/c contracts
made by this commit.
Additionally this commit overhauls all tests for units and unit contracts.
Test cases now actually check that expected error messages are triggered when
checking contract, syntax, and runtime errors. Test forms now expand into uses
of rackunit's check-exn form so only test failures are reported and all tests in
a file are run on every run of the test file.
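For example, a first-order-check failure is now tested with roughly this
pattern (a sketch, not an actual test from the suite):

  (require rackunit racket/unit racket/contract)
  (define-signature t^ (t-val))
  ;; the contract expects an export of t^ that the unit lacks, so the
  ;; first-order check raises a contract error, which the test expects
  (check-exn
   exn:fail:contract?
   (lambda ()
     (contract (unit/c (import) (export t^))
               (unit (import) (export))
               'pos 'neg)))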
Progress toward making the bytecode compiler deterministic, so that a
fresh `make base` always produces exactly the same bytecode from the
same sources. Most changes involve avoiding hash-table order
dependencies and adjusting scope identity. The namespace used to load
a reader extension is also better defined. Plus many other little
changes.
The identity of a scope that is unmarshaled from a bytecode file now
incorporates the hash of the file, and the relative order of scopes is
preserved in a bytecode file. This combination allows compilation to
start with modules that were loaded and compiled in different orders
(including delayed loading of bytecode fragments within one file).
Formerly, a reader extension triggered by `#lang` or `#reader` was
loaded in whatever namespace happened to be current. That's
unpredictable and can pollute a module build at the level of bytecode.
To help make builds deterministic, reader extensions are now loaded in
a root namespace of the current namespace.
Deterministic compilation in general relies on deterministic macros.
The two most common ways for a macro to be non-deterministic are by
using `gensym` (use `generate-temporaries`, instead) and by using an
unsorted hash-table traversal (don't do that).
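For example, a macro that needs a temporary identifier stays deterministic
by using `generate-temporaries` (a small sketch):

  (define-syntax (swap! stx)
    (syntax-case stx ()
      [(_ a b)
       ;; deterministic: no gensym involved
       (with-syntax ([(tmp) (generate-temporaries '(tmp))])
         #'(let ([tmp a])
             (set! a b)
             (set! b tmp)))]))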
At this point, bytecode generation is unlikely to be completely
deterministic, since I uncovered non-determinism mostly by iterating
attempts over the base collections. For now, the intent is not to
provide guarantees outside of the compilation of the base collections
--- but "more deterministic" is likely to be useful in the short run,
and we can improve further in the long run.
Specialize a
  (call-with-immediate-continuation-mark _key (lambda (_arg) _body) _def-val)
call to an internal
  (with-immediate-continuation-mark [_arg (#%immediate _key _def-val)] _body)
form, which avoids a closure allocation and more.
This optimization is useful for contracts, which use
`call-with-immediate-continuation-mark` to avoid redundant
contract checks.
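A sketch of that pattern (the mark key and `do-check` helper here are
made up for illustration):

  (define contract-key (make-continuation-mark-key 'contract-check))
  (define (do-check v) v) ; hypothetical stand-in for the real check
  (define (checked v)
    (call-with-immediate-continuation-mark
     contract-key
     (lambda (mark)
       (if mark
           v ; an immediately enclosing frame already checked `v`
           (with-continuation-mark contract-key #t
             (do-check v))))))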
On OS X, it seems that access() can sometimes fail with EPERM
when checking for execute permission on a file that lacks it.
I've previously seen this result when running as the superuser,
but that's apparently not the only possibility; a long path
may also be relevant.
Re-linking in a new namespace doesn't need the namespace of
compilation.
A "namespac.rktl" test exposed this problem, where the "transfer a
definition of a macro-introduced variable" test could fail if a GC
occurred between compilation in one namespace and evaluation in
another.
When a module is currently installed as bytecode, but its source is
missing and no "info.rkt" specifies that the bytecode should be
preserved without source, then `raco pkg` should not count that
module's bytecode as a conflict (since `raco setup` will remove it).
In case a collection "a" is composed from two places, and in
case the first place has a bytecode file for "x.rkt" while
only the second place has the source of "x.rkt" (probably it
was recently moved), then `raco setup` should delete the
sourceless bytecode so that any dependency on "x.rkt" will
reference the right version.
Nested splicing forms would lead to an "ambiguous binding" error
when the nested forms bind the same name, such as in
  (splicing-let ([a 1])
    (splicing-let ([a 2])
      (define x a)))
The problem is that splicing is implemented by adding a scope to
everything in the form's body, but removing it back off the
identifiers of a definition (so the `x` above ends up with no new
scopes). Meanwhile, a splicing form expands to a set of definitions,
where the locally bound identifier keeps the extra scope (unlike
definitions from the body). A local identifier for a nested splicing
form would then keep the inner scope but lose the outer scope, while
a local identifier from the outer splicing form would keep the outer
scope but not have the inner one --- leading to ambiguity.
The solution in this commit is to annotate a local identifier for a
splicing form with a property that says "intended to be local", so the
nested definition will keep the scope for the outer splicing form as
well as the inner one. It's not clear that this is the right approach,
but it's the best idea I have for now.
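With that change, the example above resolves as expected; a quick
check (sketch):

  (require racket/splicing)
  (splicing-let ([a 1])
    (splicing-let ([a 2])
      (define x a)))
  x ; => 2, the inner binding, with no ambiguity error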
Although `eval-syntax` is not supposed to add the current namespace's
"outer edge" scope, it must add the "inner edge" scope to be consistent
with adding the inner edge to every intermediate expansion (as in
other definition contexts).
In addition, `eval`, `eval-syntax`, `expand`, and `expand-syntax`
did not cooperate properly with `local-expand` on the inner edge.
Some failure paths were missing an update before calling failure
code, and the new failure paths need to unconditionally update the
runstack pointer (because the common stub doesn't know whether the
calling context needs an update).
Generating a use-site scope, instead of a macro-introduction scope,
prevents the scope's presence from triggering a #f result from
`syntax-original?`.
This change mostly reverts 1465ff25fc, which turned out to be a hassle
because it created more cyclic structure.
A simpler strategy is to allow a phase-specific scope to be detached
(perhaps temporarily, due to on-demand loading of bytecode) from its
group; when that's possible, the scope is not reachable from a place
where it can be moved to other syntax objects, so it's ok to be
detached. Debugging output needs to handle that gracefully, though.
Also, in case of broken bytecode, fix up a detached scope if it
does end up in an unexpected place.
Formerly, compiling a definition in one namespace and evaluating it in
another would cause the definition to take place in the original
namespace --- unless the compiled code is marshaled to a byte string
and back. Adjust the "linking" process to redirect the variable
definition and any references to the new namespace. (This is a change
relative to the compiler with the old macro expander.)
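For example, with this change, the following sketch defines `here` in the
namespace doing the evaluation rather than the one used for compilation:

  (define compiled
    (parameterize ([current-namespace (make-base-namespace)])
      (compile '(define here 'from-another-namespace))))
  (define target-ns (make-base-namespace))
  (parameterize ([current-namespace target-ns])
    (eval compiled)
    (eval 'here)) ; => 'from-another-namespace, defined in target-ns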
Also, repair a compiled `require` form along similar lines. (This is
*not* a change relative to the compiler with the old macro expander;
the mismatch is part of the motivation for changing `define`
handling.)
Part of the expansion to handle contracts confused internal and
external names of signature elements. The new macro expander is less
tolerant of the mistake.