To make fasl writing as deterministic and portable as possible, always
write +nan.0 and +nan.f with a specific bit pattern.
This choice risks losing information that is potentially useful, but
given the way that Racket treats all NaN encodings as equivalent, that
risk seems low.
For example, `#hasheq()` is `eq?` to `(hasheq)` and `(hash-remove
(hasheq 'x 2) 'x)`. Making the empty hash table unique avoids some
potential and actual inconsistencies between traditional Racket and
Racket CS, such as in machine-independent bytecode.
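A minimal illustration of the uniqueness claimed above (a sketch of the
expected results):
```
(eq? #hasheq() (hasheq))                         ; => #t
(eq? #hasheq() (hash-remove (hasheq 'x 2) 'x))   ; => #t
```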
Move different handling of serialized syntax data to the schemify
layer instead of the expander, so that the result of compiling in
machine-independent form is the same for traditional Racket and Racket
CS.
The `--recompile-only` flag is intended to help detect build
problems, especially for distribution builds, where packages are
supposed to be in built form.
This allows it to cooperate better with Typed Racket, particularly
regarding the `Any` type. The guard and use of `#:authentic` also
check that it's still a singleton in all cases.
Avoid parsing cross-linklet optimization information until it is
needed. This change also avoids a problem with saving hash codes
that are platform-specific.
Instead of just consulting `lib-search-dirs` in the host system's
config during cross-build mode, use `lib-dir` if set to arrive at
the expected default when `lib-search-dirs` is not set.
Handle not-this-platform paths that manage to evade the heuristics for
converting paths to and from relative form. Otherwise, building can go
wrong on Windows when using machine-independent starting files
generated on Unix-like systems.
The `--error-out` and `--error-in` flags are meant to work together to
chain a sequence of `raco setup` steps where one of them might fail,
but other steps should proceed. The last step in that sequence should
use only `--error-in`, so that it exits with failure if any of the
steps failed.
The `both` target of the toplevel makefile uses `--error-out` and
`--error-in` to let a Racket CS build proceed as long as the
traditional Racket build made it to the last `raco setup` step, which
means that it survives package-build errors.
The Chez Scheme fasl format is not machine-independent when record
types are involved, so use the process that serves compilation to also
serve fasl encoding.
In parallel build mode, if attempting to compile a file triggers a
cycle error that is caught and discarded, don't leave behind a
dependency (that is effectively resolved by the error) in the
parallel-worker manager.
It doesn't do anything, but make it a conforming variant of the
identity function. Also, fill in checking for `compile-linklet`,
and correct documentation errors for `compile-linklet` and
`recompile-linklet`.
Makefile and configure refinements, including targets to let the
distro-build package drive a cross-build from scratch. A cross
build on Mac OS for Windows now works, for example.
The intent was never for the data argument to be optional, but a
mistake in traditional Racket's argument dispatch for `log-message`
made it optional in some cases, so the simplest way forward is to make
it consistently optional. Repair traditional Racket to use `#f`
instead of a random value when the data argument is not provided.
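A small sketch of the now-consistent behavior (illustrative; per the
description above, the omitted data argument is logged as `#f`):
```
(define lg (make-logger 'example))
(log-message lg 'info "message with data" 'some-data)
(log-message lg 'info "message without data")   ; data defaults to #f
```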
Add options to load a "plug-in" cross compiler, which should be a Chez
Scheme patch file plus declarations for the built-in libraries. Since
loading a patch file replaces the initial compiler, a separate
cross-compiler process is used to load the plug-in.
Adjust the build process to be able to generate Racket.exe, etc., for
Racket CS using MinGW. Much of this cross-compilation support can work
for building other platforms, too, but some of the details are filled
in only for generating Windows executables.
When `connect` returns an error immediately, save that error instead
of expecting it to be available later via `getsockopt`. That avoids a
problem on TrueOS, for example.
Some parts of the implementation used for comparison were omitted when
allocation operations are not supported (but comparisons don't
allocate). This problem was uncovered by running the "jitinline.rktl"
tests with RacketCGC.
A recent revision to the way modules are instantiated for handling
runtime paths did not work right for modules from source (i.e., no
bytecode available) that have submodules.
Closes #2486
Avoids internal errors (including unsafe behavior) in an example like
```
#lang racket
(begin-for-syntax
  (local-expand
   #'(#%plain-module-begin
      (begin-for-syntax
        (define x 42)))
   'module-begin
   '()))
(begin-for-syntax
  (println x))
```
This example is weird, because it creates an `x` binding that doesn't
survive to the full expansion. Before the repair, the disappearing
binding created trouble for the expanded-to-linklet pass.
The example is weird for a second reason, which is that it uses
`local-expand` in a place where it will be triggered by visiting the
module. It turns out that raising a syntax error at that time (from
`#%plain-module-begin`) did not work correctly due to lazy
instantiation of the expansion context.
Closes #2458
Add `syntax-protect` to some macro expansions, especially macros in
contexts where unsafe operations are imported, where a combination of
`local-expand` and `datum->syntax` could otherwise provide access
to the unsafe bindings absent `syntax-protect`.
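A minimal sketch of the kind of guard involved; the macro and the
particular unsafe operation here are illustrative, not taken from the
actual change:
```
#lang racket/base
(require (for-syntax racket/base)
         racket/unsafe/ops)

;; Tainting the expansion keeps code that dissects it via `local-expand`
;; and `datum->syntax` from reusing the reference to `unsafe-car`.
(define-syntax (unsafe-first stx)
  (syntax-case stx ()
    [(_ e) (syntax-protect #'(unsafe-car e))]))
```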
Inspired by the way the Chez Scheme number parser works, change the
one in the expander to be faster and probably clearer. This improved
performance brings number parsing almost back in line with the v6.12
parser's performance.
The revised parser is faster because it goes through an input string
just once. The new parser is also more complete; it doesn't rely on a
host-system `number->string` (except for dummy extflonums when
extflonums are not supported).
If you're reading the commit history, beware that the note on commit
be19996953 is incorrect about the change to parsing divide-by-zero
errors. (It explains a change that was edited away before merging.)
This commit really does change the behavior, though, again as a better
match for v6.12. Specifically, "/0" (with no hashes) always triggers
divide-by-zero in an otherwise well-formed number, even if `#i` is
used.
Speed up JSON parsing (usually around x4 to x8) by avoiding regexp
matching and using more direct byte and character operations. Along
similar lines, compute parsed numbers directly instead of converting
to a string and then using `string->number`.
The revised reader behaves differently only in the case of a bad input
stream, where it may consume more bytes from the stream than the old
one due to eagerly reading bytes instead of tentatively matching
peeked bytes. Also, a UTF-8 decoding error is just `exn:fail` like
other input-parsing errors, and not `exn:fail:contract`.
Related to PR #2472, marks a few other functions as NORETURN.
Namely:
- scheme_signal_error
- scheme_wrong_count
- scheme_wrong_count_m
- scheme_case_lambda_wrong_count
- scheme_wrong_type
- scheme_wrong_contract
- scheme_wrong_field_type
- scheme_wrong_field_contract
- scheme_arg_mismatch
- scheme_contract_error
- scheme_wrong_return_arity
- scheme_unbound_global
Unfortunately static analysis is done per compilation unit, so
although, for example, scheme_wrong_contract calls scheme_raise_exn
and the latter is already marked NORETURN, the analyzer does not know
this. Therefore we need to manually propagate the NORETURN for each
function declaration.
The unsafe-fd->evt interface is based on unsafe-{file-descriptor,socket}->semaphore.
The main differences are that these events are level-triggered, not edge-triggered, and
they do not cooperate with ports created by unsafe-{file-descriptor,socket}->port.
scheme_raise_exn raises an exception and doesn't return.
Static analysis tools report a huge number of memory-leak problems
that are actually false positives, because the tools are not aware
that the function does not return. Marking it as such aids
further inspection of real problems.
The documentation and implementation were confused about whether \D,
\S, and \W match non-ASCII characters. Now they do. The new regexp
implementation (as used in Racket CS) already matched them.
I understand what the idea is in this file, except this code won't
work like the author expected it to. Variables marked for wiping won't
be wiped unless they are marked as volatile. The compiler will simply
remove the code wiping the variables and issue a warning, which is
what brought me to look into this code in the first place.
Make the slow path faster by reducing input- and output-end
coordination. Also, avoid retaining one end just because the other end
is retained.
This change involves adding an indirection for the fast-path buffers
so that management for both ends of a pipe can be centralized
independent of the ports.
Sort of. This is where we especially take advantage of vtable
flexibility. The methods of the vtable are really closures,
because that's far more convenient for custom ports.
Change the internal port representation to an object-with-vtable
representation. The syntax looks similar to the class system of
`racket/class`, but everything is first-order: no class values, no
mixins, etc. Also, the vtable can contain non-procedures (like #f for
"not supported" or a port to mean a direcirection).
Using objects will make port instances smaller and support a
reorganization to eliminate ad hoc `data`-field extensions. It will
also replace a half-step that was in place for byte input.
Along with the conversion, change the way the fast path for writing
works: When possible, expose a shared buffer and index into that
buffer.
Only byte string input ports are really converted, so far. A
compatibility layer maps the old protocol to the new one, so
conversion can continue piecewise.
Show the compile-time value that is not a procedure. While
this runs some risk of exposing details that are meant
to be private to a macro/language, a macro/language can
use an applicable structure to provide a more specific
error message. Meanwhile, showing the value is likely to
help for someone who needs to debug a macro problem.
When the desired reference is not an advertised commit, then try
pulling just a few commits --- at depth 8, 16, and 32 --- from the
"master" branch to check whether the commit can be found that way. If
not, fall back to the exhaustive search that requires a full download.
This should help with the common case that a package reference into
the Racket repo is a few commits behind the current master branch
(because the package server hasn't scanned the repo recently enough).
It's much faster to discover that the commit is within the first 32,
which it almost always is, than to download the entire repository.
Upgrading an auto install to an explicit install runs into trouble if
the auto install is in a wider scope. It doesn't seem necessary to
promote already-installed packages for migration, anyway.
- Improve performance by using make-apply-contract, lifting, and a
  fast path for dependent flat contracts.
- The positive blame party now consistently means the *macro def*
and the negative party means the *macro use*. The #:arg? argument
controls blame swapping.
Don't make expansion depend on `(system-type 'vm)`, because expansions
should be VM-independent. For example, distribution builds use a single
expansion and finish up from there for different Racket
implementations.
The "extension" module protocol predates the modern FFI and depends on
the C API. Since it's not supported on Racket CS, skip the check for
extension modules.
Skipping the check can reduce load time considerably. We should
consider deprecating the extension protocol for traditional Racket.
Don't defer any too-early variable checks to Chez Scheme, because the
schemify-inserted checks use the right names and include a reference
to the enclosing module.
The `char-numeric?` function was missing some Unicode characters that
have the numeric property, because it was calculated from the wrong
field of UnicodeData.txt.
Stop treating exact 0+1i and 0-1i like the corresponding
inexact values.
Also, stop treating `(atan 0 x)` as exact 0 only when x is
exact. That's consistent with `angle` producing exact 0 for a positive
real number.
The cutoff point for large-magnitude exponents (forcing a +inf.0 or
0.0 result) was wrong for bases below 10, and it did not take into
account the mantissa magnitude for some number forms.
Also, change the parsing of numbers with both `/` and `#` to be more
consistent. A `#` anywhere in the number should trigger inexact
treatment of a 0 in the denominator (so infinity or not-a-number
instead of divide-by-zero), even if `#` is only in the numerator. Meanwhile,
setting `read-decimal-as-inexact` to #f should count `#`s as `0`s and
not trigger inexact treatment.
Infer procedure names based on source locations, and suppress a
procedure name when it has #<void> for its 'inferred-name property.
Threading this information through the Chez Scheme layer involves a
hack, where a name starting with "[" indicates either "no name" or
"inferred from path".
Use "cs/c" to be parallel to the source tree, because making them
different is asking for trouble (e.g., using `configure` without
a separate "build" directory goes wrong).
The rktio/parse.rkt grammar doesn't handle empty argument lists and
was choking on this line, before it even got to my new line adding
rktio_udp_set_receive_buffer.
Fix by following example of using `(void)` instead of `()`. Two notes:
- I forget which variation of C or C++ requires (void) instead of ().
- Strictly speaking, this commit isn't part of the theme of this PR.
If I squash the other commits down to one, maybe I should leave this
separate.
Make a call to a foreign function behave as in traditional Racket: the
arguments are considered reachable in their unwrapped forms until the
foreign function returns.
A missing `unwrap` caused references to structure constructors to be
treated as potentially non-primitive procedures, which significantly
slows down calls to the constructor.
Probably, this started going wrong at a point where original names
were more consistently associated with defined identifiers.
Report source name when accessing a variable too early, and allow
multiple returns (based on continuation capture) for the right-hand
side of a `letrec`.
The repair directly implements `letrec` as needed in terms of `let`
and `set!`, instead of relying on Chez Scheme's `letrec`, unless
right-hand sides are simple enough. Implementing `letrec` that way
risks losing Chez Scheme optimizations, but schemify takes care
of many improvements already.
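A minimal sketch of the flavor of rewrite described above (illustrative
only; the real schemify pass inserts the too-early-use checks mentioned
earlier and uses a proper "undefined" sentinel):
```
(define-syntax-rule (letrec-via-set! ([id rhs] ...) body ...)
  (let ([id #f] ...)      ; placeholder instead of a real "undefined" sentinel
    (set! id rhs) ...
    body ...))

(letrec-via-set! ([even? (lambda (n) (if (zero? n) #t (odd? (sub1 n))))]
                  [odd?  (lambda (n) (if (zero? n) #f (even? (sub1 n))))])
  (even? 10))   ; => #t
```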
Get more of the benefit of traditional Racket's lazy bytecode
unmarshaling by using an explicit `fasl->s-exp` step on the serialized
form of syntax objects. This approach also avoids generating pointless
machine code for constructing the serialized form, effectively using
`fasl->s-exp` as an interpreter. The result is significantly smaller
".zo" files for RacketCS and slightly faater load times.
* Add support for space-efficient vector and arrow contracts.
When an eleventh contract would be applied to a function or vector,
switch representation for the wrapper and try eliding redundant
checks. The resulting value keeps a constant number of
chaperone/impersonator wrappers regardless of the number of contracts
applied to it, and won't run any (provably) redundant checks.
This avoids a pathological case where, e.g., a function crosses a
boundary inside a loop, and gets wrapped N times (or worse, 2^N).
The optimization for function contracts currently only applies for
fixed-arity functions and contracts, and only for functions with known
result-arity of 1. These limitations are not fundamental.
The checks themselves are not as optimized as for regular arrow
contracts yet. (Specifically: arity-specific wrappers and
tail-marks-match support is missing.) Again, not a fundamental
limitation.
Further described in the OOPSLA 2018 Paper: "Collapsible Contracts: Fixing a Pathology of Gradual Typing"
In collaboration with Ben Greenman, Christophe Scholliers, Robby Findler, and Vincent St-Amour.
Recent changes to adapt cm to cross-multi mode also attempted to
improve dependency checking to avoid prematurely committing to
compiling an old dependency, but that improvement was broken.
In multi-cross mode, don't rewrite a machine-independent file
by recompiling it to itself. This shouldn't matter, but not
touching files makes the result cleaner.
When a `[case-]lambda` form's only free variables are at the module
level, the Schemified form is a `[case-]lambda` form whose only free
variables are in an enclosing `lambda` for a linklet. Since those are
not completely closed, to make the allocation pattern consistent with
traditional Racket, Chez Scheme needs a hint to allocate the closures
once per linklet instantiation.
When an ephemeron is accessed through a weak mapping from the same key
that is used in the ephemeron, and when the key is not otherwise
reachable, there can be a race between extracting the value from the
ephemeron and performing a GC that reclaims the key. Avoid that race
by supplying the key back to `ephemeron-value`, which ensures that the
key remains reachable until the value is extracted.
In many cases, supplying the key as the second argument would also
work --- since that argument is used as a replacement value when the
key is inaccessible, but the key can't become inaccessible if it's
pending as a replacement value. A separate optional argument to
`ephemeron-value` seems clearer and more general, though.
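A sketch of the resulting pattern; `intern` and `wrap` are the same
placeholders used in the broken pattern quoted later in this log, and the
argument order follows the description above:
```
(define intern (make-weak-hasheq))
(define (wrap key) (box key))

(define (intern-wrapped key)
  (ephemeron-value
   (hash-ref! intern key
              (lambda () (make-ephemeron key (wrap key))))
   #f     ; result if the ephemeron has already been cleared
   key))  ; supplying the key keeps it reachable until the value is extracted
```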
Avoid retaining namespaces that are created to gather runtime paths.
If expansion generates a lot of instances with a lot of type
information, for example, this repair can save a lot of space.
If the sub-template inside #(...) is unsyntax-splicing instead
of list, produce the template #((~@! . ????)) instead of calling
(datum->syntax o list->vector o syntax->list). Fixes #2402.
Fix some race conditions involving concurrent setup tasks that are
each trying to generate both machine-independent bytecode and
machine-specific bytecode.
add a function to escape any glob wildcards in a path or string
also add a private `glob-element->filename` function so that, e.g., the pattern
`a\*` matches the file named `a*` (previously, the match would fail and
I think it was impossible to match for only `a*`)
Fix the fallback interpreter (which is used for the "outside" of a
module that is too big to compile) so that it's safe-for-space.
This change is unlikely to repair any immediate problems, but space
safety problems are difficult to detect and avoid when the underlying
implementation is not safe-for-space, so fixing the interpreter is
likely worthwhile in the long run.
Module definitions and expressions need to have a prompt around them to
delimit continuation capture, and variable assignment needs to happen at
the right point to ensure that reassignment is guarded and
non-assignment is detected. But avoid the prompt when it's not needed,
such as around function definitions.
Closes #2398
Similar to a255def019, but for side effects potentially
exposed by definition RHS expressions, instead of
expressions not in a definition. Improve that commit and
this one by only forcing variable assignments at non-simple
expressions.
Travis is eliminating its container-based infrastructure
and deprecating the `sudo` keyword.
This commit also updates the example build matrix to use
more recent Racket versions.
Corresponds to https://github.com/greghendershott/travis-racket/pull/29
Discard local-variable names to avoid `gensym` artifacts in the same
way that a more complete compilation would discard the names. This
change does not affect function names, which are preserved through
separate properties.
In most cases, it's more important for `compiler/cm` to reliably
replace a file that might be busy than to make the file update atomic.
To support that kind of use, `call-with-atomic-output-file` implemented
a fairly reliable, multi-step, non-atomic process for replacing a file
on Windows.
For recompilation of bytecode in machine-independent form, however,
`compiler/cm` now really wants to atomically write a replacement
bytecode file. That's not generally possible on Windows (except on
NTFS with transactions, which are discouraged...), but MoveFileEx works
atomically in some cases and it's likely to work for the cases needed
by `compiler/cm`. Probably.
So, add a mode to `call-with-atomic-output-file` to get "more atomic"
updates on Windows. This mode is enabled by a callback that makes the
caller responsible for deciding what to do when the move fails, such
as waiting a while and trying again. And `compiler/cm` now waits a
while and tries again, up to a limit, which should be good enough for
recompilation.
Enable `raco {setup|make}` to build two sets of compiled files: one
set that is suitable for the current machine, and another set that is
suitable for a different machine or for all machines (i.e.,
machine-independent bytecode).
In the long run, this new `raco setup` mode supports cross-compilation
where the build machine and target machine have different bytecode
formats --- unlike the current cross-compilation mode, which relies on
there being a single bytecode format in traditional Racket for all
platforms.
In the short run, the new mode enables the faster creation of
Racket-on-Chez distribution builds. The build server can send out
machine-independent bytecode to client machines while using
machine-specific bytecode for itself to drive the build process.
The new compilation mode relies on a somewhat delicate balance of the
`current-compile-target-machine` and `current-compiled-file-roots`
parameters (as reflected by the `-M` and `-R` command-line flags for
Racket) as well as cross-compilation mode (as enabled by the `-C`
command-line flag).
The 'target-machine result from `system-type` reports the
default value of `current-compile-target-machine`.
Also, fill in pieces to make `setup/cross-system` work
for RacketCS, although cross-compilation is still several
steps away.
The new path for recompiling from machine-independent files
trues to read a ".zo" file without holding the recmopilation
lock and without an `exn:fail:filesystem` handler.
Wait until replacement is more assured before deleting an existing
".zo" file.
Also, don't delete a ".zo" file that is later in the
`current-compiled-file-roots` search path than the one being written.
This refinement supports setting up a search path to try
machine-specific compiled files and fall back to machine-independent
files, for example.
Add `-M`/`--compile-any` to `raco setup`, `raco pkg install`, etc., to
build machine-independent bytecode, which is useful in the process of
building distributions.
The `parallel-lock-client` protocol expects a #f back when a
file was meanwhile compiled by another process. So, don't
just forget about a file after it is compiled, in case there
is still a lock request on the way for that file.
Actually, the machine-independent-to-specific part is trivial. The
hard part was making `compiled-expression-recompile` enable
cross-linklet optimization as it recompiles, since that involves
pulling apart metadata and putting it back together afterward.
The `compile-machine-independent` parameter controls whether `compile`
creates a compiled expression that is written (usually to a ".zo" file)
in a machine-independent form that works for any Racket platform and
virtual machine. The parameter can be set through the
`-M`/`--compile-any` command-line flag or the `PLT_COMPILE_ANY`
environment variable.
Loading machine-independent code is too slow for many purposes, but
separating macro expansion from backend compilation seems likely to be
a piece of the puzzle for cross-compilation and faster distribution
builds.
Converting "invalid memory reference" to an `exn:fail:contract` (which
is the default conversion) hides crashes as success when a test
expects an error.
Also, fix a bug that was hiding as an expected exception.
The Racket and RacketCS implementations had separate copies of
linklet-directory and linklet-bundle reading and writing. Move the
implementation into the expander layer.
The primitive '#%linklet instance now omits directory and bundle
operations and `read-compiled-linklet`. It instead must provide
`write-linklet-bundle-hash`, `read-linklet-bundle-hash`, and
`linklet-virtual-machine-bytes`.
In particular, when there isn't any redundancy detected, then
just make a single call into the projection and create just a single
class.
This seems to help on at least one of the configurations of
dungeon, which completes in about 6 minutes with this commit;
I gave up waiting after 15 minutes with the version of
Racket that didn't have it.
Improves the error message for:
```
(define-syntax (like-lambda stx)
  (syntax-case stx ()
    [(_ e) #'(lambda () e)]))

(like-lambda (define x 1))
```
Based on a report from @pkoronkevich.
When an executable distribution is created, some paths become
unavailable at run time, such as the result of `find-links-file`.
Change the contract on those functions and adjust the implementation
to return `#f` in those cases. This is a backward-compatible change in
the sense that uses that now return `#f` would have crashed before
(although it does shift the blame in that case).
Based on an initial patch by Shu-Hung.
Closes #2352
Source mode was a leftover from early iterations of the expander. A
bootstrapping mode that uses replacement `compile-linklet`, etc.,
turned out better.
For consistency with traditional Racket; this currently matters on
Windows. The Windows implementation of file-truncate should probably
not move the file position as it does, though.
In GUI-application mode (e.g., running GRacket), a console is allocated
on demand if a program tries to use the original ports. Move that
on-demand handling into rktio, where it's simpler and works for
RacketCS.
One more take on the problem addressed by 990e1f1e30. This adjustment
avoids copying properties from the original form to the identifier
that is preserved in 'origin.
The `get-compiled-file-sha1` function assumed that a ".dep" file is
up-to-date when present. That may not be consistent with all uses,
including in `file-stamp-in-paths` as used by DrRacket for "populate
compiled", and an old file can go wrong with the recent ".dep" format
change. Make `get-compiled-file-sha1` at least check the version on
the ".dep" content before trying to use it.
Relevant to #2354
A function that uses `call-with-immediate-continuation-mark` in tail
position should not be flagged as "preserves marks", because the JIT
needs to bump the mark stack if the function is called in non-tail
position.
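A minimal sketch of the shape in question; because the call is in tail
position, the enclosing function must not be treated as mark-preserving:
```
(define (immediate-key-mark)
  ;; tail call: the JIT must still bump the mark stack when
  ;; `immediate-key-mark` itself is called in non-tail position
  (call-with-immediate-continuation-mark 'key values #f))
```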
Closes #2333
The repair is more precisely a repair to xform, which incorrectly
parsed a C function definition that starts "struct" as a struct
declaration. (The function starts "struct" because the return type is
"struct Scheme_Overflow_Jmp *".) Since the function wasn't recognized,
xform didn't convert it to cooperate with the garbage collector.
Closes #2341
Previously, the following program would print "error writing to
stream port" on program exit.
(define cust (make-custodian))
(define out
  (parameterize ((current-custodian cust))
    (open-output-file "test.data" #:exists 'truncate)))
(write-string "This needs flushing...\n" out)
(custodian-shutdown-all cust)
(exit 0)
So far, bytecode for traditional Racket has been kept separate from
RacketCS bytecode by using a different "compiled" subdirectory for
RacketCS. That makes sense for development work to allow the
implementations to coexist, but it creates trouble for packaging and
distributions, and it (hopefully) won't seem necessary in the long
run. Treating the different virtual machines like different versions
seems more generally in line with our current infrastructure.
Rearrange the configure scripts so that it will be possible to build
RacketCS from a source distribution and have it installed in the right
place. Also, when building Racket3m just to bootstrap RacketCS, don't
install Racket3m.
Retains a strong link to a place-channel write end when there's at
least one waiting thread. This is symmetric to keeping a strong link to
the read end when the place-channel queue is non-empty. The change
repairs a problem building documentation with places in `racocs
setup`.
Refines 2ef8d60cc6 to avoid characterizing the failure as a `(-> any)`
contract on `hash-ref`, since `hash-ref` doesn't enforce that contract
in general. Go back to an `exn:fail:contract:arity` error, but keep
the specialization of the error message to clarify that it's from
`hash-ref`. Also, bring RacketCS into sync.
Although a `directory-exists?` check is useful for providing better
error messages, it's fundamentally a race condition, since an external
process can always remove a directory between the check and a use of
the directory. Because of that limitation of `directory-exists?`, we
normally avoid making it part of a contract. This commit adjusts
937aa3cdb1 to follow that convention while preserving the helpful
check and documentation improvements.
Their semantics assume that all directory `path-string?` arguments point
to existing directories in the filesystem, but they do not actually
check, resulting in unhelpful inner exceptions that break the
functions' semantic abstractions.
Fixed by adding appropriate checks.
Test cases included too.
Documentation updated to reflect the requirement for paths to
refer to existing directories.
Also added note that `generate-stripped-directory` does not
compile or render source files.
When a catalog is specified via a file:// URL to a local directory with local
package directories that have been stripped in various modes, `raco pkg install`
will incorrectly error out with an incompatible package content error when
a binary strip mode is selected and with a file-system error when a source
strip mode is selected.
The cause is that the file:// URL's resolved file path is not passed
along to the helper functions in the various code paths handling the different
strip modes; instead, the full original file:// URL is passed along and is
misinterpreted as an actual filesystem path, resulting in the observed failure
modes.
The repair is simple; change the relevant argument so the resolved filesystem path
is used instead of the original file:// URL.
Test cases added to test various combinations of strip modes when using
`raco pkg install` and a local catalog directory.
When a file descriptor cannot be `dup`ed for a place message, the
error message has to be delayed until the partially copied message is
cleaned up. Various problems with the message-serialization code
caused that delay to work incorrectly or incompletely.
Closes #2305
The repair in 4396b841c0 for internal names exposed a problem with the
way `linklet` handled renaming on export, where it mixed up the names
that should be used internally and externally in errors.
Merge to v7.1
Lots of plumbing was in place to preserve the source name (instead of
the symbol generated to avoid collisions for macro-introduced
definitions), but some small pieces were missing.
Closes #2288
When upgrading native-library versions, a still-relevant
patch got lost. The patch corrects a problem with empty
glyphs getting treated as missing glyphs.
Closes racket/pict#42
The pattern
(ephemeron-value
 (hash-ref! intern key
            (lambda ()
              (make-ephemeron key (wrap key)))))
is wrong, because a GC might happen between the time that the
ephemeron is found in the table (or the time that the key was just
added to the table) and the time that `ephemeron-value` is called to
extract the value. If the key is not otherwise accessible, the value
may no longer be in the ephemeron.
If `make cs` is run without specifying a SCHEME_SRC, then make sure
that `configure` and `make` are re-run in the Chez Scheme checkout,
in case it was updated.
A collection can only invoke certain callbacks (e.g., for DrRacket's
GC icon) when the collection is performed in the main thread. Also,
delay posting GC logging events to receivers that cannot work at
interrupt time.
This reverts commit 41fd4f3a5e.
The problems this change was intended to solve can be solved in other
ways, without loosening guarantees about expansion order. See the
discussion in #2154 for more details.
Closes #2154
The process-based build was technically broken with the addition of a
"prefetch" thread (some time back) to improve parallelism, because the
`write`-based implementation of messages did not protect again
interleaving by different threads. The problem turns out to be easier
to expose when running with RacketCS.
Formerly, `--scope-dir` would include only the specified directory in
the search path for already installed packages, etc., which means that
it would only work right as a kind of installation scope that is a
step beyond "installation" on the "user"-to-"installation" spectrum.
The `'pkgs-search-dirs` configuration entry, meanwhile, provides more
control over search ordering in installation scope. Make `--scope-dir`
work more consistently with that search-path configuration.
This change also makes "installation"-scope operations use the search
path more consistently, since some actions used to use the whole
search list while others pruned any prefix before the main
installation directory in the search list.
Add a way to declare an Objective-C method call as blocking in the
sense of the `#:blocking?` argument to `_cprocedure`.
Using `with-blocking-tell` should allow the Cocoa backend for
`racket/gui` to wait for events in the main place without blocking
other places.
Fix various problems with the implementation of places, and let
`processor-count` return the actual number of processors. A parallel
build via `raco setup` seems to work but not scale well.
Instead of defining `rktio_fd_t` to be independent of a `rktio_t`,
which isn't quite true, introduce `rktio_fd_detach` and
`rktio_fd_attach` functions to make a transfer explicit, such as when
a file descriptor goes through a place channel.
This adjustment avoids a corner case in cleaning up a file descriptor
from an abandoned channel, where the finalizer might run in a Chez
Scheme thread that is not associated with any place.
When a single-use function is inlined late enough in the optimization
process, and when the body has the only use of some variable computed
by an expression that can be moved in place of the use in the inlined
function (but not in the non-inlined function), then it's a problem if
the function binding isn't pruned away early enough. Make sure the
binding to the function is marked as unused after the function is
inlined.
This bug was exposed by a recent change to the "dssl2" package.
Implement place channels and messages, and change `place-enabled?` to
claim that places are enabled, but `processor-count` still reports 1.
The implementation of place channels has an interesting use of
ephemerons --- that is, a use that isn't just solving a key-in-value
problem. Using ephemerons solves the problem of forgetting a place
channel and any thread blocked on the read end when there are no
producers on the write end. Along similar lines, when only the write
end is retained (i.e., no readers), the channel data is forgotten and
writes become a no-op. The read end holds a "read key" and references
the channel data through an ephemeron keyed by a "write key"; the
write end similarly holds a "write key" and uses an ephemeron keyed by
the "read key". This use of an epehemeron implements a reachability
"and": retain the place-channel data only if the read end *and* the
write end are both reachable. (Minor point: a read end also holds onto
the "write key" anytime the channel already has data.)
The Rumble layer provides a primitive `fork-place` operation and
`start-place` hook to tie in place initialization from other layers
(especially "io"), but the rest can live in the "thread" layer,
especially to handle place-channel synchronization.
When a stand-alone executable created by `raco exe` needs to load
modules that start with a `#lang` line, there have been various
obstacles to adding the right run-time support via `++lib`. The
`++lang` flag addresses those problems and makes it easy to indicate
that enough should be embedded to support loading modules with a
specified language.
There are problems in the way that various handlers interact for the
"lang/reader.rkt" versus `(submod "." reader)` search path that
converts a language name to a reader. To accommodate the search in a
standalone executable (that does not provide access to collections in
general), the module name resolver must refrain from raising an
exception for a non-existent submodule path that refers to a
non-existent collection.
There's no place-channel communication yet --- just enough of a
conversion to place-local storage to make places possible.
In contrast to traditional Racket, where the expander linklet is
instantiated once per place, the flattened expander linklet is
instantiated only once in RacketCS (because it's inlined into a Chez
Scheme library). The expander therefore needs to keep per-place state
separate, and the same for the thread, io, and regexp layers.
In the expander/thread/io/regexp source, place-local state is put in
an unsafe place-local cell. For traditional Racket, a place-local cell
is just a box. For RacketCS, the layers from thread through expander are
compiled in a way that maps each cell to a fixed index in a vector
that is stored in a virtual register, so the value is roughly two
pointer indirections away (thread context -> virtual register array ->
place-local vector). Multiple Chez Scheme threads in a place, such as
threads to run futures, share the same place-local vector.
Although `place-enabled?` reports #f, `dynamic-place` from `'#%place`
can create a place as a Chez Scheme thread and load a module there.
An extra scope is needed to separate the bindings of a `letrec` from
the `letrec` body, in case a macro moves right-hand-side expressions
to the body.
Michael Ballantyne and William Hatch reported this problem and its
solution in December 2016, but I forgot to add the repair.
Relevant to #2237
The repair in 7176fc4253 did not make the no-discard decision stick
well enough for some cases. Robby found this bug using the Redex model
and random testing, too.
Using a parameter for the current expansion context means that if a
macro spawns a thread, the thread thinks that it's in an expansion
context. Switching to a raw continuation mark avoids that problem.
Along the way, bring Racket and RacketCS more in line by making both
have an internal notion of "root" prompt tag that can be used to get
all continuation marks independent of any prompts. That's not strictly
necessary, since a continuation mark could be combined with a distinct
tag to make the mark always accessible, but it's simpler and more
lightweight to use a root prompt tag.
This improvement removes only a few places where a variable use is
considered possibly too early in the RacketCS implementation, but
the improvement is significant for uses of `input-port?` in "io".
Allow a single argument to comparison functions like `<`, and
support the same arities as the generic version for fixnum and
flonum operations like `fx+` or `fl+`.
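Illustrative checks of the arities described above (the results shown
are what the description implies, not captured output):
```
(require racket/fixnum racket/flonum)

(< 1)          ; => #t  (a single argument is now allowed)
(fx+ 1 2 3)    ; => 6   (same arities as the generic +)
(fl+ 1.0)      ; => 1.0
```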
RacketCS weak `equal?`-based hash tables didn't retain flonums, and
hashing of hash tables was not properly insensitive to the order of
keys.
Racket, meanwhile, didn't limit work consistently for different kinds
of hash tables, and it didn't keep a counter value odd as intended
(but the counter never gets large enough to appear to be a mapped
pointer, anyway).
Racket did not check for a break when escaping from a break-disabled
context to a break-enabled context. RacketCS didn't check in other
cases, either. Fix those various cases.
Commit 1afcbee381 effectively dropped a conditional case that was
marked as "shouldn't happen" --- but it does happen and makes sense.
Adjust the replacement `delete-directory/files` call to accommodate the
case.
Relevant to #2198 and #2236
Part of this change restores a `++direct` that was lost in 98ae91e0ba
for "racket/src/thread" to make the atomicity state a virtual
register. Also make `display` on a byte string more directly call
`write-bytes`. That change restores a 5-10% speed improvement for
`racketcs -cl racket/base`.
Change the expander-performance macro so that it has very low cost if
not enabled at startup. An extra JIT specialization reduces the cost
further, since the enabled state is known by JIT time.
When a child place terminates, the parent's count of the child's
memory use needs to be updated. Until this repair, the
`current-memory-use` function has been reporting an incorrectly large
number after a place terminates.
Fix liveness for "simple" arguments to inlined functions. Fix
handling of non-authentic structure access and mutation to
allow the possibility of a GC.
Also, while we're at it, make the functions produced by curry cooperate
better with other parts of Racket. Namely, make the information reported
by procedure-arity and procedure-keywords accurate, and give procedures
more useful dynamic names.
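A hypothetical check of the improved reporting (the precise results are
assumptions, not output captured from this change):
```
(require racket/function)

(define add1* (curry + 1))
(procedure-arity add1*)   ; now reflects the arity left after supplying one argument
(object-name add1*)       ; now a more descriptive name than a generic placeholder
```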
The mask encoding of an arity is often easier to test and manipulate,
and mask-based functions are sometimes faster than functions that
used the old arity representation (while always being at least as
fast).
Attempting to assign an arity like `(expt 2 100)` to `(lambda x x)`
won't work anymore; it will raise an out-of-memory exception, because
the arity is represented internally as a mask. The arities that cannot
be represented aren't sensible arities, anyway.
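The case described above, as a one-line sketch (after this change, the
expected outcome is an out-of-memory error rather than success):
```
(procedure-reduce-arity (lambda x x) (expt 2 100))
```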
The main performance improvement is in calling a function returned by
`procedure-{reduce-arity,rename}` when the arity is not a single
integer. Calls to functions with > 29 arguments can be worse, but
that seems like a much rarer case.
Forcing JIT code generation through an environment variable is useful
to get a sense of how much machine code is generated for a program.
Setting `PLT_LINKLET_TIMES` causes the overall memory used by
JIT-generated code (including administrative overhead) to be printed on
exit.
In pass 1, syntax class defns (without #:attributes decl) need to parse
patterns to determine exported attributes. But to allow forward refs,
stxclass references are not resolved until pass 2.
Previously, this was done by just reparsing patterns in pass 2. But that
means pattern expanders get expanded twice, and their expansions might
not agree (e.g., generate-temporaries). So instead, parse in pass 1, insert
"fixup" patterns to delay stxclass resolution, and resolve fixups in pass 2.
Complication: a fixup pattern is assumed S, but can change to H in pass 2.
The actual problem is that build-chaperone-contract-property
exported to the user defaults #:exercise and #:generate
to false. This commit changes the default fallback value
in the case where #:generate is not a procedure instead
of changing build-chaperone-contract-property directly
to stay consistent with how contract-struct-exercise
currently does it.
The default values updated in commit ffc5720b5 do
not work for very subtle reasons. In build-contract
in racket/contract/private/prop, the default values
should not accept an extra ctc argument since ctc
is already handled by make-flat-contract. The
default gen procedure should also be (λ (fuel) #f)
instead of (λ (ctc) (λ () #f)) since the latter
would generate false when the generation should
have failed. In build-property, the default procedure
(λ (ctc) (λ (fuel) #f)) is correct and should not
be changed to (λ (ctc) (λ () #f)).
Improve the `hash-ref` error message when the failure result does not
accept zero arguments. (This only changes what the error message says.)
Example:
```(hash-ref #hash() 'a add1)```
Old message:
```
; add1: arity mismatch;
; the expected number of arguments does not match the given number
; expected: 1
; given: 0
```
New message:
```
; hash-ref: contract violation
; expected: (-> any)
; given: #<procedure:add1>
; argument position: 3rd
```
When a `define` that shadows a `require` appears before the `require`,
then the `require` may fail to import other, non-shadowed bindings from the
same `require` spec.
Thanks to Matthias for reporting the problem.
The new argument to `hash-iterate-value` and most other
`...-hash-iterate-...` functions determines a result to be returned in
place of raising a "bad index" exception.
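A sketch of the new argument in use; the argument position is assumed
from the description above:
```
(define h (make-weak-hasheq))
(hash-set! h (gensym) 'value)
(define i (hash-iterate-first h))

;; If the weakly held key has been collected in the meantime, return
;; 'missing instead of raising a "bad index" exception:
(and i (hash-iterate-value h i 'missing))
```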
For most kinds of hash tables, a "bad index" exception will only
happen when the provided index is wrong or when a hash table is
mutated during an iteration. Mutation during iteration is generally a
bad idea, but in the case of a weak hash table, a potential background
mutation by the garbage collector is difficult to suppress or ignore.
Adding an option to control bad-index behavior makes it easier to
write loops that defend against uncooperative tables, including loops
where a hash-table key disappears asynchronously.
Racket's printer was already using this functionality internally, so
the change to `hash-iterate-value` and company mostly exposes existing
functionality.
The `in-hash` form and related sequence constructors similarly support
a bad-index alternate value so iterations can handle that case
explicitly. They do not use the new bad-index support implicitly to
skip missing entries, because that idea does not play well with the
iteration API. A hash-table index can go bad after `in-hash` has
selected the index and determined that it should be used for the next
iteration, and a sequence can't take back that decision.
The `ffi/unsafe/collect-callback` library exposes functionality
formerly only available via Racket's C interface, but implements
it for both Racket and RacketCS.
Make `map` inline again, which involves optimizing away
(variable-reference-from-unsafe? (#%variable-reference))
early enough. Fix post-schemify optimization for `procedure?`
by adding both forms of an import name to the `imports` table.
Fix a problem with inlining operations passed to an inlined
function (as reflected by the addition of `find-known+import`).
At phase 1 and higher, the expander tentatively allows an unbound
identifier so that, for example, `define-for-syntax` can define a
helper function syntactically after a compile-time expression that
uses the helper. While unbound references eventually trigger an error,
the reordering can be confusing, as in the example
#lang racket
(define-syntax (f stx)
  (syntax-parse stx
    [(_ oops) #'ok]))
which complains about `_` when the real problem is that `syntax-parse`
isn't imported.
To provide better errors, `raise-syntax-error` now implicitly extends
an error message to include a list of previously encountered unbound
identifiers in the current compilation unit. That list will be
non-empty only at phase >= 1. With that change, the error message for the
above example is
bad.rkt:5:5: _: wildcard not allowed as an expression
after encountering unbound identifier (which is possibly the real problem):
syntax-parse
in: (_ oops)
....
Closes #2167
The expander's output normally uses a distinct symbolic name for every
distinct binding within a linklet. That property is useful for
consumers like schemify, but it's counterproductive for minimizing the
diff in changes to "startup.inc", since the traditional Racket
compiler doesn't need that guarantee. Use `--local-rename` to generate
"startup.inc", which should make future diffs smaller and more
composable after changes to the expander.
* Fix fasl bug in Racket 7.0 beta
The following program causes racket to error in Racket 7.
(parameterize ([current-write-relative-directory (build-path "/" "a")])
  (s-exp->fasl (build-path "/")))
This bug appears to have been introduced in Racket 7, and not in
Racket 6.x.
* Fix another bug where 'same was put through path-element->bytes
* "/" => (car (root-paths-list))
This is for windows where simply "/" is not a complete path.
* Add similar tests to serialize library.
* Better error message when relative-directory is a bad pair
Before it would give an internal list-tail error, now it returns
a proper bad argument error.
* Better tests, and improved common case
Now that `ffi/unsafe/alloc` deallocations are triggered by a place
exit, it's more likely that an ffi-call lock can be contended during a
place's termination. When that happens, the place cannot cooperate as
usual. Accommodate this rare situation by spinning.
The repair in 49a90ba75e reorders two lines in a way that, in
retrospect, seems worrying. I can't construct an example that goes
wrong, so maybe it's fine, but it seems possible (now or with some
future change) that attempting to visit available modules could lead
to the same attempt in the same thread and therefore a loop.
This commit changes the repair to just always take the lock instead of
fixing up the attempted shortcut. There's a tiny extra cost to always
taking the lock, but that extra cost seems like a better choice
overall.
If interrupts are disabled to prevent thread swaps, then don't react
to `end-atomic` by performing a deferred swap. That might happen
with logger callbacks after a GC where a deferred action was
overlooked due to a rare race condition.
Now that Chez Scheme supports libraries in boot files, make
"racket.so" work when it is converted to a boot file. Loading it as a
boot file can save around 20ms on every major GC, since the
"racket.so" code will be in the static generation.
For now, however, `racketcs` is not set up to use "racket.so" in boot
form.
* Relative paths should still be readable.
The recent PR to enable relative serialization resulted in serialized
objects that weren't actually readable (containing literal path
elements). This PR converts them to bytes.
* Move from serialize to relative path.
Also change path->bytes to path-element->bytes
* Path elements can also be 'up and 'same.
Also merge in the relevant code from racket/fasl.
Use an association list instead of an eq hashtable. This choice is
compatible with assumptions in traditional Racket (i.e., that the
number of mark keys per continuation frame will be small) and cuts
about 1/4 of the time in a benchmark like
(define f
  ((contract (-> (-> integer? integer?))
             (λ () values)
             'pos 'neg)))
(time
 (for ([x (in-range 1000000)])
   (f x)))
* Allow the binder in prop:serialize to be a procedure.
This procedure is evaluated at serialize time, and is useful if
the deserializer is not known during object-type creation time,
but is during serialize time.
* Add docs+tests.
* Add a history note.
* Add option to create relative paths for 'serialize'
Serialize would previously always create complete byte-string paths.
This adds an optional parameter to serialize (#:relative-to) to enable
relative path creation.
Now when deserialize finds a relative path, it resolves it with
respect to `current-load-relative-directory`.
* Moved fasl's path<->relative-path-elements functions
I moved it into the private/relative-path module, so that serialize
can make use of it.
* Update serialize to use relative-path.
* Add tests.
* And update docs.
When expanding a `(module* _name #f ...)` submodule, accumulate all
module scopes on the `#f` and use the `#f` for a reexpansion.
Attempting to start each time from the enclosing module's scope loses
scopes that were generated from previous expansions. One consequence
of losing a scope is that an original definition may appear to be
macro-introduced, and the defined variable may become inaccessible
in the module's namespace.
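The shape of submodule affected by this repair (an illustrative example,
not the reported one):
```
(module m racket/base
  (define x 1)
  (module* probe #f
    ;; after the repair, `x` remains accessible here and in the module's
    ;; namespace even when the enclosing module is expanded more than once
    x))
```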
Generating the code for a `_fun` type takes hundreds to thousands
of times as long as in the traditional Racket VM, so cache results
to reduce the cost.
Further correct the implementation of `variable-reference-constant?`
on bindings to primitive variables.
This repair affects method-call ctype caching in `ffi/unsafe/objc`.
Add some logging there to make problems easier to detect. Also,
add and improve linklet-level performance logging for comparing
the traditional Racket VM to Racket-on-Chez.
When setting up the namespaces that imitate primitive instances,
the "constant" annotation wasn't set. The v6.x expander gets this
wrong, too, for different reasons.
Make schemify inline structure accessors and mutators across linklet
boundaries --- or, in JIT mode, across function boundaries --- by
replacing an accessor or mutator with a `#%record?` test and
`unsafe-struct*-{ref,set!}` operation.
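A rough sketch of that rewrite in plain Racket, with illustrative names
(the actual schemify output uses its own record-type test and computed
field offsets):
```
(require racket/unsafe/ops)

(struct posn (x y))

(define (fast-posn-x p)
  (if (posn? p)                  ; stands in for schemify's `#%record?`-style test
      (unsafe-struct*-ref p 0)   ; direct field access on the fast path
      (posn-x p)))               ; slow path: the ordinary accessor (and its error)
```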
When linklets are compiled in JIT mode and a called procedure is to be
compiled on demand, consult a cache of compiled fragments (by default,
"jit.sqlite" in the addons directory) and either use an existing
compiled fragment or add to the cache after compiling.
Results for this initial implementation suggest that the idea is
workable. With the cache, starting a JIT-mode program a second time is
almost as fast as non-JIT mode (i.e., directly loading machine code).
Some refinements are needed: limiting the size of the JIT-fragment
cache, better contention handling, and better inlining of structure
operations in JIT mode (which may be useful to cross-linklet
optimization in non-JIT mode, too).
To avoid recording absolute paths from a build environment in bytecode
files, the bytecode writer converts paths to relative form based on
`current-write-relative-directory`. For paths that cannot be made
relative in that way and that are in source locations in syntax
objects, the printer in v6.x converted those paths to strings that
drop most of the path.
The v7 expander serializes syntax objects as part of `compile` instead
of `write`, so it can't truncate paths in the traditional way. To help
out the expander, the core `write` function for compiled code now
allows `srcloc` values --- as long as the source field is a path,
string, byte string, symbol, or #f. (Constraining the source field
avoids various problems, including problems that could be created by
cyclic values.) As the core `write` for compiled code prints a path,
it truncates a source path in the traditional way.
The expander doesn't constrain source locations in syntax objects to
have path, string, etc., source values. It can serialize syntax
objects with non-path source values at `compile` time, so there's no
loss of functionality.
The end result is to fix absolute paths that were getting stored in the
bytecode for compiled packages, since that's no good for installing
packages in built form (which happens, for example, during a
distribution build).
Although SHA-1 hashing functions are available from `openssl`
libraries, a fast cryptographic hash is useful for many purposes below
the layer where the OpenSSL library has been opened. And SHA-1 is
reasonably easy to add to rktio.
Meanwhile, provide an equally convenient SHA-2 function to discourage
bad security practices (i.e., using SHA-1 where SHA-2 should be
preferred).
Applying jitify to a linklet now generates fragments of code that
are not nested. The drawback of this approach is that calling
a nested function needs an extra indirection, and the closure
has an extra slot. The advantage is that the fragments can be
separately compiled and fasled, which could enable a cache of
compiled fragments.
Use a `(let ([<name> ....]) <name>)` wrapper to communicate
an 'inferred-name property from correlated objects to
Chez Scheme. This strategy relies on a Chez Scheme patch to
make the wrapper work consistently.
The improvements reported in 74012f8c57 were actually due to a broken
experiment that dropped source locations on application forms, instead
of preserving them in marshaled code. Adjust the expansion pipeline
to do that earlier and intentionally.
The xify pass doesn't help all that much after all, but it's still more
comfortable to be independent of local-variable names.
The xify pass replaces local variable names with `x0`, `x1`, etc.
Using a minimal set of symbols makes the fasled form smaller
and typically take only 60-70% as long to read.
The places test suite included some tests that create lots of places
and don't wait for them, which can lead to an overload of places that
exhausts resources such as file descriptors. Improve the tests, and
also improve a failure behavior from a crash to an error message.
The `place-kill` function sends a message to another place to
terminate, but it didn't wait for that message to take
effect before returning. Worse, it put the place object in a
state that claimed that the place had terminated.
When the first subexpression is complex and the second
is a literal character, the generated JIT code swaps the
argument order, but compilation didn't swap the test for
whether one or the other is a literal character to skip
a run-time test.
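The shape of the affected comparison, for concreteness (illustrative):
```
;; complex first subexpression, literal character second
(define (starts-with-a? s)
  (char=? (string-ref s 0) #\a))
```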
When an embedding application calls `scheme_basic_env` a
second time, it's supposed to reset the main namespace, but
the new expander wasn't reset correctly.
* Fix handling of single-precision infinities and NaN
* Document that non-`hash-eq?` hash tables are accepted by `jsexpr?`.
* Document that the value of `json-null` is recognized using `eq?`
* Use `case` instead of `assoc`.
* Use contracts
Delay reporting of potential problems until an actual problem
is detected. Correct a mismatch between original and renamed
symbols to restore detection of problems.
* When you delete a file in Windows, then the name doesn't go away
until the file is closed in all processes (and background tasks like
search indexing may open files behind your back). Worse, attempting
to create a new file with the same name reports a permission error,
  not a file-exists error; there seems to be no way to tell whether
a permission error was really a file-exists error.
This creates trouble for `make-temporary-file` when files are
created, deleted, and created again quickly enough and when
something like a search indexer runs in the background (which is the
usual Windows configuration). In practice, that kind of collision
happens often for `raco setup` on my machine.
  To compensate, make `make-temporary-file` try up to 32 times on a
  permission error (see the sketch after this list). A collision that
  many times seems extremely unlikely, and it seems ok to delay an
  actual permission error.
  Windows provides a GetTempFileName function from "kernel32.dll" that
must be able to deal with this somehow --- perhaps because it's in
the kernel --- but it doesn't solve the problem for making temporary
directories, hence the 32-tries approach for now.
* When a deleted file's name persists because the file is open in some
process, then a directory containing the file cannot be deleted.
This creates trouble for `delete-directory/files`, since
`delete-file` on a directory's content doesn't necessarily make the
directory empty. In practice, this happens often for package
upgrades on my machine, where the package system wants to delete a
short-lived working space that the indexer is trying to scan.
  To compensate, change `delete-directory/files` to delete a file by
  first moving it to the temporary directory with a fresh name and
  then deleting it there. It may take a while for a file to
disappear from the temporary directory, but meanwhile it's not in
the way of the original enclosing directory.
* When a file is open by any process, it prevents renaming any
ancestor directory of the file.
This creates trouble for the package system, which installs a
package by unpacking it in a temporary place and then moving it by
renaming. The package system also removes a package by renaming it
to a subdirectory of a ".trash" directory. If a background indexer
has a package file open, the move fails. In practice, a move fails
often on my machine when I'm attempting to upgrade many packages.
To compensate, make the package system fall back to copy + delete
if moving fails with a permission error.
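The three compensations look roughly like this (a simplified sketch, not
the actual implementations; `make-name` is a hypothetical thunk that
produces fresh candidate paths):

    #lang racket/base
    (require racket/file)

    ;; 1. Retry file creation up to 32 times on a filesystem error.
    (define (try-create make-name [attempts 32])
      (let loop ([i 1])
        (define path (make-name))
        (with-handlers ([exn:fail:filesystem?
                         (lambda (e)
                           (if (< i attempts) (loop (add1 i)) (raise e)))])
          (close-output-port (open-output-file path))
          path)))

    ;; 2. Delete a file by first renaming it into the temporary directory
    ;; under a fresh name, so the enclosing directory empties right away.
    (define (delete-by-moving-aside file)
      (define aside (build-path (find-system-path 'temp-dir)
                                (symbol->string (gensym "deleted"))))
      (rename-file-or-directory file aside)
      (delete-file aside))

    ;; 3. Move a directory by renaming, falling back to copy + delete.
    (define (move-directory src dest)
      (with-handlers ([exn:fail:filesystem?
                       (lambda (e)
                         (copy-directory/files src dest)
                         (delete-directory/files src))])
        (rename-file-or-directory src dest)))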
Bind variables in a way that allows `local-expand` (with an empty stop
list) to replace a reference to the binding with one that has the same
scopes as the binding.
This repair was motivated by tests in the "rex" package. The
new test added here failed before by finding 'new both times,
but in the "rex" case, the mixup led to the same variable
being imported and exported at the linklet level.
`struct-out` was putting the super-struct's accessors into two parts of a
struct-info: the accessor list and the mutator list.
This commit puts the accessors only in the accessor list and the
mutators in the mutator list.
Using a frame pointer for the ABI of internal helper functions
should make the stack friendlier to tools like `perf`. There
may be a small performance cost, though.
This is Alexis's repair; as she notes, forcing a `post-expansion` context
value in the core `#%module-begin` expander may allow a simplification
in "definition-context.rkt". But it's not immediately obvious, so save
that potential improvement for later.
Relevant to #2118
When `local-expand` receives one or more internal definition contexts,
it would forget about any current post-expansion scopes. That's
particularly a problem in a 'module-begin expansion context, where the
post-expansion scope ensures that any bindings are suitably
phase-specific.
Closes #2115
This is similar to the recent change to functions with optional
arguments. Using `unsafe-undefined` instead of a gensym makes it easier
to avoid the check for a missing argument.
When the separator is a string, these functions construct a regexp
that is cached to make repeated calls faster. But when the string
is mutated, the regexp must be recalculated.
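For example, using `string-split` as a representative of the affected
functions:

    #lang racket/base
    (require racket/string)
    ;; A mutable separator string: the cached regexp derived from it must
    ;; be recomputed after the string is mutated.
    (define sep (string-copy ", "))
    (string-split "a, b, c" sep)   ; => '("a" "b" "c")
    (string-set! sep 0 #\;)
    (string-split "a; b; c" sep)   ; must not reuse the stale cached regexp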
Commit 32b256886e adds shifts in one place where it shouldn't;
the "determinsitic-zo" test exposed the problem.
Also, avoid adding shifts that will have no effect, which avoids
accumulating useless shifts in some top-level contexts.
Various parts of the expander, including `local-expand`, always
flipped the use-site scope when flipping an introduction scope. Only
`syntax-local-introduce` should flip both of them, though.
Closes #2112
When expanding in a namespace for a module unmarshaled from ".zo"
form, a scope corresponding to the module's "inside edge" is added to
every expansion. Before this repair, the scope was detached from
module path index shifts that might apply to the bindings (including
references to bulk bindings). Repair the problem by adding suitable
shifts when adding the scope.
Thanks to William Hatch for the bug report.
As suggested by Sam: Using `racket/repl` to start a read-eval-print
loop can mean that less code is loaded if a startup language other
than `racket/base` is selected.
Closes #2064
- add comment saying `check-one-object/equivalent` only compares common
members
- put the similar parts of `check-one-object` and
`check-one-object/equivalent` in a helper function
- in `object/c-equivalent?`, check that names match before comparing the
common contracts (because mismatched names are fast to detect)
Although the documentation claimed that `read/recursive` produces
a placeholder, that seems to be a leftover from a much older
reader (before `make-reader-graph`). Fix the new `read/recursive`
to be like the old one, and update the documentation.
Thanks to Alex Knauth for tracking down the unnecessary change
in reader behavior.
Related to #2099
Compared to v6.12, `map` & co. already provide better checking by
reporting an error when a keyword-requiring function is provided
with empty lists, but repair the error message to talk about
required keywords instead of just by-position arity.
Thanks to Philip McGrath for reporting the problem.
Related to #2099
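For example (the exact error wording is approximate):

    #lang racket/base
    ;; Applying a keyword-requiring function via `map`, even to empty
    ;; lists, is an error; the message now mentions the required keyword.
    (map (lambda (x #:mode m) x) '())
    ;; error: the required #:mode keyword is reported (wording approximate)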
If `local-expand` with a 'module-begin context introduces a macro
definition, but the definition is dropped while a non-macro definition
is later introduced, then make sure references go to the non-macro
definition.
This change also addresses a related scenario, where a
`local-expand`-discovered macro definition is not dropped, but it is
given an extra scope --- which amounts to the same thing from the
expander's perspective.
When a module body is expanded with `local-expand`, then submodules
can remain declared even if the submodule is discarded in the final
expansion. Since that's the way it has always been, leave it that way.
But also guard against a way of generating an import cycle via those
leftover declarations.
This relies on the fact that blame object equality now works right and
that adding context to a blame object doesn't produce an `equal?`
blame object.
Also, it appears that `blame-add-unknown-context` is not actually
being called, so let's just get rid of that functionality
(but preserve reasonable backwards compatibility in
case someone is actually calling that function or
supplying #f to `blame-add-context`).
And the interning of blame objects was not intended to be
in 0b3f4b627e, so get rid of it here.
Closes racket/typed-racket#722
This ensures macro-introduction scopes don’t unintentionally end up on
lifted pieces of syntax, which causes problems for Check Syntax, since
it affects the syntax-original?-ness of the require spec.
Allow `syntax-local-make-definition-context` in places where the
created scope is not accumulated for stripping from `quote-syntax`.
Refine the docs to clarify those situations.
A test for the repair exposed a problem with use-site scopes
and `quote-syntax`, so fix that, too.
Closes #2062
When `local-expand` is used for a 'module-begin context, use a fresh
binding -> definition-unreadable-symbol table for the nested
expansion. That way, the table used for the main expansion is
unchanged, and re-expanding or evaluating the expanded module will
arrive at the same unreadable symbols as the initial expansion.
The report and example are from Alexis.
The old implementation turns a single optional argument into two
arguments: the optional value and a boolean to indicate whether the
optional value is supplied.
The new expansion uses `unsafe-undefined` in place of not-supplied
arguments, in the general case. If the default-value expression is
simple enough, however, it is copied to call sites that would
otherwise supply `unsafe-undefined`. In the common case where the
default value is `#f`, for example, no run-time test is needed in the
core implementation function to check whether the default is supplied,
because a `#f` will be filled in for callers.
The performance improvement is tiny to non-existent for realistic
programs, but the simpler and reduced generated code may help in the
long run.
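A rough sketch of the general case, assuming the `unsafe-undefined`
value from `racket/unsafe/undefined` (the function names here are
illustrative, not the expander's literal output):

    #lang racket/base
    (require racket/unsafe/undefined)
    ;; A not-supplied optional argument arrives as `unsafe-undefined`, and
    ;; the core function computes the default only when it sees that marker.
    (define (compute-default x) (* 2 x))       ; a non-trivial default
    (define (f-core x y)
      (let ([y (if (eq? y unsafe-undefined) (compute-default x) y)])
        (list x y)))
    (f-core 1 unsafe-undefined)  ; like (f 1)   => '(1 2)
    (f-core 1 5)                 ; like (f 1 5) => '(1 5)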
In OpenSSL 1.1 (specifically libcrypto), the functions sk_num, sk_value,
and sk_pop_free are prefixed with 'OPENSSL_'. Now both symbol names are
looked up, to support both versions 1.0 and 1.1.
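A sketch of the two-name lookup (the library loading and version list
here are illustrative, not the openssl collection's actual code):

    #lang racket/base
    (require ffi/unsafe)
    ;; Look for the 1.1 name first, then fall back to the unprefixed 1.0 name.
    (define libcrypto (ffi-lib "libcrypto" '("1.1" "1.0.2" #f)))
    (define sk_num
      (get-ffi-obj "OPENSSL_sk_num" libcrypto (_fun _pointer -> _int)
                   (lambda ()
                     (get-ffi-obj "sk_num" libcrypto (_fun _pointer -> _int)))))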
The repair in 385f9588f8 propagates the
must-be-bound callback too far. It shouldn't be propagated anymore after
a non-rename transformer is applied.
Closes #2048
A module path index used to expand a module must be interned, and the
intern table is an `equal?`-based weak hash table, which means there's
an internal lock on the table that can be damaged if the current
thread is terminated while using the table.
I don't see an easy way to fall back to `eq?`-based tables, so I'm
resorting to an atomic region (which I had managed to avoid until
now).
* More specific error for no-clause match-lambda** (close #1615)
* Remove unused orig-stx parameter from racket/match internals
* Use of match-XYZ/derived for better errors (fix #1431)
* Tests for the exceptions produced by racket/match
The conversion from calling `call-with-immediate-continuation-mark`
to an internal `with-immediate-continuation-mark` form did not handle
a mutable argument variable.
mzlib/unit200 relies upon this behavior, even though it appears to have
been mostly accidental, so this maintains it for the sake of
backwards-compatibility.
Although splicing was set up for applying a composable
continuation to most kinds of continuations, it was not
set up right for applying a composable continuation in tail
position for a just-applied composable continuation.
Thanks to Spencer Florence for the report and example.
This commit adds a section to the reference to document how the expander
tracks information about local bindings, and it extends some
syntax-local functions to allow them to accept multiple definition
contexts instead of just one. In addition, it improves the documentation
on how first-class definition contexts interact with local-expand,
syntax-local-value, and syntax-local-bind-syntaxes, and it also
clarifies what it means to create a child definition context.
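For example, a toy transformer that passes a list of definition contexts
to `local-expand` (the contexts are created only to show the list-valued
argument):

    #lang racket/base
    (require (for-syntax racket/base))
    (define-syntax (demo stx)
      (define ctx1 (syntax-local-make-definition-context))
      (define ctx2 (syntax-local-make-definition-context ctx1))
      ;; `local-expand` (and related syntax-local functions) now accept a
      ;; list of definition contexts instead of just one:
      (local-expand #'(+ 1 2) 'expression '() (list ctx1 ctx2)))
    (demo)  ; => 3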
Accepting errors in the `#:protocols` clause of a `define-objc-class`
form simplifies the declaration of protocols that are introduced in
different versions of a framework, and it's effectively more
compatible with the implementation before dc0898f5ef.
The attempt in 82517622c7 was wrong. Using `JIT_R0` for
the result in the internal ABI is fine, and the problem
was using a register for two purposes in the called
stub.
Redundantly setting the signal handler hasn't mattered, but it's
confusing, and it now matters for implementing W^X via a different signal
handler.
Closes #2038
For consistency with the old expander, treat `#%app` and `#%datum`
as unbound if they're bound to a rename transformer whose identifier
does not ultimately refer to a macro or primitive syntactic form.
Closes #2042
* match: check duplicate identifiers across list-no-order patterns
* match: document that list-no-order doesn't support duplicate ids between sub-pats
* match: put duplicate id docs in a margin note between the two variants
Make 9d0ab74e9e more responsible by limiting permission changes to
pages that are intended to be both writable and executable for code
generation. That way, the signal handler doesn't just reopen holes in
loaded foreign libraries that W^X would close.
A new resource section was aligned based on the old section's
virtual address, instead of the PE's specified section
alignment. That could make alignment round up too far, leaving
a disallowed gap in the sections' virtual addresses.
Conform to W^X by using a signal handler that switches between W and X
mode on any fault. That's not the spirit of W^X, certainly, but it
should make Racket work without special configuration.
Beware that this change can turn some crashes into infinite loops.
It may be possible to detect those loops, but I didn't find a
good and portable way, so far.
Creating an executable with embedded DLLs means that the executable
can be truly stand-alone, instead of needing to be kept with its
DLLs in a relative subdirectory.
DLL embedding works by bypassing the OS's LoadLibrary and
GetProcAddress functions, instead mapping the DLL into memory
and performing relocations explicitly. Joachim Bauch's MemoryModule
(Mozilla license) implements those steps.