Allows an inaccessible custodian to be GCed, promoting any values that
it manages to its parent custodian. Also repair memory accounting for
custodian boxes.
For values referenced by a custodian, the nature of the custodian's
weak references is slightly different on Racket CS. The reference is
weak enough that the value can be finalized via will (e.g., to close
an unused port), but it's not weak enough to allow weak boxes, weak
hash table keys, or ephemeron keys to be cleared. That's a consequence
of using ordered finalization instead of finalization/weakness levels.
This difference could be avoided at the cost of an extra wrapper for
any finalized value and a discipline of using such wrappers as the
user-visible reference for all custodian-managed values, but semi-weak
references so far appear to be practical and a better compromise.
The use of a will executor for a custodian is a bit of a hack, and it
doesn't want the "keep live until executed" constraint. So, add an
option internally.
If a late will executor has a pending will, then it needs to stay
live until the enclosing place has terminated (and post-custodian
callbacks are run). Otherwise, `ffi/unsafe/alloc` can lose values
that it expects to finalize, and it reports an "internal error".
The late will executor for `register-finalizer` from `ffi/unsafe`
was kept live in traditional Racket, but only as an accident of
custodian shutdown in a terminating place: the shutdown process skips
threads, since that work is technically not necessary. Relying on that
coincidence is asking for trouble, though, so implement retention more
deliberately.
Recognize `(ptr-ref <ptr> _uint8)`, etc., and turn it into a more
direct `(ptr-ref/uint8 <ptr>)`, etc. This improvement speeds PNG
loading by a factor of 10 to 20, for example, because the
implementation expects the pattern to be recognized.
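For illustration, the recognized pattern is the one produced by ordinary
`ffi/unsafe` code like the following (a minimal sketch; the buffer and
values are made up):

```
(require ffi/unsafe)

(define buf (malloc 16 'raw))   ; raw 16-byte buffer
(ptr-set! buf _uint8 0 255)     ; write a byte at offset 0
(ptr-ref buf _uint8)            ; this is the form that is now compiled
                                ; to a direct ptr-ref/uint8-style access
(free buf)
```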
When the number of places approaches the number of available
processing cores, then a spin lock isn't good enough for a small
number of contended hash tables (maybe just one of them). When
contention is discovered, fall back to a mutex-based lock.
When spawning a new subprocess, it's possible that one or more of the
new process's standard input, output, or error descriptors use file
descriptor 0, 1, or 2, even if they don't correspond to any of the
parent process's original standard input, output, or error descriptors.
This can happen if the parent process closes one of its standard
descriptors, and the operating system reuses the file descriptor number
for a new descriptor.
Therefore, be more careful about closing and copying file descriptors in
the child process before calling `exec`. Specifically, move file
descriptors out of the way as needed so they aren't clobbered, and
accommodate cases where multiple standard streams may share the same
file descriptor in the parent process.
fixes #2634
A subcustodian was incorrectly registered as weak for its parent,
which means that an unreferenced custodian could get lost when
shutting down an ancestor.
Register the port, not the file descriptor, especially since a TCP
connection can have ports that share a file descriptor. Also, I think
a weak reference in the custodian doesn't work as intended (visible
through finalization) if the file descriptor is referenced with a
callback that closes over the port.
No `syntax-protect` is needed for `define/private`, etc., because no
new identifiers or expressions are introduced. Adding extra dye packs
can interfere with other macros that pull apart syntax (although maybe
the macros shouldn't do that without using `syntax-disarm`).
A custodian doesn't provide any order on shutting down the objects
that it manages (I was confused about some past experiments), so
avoid that assumption.
Getting the current CPU time is relatively expensive, so get it only
on thread swaps where a thread used its full quantum or 1/100 swaps
otherwise. This approximation should work because thread-specific CPU
times are rarely requested, and they make the most sense for threads
that don't constantly swap out due to synchronization.
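For context, the thread-specific CPU time in question is the one
reported by `current-process-milliseconds` when given a thread:

```
;; CPU time consumed by the current thread only, in milliseconds:
(current-process-milliseconds (current-thread))
;; CPU time for the whole process:
(current-process-milliseconds)
```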
Formerly, an expression like `(arity-at-least-value 7)` could crash,
because the `arity-at-least-value` accessor is created in unsafe mode,
and the slow path to accessor errors attempted to use the accessor to
provoke an error message. Instead of using a potentially unsafe
accessor, have the slow path raise an error explicitly with
`raise-argument-error`. That change has the added benefit of making
error messages match traditional Racket (at least for structure types
that are not declared as "authentic").
The problem was exposed by tests added in 55dcdf5538.
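As a concrete illustration of the repaired behavior (error text
paraphrased, not copied from the tests):

```
(arity-at-least-value (procedure-arity (lambda (x . rest) x)))
; => 1
(arity-at-least-value 7)
; now raises a contract error via raise-argument-error, roughly
; "arity-at-least-value: contract violation; expected: arity-at-least?",
; instead of crashing
```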
Instead of a separate hash table mapping continuations to
linklet-instance names, use a continuation mark. That's faster,
because capturing a continuation means copying part of it when it is continued.
Currently, Check Syntax has trouble correlating `require` forms and
references to imports that go through a macro-introduced rename
transformer. For example, there's no binding arrow from the final
`starting` to the `racket/list` in
#lang racket/base
(require (for-syntax racket/base))
(define-syntax-rule (define-as-first mod starting)
  (begin
    (require (only-in mod
                      [first initial]))
    (define-syntax starting (make-rename-transformer #'initial))
    starting))
(define-as-first racket/list starting)
starting
But change the last two `starting`s to `initial`, and the binding
arrows work.
Until a general repair is in place for Check Syntax, this commit
adjusts 38d612dba6 to use the original export name for an immediate
binding, which acts as a hint to the current Check Syntax
implementation.
Note that the source-distribution client must have a
"build/ChezScheme" checkout created, maybe by building as a 'cs
variant. A pruned version of that checkout is then included with other
sources. The resulting source distribution then works for building
either Racket variant.
Adapt the configure scripts and makefiles to use a "ChezScheme"
directory that is bundled with sources.
Some expressions like `(date-day)` usually gave an arity error, but when they
were inlined by the JIT the arity check was wrong, so they produced a segfault
or a nonsensical result.
Provide a way to build Chez Scheme from source using Racket. In the
short run, this lets us distribute source that ultimately depends only
on a C compiler (since a variant of Racket can be built from source
using just a C compiler).
- change an 'an' to 'a'
- remove 'immutable' where expecting either mutable or immutable (don't
bother to specify which, because `vector-common.rkt` doesn't bother)
- remove extra ','
The `poll` system call doesn't work right for fifos, so switch
back to `select`, but use a new strategy to size fd_set buffers
instead of trying to use `getdtablesize` (because the result
of `getdtablesize` can change dynamically on Mac OS).
Also, add a check for input at the rktio level when trying to read
from devices other than regular files. Otherwise, Racket CS (which
doesn't have some redundant polling that is in traditional Racket)
sees spurious EOFs for unconnected fifos.
Closes #2577
* Remove a value stored in ready_pos but never read
* Move declaration of ready_pos to where it is used
* Make discard of return value of tcp_check_accept explicit
* Split declaration and var assignment to comply with xform
Making `equal?` do the right thing on classes turned out to be easy---it
just involved adding a straightforward `prop:equal+hash` property to the
`class` struct—but making it work properly for *objects* was the tricky
part. The trouble is that `equal?` on objects that don’t implement the
`equal<%>` interface is just ordinary structure equality, which can be
relevant if objects are inspectable. Writing `(inspect #f)` in a class
body is like making a struct `#:transparent`, and it has all the same
ramifications for equality.
The trouble is that `class/c` creates new wrapper classes, and every
class has its own struct type. Since the default behavior of `equal?` on
structs is to *never* be equal to structs of different types, even
subtypes, an object created from a contracted class can never be
`equal?` to an object created from the same class without contracts.
The solution is to add a `prop:equal+hash` property to `object%` itself
that emulates the default behavior of `equal?`, but sees through class
contract wrappers. Since struct type properties are inherited by
subtypes, this property will be present on all objects, and it only
needs to be attached once.
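A sketch of the behavior that this enables (the class and contract here
are made up for illustration):

```
(require racket/class racket/contract)

(define point%
  (class object%
    (inspect #f)   ; transparent, like a #:transparent struct
    (init-field x y)
    (super-new)))

(define contracted-point%
  (contract (class/c (init-field [x real?] [y real?]))
            point% 'pos 'neg))

;; With prop:equal+hash on object%, equality sees through the
;; contract wrapper class:
(equal? (new point% [x 1] [y 2])
        (new contracted-point% [x 1] [y 2]))  ; expected #t
```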
fixes #2279
Mainly, this improves `make-keyword-procedure`: when applied to a single
argument, it now uses `procedure-rename` to ensure the resulting
procedure has the appropriate name. A couple other changes also guard
against the case where a lambda expression has no inferred name and no
source-location information, which would lead to the source locations
in the implementation being used instead.
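A small example of the naming improvement (names are illustrative):

```
(define show-kws
  (make-keyword-procedure
   (lambda (kws kw-args . rest)
     (list kws kw-args rest))))

;; With the procedure-rename-based fix, the inferred name survives:
(object-name show-kws)  ; expected 'show-kws
```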
Previously, all init arg contracts’ first order checks were always
checked, but a typo meant all but one of the projections was always
dropped! This fixes that, and it removes a little nearby dead code while
we’re at it.
In some cases, 0 results will be represented by a NULL results-array
pointer. Fix the interpreter to detect a single result completion
through a count of 1 instead of a NULL result-array pointer.
Also, remove a buggy extra push operation in the JIT-generated code for
`begin0`. (Other features of the JIT-generated code compensated for
the extra push in cases where the bytecode compiler didn't optimize
away the `begin0`, so it turns out not to have caused a problem, but
that's a surprising and fragile set of coincidences.)
Closes #2571
order (like it does with the argument and result contracts), but ensuring
that the pre and post conditions come before the arguments (if possible)
closes #2560
so that it collects the pre/post conditions into sorted order with the
arguments (based on the dependencies), but then discards that
information and always evaluates the pre and post conditions after the
argument/result contract checks
improve accuracy of tanh function
using the implementation described in https://www.math.utah.edu/~beebe/software/ieee/tanh.pdf
By changing from `(/ (- 1 exp2z) (+ 1 exp2z))` to `(- 1 (/ 2 (+ 1 exp2z)))`, the accuracy after rounding is increased (I was comparing with bftanh), and the change removes the fluctuations around z=18.35.
Using the polynomial for z ∈ (1.290e-8, 0.549) seems to increase the accuracy after rounding even further.
See the comparison: http://pasterack.org/pastes/48436
Especially the fact that `(< (tanh 18.36) (tanh 18.37))` ;=> #t was tripping me up.
The two extra conditions `(z . < . 1.29e-8)` and `(z . < . 0.549)` are optional for solving this.
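For reference, a minimal sketch of the identity behind the rewrite (not
the actual implementation, and ignoring the small-z polynomial and sign
handling):

```
;; tanh(z) = (e2z - 1) / (e2z + 1) = 1 - 2 / (+ 1 e2z), where e2z = e^(2z).
;; The second form replaces a quotient of two computed quantities with a
;; single division, which is what improves the rounding behavior.
(define (tanh-sketch z)
  (define e2z (exp (* 2.0 z)))
  (- 1.0 (/ 2.0 (+ 1.0 e2z))))
```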
- Propagate disappeared uses from any pattern stx, not only those
attached to forms that themselves have a disappearing use.
- Fix for new local-apply-transformer handling of scopes.
This commit fixes an issue with the fix for contracted bindings in
signatures implemented in commit 5fb75e9f82. While the previous fix
worked in simple cases, it introduced a problem: although signatures
that define contracted bindings were able to refer to other bindings
in the signature in the binding contracts, anyone doing so was
at the mercy of the exporting unit’s definition order. For example,
given a signature
(define-signature a^
  [(contracted
    [ctc contract?]
    [val ctc])])
then a unit exporting the signature would cause a
use-before-initialization error if its definition for val appeared above
its definition for ctc.
This limitation did not exist in the units implementation prior to the
introduction of the sets-of-scopes expander in Racket v6.3 (after which
contracted bindings were broken until the aforementioned fix in Racket
v7.2). However, the fact that they worked at all seems semi-accidental:
instead of properly indirecting references to signature bindings within
binding contracts, the contract expressions were simply placed in a
context in which the existing names were bound. However, this meant that
any export that renamed identifiers could cause problems, which the
implementation strategy taken in this commit handles just fine.
When the result of `syntax-make-delta-introducer` adds scopes,
it needs to carry along any shifts that might be relevant.
The new implementation risks adding lots of redundant shifts. In this
case, it might be worth spending extra effort at shift-transfer time
to check whether the shift is redundant.
Closes #2542
Part of e7744efb7d triggered a test failure (that I missed by somehow
running tests incorrectly). It turns out that phase -1 transformer
bindings can be used in phase-0 code via shifting.
This change does not affect the repair for building with
machine-independent bytecode.
This change avoids the stair-step effect that is depicted in the
"current Racket -M" build plot from the January 2019 blog post about
Racket on Chez Scheme.
The stair step in that plot is a result of a combination of effects,
but one key part is that the `.set-transformer!` linklet import (to
support macro definitions) has a reference back to the namespace.
While `.set-transformer!` normally would not be captured in any
closure, `db/private/generic/prepared` creates a thread that causes
the "prefix" part of a closure to be moved to a thread's runstack
before it can be pruned by the GC. The stair-step problem happens only
when running directly from machine-independent form, because that form
is recompiled in a way that doesn't optimize away the unused
`.set-transformer!` import. The change in this commit avoids a
reference to the namespace in some cases where it will not be useful,
which turns out to be sufficient to address the build problem.
A more complete repair would be to change the compiler to pair a
closure prefix on the runstack with a liveness mask. An even more
complete repair is to switch to Racket CS. Racket CS is immune to the
problem, even when running from machine-independent bytecode, because
its closures do not keep extra references (with the tradeoff that
there's less sharing).
To make fasl writing as deterministic and portable as possible, write
+nan.0 and +nan.f always with a specific bit pattern.
This choice risks losing information that is potentially useful, but
given the way that Racket treats all NaN encodings as equivalent, that
risk seems low.
For example, `#hasheq()` is `eq?` to `(hasheq)` and `(hash-remove
(hasheq 'x 2) 'x)`. Making empty hash tables unique avoids some
potential and actual inconsistencies between traditional Racket and
Racket CS, such as in machine-independent bytecode.
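The examples mentioned above, written out as expressions:

```
(eq? #hasheq() (hasheq))                        ; => #t
(eq? #hasheq() (hash-remove (hasheq 'x 2) 'x))  ; => #t
```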
Move the different handling of serialized syntax data to the schemify
layer instead of the expander, so that the result of compiling in
machine-independent form is the same for traditional Racket and Racket
CS.
The `--recompile-only` flag is intended to help detect build
problems, especially distribution builds where packages are
supposed to be in built form.
This allows it to cooperate better with Typed Racket, particularly
regarding the `Any` type. The guard and use of `#:authentic` also
check that it's still a singleton in all cases.
Avoid parsing cross-linklet optimization information until it is
needed. This change also avoids a problem with saving hash codes
that are platform-specific.
Instead of just consulting `lib-search-dirs` in the host system's
config during cross-build mode, use `lib-dir` if set to arrive at
the expected default when `lib-search-dirs` is not set.
Handle not-this-platform paths that manage to evade the heuristics for
converting paths to and from relative form. Otherwise, building can go
wrong on Windows when using machine-independent starting files
generated on Unix-like systems.
The `--error-out` and `--error-in` flags are meant to work together to
chain a sequence of `raco setup` steps where one of them might fail,
but other steps should proceed. The last step in that sequence should
use only `--error-in`, so that it exits with failure if any of the
steps failed.
The `both` target of the toplevel makefile uses `--error-out` and
`--error-in` to let a Racket CS build proceed as long as the
traditional Racket build made it to the last `raco setup` step, which
means that it survives package-build errors.
The Chez Scheme fasl format is not machine-independent when record
types are involved, so use the process that serves compilation to also
serve fasl encoding.
In parallel build mode, if attempting to compile a file triggers a
cycle error that is caught and discarded, don't leave behind a
dependency (that is effectively resolved by the error) in the
parallel-worker manager.
It doesn't do anything, but make it a conforming variant of the
identity function. Also, fill in checking for `compile-linklet`,
and correct documentation errors for `compile-linklet` and
`recompile-linklet`.
Makefile and configure refinements, including targets to let the
distro-build package drive a cross-build from scratch. A cross
build on Mac OS for Windows now works, for example.
The intent was never for the data argument to be optional, but a
mistake in traditional Racket's argument dispatch for `log-message`
made it optional in some cases, so the simplest way forward is to make
it consistently optional. Repair traditional Racket to use `#f`
instead of a random value when the data argument is not provided.
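With the data argument consistently optional, a call like the following
is allowed on both implementations, and a log receiver sees `#f` as the
data value (the logger name is illustrative):

```
(define lg (make-logger 'example))
(log-message lg 'info "something happened")  ; no data argument
```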
Add options to load a "plug-in" cross compiler, which should be a Chez
Scheme patch file plus declarations for the built-in libraries. Since
loading a patch file replaces the initial compiler, a separate
cross-compiler process is used to load the plug-in.
Adjust the build process to be able to generate Racket.exe, etc., for
Racket CS using MinGW. Much of this cross-compilation support can work
for building other platforms, too, but some of the details are filled
in only for generating Windows executables.
When `connect` returns an error immediately, save that error instead
of expecting it to be available later via `getsockopt`. That avoids a
problem on TrueOS, for example.
Some parts of the implementation used for comparison were omitted when
allocation operations are not supported (but comparisons don't
allocate). This problem was uncovered by running the "jitinline.rktl"
tests with RacketCGC.
A recent revision to the way modules are instantiated for handling
runtime paths did not work right for modules from source (i.e., no
bytecode available) that have submodules.
Closes #2486
Avoids internal errors (including unsafe behavior) in an example like
```
#lang racket
(begin-for-syntax
  (local-expand
   #'(#%plain-module-begin
      (begin-for-syntax
        (define x 42)))
   'module-begin
   '()))
(begin-for-syntax
  (println x))
```
This example is weird, because it creates an `x` binding that doesn't
survive to the full expansion. Before the repair, the disappearing
binding created trouble for the expanded-to-linklet pass.
The example is weird for a second reason, which is that it uses
`local-expand` in a place where it will be triggered by visiting the
module. It turns out that raising a syntax error at that time (from
`#%plain-module-begin`) did not work correctly due to lazy
instantiation of the expansion context.
Closes #2458
Add `syntax-protect` to some macro expansions, especially macros in
contexts where unsafe operations are imported, which means that a
combination of `local-expand` and `datum->syntax` could provide access
to the unsafe bindings absent `syntax-protect`.
Inspired by the way the Chez Scheme number parser works, change the
one in the expander to be faster and probably clearer. This improved
performance brings number parsing almost back in line with the v6.12
parser's performance.
The revised parser is faster because it goes through an input string
just once. The new parser is also more complete; it doesn't rely on a
host-system `number->string` (except for dummy extflonums when
extflonums are not supported).
If you're reading the commit history, beware that the note on commit
be19996953 is incorrect about the change to parsing divide-by-zero
errors. (It explains a change that was edited away before merging.)
This commit really does change the behavior, though, again as a better
match for v6.12. Specifically, "/0" (with no hashes) always triggers
divide-by-zero in an otherwise well-formed number, even if `#i` is
used.
Speed up JSON parsing (usually around x4 to x8) by avoiding regexp
matching and using more direct byte and character operations. Along
similar lines, compute parsed numbers directly instead of converting
to a string and then using `string->number`.
The revised reader behaves differently only in the case of a bad input
stream, where it may consume more bytes from the stream than the old
one due to eagerly reading bytes instead of tentatively matching
peeked bytes. Also, a UTF-8 decoding error is just `exn:fail` like
other input-parsing errors, and not `exn:fail:contract`.
Related to PR #2472, marks a few other functions as NORETURN.
Namely:
- scheme_signal_error
- scheme_wrong_count
- scheme_wrong_count_m
- scheme_case_lambda_wrong_count
- scheme_wrong_type
- scheme_wrong_contract
- scheme_wrong_field_type
- scheme_wrong_field_contract
- scheme_arg_mismatch
- scheme_contract_error
- scheme_wrong_return_arity
- scheme_unbound_global
Unfortunately static analysis is done per compilation unit, so
although, for example, scheme_wrong_contract calls scheme_raise_exn
and the latter is already marked NORETURN, the analyzer does not know
this. Therefore we need to manually propagate the NORETURN for each
function declaration.
The unsafe-fd->evt interface is based on unsafe-{file-descriptor,socket}->semaphore.
The main differences are that these events are level-triggered, not edge-triggered, and
they do not cooperate with ports created by unsafe-{file-descriptor,socket}->port.
scheme_raise_exn raises an exception and doesn't return.
Static analysis tools find a huge amount of problems with regards
to memory leaks that are actually false positives because the tools
are not aware the function does not return. Marking it as such aids
further inspection of real problems.
The documentation and implementation were confused about whether \D,
\S, and \W match non-ASCII characters. Now they do. The new regexp
implementation (as used in Racket CS) already matched them.
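For example, with the repaired behavior (`λ` stands in for any
non-ASCII character):

```
(regexp-match? #px"\\D" "λ")  ; => #t, since \d covers only ASCII digits
(regexp-match? #px"\\W" "λ")  ; => #t, since \w covers only ASCII word chars
(regexp-match? #px"\\S" "λ")  ; => #t
```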
I understand what the idea is in this file, except this code won't
work like the author expected it to. Variables marked for wiping won't
be wiped unless they are marked as volatile. The compiler will simply
remove the code wiping the variables and issue a warning, which is
what brought me to look into this code in the first place.
Make the slow path faster by reducing input- and output-end
coordination. Also, avoid retaining one end just because the other end
is retained.
This change involves adding an indirection for the fast-path buffers
so that management for both ends of a pipe can be centralized
independent of the ports.
Sort of. This is where we especially take advantage of vtable
flexibility. The methods of the vtable are really closures,
because that's far more convenient for custom ports.
Change the internal port representation to an object-with-vtable
representation. The syntax looks similar to the class system of
`racket/class`, but everything is first-order: no class values, no
mixins, etc. Also, the vtable can contain non-procedures (like #f for
"not supported" or a port to mean a direcirection).
Using objects will make port instances smaller and support a
reorganization to eliminate ad hoc `data`-field extensions. It will
also replace a half-step that was in place for byte input.
Along with the conversion, change the way the fast path for writing
works: When possible, expose a shared buffer and index into that
buffer.
Only byte string input ports are really converted, so far. A
compatibility layer maps the old protocol to the new one, so
conversion can continue piecewise.
Show the compile-time value that is not a procedure. While
this runs some risk of exposing details that are meant
to be private to a macro/language, a macro/language can
use an applicable structure to provide a more specific
error message. Meanwhile, showing the value is likely to
help for someone who needs to debug a macro problem.
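For example (error text paraphrased):

```
(define-syntax not-a-macro 42)
(not-a-macro)
;; The syntax error now includes the compile-time value (here, 42),
;; which is usually enough of a hint to find the problem.
```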
When the desired reference is not an advertised commit, then try
pulling just a few commits --- at depth 8, 16, and 32 --- from the
"master" branch to check whether the commit can be found that way. If
not, fall back to the exhaustive search that requires a full download.
This should help with the common case that a package reference into
the Racket repo is a few commits behind the current master branch
(because the package server hasn't scanned the repo recently enough).
It's much faster to discover that the commit is within the first 32,
which it almost always is, than to download the entire repository.
Upgrading an auto install to an explicit install runs into trouble if
the auto install is in a wider scope. It doesn't seem necessary to
promote already-installed packages for migration, anyway.
- Improve performance by using make-apply-contract, lifting, and a
fast path for dependent flat contracts.
- The positive blame party now consistently means the *macro def*
and the negative party means the *macro use*. The #:arg? argument
controls blame swapping.
Don't make expansion depend on `(system-type 'vm)`, because expansions
should be VM-independent. For example, distribution builds use a single
expansion and finish up from there for different Racket
implementations.
The "extension" module protocol predates the modern FFI and depends on
the C API. Since it's not supported on Racket CS, skip the check for
extension modules.
Skipping the check can reduce load time considerably. We should
consider deprecating the extension protocol for traditional Racket.
Don't defer any too-early variable checks to Chez Scheme, because the
schemify-inserted checks use the right names and include a reference
to the enclosing module.
The `char-numeric?` function was missing some Unicode characters that
have the numeric property, because it was calculated from the wrong
field of UnicodeData.txt.
Change from treating exact 0+1i and 0-1i like the corresponding
inexact values.
Also, change from treating `(atan 0 x)` as exact 0 only when x is
exact. That's consistent with `angle` producing exact 0 for a positive
real number.
The cutoff point for large-magnitude exponents (forcing a +inf.0 or
0.0 result) was wrong for bases below 10, and it did not take into
account the mantissa magnitude for some number forms.
Also, change the parsing of numbers with both `/` and `#` to be more
consistent. A `#` anywhere in the number should trigger inexact
treatment of a 0 in the denominator (so infinity or not-a-number instead of
divide-by-zero), even if `#` is only in the numerator. Meanwhile,
setting `read-decimal-as-inexact` to #f should count `#`s as `0`s and
not trigger inexact treatment.
Infer procedure names based on source locations, and suppress a
procedure name when it has #<void> for its 'inferred-name property.
Threading this information through the Chez Scheme layer involves a
hack, where a name starting with "[" indicates either "no name" or
"inferred from path".
Use "cs/c" to be parallel to the source tree, because making them
different is asking for trouble (e.g., using `configure` without
a separate "build" directory goes wrong).
The rktio/parse.rkt grammar doesn't handle empty argument lists and
was choking on this line, before it even got to my new line adding
rktio_udp_set_receive_buffer.
Fix by following example of using `(void)` instead of `()`. Two notes:
- I forget which variation of C or C++ requires (void) instead of ().
- Strictly speaking, this commit isn't part of the theme of this PR.
If I squash the other commits down to one, maybe I should leave this
separate.
Make a call to a foreign function behave as in traditional Racket: the
arguments are considered reachable in their unwrapped forms until the
foreign function returns.
A missing `unwrap` caused references to structure constructors to be
treated as potentially non-primitive procedures, which significantly
slows down calls to the constructor.
Probably, this started going wrong at a point where original names
were more consistently associated with defined identifiers.
Report source name when accessing a variable too early, and allow
multiple returns (based on continuation capture) for the right-hand
side of a `letrec`.
The repair directly implements `letrec` as needed in terms of `let`
and `set!`, instead of relying on Chez Scheme's `letrec`, unless
right-hand sides are simple enough. Implementing `letrec` that way
risks losing Chez Scheme optimizations, but schemify takes care
of many improvements already.
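A rough sketch of the let-and-set! encoding (not schemify's actual
output; the real transformation also inserts the too-early checks that
report the source name):

```
(define-syntax-rule (letrec-via-set! ([x rhs] ...) body ...)
  (let ([x 'letrec-undefined] ...)
    (set! x rhs) ...
    (let () body ...)))

;; (letrec-via-set! ([even? (lambda (n) (if (zero? n) #t (odd? (sub1 n))))]
;;                   [odd?  (lambda (n) (if (zero? n) #f (even? (sub1 n))))])
;;   (even? 10))  ; => #t
```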
Get more of the benefit of traditional Racket's lazy bytecode
unmarshaling by using an explicit `fasl->s-exp` step on the serialized
form of syntax objects. This approach also avoids generating pointless
machine code for constructing the serialized form, effectively using
`fasl->s-exp` as an interpreter. The result is significantly smaller
".zo" files for RacketCS and slightly faater load times.
* Add support for space-efficient vector and arrow contracts.
When an eleventh contract would be applied to a function or vector,
switch representation for the wrapper and try eliding redundant
checks. The resulting value keeps a constant number of
chaperone/impersonator wrappers regardless of the number of contracts
applied to it, and won't run any (provably) redundant checks.
This avoids a pathological case where, e.g., a function crosses a
boundary inside a loop, and gets wrapped N times (or worse, 2^N).
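A minimal illustration of the repeated-wrapping pathology (a made-up
example, not from the paper):

```
(require racket/contract)

;; Each pass through the loop adds another (-> integer? integer?) wrapper
;; around the same function. Without space-efficient contracts, `f` ends
;; up with 20 chaperone wrappers; once the new representation kicks in
;; (at the eleventh application), the wrapper count stays constant and
;; provably redundant checks are elided.
(define f
  (for/fold ([f add1]) ([i (in-range 20)])
    (contract (-> integer? integer?) f 'pos 'neg)))
```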
The optimization for function contracts currently only applies for
fixed-arity functions and contracts, and only for functions with known
result-arity of 1. These limitations are not fundamental.
The specific checks are not as optimized as for regular arrow
contracts yet. (Specifically: arity-specific wrappers and
tail-marks-match support are missing.) Again, this is not a fundamental
limitation.
Further described in the OOPSLA 2018 Paper: "Collapsible Contracts: Fixing a Pathology of Gradual Typing"
In collaboration with Ben Greenman, Christophe Scholliers, Robby Findler, and Vincent St-Amour.
Recent changes to adapt cm to cross-multi mode also attempted to
improve dependency checking to avoid prematurely committing to
compiling an old dependency, but that improvement was broken.
In multi-cross mode, don't rewrite a machine-independent file
by recompiling it to itself. This shouldn't matter, but not
touching files makes the result cleaner.
When a `[case-]lambda` form's only free variables are at the module
level, the Schemified form is a `[case-]lambda` form whose only free
variables are in an enclosing `lambda` for a linklet. Since those are
not completely closed, to make the allocation pattern consistent with
traditional Racket, Chez Scheme needs a hint to allocate the closures
once per linklet instantiation.
When an ephemeron is accessed through a weak mapping from the same key
that is used in the ephemeron, and when the key is not otherwise
reachable, there can be a race between extracting the value from the
ephemeron and performing a GC that reclaims the key. Avoid that race
by supplying the key back to `ephemeron-value`, which ensures that the
key remains reachable until the value is extracted.
In many cases, supplying the key as the second argument would also
work --- since that argument is used as a replacement value when the
key is inaccessible, but the key can't become inaccessible if it's
pending as a replacement value. A separate optional argument to
`ephemeron-value` seems clearer and more general, though.
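With the new optional argument, the racy lookup-then-extract pattern can
be written safely along these lines (`intern`, `key`, and `wrap` are
placeholders):

```
(ephemeron-value
 (hash-ref! intern key
            (lambda () (make-ephemeron key (wrap key))))
 #f    ; result if the ephemeron has been cleared
 key)  ; keep `key` reachable until the value has been extracted
```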
Avoid retaining namespaces that are created to gather runtime paths.
If expansion generates a lot of instances with a lot of type
information, for example, this repair can save a lot of space.
If the sub-template inside #(...) is unsyntax-splicing instead
of a list, produce the template #((~@! . ????)) instead of calling
(datum->syntax o list->vector o syntax->list). Fixes #2402.
Fix some race conditions involving concurrent setup tasks that are
each trying to generate both machine-independent bytecode and
machine-specific bytecode.
add a function to escape any glob wildcards in a path or string
also add a private `glob-element->filename` function so that, e.g., the pattern
`a\*` matches the file named `a*` (previously, the match would fail and
I think it was impossible to match for only `a*`)
Fix the fallback interpreter (which is used for the "outside" of a
module that is too big to compile) so that it's safe-for-space.
This change is unlikely to repair any immediate problems, but space
safety problems are difficult to detect and avoid when the underlying
implementation is not safe-for-space, so fixing the interpreter is
likely worthwhile in the long run.
Module definitions and expressions need to have a prompt around them to
delimit continuation capture, and variable assignment needs to happen at
the right point to ensure that reassignment is guarded and
non-assignment is detected. But avoid the prompt when it's not needed,
such as around function definitions.
Closes #2398
Similar to a255def019, but for side effects potentially
exposed by definition RHS expressions, instead of
expressions not in a definition. Improve that commit and
this one by only forcing variable assignments at non-simple
expressions.
Travis is eliminating its container-based infrastructure
and deprecating the `sudo` keyword.
This commit also updates the example build matrix to use
more recent Racket versions.
Corresponds to https://github.com/greghendershott/travis-racket/pull/29
Discard local-variable names to avoid `gensym` artifacts in the same
way that a more complete compilation would discard the names. This
change does not affect function names, which are preserved through
separate properties.
In most cases, it's more important for `compiler/cm` to reliably
replace a file that might be busy than to make the file update atomic.
To support that kind of use, `call-with-atomic-output-file` implemented
a fairly reliable, multi-step, non-atomic process for replacing a file
on Windows.
For recompilation of bytecode in machine-independent form, however,
`compiler/cm` now really wants to atomically write a replacement
bytecode file. That's not generally possible on Windows (except on
NTFS with transactions, which are discouraged...), but MoveFileEx works
atomically in some cases and it's likely to work for the cases needed
by `compiler/cm`. Probably.
So, add a mode to `call-with-atomic-output-file` to get "more atomic"
updates on Windows. This mode is enabled by a callback that makes the
caller responsible for deciding what to do when the move fails, such
as waiting a while and trying again. And `compiler/cm` now waits a
while and tries again, up to a limit, which should be good enough for
recompilation.
Enable `raco {setup|make}` to build two sets of compiled files: one
set that is suitable for the current machine, and another set that is
suitable for a different machine or for all machines (i.e.,
machine-independent bytecode).
In the long run, this new `raco setup` mode supports cross-compilation
where the build machine and target machine have different bytecode
formats --- unlike the current cross-compilation mode, which relies on
there being a single bytecode format in traditional Racket for all
platforms.
In the short run, the new mode enables the faster creation of
Racket-on-Chez distribution builds. The build server can send out
machine-independent bytecode to client machines while using
machine-specific bytecode for itself to drive the build process.
The new compilation mode relies on a somewhat delicate balance of the
`current-compile-target-machine` and `current-compiled-file-roots`
parameters (as reflected by the `-M` and `-R` command-line flags for
Racket) as well as cross-compilation mode (as enabled by the `-C`
command-line flag).
The 'target-machine result from `system-type` reports the
default value of `current-compile-target-machine`.
Also, fill in pieces to make `setup/cross-system` work
for RacketCS, although cross-compilation is still several
steps away.
The new path for recompiling from machine-independent files
tried to read a ".zo" file without holding the recompilation
lock and without an `exn:fail:filesystem` handler.
Wait until replacement is more assured before deleting an existing
".zo" file.
Also, don't delete a ".zo" file that is later in the
`current-compiled-file-roots` search path than the one being written.
This refinement supports setting up a search path to try
machine-specific compiled files and fall back to machine-independent
files, for example.
Add `-M`/`--compile-any` to `raco setup`, `raco pkg install`, etc., to
build machine-independent bytecode, which is useful in the process of
building distributions.
The `parallel-lock-client` protocol expects a #f back when a
file was meanwhile compiled by another process. So, don't
just forget about a file after it is compiled, in case there
is still a lock request on the way for that file.
Actually, the machine-independent-to-specific part is trivial. The
hard part was making `compiled-expression-recompile` enable
cross-linklet optimization as it recompiles, since that involves
pulling apart metadata and putting it back together afterward.
The `compile-machine-independent` parameter controls whether `compile`
creates a compiled expression that writes (usually in a ".zo" file) to
a machine-independent form that works for any Racket platform and
virtual machine. The parameter can be set through the
`-M`/`--compile-any` command-line flag or the `PLT_COMPILE_ANY`
environment variable.
Loading machine-independent code is too slow for many purposes, but
separating macro expansion from backend compilation seems likely to be
a piece of the puzzle for cross-compilation and faster distribution
builds.
Converting "invalid memory reference" to an `exn:fail:contract` (which
is the default conversion) hides crashes as success when a test
expects an error.
Also, fix a bug that was hiding as an expected exception.
The Racket and RacketCS implementations had separate copies of
linklet-directory and linklet-bundle reading and writing. Move the
implementation into the expander layer.
The primitive '#%linklet instance now omits directory and bundle
operations and `read-compiled-linklet`. It instead must provide
`write-linklet-bundle-hash`, `read-linklet-bundle-hash`, and
`linklet-virtual-machine-bytes`.
In particular, when there isn't any redundancy detected, then
just make a single call into the projection and create just a single
class.
This seems to help on at least one of the configurations of
dungeon, which completes in about 6 minutes with this commit;
I gave up waiting after 15 minutes with the version of
Racket that didn't have it.
Improves the error message for:
```
(define-syntax (like-lambda stx)
  (syntax-case stx ()
    [(_ e) #'(lambda () e)]))
(like-lambda (define x 1))
```
Based on a report from @pkoronkevich.
When an executable distribution is created, some paths become
unavailable at run time, such as the result of `find-links-file`.
Change the contract on those functions and adjust the implementation
to return `#f` in those cases. This is a backward-compatible change in
the sense that uses that now return `#f` would have crashed before
(although it does shift the blame in that case).
Based on an initial patch by Shu-Hung.
Closes #2352
Source mode was a leftover from early iterations of the expander. A
bootstrapping mode that uses replacement `compile-linklet`, etc.,
turned out better.
This is for consistency with traditional Racket, and it currently matters on
Windows. The Windows implementation of file-truncate should probably
not move the file position as it does, though.
In GUI-application mode (e.g., running GRacket), a console is allocated
on demand if a program tries to use the original ports. Move that
on-demand handling into rktio, where it's simpler and works for
RacketCS.
One more take on the problem addressed by 990e1f1e30. This adjustment
avoids copying properties from the original form to the identifier
that is preserved in 'origin.
The `get-compiled-file-sha1` function assumed that a ".dep" file is
up-to-date when present. That may not be consistent with all uses,
including in `file-stamp-in-paths` as used by DrRacket for "populate
compiled", and an old file can go wrong with the recent ".dep" format
change. Make `get-compiled-file-sha1` at least check the version on
the ".dep" content before trying to use it.
Relevant to #2354
A function that uses `call-with-immediate-continuation-mark` in tail
position should not be flagged as "preserves marks", because the JIT
needs to bump the mark stack if the function is called in non-tail
position.
Closes #2333
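A small example of why the flag matters (illustrative; `'key` and
`probe` are made up):

```
(define (probe)
  ;; In tail position within `probe`, so it inspects the mark (if any)
  ;; on the frame of the call to `probe` itself:
  (call-with-immediate-continuation-mark 'key values 'none))

(with-continuation-mark 'key 'outer
  (list (probe)))
;; => '(none), because the non-tail call to `probe` pushes a new frame;
;; if `probe` were wrongly treated as "preserves marks", the mark stack
;; might not be bumped for that frame, and 'outer could leak through.
```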
The repair is more precisely a repair to xform, which incorrectly
parsed a C function definition that starts "struct" as a struct
declaration. (The function starts "struct" because the return type is
"struct Scheme_Overflow_Jmp *".) Since the function wasn't recognized,
xform didn't convert it to cooperate with the garbage collector.
Closes #2341
Previously, the following program would print "error writing to
stream port" on program exit.
(define cust (make-custodian))
(define out
  (parameterize ((current-custodian cust))
    (open-output-file "test.data" #:exists 'truncate)))
(write-string "This needs flushing...\n" out)
(custodian-shutdown-all cust)
(exit 0)
So far, bytecode for traditional Racket has been kept separate from
RacketCS bytecode by using a different "compiled" subdirectory for
RacketCS. That makes sense for development work to allow the
implementations to coexist, but it creates trouble for packaging and
distributions, and it (hopefully) won't seem necessary in the long
run. Treating the different virtual machines like different versions
seems more generally in line with our current infrastructure.
Rearrange the configure scripts so that it will be possible to build
RacketCS from a source distribution and have it installed in the right
place. Also, when building Racket3m just to bootstrap RacketCS, don't
install Racket3m.
Retains a strong link to a place-channel write end when there's at
least one waiting thread. This is symmetric to keeping a strong link to
the read end when the place-channel queue is non-empty. The change
repairs a problem building documentation with places in `racocs
setup`.
Refines 2ef8d60cc6 to avoid characterizing the failure as a `(-> any)`
contract on `hash-ref`, since `hash-ref` doesn't enforce that contract
in general. Go back to an `exn:fail:contract:arity` error, but keep
the specialization of the error message to clarify that it's from
`hash-ref`. Also, bring RacketCS into sync.
Although a `directory-exists?` check is useful for providing better
error messages, it's fundamentally a race condition, since an external
process can always remove a directory between the check and a use of
the directory. Because of that limitation of `directory-exists?`, we
normally avoid making it part of a contract. This commit adjusts
937aa3cdb1 to follow that convention while preserving the helpful
check and documentation improvements.
Their semantics assume all directory `path-string?` arguments point
to existing directories in the filesystem, but they do not actually
check, resulting in unhelpful inner exceptions that break the
functions' semantic abstractions.
Fixed by adding appropriate checks.
Test cases included too.
Documentation updated to reflect the requirement for paths to
refer to existing directories.
Also added note that `generate-stripped-directory` does not
compile or render source files.
When a catalog is specified via a file:// URL to a local directory with local
package directories that have been stripped in various modes, `raco pkg install`
incorrectly errors out: with an incompatible-package-content error when
a binary strip mode is selected, and with a file-system error when source strip
mode is selected.
The cause is that the file:// URL's resolved file path is not passed
along to the helper functions in the various code paths handling the different
strip modes; instead, the full original file:// URL is passed along and is
misinterpreted as an actual filesystem path, resulting in the observed failure
modes.
The repair is simple: change the relevant argument so the resolved filesystem path
is used instead of the original file:// URL.
Test cases added to test various combinations of strip modes when using
`raco pkg install` and local catalog directory.
When a file descriptor cannot be `dup`ed for a place message, the
error message has to be delayed until the partially copied message is
cleaned up. Various problems with the message-serialization code
caused that delay to work incorrectly or incompletely.
Closes #2305
The repair in 4396b841c0 for internal names exposed a problem with the
way `linklet` handled renaming on export, where it mixed up the names
that should be used internally and externally in errors.
Merge to v7.1
Lots of plumbing was in place to preserve the source name (instead of
the symbol generated to avoid collisions for macro-introduced
definitions), but some small pieces were missing.
Closes #2288
When upgrading native-library versions, a still-relevant
patch got lost. The patch corrects a problem with empty
glyphs getting treated as missing glyphs.
Closes racket/pict#42
The pattern
(ephemeron-value
 (hash-ref! intern key
            (lambda ()
              (make-ephemeron key (wrap key)))))
is wrong, because a GC might happen between the time that the
ephemeron is found in the table (or the time that the key was just
added to the table) and the time that `ephemeron-value` is called to
extract the value. If the key is not otherwise accessible, the value
may no longer be in the ephemeron.
If `make cs` is run without specifying a SCHEME_SRC, then make sure
that `configure` and `make` are re-run in the Chez Scheme checkout,
in case it was updated.
A collection can only invoke certain callbacks (e.g., for DrRacket's
GC icon) when the collection is performed in the main thread. Also,
delay posting GC logging events to receivers that cannot work at
interrupt time.
This reverts commit 41fd4f3a5e.
The problems this change was intended to solve can be solved in other
ways, without loosening guarantees about expansion order. See the
discussion in #2154 for more details.
closes #2154
The processes-based build was technically broken with the addition of a
"prefetch" thread (some time back) to improve parallelism, because the
`write`-based implementation of messages did not protect against
interleaving by different threads. The problem turns out to be easier
to expose when running with RacketCS.
Formerly, `--scope-dir` would include only the specified directory in
the search path for already installed packages, etc., which means that
it would only work right as a kind of installation scope that is a
step beyond "installation" on the "user"-to-"installation" spectrum.
The `'pkgs-search-dirs` configuration entry, meanwhile, provides more
control over search ordering in installation scope. Make `--scope-dir`
work more consistently with that search-path configuration.
This change also makes "installation"-scope operations use the search
path more consistently, since some actions used to use the whole
search list while others pruned any prefix before the main
installation directory in the search list.
Add a way to declare an Objective-C method call as blocking in the
sense of the `#:blocking?` argument to `_cprocedure`.
Using `with-blocking-tell` should allow the Cocoa backend for
`racket/gui` to wait for events in the main place without blocking
other places.
Fix various problems with the implementation of places, and let
`processor-count` return the actual number of processors. A parallel
build via `raco setup` seems to work but not scale well.
Instead of defining `rktio_fd_t` to be independent of a `rktio_t`,
which isn't quite true, introduce `rktio_fd_detach` and
`rktio_fd_attach` functions to make a transfer explicit, such as when
a file descriptor goes through a place channel.
This adjustment avoids a corner case in cleaning up a file descriptor
from an abandoned channel, where the finalizer might run in a Chez
Scheme thread that is not associated with any place.
When a single-use function is inlined late enough in the optimization
process, and when the body has the only use of some variable computed
by an expression that can be moved in place of the use in the inlined
function (but not in the non-inlined function), then it's a problem if
the function binding isn't pruned away early enough. Make sure the
binding to the function is marked as unused after the function is
inlined.
This bug was exposed by a recent change to the "dssl2" package.
Implement place channels and messages, and change `place-enabled?` to
claim that places are enabled, but `processor-count` still reports 1.
The implementation of place channels has an interesting use of
ephemerons --- that is, a use that isn't just solving a key-in-value
problem. Using ephemerons solves the problem of forgetting a place
channel and any thread blocked on the read end when there are no
producers on the write end. Along similar lines, when only the write
end is retained (i.e., no readers), the channel data is forgotten and
writes become a no-op. The read end holds a "read key" and references
the channel data through an ephemeron keyed by a "write key"; the
write end similarly holds a "write key" and uses an ephemeron keyed by
the "read key". This use of an epehemeron implements a reachability
"and": retain the place-channel data only if the read end *and* the
write end are both reachable. (Minor point: a read end also holds onto
the "write key" anytime the channel already has data.)
The Rumble layer provides a primitive `fork-place` operation and
`start-place` hook to tie in place initialization from other layers
(especially "io"), but the rest can live in the "thread" layer,
especially to handle place-channel synchronization.
When a stand-alone executable created by `raco exe` needs to load
modules that start with a `#lang` line, there have been various
obstacles to adding the right run-time support via `++lib`. The
`++lang` flag addresses those problems and makes it easy to indicate
that enough should be embedded to support loading modules with a
specified language.
There are problems in the way that various handlers interact for the
"lang/reader.rkt" versus `(submod "." reader)` search path that
converts a language name to a reader. To accommodate the search in a
standalone executable (that does not provide access to collections in
general), the module name resolver must refrain from raising an
exception for a non-existent submodule path that refers to a
non-existent collection.
There's no place-channel communication yet --- just enough of a
conversion to thread-local storage to make places possible.
In contrast to traditional Racket, where the expander linklet is
instantiated once per place, the flattened expander linklet is
instantiated only once in RacketCS (because it's inlined into a Chez
Scheme library). The expander therefore needs to keep per-place state
separate, and the same for the thread, io, and regexp layers.
In the expander/thread/io/regexp source, place-local state is put in
an unsafe place-local cell. For traditional Racket, a place-local cell
is just a box. For RacketCS, the layers from thread through expander are
compiled in a way that maps each cell to a fixed index in a vector
that is stored in a virtual register, so the value is roughly two
pointer indirections away (thread context -> virtual register array ->
place-local vector). Multiple Chez Scheme threads in a place, such as
threads to run futures, share the same place-local vector.
Although `place-enabled?` reports #f, `dynamic-place` from `'#%place`
can create a place as a Chez Scheme thread and load a module there.
An extra scope is needed to separate the bindings of a `letrec` from
the `letrec` body, in case a macro moves right-hand-side expressions
to the body.
Michael Ballantyne and William Hatch reported this problem and its
solution in December 2016, but I forgot to add the repair.
Relevant to #2237
The repair in 7176fc4253 did not make the no-discard decision stick
well enough for some cases. Robby found this bug using the Redex model
and random testing, too.
Using a parameter for the current expansion context means that if a
macro spawns a thread, the thread thinks that it's in an expansion
context. Switching to a raw continuation mark avoids that problem.
Along the way, bring Racket and RacketCS more in line by making both
have an internal notion of "root" prompt tag that can be used to get
all continuation marks independent of any prompts. That's not strictly
necessary, since a continuation mark could be combined with a distinct
tag to make the mark always accessible, but it's simpler and more
lightweight to use a root prompt tag.
This improvement removes only a few places where a variable use is
considered possibly too early in the RacketCS implementation, but
the improvement is significant for uses of `input-port?` in "io".
Allow a single argument to comparison functions like `<`, and
support the same arities as the generic version for fixnum and
flonum operations like `fx+` or `fl+`.
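For example, the following are expected to behave the same on Racket CS
as on traditional Racket after this change:

```
(require racket/fixnum racket/flonum)

(< 5)                ; => #t   (single-argument comparison)
(fx+ 1 2 3)          ; => 6    (same arities as the generic +)
(fl+ 1.0 2.0 3.0)    ; => 6.0
```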
RacketCS weak `equal?`-based hash tables didn't retain flonums, and
hashing of hash tables was not properly insensitive to the order of
keys.
Racket, meanwhile, didn't limit work consistently for different kinds
of hash tables, and it didn't keep a counter value odd as intended
(but the counter never gets large enough to appear to be a mapped
pointer, anyway).
Racket did not check for a break when escaping from a break-disabled
context to a break-enabled context. RacketCS didn't check in other
cases, either. Fix those various cases.