Using a will executor to turn a reference from weak to strong still
seems like an ok idea, but it needs to be a regular will executor,
because a custodian-registered value is likely to have a
nested self-reference.
More generally, repair the internal `exe-relative-path->complete-path`
function to work when the current directory is not the original
current directory and `racket` is started with a relative path.
Currently, it happens that `exe-relative-path->complete-path` is
called with a potentially different directory only by
`get-lib-search-dirs`.
When the compression format changed to LZ4, which is much faster to
decompress than zlib, the configure script changed to enable
compression by default. Bytecode tends to benefit all around from
compression, but the boot files take 20ms or so longer to load --- not
a lot of time when loading typical amounts of code, but a significant
cost for a minimal startup. This commit allows compression to be
controlled separately for boot files, and it configures them as
uncompressed by default.
* Remove irrelevant MZ_USE_JIT #ifdefs
Bonus points - fixes a compiler warning on aarch64 and a typo.
* Fixes a compiler warning on aarch64 for unused current_linklet_native_lambdas
* Simplify conditionals after removing dead store of has_space
The conditional simplification looks good to me. The biggest issue
here was to understand whether, when `pipe_quote` is true, we can and
should go to the else clause. Actually, the more I look at it, the more
I think this uncovers an earlier bug: if pipe_quote is true, result and
total_length are left at NULL and 0, respectively, after the block.
Change `datum->syntax` so that it limits the transfer of a code
inspector from a source syntax object; the code inspector is kept only
if a macro is being expanded and the macro has the same code inspector
(or, more generally, the weaker of the two code inspectors is
preserved).
This change is a kind of defense-in-depth to prevent the use of
unarmed syntax with `datum->syntax` to access unexported bindings from
the module where a syntax object originates.
The general approach is Ryan's idea. This particular implementation is
a simplification of the general idea, and we'll see whether it's
workable and sufficient.
The changes in aab63ad3 introduced a dependency on
racket/private/promise, which the analysis was not capable of dropping
due to the use of the `prop:force` property. This caused trouble for the
thread layer, since it introduced a reference to `error`, which is
defined in the io layer. This change adds some additional detection for
struct type properties with guards that accept procedures of particular
arities, which allows `prop:force` to be marked as pure.
Also, a typo in the thread layer’s Makefile meant globals weren’t
actually getting tracked, so this fixes that, too.
`for/fold` is a left fold, which is normally what you want in a
call-by-value language such as Racket, but it makes efficient lazy
iteration difficult. This commit adds a new `for/foldr` iteration form
(along with `for*/` and `/derived` variants) that provides a right fold
operation that offers complete control over precisely how lazy the
iteration ought to be.
In simple microbenchmarks, reimplementing `for/stream` to use
`for/foldr` instead of `for` plus a generator can be almost 40x faster
on large streams.
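A minimal sketch of a plain right fold with the new form (eager by
default; the lazy behavior mentioned above comes from additional
accumulator options):
```
(for/foldr ([acc '()])
           ([x (in-range 5)])
  (cons x acc))
;; => '(0 1 2 3 4), consing elements back together from the right
```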
When `read/recursive` is used, do not inherit parameter values
recorded by an enclosing `read`, and instead look them up again.
This change restores behavior of the old reader.
Closes #2661
When ">" appears in a procedure name, or when other characters appear
that would normally need to be escaped in a symbol, don't add escapes
since `#<....>` isn't readable anyway. This change makes renamed
procedures print in a consistent way with primitive procedures.
Similarly adjust the printing of structure type names.
Closes #2646
Closes #2659 by both recognizing `lib64` as a default path and by
having a specified `--enable-origtree` override inference when
running `configure` through the root makefile.
Swapping the blame before adding #:important context associates the
important party with the negative party for the purposes of picking
“contract violation” versus “broke its own contract” messages in error
reporting. Therefore, only swap after adding the context.
fixes #2531
Instead of limiting the nursery size and performing a full GC every
time a small nursery is full, allow the nursery to be proportional
to the total heap size if generational GC is disabled.
This option allows the user to enable or disable (with
--disable-generations or --enable-generations=no) generations in
3m. Disabling generational collection is, in most cases, a bad
idea, but it may be necessary on a platform where signal handling
doesn't work well enough to support a write barrier that is
implemented with page protection.
Ignore new autoconf variable added in 2.70.
The interesting thing is that Debian decided to backport this variable
to their 2.69 release, so with some 2.69 versions of autoconf this
variable does not exist, while Debian's 2.69 does generate it. It is
nonetheless not useful for Racket, so add it to the ignore list.
When using a built-for-bootstrapping Racket to build Racket CS, the
intermediate module-loading mode should be `--boot` instead of
`--chain`. The repo's top-level makefile takes care of that already,
but not `configure`-generated makefiles as may happen in a build from
a source distribution.
Allows an inaccessible custodian to be GCed, promoting any values that
it manages to its parent custodian. Also repair memory accounting for
custodian boxes.
For values referenced by a custodian, the nature of the custodian's
weak references is slightly different on Racket CS. The reference is
weak enough that the value can be finalized via will (e.g., to close
an unused port), but it's not weak enough to allow weak boxes, weak
hash table keys, or ephemeron keys to be cleared. That's a consequence
of using ordered finalization instead of finalization/weakness levels.
This difference could be avoided at the cost of an extra wrapper for
any finalized value and a discipline of using such wrappers as the
user-visible reference for all custodian-managed values, but semi-weak
references so far appear to be practical and a better compromise.
The use of a will executor for a custodian is a bit of a hack, and it
doesn't want the "keep live until executed" constraint. So, add an
internal option to drop that constraint.
If a late will executor has pending will, then it needs to stay
live until the enclosing place has terminated (and post-custodian
callbacks are run). Otherwise, `ffi/unsafe/alloc` can lose values
that it expects to finalize, and it reports an "internal error".
The late will executor for `register-finalizer` from `ffi/unsafe`
was kept live in traditional Racket, but only as an accident of
custodian shutdown in a terminating place: the shutdown process skips
threads, since that work is technically not necessary. Relying on that
coincidence is asking for trouble, though, so implement retention more
deliberately.
Recognize `(ptr-ref <ptr> _uint8)`, etc., and turn it into a more
direct `(ptr-ref/uint8 <ptr>)`, etc. This improvement speeds PNG
loading by a factor of 10 to 20, for example, because the
implementation expects the pattern to be recognized.
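As a rough illustration, this is the kind of user-level code that
should now hit the fast path (the `ptr-ref/uint8`-style forms are
internal):
```
(require ffi/unsafe)
(define buf (malloc 4 'atomic))
(ptr-set! buf _uint8 0 255)
;; A `ptr-ref` call with a literal ctype like `_uint8` is the recognized pattern:
(ptr-ref buf _uint8) ; => 255
```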
When the number of places approaches the number of available
processing cores, then a spin lock isn't good enough for a small
number of contended hash tables (maybe just one of them). When
contention is discovered, fall back to a mutex-based lock.
When spawning a new subprocess, it's possible that one or more of the
new process's standard input, output, or error descriptors use file
descriptor 0, 1, or 2, even if they don't correspond to any of the
parent process's original standard input, output, or error descriptors.
This can happen if the parent process closes one of its standard
descriptors, and the operating system reuses the file descriptor number
for a new descriptor.
Therefore, be more careful about closing and copying file descriptors in
the child process before calling `exec`. Specifically, move file
descriptors out of the way as needed so they aren't clobbered, and
accommodate cases where multiple standard streams may share the same
file descriptor in the parent process.
fixes #2634
A subcustodian was incorrectly registered as weak for its parent,
which means that an unreferenced custodian could get lost when
shutting down an ancestor.
Register the port, not the file descriptor, especially since a TCP
connection can have ports that share a file descriptor. Also, I think
a weak reference in the custodian doesn't work as intended (visible
through finalization) if the file descriptor is referenced with a
callback that closes over the port.
No `syntax-protect` is needed for `define/private`, etc., because no
new identifiers or expressions are introduced. Adding extra dye packs
can interfere with other macros that pull apart syntax (although maybe
the macros shouldn't do that without using `syntax-disarm`).
A custodian doesn't provide any order on shutting down the objects
that it manages (I was confused about some past experiments), so
avoid that assumption.
Getting the current CPU time is relatively expensive, so get it only
on thread swaps where a thread used its full quantum, or on 1 in 100
swaps otherwise. This approximation should work because thread-specific CPU
times are rarely requested, and they make the most sense for threads
that don't constantly swap out due to synchronization.
Formerly, an expression like `(arity-at-least-value 7)` could crash,
because the `arity-at-least-value` accessor is created in unsafe mode,
and the slow path to accessor errors attempted to use the accessor to
provoke an error message. Instead of using a potentially unsafe
accessor, have the slow path raise an error explicitly with
`raise-argument-error`. That change has the added benefit of making
error messages match traditional Racket (at least for structure types
that are not declared as "authentic").
The problem was exposed by tests added in 55dcdf5538.
Instead of a separate hash table mapping continuations to
linklet-instance names, use a continuation mark. That's faster,
because capturing a continuation means copying part of it when it is continued.
Currently, Check Syntax has trouble correlating `require` forms and
references to imports that go through a macro-introduced rename
transformer. For example, there's no binding arrow from the final
`starting` to the `racket/list` in
#lang racket/base
(require (for-syntax racket/base))
(define-syntax-rule (define-as-first mod starting)
  (begin
    (require (only-in mod
                      [first initial]))
    (define-syntax starting (make-rename-transformer #'initial))
    starting))
(define-as-first racket/list starting)
starting
But change the last two `starting`s to `initial`, and the binding
arrows work.
Until a general repair is in place for Check Syntax, this commit
adjusts 38d612dba6 to use the original export name for an immediate
binding, which acts as a hint to the current Check Syntax
implementation.
Note that the source-distribution client must have a
"build/ChezScheme" checkout created, maybe by building as a 'cs
variant. A pruned version of that checkout is then included with other
sources. The resulting source distribution then works for building
either Racket variant.
Adapt the configure scripts and makefiles to use a "ChezScheme"
directory that is bundled with sources.
Some expressions like `(date-day)` usually produced an arity error, but
when they were inlined by the JIT, the arity check was wrong, so they
produced a segfault or a nonsensical result.
Provide a way to build Chez Scheme from source using Racket. In the
short run, this lets us distribute source that ultimately depends only
on a C compiler (since a variant of Racket can be built from source
using just a C compiler).
- change an 'an' to 'a'
- remove 'immutable' where expecting either mutable or immutable (don't
bother to specify which, because `vector-common.rkt` doesn't bother)
- remove extra ','
The `poll` system call doesn't work right for fifos, so switch
back to `select`, but use a new strategy to size fd_set buffers
instead of trying to use `getdtablesize` (because the result
of `getdtablesize` can change dynamically on Mac OS).
Also, add a check for input at the rktio level when trying to read
from devices other than regular files. Otherwise, Racket CS (which
doesn't have some redundant polling that is in traditional Racket)
sees spurious EOFs for unconnected fifos.
Closes #2577
* Remove value store in ready_pos but unread
* Move declaration of ready_pos to where it is used
* Make discard of return value of tcp_check_accept explicit
* Split declaration and var assignment to comply with xform
Making `equal?` do the right thing on classes turned out to be easy---it
just involved adding a straightforward `prop:equal+hash` property to the
`class` struct—but making it work properly for *objects* was the tricky
part. The trouble is that `equal?` on objects that don’t implement the
`equal<%>` interface is just ordinary structure equality, which can be
relevant if objects are inspectable. Writing `(inspect #f)` in a class
body is like making a struct `#:transparent`, and it has all the same
ramifications for equality.
The trouble is that `class/c` creates new wrapper classes, and every
class has its own struct type. Since the default behavior of `equal?` on
structs is to *never* be equal to structs of different types, even
subtypes, an object created from a contracted class can never be
`equal?` to an object created from the same class without contracts.
The solution is to add a `prop:equal+hash` property to `object%` itself
that emulates the default behavior of `equal?`, but sees through class
contract wrappers. Since struct type properties are inherited by
subtypes, this property will be present on all objects, and it only
needs to be attached once.
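A sketch of the intended behavior (an illustrative example, not taken
from the commit itself):
```
#lang racket
(define fish%
  (class object%
    (inspect #f)   ; like #:transparent for a struct
    (init-field size)
    (super-new)))

(define/contract checked-fish%
  (class/c (init-field [size real?]))
  fish%)

;; With this change, the contract wrapper is seen through:
(equal? (new fish% [size 3]) (new checked-fish% [size 3])) ; => #t
```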
fixes #2279
Mainly, this improves `make-keyword-procedure`: when applied to a single
argument, it now uses `procedure-rename` to ensure the resulting
procedure has the appropriate name. A couple other changes also guard
against the case where a lambda expression has no inferred name and no
source location information, which would lead to the source locations
in the implementation being used, instead.
Previously, all init arg contracts’ first order checks were always
checked, but a typo meant all but one of the projections was always
dropped! This fixes that, and it removes a little nearby dead code while
we’re at it.
In some cases, 0 results will be represented by a NULL results-array
pointer. Fix the interpreter to detect a single result completion
through a count of 1 instead of a NULL result-array pointer.
Also, remove a buggy extra push operation in the JIT-generated code for
`begin0`. (Other features of the JIT-generated code compensated for
the extra push in cases where the bytecode compiler didn't optimize
away the `begin0`, so it turns out not to have caused a problem, but
that's a surprising and fragile set of coincidences.)
Closes #2571
order (like it does with the argument and result contracts), but ensuring
that the pre and post conditions come before the arguments (if possible)
closes #2560
so that it collects the pre/post conditions into sorted order with the
arguments (based on the dependencies), but then discards that
information and always evaluates the pre and post conditions after the
argument/result contract checks
improve accuracy of tanh function
using the implementation of https://www.math.utah.edu/~beebe/software/ieee/tanh.pdf
by changing from (/ (- 1 exp2z) (+ 1 exp2z)) to (- 1 (/ 2 (+ 1 exp2z))), the accuracy after rounding is increased (I was comparing with bftanh) and it removes the fluctuations around z=18.35
using the polynomial for z ∈ (1.290e-8, 0.549) seems to increase the accuracy after rounding even further
see the comparison: http://pasterack.org/pastes/48436
especially the fact that (< (tanh 18.36) (tanh 18.37)) ;=> #t was tripping me up
the two extra conditions (z . < . 1.29e-8) and (z . < . 0.549) are optional to solve this
- Propagate disappeared uses from any pattern stx, not only those
attached to forms that themselves have a disappearing use.
- Fix for new local-apply-transformer handling of scopes.
This commit fixes an issue with the fix for contracted bindings in
signatures implemented in commit 5fb75e9f82. While the previous fix
worked in simple cases, it introduced a problem: signatures that
define contracted bindings were able to refer to other bindings
in the signature from within the binding contracts, but anyone doing
so was at the mercy of the exporting unit’s definition order. For example,
given a signature
(define-signature a^
[(contracted
[ctc contract?]
[val ctc])])
then a unit exporting the signature would cause a
use-before-initialization error if its definition for val appeared above
its definition for ctc.
This limitation did not exist in the units implementation prior to the
introduction of the sets-of-scopes expander in Racket v6.3 (after which
contracted bindings were broken until the aforementioned fix in Racket
v7.2). However, the fact that they worked at all seems semi-accidental:
instead of properly indirecting references to signature bindings within
binding contracts, the contract expressions were simply placed in a
context in which the existing names were bound. That approach meant that
any export that renamed identifiers could cause problems, which the
implementation strategy taken in this commit handles just fine.
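A sketch of the scenario from the signature example above; with this
commit, the unit's definition order should no longer matter:
```
#lang racket/base
(require racket/unit racket/contract)

(define-signature a^
  [(contracted
    [ctc contract?]
    [val ctc])])

(define-unit a@
  (import)
  (export a^)
  (define val 42)               ; previously a use-before-initialization error,
  (define ctc exact-integer?))  ; because `val` is defined before `ctc`
```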
When the result of `syntax-make-delta-introducer` adds scopes,
it needs to carry along any shifts that might be relevant.
The new implementation risks adding lots of redundant shifts. In this
case, it might be worth spending extra effort at shift-transfer time
to check whether the shift is redundant.
Closes #2542
Part of e7744efb7d triggered a test failure (that I missed by somehow
running tests incorrectly). It turns out that phase -1 transformer
bindings can be used in phase-0 code via shifting.
This change does not affect the repair for building with
machine-independent bytecode.
This change avoids the stair-step effect that is depicted in the
"current Racket -M" build plot from the January 2019 blog post about
Racket on Chez Scheme.
The stair step in that plot is a result of a combination of effects,
but one key part is that the `.set-transformer!` linklet import (to
support macro definitions) has a reference back to the namespace.
While `.set-transformer!` normally would not be captured in any
closure, `db/private/generic/prepared` creates a thread that causes
the "prefix" part of a closure to be moved to a thread's runstack
before it can be pruned by the GC. The stair-step problem happens only
when running directly from machine-independent form, because that form
is recompiled in a way that doesn't optimize away the unused
`.set-transformer!` import. The change in this commit avoids a
reference to the namespace in some cases where it will not be useful,
which turns out to be sufficient to address the build problem.
A more complete repair would be to change the compiler to pair a
closure prefix on the runstack with a liveness mask. An even more
complete repair is to switch to Racket CS. Racket CS is immune to the
problem, even when running from machine-independent bytecode, because
its closures do not keep extra references (with the tradeoff that
there's less sharing).
To make fasl writing as deterministic and portable as possible, write
+nan.0 and +nan.f always with a specific bit pattern.
This choice risks losing information that is potentially useful, but
given the way that Racket treats all NaN encodings as equivalent, that
risk seems low.
For example, `#hasheq()` is `eq?` to `(hasheq)` and `(hash-remove
(hasheq 'x 2) 'x)`. Making the empty hash table unique avoids some
potential and actual inconsistencies between traditional Racket and
Racket CS, such as in machine-independent bytecode.
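Concretely, per the examples above:
```
(eq? #hasheq() (hasheq))                        ; => #t
(eq? #hasheq() (hash-remove (hasheq 'x 2) 'x))  ; => #t
```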
Move different handling of serialized syntax data to the schemify
layer instead of the expander, so that the result of compiling in
machine-independent form is the same for traditional Racket and Racket
CS.
The `--recompile-only` flag is intended to help detect build
problems, especially distribution builds where packages are
supposed to be in built form.
This allows it to cooperate better with Typed Racket, particularly
regarding the `Any` type. The guard and use of `#:authentic` also
check that it's still a singleton in all cases.
Avoid parsing cross-linklet optimization information until it is
needed. This change also avoids a problem with saving hash codes
that are platform-specific.
Instead of just consulting `lib-search-dirs` in the host system's
config during cross-build mode, use `lib-dir` if set to arrive at
the expected default when `lib-search-dirs` is not set.
Handle not-this-platform paths that manage to evade the heuristics for
converting paths to and from relative form. Otherwise, building can go
wrong on Windows when using machine-independent starting files
generated on Unix-like systems.
The `--error-out` and `--error-in` flags are meant to work together to
chain a sequence of `raco setup` steps where one of them might fail,
but other steps should proceed. The last step in that sequence should
use only `--error-in`, so that it exits with failure if any of the
steps failed.
The `both` target of the toplevel makefile uses `--error-out` and
`--error-in` to let a Racket CS build proceed as long as the
traditional Racket build made it to the last `raco setup` step, which
means that it survives package-build errors.
The Chez Scheme fasl format is not machine-independent when record
types are involved, so use the process that serves compilation to also
serve fasl encoding.
In parallel build mode, if attempting to compile a file triggers a
cycle error that is caught and discarded, don't leave behind a
dependency (that is effectively resolved by the error) in the
parallel-worker manager.
It doesn't do anything, but make it a conforming variant of the
identity function. Also, fill in checking for `compile-linklet`,
and correct documentation errors for `compile-linklet` and
`recompile-linklet`.
Makefile and configure refinements, including targets to let the
distro-build package drive a cross-build from scratch. A cross
build on Mac OS for Windows now works, for example.
The intent was never for the data argument to be optional, but a
mistake in traditional Racket's argument dispatch for `log-message`
made it optional in some cases, so the simplest way forward is to make
it consistently optional. Repair traditional Racket to use `#f`
instead of a random value when the data argument is not provided.
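A small sketch of the now-consistent behavior (the logger name here is
made up):
```
(define lg (make-logger 'example))
(log-message lg 'info "message with no data")      ; data defaults to #f
(log-message lg 'info "message with data" 'extra)  ; explicit data, as before
```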
Add options to load a "plug-in" cross compiler, which should be a Chez
Scheme patch file plus declarations for the built-in libraries. Since
loading a patch file replaces the initial compiler, a separate
cross-compiler process is used to load the plug-in.
Adjust build process to be able to generate Racket.exe, etc., for
Racket CS using MinGW. Much of this cross-compilation support can work
for building other platforms, too, but some of the details are filled
in only for generating Windows executables.
When `connect` returns an error immediately, save that error instead
of expecting it to be available later via `getsockopt`. That avoids a
problem on TrueOS, for example.
Some parts of the implementation used for comparison were omitted when
allocation operations are not supported (but comparisons don't
allocate). This problem was uncovered by running the "jitinline.rktl"
tests with RacketCGC.
A recent revision to the way modules are instantiated for handling
runtime paths did not work right for modules from source (i.e., no
bytecode available) that have submodules.
Closes #2486
Avoids internal errors (including unsafe behavior) in an example like
```
#lang racket
(begin-for-syntax
  (local-expand
   #'(#%plain-module-begin
      (begin-for-syntax
        (define x 42)))
   'module-begin
   '()))
(begin-for-syntax
  (println x))
```
This example is weird, because it creates an `x` binding that doesn't
survive to the full expansion. Before the repair, the disappearing
binding created trouble for the expanded-to-linklet pass.
The example is weird for a second reason, which is that it uses
`local-expand` in a place where it will be triggered by visiting the
module. It turns out that raising a syntax error at that time (from
`#%plain-module-begin`) did not work correctly due to lazy
instantiation of the expansion context.
Closes #2458
Add `syntax-protect` to some macro expansions, especially macros in
contexts where unsafe operations are imported, which means that a
combination of `local-expand` and `datum->syntax` could provide access
to the unsafe bindings absent `syntax-protect`.
Inspired by the way the Chez Scheme number parser works, change the
one in the expander to be faster and probably clearer. This improved
performance brings number parsing almost back in line with the v6.12
parser's performance.
The revised parser is faster because it goes through an input string
just once. The new parser is also more complete; it doesn't rely on a
host-system `number->string` (except for dummy extflonums when
extflonums are not supported).
If you're reading the commit history, beware that the note on commit
be19996953 is incorrect about the change to parsing divide-by-zero
errors. (It explains a change that was edited away before merging.)
This commit really does change the behavior, though, again as a better
match for v6.12. Specifically, "/0" (with no hashes) always triggers
divide-by-zero in an otherwise well-formed number, even if `#i` is
used.
Speed up JSON parsing (usually around x4 to x8) by avoiding regexp
matching and using more direct byte and character operations. Along
similar lines, compute parsed numbers directly instead of converting
to a string and then using `string->number`.
The revised reader behaves differently only in the case of a bad input
stream, where it may consume more bytes from the stream than the old
one due to eagerly reading bytes instead of tentatively matching
peeked bytes. Also, a UTF-8 decoding error is just `exn:fail` like
other input-parsing errors, and not `exn:fail:contract`.
Related to PR #2472, marks a few other functions as NORETURN.
Namely:
- scheme_signal_error
- scheme_wrong_count
- scheme_wrong_count_m
- scheme_case_lambda_wrong_count
- scheme_wrong_type
- scheme_wrong_contract
- scheme_wrong_field_type
- scheme_wrong_field_contract
- scheme_arg_mismatch
- scheme_contract_error
- scheme_wrong_return_arity
- scheme_unbound_global
Unfortunately static analysis is done per compilation unit, so
although, for example, scheme_wrong_contract calls scheme_raise_exn
and the latter is already marked NORETURN, the analyzer does not know
this. Therefore we need to manually propagate the NORETURN for each
function declaration.
The unsafe-fd->evt interface is based on unsafe-{file-descriptor,socket}->semaphore.
The main differences are that these events are level-triggered, not edge-triggered, and
they do not cooperate with ports created by unsafe-{file-descriptor,socket}->port.
scheme_raise_exn raises an exception and doesn't return.
Static analysis tools find a huge amount of problems with regards
to memory leaks that are actually false positives because the tools
are not aware the function does not return. Marking it as such aids
further inspection of real problems.
The documentation and implementation were confused about whether \D,
\S, and \W match non-ASCII characters. Now they do. The new regexp
implementation (as used in Racket CS) already matched them.
I understand what the idea is in this file, except this code won't
work like the author expected it to. Variables marked for wiping won't
be wiped unless they are marked as volatile. The compiler will simply
remove the code wiping the variables and issue a warning, which is
what brought me to look into this code in the first place.
Make the slow path faster by reducing input- and output-end
coordination. Also, avoid retaining one end just because the other end
is retained.
This change involves adding an indirection for the fast-path buffers
so that management for both ends of a pipe can be centralized
independent of the ports.
Sort of. This is where we especially take advantage of vtable
flexibility. The methods of the vtable are really closures,
because that's far more convenient for custom ports.
Change the internal port representation to an object-with-vtable
representation. The syntax looks similar to the class system of
`racket/class`, but everything is first-order: no class values, no
mixins, etc. Also, the vtable can contain non-procedures (like #f for
"not supported" or a port to mean a direcirection).
Using objects will make port instances smaller and support a
reorganization to eliminate ad hoc `data`-field extensions. It will
also replace a half-step that was in place for byte input.
Along with the conversion, change the way the fast path for writing
works: When possible, expose a shared buffer and index into that
buffer.
Only byte string input ports are really converted, so far. A
compatibility layer maps the old protocol to the new one, so
conversion can continue piecewise.
Show the compile-time value that is not a procedure. While
this runs some risk of exposing details that are meant
to be private to a macro/language, a macro/language can
use an applicable structure to provide a more specific
error message. Meanwhile, showing the value is likely to
help for someone who needs to debug a macro problem.
When the desired reference is not an advertised commit, then try
pulling just a few commits --- at depth 8, 16, and 32 -- from the
"master" branch to check whether the commit can be found that way. If
not, fall back to the exhaustive search that requires a full download.
This should help with the common case that a package reference into
the Racket repo is a few commits behind the current master branch
(because the package server hasn't scanned the repo recently enough).
It's much faster to discover that the commit is within the first 32,
which it almost always is, than to download the entire repository.
Upgrading an auto install to an explicit install runs into trouble if
the auto install is in a wider scope. It doesn't seem necessary to
promote already-installed packages for migration, anyway.
- Improve performance by using make-apply-contract, lifting,
fast path for dependent flat contracts.
- The positive blame party now consistently means the *macro def*
and the negative party means the *macro use*. The #:arg? argument
controls blame swapping.
Don't make expansion depend on `(system-type 'vm)`, because expansions
should be VM-independent. For example, distribution builds use a single
expansion and finish up from there for different Racket
implementations.
The "extension" module protocol predates the modern FFI and depends on
the C API. Since it's not supported on Racket CS, skip the check for
extension modules.
Skipping the check can reduce load time considerably. We should
consider deprecating the extension protocol for traditional Racket.
Don't defer any too-early variable checks to Chez Scheme, because the
schemify-inserted checks use the right names and include a reference
to the enclosing module.
The `char-numeric?` function was missing some Unicode characters that
have the numeric property, because it was calculated from the wrong
field of UnicodeData.txt.
Change from treating exact 0+1i and 0-1i like the corresponding
inexact values.
Also, change from treating `(atan 0 x)` as exact 0 only when x is
exact. That's consistent with `angle` producing exact 0 for a positive
real number.
The cutoff point for large-magnitude exponents (forcing a +inf.0 or
0.0 result) was wrong for bases below 10, and it did not take into
account the mantissa magnitude for some number forms.
Also, change the parsing of numbers with both `/` and `#` to be more
consistent. A `#` anywhere in the number should trigger inexact
treatment of a 0 in the denominator (so infinity or not-a-number instead
of divide-by-zero), even if `#` is only in the numerator. Meanwhile,
setting `read-decimal-as-inexact` to #f should count `#`s as `0`s and
not trigger inexact treatment.
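For example, per the rule described above (result as described, not
independently verified here):
```
;; A `#` anywhere triggers inexact treatment of the 0 denominator:
(string->number "1#/0") ; => +inf.0 rather than a divide-by-zero error
```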
Infer procedure names based on source locations, and suppress a
procedure name when it has #<void> for its 'inferred-name property.
Threading this information through the Chez Scheme layer involves a
hack, where a name starting with "[" indicates either "no name" or
"inferred from path".
Use "cs/c" to be parallel to the source tree, because making them
different is asking for trouble (e.g., using `configure` without
a separate "build" directory goes wrong).
The rktio/parse.rkt grammar doesn't handle empty argument lists and
was choking on this line, before it even got to my new line adding
rktio_udp_set_receive_buffer.
Fix by following example of using `(void)` instead of `()`. Two notes:
- I forget which variation of C or C++ requires (void) instead of ().
- Strictly speaking, this commit isn't part of the theme of this PR.
If I squash the other commits down to one, maybe I should leave this
separate.
Make a call to a foreign function behave as in traditional Racket: the
arguments are considered reachable in their unwrapped forms until the
foreign function returns.
A missing `unwrap` caused references to structure constructors to be
treated as potentially non-primitive procedures, which significantly
slows down calls to the constructor.
Probably, this started going wrong at a point where original names
were more consistently associated with defined identifiers.
Report source name when accessing a variable too early, and allow
multiple returns (based on continuation capture) for the right-hand
side of a `letrec`.
The repair directly implements `letrec` as needed in terms of `let`
and `set!`, instead of relying on Chez Scheme's `letrec`, unless
right-hand sides are simple enough. Implementing `letrec` that way
risks losing Chez Scheme optimizations, but schemify takes care
of many improvements already.
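A rough, hand-written sketch of the kind of rewriting described here
(illustrative only; the real schemify output also guards reassignment
and reports the source name for too-early references):
```
;; (letrec ([fact (lambda (n) (if (zero? n) 1 (* n (fact (sub1 n)))))])
;;   (fact 5))
;; becomes roughly
(let ([fact #f])
  (set! fact (lambda (n) (if (zero? n) 1 (* n (fact (sub1 n))))))
  (fact 5)) ; => 120
```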
Get more of the benefit of traditional Racket's lazy bytecode
unmarshaling by using an explicit `fasl->s-exp` step on the serialized
form of syntax objects. This approach also avoids generating pointless
machine code for constructing the serialized form, effectively using
`fasl->s-exp` as an interpreter. The result is significantly smaller
".zo" files for RacketCS and slightly faater load times.
* Add support for space-efficient vector and arrow contracts.
When an eleventh contract would be applied to a function or vector,
switch representation for the wrapper and try eliding redundant
checks. The resulting value keeps a constant number of
chaperone/impersonator wrappers regardless of the number of contracts
applied to it, and won't run any (provably) redundant checks.
This avoids a pathological case where, e.g., a function crosses a
boundary inside a loop, and gets wrapped N times (or worse, 2^N).
The optimization for function contracts currently only applies for
fixed-arity functions and contracts, and only for functions with known
result-arity of 1. These limitations are not fundamental.
The checks themselves are not yet as optimized as for regular arrow
contracts. (Specifically: arity-specific wrappers and
tail-marks-match support is missing.) Again, not a fundamental
limitation.
Further described in the OOPSLA 2018 Paper: "Collapsible Contracts: Fixing a Pathology of Gradual Typing"
In collaboration with Ben Greenman, Christophe Scholliers, Robby Findler, and Vincent St-Amour.
Recent changes to adapt cm to cross-multi mode also attempted to
improve dependency checking to avoid prematurely committing to
compiling an old dependency, but that improvement was broken.
In multi-cross mode, don't rewrite a machine-independent file
by recompiling it to itself. This shouldn't matter, but not
touching files makes the result cleaner.
When a `[case-]lambda` form's only free variables are at the module
level, the Schemified form is a `[case-]lambda` form whose only free
variables are in an enclosing `lambda` for a linklet. Since those are
not completely closed, to make the allocation pattern consistent with
traditional Racket, Chez Scheme needs a hint to allocate the closures
once per linklet instantiation.
When an ephemeron is accessed through a weak mapping from the same key
that is used in the ephemeron, and when the key is not otherwise
reachable, there can be a race between extracting the value from the
ephemeron and performing a GC that reclaims the key. Avoid that race
by supplying the key back to `ephemeron-value`, which ensures that the
key remains reachable until the value is extracted.
In many cases, supplying the key as the second argument would also
work --- since that argument is used as a replacement value when the
key is inaccessible, but the key can't become inaccessible if it's
pending as a replacement value. A separate optional argument to
`ephemeron-value` seems clearer and more general, though.
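A sketch of the pattern this enables, using the optional retained-key
argument described above:
```
(define key (gensym))
(define eph (make-ephemeron key 'value))
;; Supplying `key` as the extra argument keeps it reachable until the
;; value has been extracted, so a GC cannot clear the ephemeron first:
(ephemeron-value eph #f key) ; => 'value
```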
Avoid retaining namespaces that are created to gather runtime paths.
If expansion generates a lot of instances with a lot of type
information, for example, this repair can save a lot of space.
If the sub-template inside #(...) is unsyntax-splicing instead
of list, produce the template #((~@! . ????)) instead of calling
(datum->syntax o list->vector o syntax->list). Fixes #2402.
Fix some race conditions involving concurrent setup tasks that are
each trying to generate both machine-independent bytecode and
machine-specific bytecode.
add a function to escape any glob wildcards in a path or string
also add a private `glob-element->filename` function so that, e.g., the pattern
`a\*` matches the file named `a*` (previously, the match would fail, and
I think it was impossible to match only `a*`)
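A sketch of the intended use; the commit text does not name the new
escaping function, so `glob-quote` below is an assumption:
```
(require file/glob)
;; Escape the wildcard so the pattern matches only a file literally named "a*":
(glob (glob-quote "a*"))
```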
Fix the fallback interpreter (which is used for the "outside" of a
module that is too big to compile) so that it's safe-for-space.
This change is unlikely to repair any immediate problems, but space
safety problems are difficult to detect and avoid when the underlying
implementation is not safe-for-space, so fixing the interpreter is
likely worthwhile in the long run.
Module definitions and expressions need to have a prompt around them to
delimit continuation capture, and variable assignment needs to happen at
the right point to ensure that reassignment is guarded and
non-assignment is detected. But avoid the prompt when it's not needed,
such as around function definitions.
Closes #2398
Similar to a255def019, but for side effects potentially
exposed by definition RHS expressions, instead of
expressions not in a definition. Improve that commit and
this one by only forcing variable assignments at non-simple
expressions.
Travis is eliminating its container-based infrastructure
and deprecating the `sudo` keyword.
This commit also updates the example build matrix to use
more recent Racket versions.
Corresponds to https://github.com/greghendershott/travis-racket/pull/29
Discard local-variable names to avoid `gensym` artifacts in the same
way that a more complete compilation would discard the names. This
change does not affect function names, which are preserved through
separate properties.
In most cases, it's more important for `compiler/cm` to reliably
replace a file that might be busy than to make the file update atomic.
To support that kind of use, `call-with-atomic-output-file` implemented
a fairly reliable, multi-step, non-atomic process for replacing a file
on Windows.
For recompilation of bytecode in machine-independent form, however,
`compiler/cm` now really wants to atomically write a replacement
bytecode file. That's not generally possible on Windows (except on
NTFS with transactions, which are discouraged...), but MoveFileEx works
atomically in some cases and it's likely to work for the cases needed
by `compiler/cm`. Probably.
So, add a mode to `call-with-atomic-output-file` to get "more atomic"
updates on Windows. This mode is enabled by a callback that makes the
caller responsible for deciding what to do when the move fails, such
as waiting a while and trying again. And `compiler/cm` now waits a
while and tries again, up to a limit, which should be good enough for
recompilation.
Enable `raco {setup|make}` to build two sets of compiled files: one
set that is suitable for the current machine, and another set that is
suitable for a different machine or for all machines (i.e.,
machine-independent bytecode).
In the long run, this new `raco setup` mode supports cross-compilation
where the build machine and target machine have different bytecode
formats --- unlike the current cross-compilation mode, which relies on
there being a single bytecode format in traditional Racket for all
platforms.
In the short run, the new mode enables the faster creation of
Racket-on-Chez distribution builds. The build server can send out
machine-independent bytecode to client machines while using
machine-specific bytecode for itself to drive the build process.
The new compilation mode relies on a somewhat delicate balance of the
`current-compile-target-machine` and `current-compiled-file-roots`
parameters (as reflected by the `-M` and `-R` command-line flags for
Racket) as well as cross-compilation mode (as enabled by the `-C`
command-line flag).
The 'target-machine result from `system-type` reports the
default value of `current-compile-target-machine`.
Also, fill in pieces to make `setup/cross-system` work
for RacketCS, although cross-compilation is still several
steps away.
The new path for recompiling from machine-independent files
tries to read a ".zo" file without holding the recompilation
lock and without an `exn:fail:filesystem` handler.
Wait until replacement is more assured before deleting an existing
".zo" file.
Also, don't delete a ".zo" file that is later in the
`current-compiled-file-roots` search path than the one being written.
This refinement supports setting up a search path to try
machine-specific compiled files and fall back to machine-independent
files, for example.
Add `-M`/`--compile-any` to `raco setup`, `raco pkg install`, etc., to
build machine-independent bytecode, which is useful in the process of
building distributions.
The `parallel-lock-client` protocol expects a #f back when a
file was meanwhile compiled by another process. So, don't
just forget about a file after it is compiled, in case there
is still a lock request on the way for that file.
Actually, the machine-independent-to-specific part is trivial. The
hard part was making `compiled-expression-recompile` enable
cross-linklet optimization as it recompiles, since that involves
pulling apart metadata and putting it back together afterward.
The `compile-machine-independent` parameter controls whether `compile`
creates a compiled expression that writes (usually in a ".zo" file) to
a machine-independent form that works for any Racket platform and
virtual machine. The parameter can be set through the
`-M`/`--compile-any` command-line flag or the `PLT_COMPILE_ANY`
environment variable.
Loading machine-independent code is too slow for many purposes, but
separating macro expansion from backend compilation seems likely to be
a piece of the puzzle for cross-compilation and faster distribution
builds.
Converting "invalid memory reference" to an `exn:fail:contract` (which
is the default conversion) hides crashes as success when a test
expects an error.
Also, fix a bug that was hiding as an expected exception.
The Racket and RacketCS implementations had separate copies of
linklet-directory and linklet-bundle reading and writing. Move the
implementation into the expander layer.
The primitive '#%linklet instance now omits directory and bundle
operations and `read-compiled-linklet`. It instead must provide
`write-linklet-bundle-hash`, `read-linklet-bundle-hash`, and
`linklet-virtual-machine-bytes`.
In particular, when there isn't any redundancy detected, then
just make a single call into the projection and create just a single
class.
This seems to help on at least one of the configurations of
dungeon, which completes in about 6 minutes with this commit;
I gave up waiting after 15 minutes with the version of
Racket that didn't have it.
Improves the error message for:
```
(define-syntax (like-lambda stx)
  (syntax-case stx ()
    [(_ e) #'(lambda () e)]))
(like-lambda (define x 1))
```
Based on a report from @pkoronkevich.
When an executable distribution is created, some paths become
unavailable at run time, such as the result of `find-links-file`.
Change the contract on those functions and adjust the implementation
to return `#f` in those cases. This is a backward-compatible change in
the sense that uses that now return `#f` would have crashed before
(although it does shift the blame in that case).
Based on an initial patch by Shu-Hung.
Closes #2352
Source mode was a leftover from early iterations of the expander. A
bootstrapping mode that uses replacement `compile-linklet`, etc.,
turned out better.
For consistency with traditional Racket; this currently matters on
Windows. The Windows implementation of file-truncate should probably
not move the file position as it does, though.
In GUI-application mode (e.g., running GRacket), a console is allocated
on demand if a program tries to use the original ports. Move that
on-demand handling into rktio, where it's simpler and works for
RacketCS.
One more take on the problem addressed by 990e1f1e30. This adjustment
avoids copying properties from the original form to the identifier
that is preserved in 'origin.
The `get-compiled-file-sha1` function assumed that a ".dep" file is
up-to-date when present. That may not be consistent with all uses,
including in `file-stamp-in-paths` as used by DrRacket for "populate
compiled", and an old file can go wrong with the recent ".dep" format
change. Make `get-compiled-file-sha1` at least check the version on
the ".dep" content before trying to use it.
Relevant to #2354
A function that uses `call-with-immediate-continuation-mark` in tail
position should not be flagged as "preserves marks", because the JIT
needs to bump the mark stack if the function is called in non-tail
position.
Closes #2333
The repair is more precisely a repair to xform, which incorrectly
parsed a C function definition that starts "struct" as a struct
declaration. (The function starts "struct" because the return type is
"struct Scheme_Overflow_Jmp *".) Since the function wasn't recognized,
xform didn't convert it to cooperate with the garbage collector.
Closes #2341
Previously, the following program would print "error writing to
stream port" on program exit.
(define cust (make-custodian))
(define out
  (parameterize ((current-custodian cust))
    (open-output-file "test.data" #:exists 'truncate)))
(write-string "This needs flushing...\n" out)
(custodian-shutdown-all cust)
(exit 0)
So far, bytecode for traditional Racket has been kept separate from
RacketCS bytecode by using a different "compiled" subdirectory for
RacketCS. That makes sense for development work to allow the
implementations to coexist, but it creates trouble for packaging and
distributions, and it (hopefully) won't seem necessary in the long
run. Treating the different virtual machines like different versions
seems more generally in line with our current infrastructure.
Rearrange the configure scripts so that it will be possible to build
RacketCS from a source distribution and have it installed in the right
place. Also, when building Racket3m just to bootstrap RacketCS, don't
install Racket3m.
Retains a strong link to a place-channel write end when there's at
least one waiting thread. This is symmetric to keeping a strong link to
the read end when the place-channel queue is non-empty. The change
repairs a problem building documentation with places in `racocs
setup`.
Refines 2ef8d60cc6 to avoid characterizing the failure as a `(-> any)`
contract on `hash-ref`, since `hash-ref` doesn't enforce that contract
in general. Go back to an `exn:fail:contract:arity` error, but keep
the specialization of the error message to clarify that it's from
`hash-ref`. Also, bring RacketCS into sync.
Although a `directory-exists?` check is useful for providing better
error messages, it's fundamentally a race condition, since an external
process can always remove a directory between the check and a use of
the directory. Because of that limitation of `directory-exists?`, we
normally avoid making it part of a contract. This commit adjusts
937aa3cdb1 to follow that convention while preserving the helpful
check and documentation improvements.
Their semantics assume that all directory `path-string?` arguments point
to existing directories in the filesystem, but they do not actually
check, resulting in unhelpful inner exceptions that
break the functions' semantic abstractions.
Fixed by adding appropriate checks.
Test cases included too.
Documentation updated to reflect the requirement for paths to
refer to existing directories.
Also added note that `generate-stripped-directory` does not
compile or render source files.