The repair in 49a90ba75e reorders two lines in a way that, in
retrospect, seems worrying. I can't construct an example that goes
wrong, so maybe it's fine, but it seems possible (now or with some
future change) that attempting to visit available modules could lead
to the same attempt in the same thread and therefore a loop.
This commit changes the repair to just always take the lock instead of
fixing up the attempted shortcut. There's a tiny extra cost to always
taking the lock, but that trade-off seems like the better choice
overall.
If interrupts are disabled to prevent thread swaps, then don't react
to `end-atomic` by performing a deferred swap. That might happen
with logger callbacks after a GC where a deferred action was
overlooked due to a rare race condition.
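For context, the corresponding user-level interface is `ffi/unsafe/atomic`;
a thread swap requested while in atomic mode is normally deferred and
performed at `end-atomic` (a minimal sketch):

(require ffi/unsafe/atomic)

;; A swap requested between these calls is deferred; with this change,
;; the deferred swap is also skipped when interrupts are disabled.
(start-atomic)
;; ... work that must not be interleaved with other Racket threads ...
(end-atomic)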
Now that Chez Scheme supports libraries in boot files, make
"racket.so" work when it is converted to a boot file. Loading it as a
boot file can save around 20ms on every major GC, since the
"racket.so" code will be in the static generation.
For now, however, `racketcs` is not set up to use "racket.so" in boot
form.
* Relative paths should still be readable.
The recent PR to enable relative serialization resulted in serialized
objects that weren't actually readable (containing literal path
elements). This PR converts them to bytes.
* Move from serialize to relative path.
Also change `path->bytes` to `path-element->bytes`.
* Path elements can also be `'up` and `'same`.
Also merge in the relevant code from `racket/fasl`.
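For illustration, a minimal sketch of the element conversion using only
standard path operations (not the actual `private/relative-path` code);
the helper name `relative-elements` is hypothetical:

(require racket/path)

;; Convert a path to relative elements: ordinary elements become byte
;; strings via path-element->bytes, while 'up and 'same pass through as
;; symbols, since path-element->bytes does not accept them.
(define (relative-elements p base)
  (for/list ([elem (in-list (explode-path (find-relative-path base p)))])
    (if (symbol? elem)
        elem
        (path-element->bytes elem))))

(relative-elements (build-path (current-directory) "lib" "util.rkt")
                   (build-path (current-directory) "tests"))
;; => '(up #"lib" #"util.rkt")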
Use an association list instead of an eq hashtable. This choice is
compatible with assumptions in traditional Racket (i.e., that the
number of mark keys per continuation frame will be small) and cuts
about 1/4 of the time in a benchmark like
(define f
  ((contract (-> (-> integer? integer?))
             (λ () values)
             'pos 'neg)))
(time
 (for ([x (in-range 1000000)])
   (f x)))
* Allow the binder in `prop:serialize` to be a procedure.
This procedure is evaluated at serialization time, which is useful if
the deserializer is not known when the object type is created but is
known by serialization time.
* Add docs+tests.
* Add a history note.
* Add option to create relative paths for `serialize`
Previously, `serialize` would always create complete byte-string paths.
This adds an optional `#:relative-to` argument to `serialize` to enable
relative path creation (see the usage sketch after this list).
Now when `deserialize` finds a relative path, it resolves it with
respect to `current-load-relative-directory`.
* Moved fasl's path<->relative-path-elements functions
I moved them into the `private/relative-path` module so that `serialize`
can make use of them.
* Update serialize to use relative-path.
* Add tests.
* And update docs.
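A usage sketch for the option described above, taking the `#:relative-to`
keyword name from this PR's description (the released API may spell the
option differently):

(require racket/serialize)

;; Record only the part of the path below the given base directory.
(define ser
  (serialize (build-path (current-directory) "data" "config.rktd")
             #:relative-to (current-directory)))

;; Deserialization resolves the stored relative path against
;; current-load-relative-directory.
(parameterize ([current-load-relative-directory (current-directory)])
  (deserialize ser))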
When expanding a `(module* _name #f ...)` submodule, accumulate all
module scopes on the `#f` and use the `#f` for a reexpansion.
Attempting to start each time from the enclosing module's scope loses
scopes that were generated from previous expansions. One consequence
of losing a scope is that an original definition may appear to be
macro-introduced, and the defined variable may become inaccessible
in the module's namespace.
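For reference, the affected shape is a `module*` submodule with `#f` in
the language position, which shares the enclosing module's bindings and
scopes (a minimal illustration, not from an original report):

(module outer racket/base
  (define x 1)
  ;; After the fix, `y` remains recognizable as an original definition
  ;; and stays accessible in the submodule's namespace even when the
  ;; submodule is expanded more than once.
  (module* main #f
    (define y (add1 x))
    (provide y)))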
Generating the code for a `_fun` type takes hundreds to thousands of
times as long as in the traditional Racket VM, so cache results to
reduce the cost.
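For context, a `_fun` type is an FFI callout/callback description from
`ffi/unsafe`; each distinct description expands to wrapper code, and that
generated code is what the cache avoids regenerating (illustrative
example):

(require ffi/unsafe)

;; Two `_fun` descriptions; generating their wrapper code is the cost
;; that the cache described above is meant to reduce.
(define _int->int (_fun _int -> _int))
(define _callback (_fun _pointer _int -> _void))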
Further correct the implementation of `variable-reference-constant?`
on bindings to primitive variables.
This repair affects method-call ctype caching in `ffi/unsafe/objc`.
Add some logging there to make problems easier to detect. Also,
add and improve linklet-level performance logging for comparing
the traditional Racket VM to Racket-on-Chez.
When setting up the namespaces that imitate primitive instances,
the "constant" annotation wasn't set. The v6.x expander gets this
wrong, too, for different reasons.
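As a small illustration of the behavior being corrected (a sketch; the
actual fix is in how the primitive-instance namespaces are set up):

;; For a binding to a primitive variable such as `car`, this is expected
;; to report #t, meaning the referenced variable can never be mutated.
(variable-reference-constant? (#%variable-reference car))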
Make schemify inline structure accessors and mutators across linklet
boundaries --- or, in JIT mode, across function boundaries --- by
replacing an accessor or mutator with a `#%record?` test and
`unsafe-struct*-{ref,set!}` operation.
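A plain-Racket approximation of the rewrite (a sketch only; the real
transformation is applied by schemify using `#%record?`, not at the
source level, and `posn` is just an illustrative structure type):

(require racket/unsafe/ops)

(struct posn (x y))

;; A cross-boundary use of `posn-x` is compiled roughly as if written
;; this way: a cheap type test guards a direct field access, falling
;; back to the safe accessor (which raises the usual error).
(define (inlined-posn-x p)
  (if (posn? p)
      (unsafe-struct*-ref p 0)
      (posn-x p)))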
When linklets are compiled in JIT mode and a called procedure is to be
compiled on demand, consult a cache of compiled fragments (by default,
"jit.sqlite" in the addons directory) and either use an existing
compiled fragment or add to the cache after compiling.
Results for this initial implementation suggest that the idea is
workable. With the cache, starting a JIT-mode program a second time is
almost as fast as non-JIT mode (i.e., directly loading machine code).
Some refinements are needed: limiting the size of the JIT-fragment
cache, better contention handling, and better inlining of structure
operations in JIT mode (which may be useful to cross-linklet
optimization in non-JIT mode, too).
To avoid recording absolute paths from a build environment in bytecode
files, the bytecode writer converts paths to relative form based on
`current-write-relative-directory`. For paths that cannot be made
relative in that way and that are in source locations in syntax
objects, the printer in v6.x converted those paths to strings that
drop most of the path.
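For reference, the existing mechanism looks like this from Racket code
(a minimal sketch):

;; Compiled code written while current-write-relative-directory is set
;; records paths under that directory in relative form.
(define ns (make-base-namespace))
(define out (open-output-bytes))
(parameterize ([current-namespace ns]
               [current-write-relative-directory (current-directory)])
  (write (compile '(lambda (x) x)) out))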
The v7 expander serializes syntax objects as part of `compile` instead
of `write`, so it can't truncate paths in the traditional way. To help
out the expander, the core `write` function for compiled code now
allows `srcloc` values --- as long as the source field is a path,
string, byte string, symbol, or `#f`. (Constraining the source field
avoids various problems, including problems that could be created by
cyclic values.) As the core `write` for compiled code prints a path,
it truncates a source path in the traditional way.
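The constraint on the source field amounts to allowing `srcloc` values
like these (a small illustration):

(srcloc (string->path "prog.rkt") 1 0 1 7)  ; path source
(srcloc "prog.rkt" 1 0 1 7)                 ; string source
(srcloc 'interactive 1 0 1 7)               ; symbol source
(srcloc #f #f #f #f #f)                     ; no source information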
The expander doesn't constrain source locations in syntax objects to
have path, string, etc., source values. It can serialize syntax
objects with non-path source values at `compile` time, so there's no
loss of functionality.
The end result is to fix absolute paths that were getting stored in the
bytecode for compiled packages, since that's no good for installing
packages in built form (which happens, for example, during a
distribution build).
Although SHA-1 hashing functions are available from `openssl`
libraries, a fast cryptographic hash is useful for many purposes below
the layer where the OpenSSL library has been opened. And SHA-1 is
reasonably easy to add to rktio.
Meanwhile, provide an equally convenient SHA-2 function to discourage
bad security practices (i.e., using SHA-1 where SHA-2 should be
preferred).
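The corresponding convenience functions appear in `racket/base` in
current releases (a small illustration):

;; SHA-1 and SHA-2 digests without opening the OpenSSL library:
(sha1-bytes #"hello world")    ; => a 20-byte digest
(sha256-bytes #"hello world")  ; => a 32-byte digest (prefer SHA-2)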
Applying jitify to a linklet now generates fragments of code that
are not nested. The drawback of this approach is that calling
a nested function needs an extra indirection, and the closure
has an extra slot. The advantage is that the fragments can be
separately compiled and fasled, which could enable a cache of
compiled fragments.
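Roughly, the change amounts to lifting nested procedures into separate
top-level fragments, as in this source-level sketch (illustrative names,
not actual jitify output):

;; Before: `inner` is nested, so its code is embedded in `outer`'s.
(define (outer x)
  (define (inner y) (+ x y))
  (inner 1))

;; After (roughly): `inner` becomes its own fragment that can be
;; compiled and fasled separately; the call goes through that fragment,
;; and the captured `x` accounts for the extra closure slot.
(define (inner-fragment x y) (+ x y))
(define (outer/lifted x)
  (inner-fragment x 1))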
Use a `(let ([<name> ....]) <name>)` wrapper to communicate
an 'inferred-name property from correlated objects to
Chez Scheme. This strategy relies on a Chez Scheme patch to
make the wrapper work consistently.
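The wrapper has this shape (sketch):

;; A correlated object carrying an 'inferred-name property of `my-proc`
;; is emitted wrapped like this, so that Chez Scheme infers the name
;; `my-proc` for the procedure:
(let ([my-proc (lambda (x) x)])
  my-proc)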