To make the API consistent for MSVC versus MinGW builds, make
a function formerly required for embedding on 32-bit Windows
always available and required for all Windows variants.
Make the old-generation marking process incremental
on request, where `(collect-garbage 'incremental)`
makes a request.
Only the marking phase of an old-generation collection
is incremental, so far. In exchange for slower
minor collections and a larger heap, you get a major
collection pause time that is roughly halved. So, this
is a step forward, but not good enough for most purposes
that need incremental collection.
An incremental-mode request sticks until the next
major GC. The idea is that any program that could
benefit from incremental collection will have
some sort of periodic task where it can naturally
request incremental mode. (In particular, that
request belongs in the program, not in some external
flag to the runtime system.) Otherwise, the
system should revert to non-incremental mode, given
that incremental mode is slower overall and can
use much more memory --- usually within a factor of
two, but the factor can be much worse due to
fragmentation.
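For example, a program with an existing periodic task might add the
request there; a minimal sketch, where `do-periodic-work` and the
timing are hypothetical:

(define (periodic-task)
  (collect-garbage 'incremental) ; request sticks until the next major GC
  (do-periodic-work)             ; hypothetical per-iteration work
  (sleep 1)
  (periodic-task))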
After some expansions, an expression with the syntax property 'inferred-name of
'x is converted to one with ('x . 'x), so it's no longer useful for getting the
name of a procedure. Simplify the 'inferred-name syntax property to handle
these cases.
In the common case of a minor GC without a generation 1/2
or a major GC without compaction, a single pass suffices
to both mark and update references.
This change reduces overall GC time by 10%-25% on typical
programs.
Also, strengthen the checking that `#:permissive?` (off by default)
performs for `untar` and `untgz` to disallow a link whose target is an
absolute path or has an up-directory element.
The symbol is used as the "who" field in the error message.
Also fix lazy-require of runtime-report.rkt in residual.rkt; don't
load until syntax-parse actually needs to produce an error report.
(Previously was loaded to create handler whenever syntax-parse code ran.)
Using the GitHub API for GitHub sources can run afoul of API
limits. Since we now support the Git protocol generally, use
that for GitHub sources, too.
Set the `PLT_USE_GITHUB_API` environment variable to use the
GitHub API, instead.
When `compile` is used on a top-level definition, do not
create a binding in the current namespace, but arrange for
a suitable binding to be in place for the target namespace.
Closes #1036
The `prop:expansion-contexts` property can control the expansion
of a rename transformer in much the same way that conditionals on
`(syntax-local-context)` can control the expansion of other
transformers.
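A hedged sketch of how such a transformer might be defined, assuming
the property value is a list of allowed-context symbols:

#lang racket/base
(require (for-syntax racket/base))

(begin-for-syntax
  ;; A rename transformer intended only for expression contexts
  ;; (sketch; the struct and context list are illustrative).
  (struct expr-only-rename (target-id)
    #:property prop:rename-transformer 0    ; field 0 holds the target identifier
    #:property prop:expansion-contexts '(expression)))
;; (define-syntax alias (expr-only-rename #'target)) would then be
;; expanded only in expression positions.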
These avoid one layer of currying and are more efficient, getting
about a 1.3x speed up on this program:
#lang racket/base
(module server racket/base
  (require racket/contract/base)
  (provide
   (contract-out
    [f (-> integer? boolean? char? void?)]))
  (define (f i b c) (void)))
(require (submod "." server))
(time
 (for ([x (in-range 10000000)])
   (f 1 #t #\x)))
Adjust installation tools to support cross-installation (i.e.,
installation for a platform other than the current one) as triggered
by "system.rktd" in "lib" having different information than the
running Racket executable.
Support PNG-encoded icons in ".ico" files and executables.
For executables, instead of supporting only new icons that match the
sizes and encodings of existing icons in an executable, support
arbitrary replacement icons in an executable.
The improved functionality relies on a new library (currently
private) for general updates to a Windows executable's
resources.
A `module-suffixes` entry in a collection's "info.rkt" adds a
file suffix that is meant to be recognized globally (i.e., in
all collections) by all Racket tools.
The new fields are reported by the `compiler/module-suffix` library, which
is (so far) used by `raco setup`.
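For example, a collection's "info.rkt" might register a suffix like
this (a sketch; the ".rash" suffix is made up, and the value is a list
of byte strings as I understand the format):

#lang info
(define module-suffixes '(#"rash"))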
Note that if package A includes files with a suffix that is registered
by package B, then A should have a dependency on B, but `raco setup`
cannot currently detect that such a dependency is needed. That
dependency is likely to happen, anyway, since package A is likely
using libraries from package B.
When `place` expands, the body of the `place` form is placed into a
`(module* place-body-<n> #f ....)` submodule.
The `place` form previously placed its body in a lifted function,
where the function's exported name was based on
`(current-inexact-milliseconds)`. The generated submodules have
deterministic names, so that compilation is deterministic, and
submodule names don't collide (unlike exported function names) when
multiple `place`-using modules are imported into some other module.
Also, using a submodule avoids the problem that the clock doesn't
change fast enough on Windows.
Changes:
- Allow unit contracts to import and export the same signature.
- Add "invoke" contracts that will wrap the result of invoking a unit contract,
no wrapping occurs when a body contract is not specified
- Improve error messages
- Support for init-depend clauses in unit contracts.
- Fix documentation to refelct the above
- Overhaul of unit related tests
Handling init-depend clauses in unit contracts is a rather large and somewhat
non-backwards-compatible change to unit contracts. Unit contracts must now
specify at least as many initialization dependencies as the unit value being
contracted, but may include more. These new dependencies are now actually
specified in the unit wrapper so that they will be checked by compound-unit
expressions.
This commit also adds more information to the first-order-check
error messages. If a unit imports tagged signatures, previously the error
message did not specify which tag was missing from the unit contract. Now
the tag is printed along with the signature name.
Documentation has been edited to reflect the changes to unit/c contracts
made by this commit.
Additionally this commit overhauls all tests for units and unit contracts.
Test cases now actually check that expected error messages are triggered when
checking contract, syntax, and runtime errors. Test forms now expand into uses
of rackunit's check-exn form so only test failures are reported and all tests in
a file are run on every run of the test file.
Progress toward making the bytecode compiler deterministic, so that a
fresh `make base` always produces exactly the same bytecode from the
same sources. Most changes involve avoiding hash-table order
dependencies and adjusting scope identity. The namespace used to load
a reader extension is also better defined. Plus many other little
changes.
The identity of a scope that is unmarshaled from a bytecode file now
incorporates the hash of the file, and the relative order of scopes is
preserved in a bytecode file. This combination allows compilation to
start with modules that were loaded and compiled in different orders
(including delayed loading of bytecode fragments within one file).
Formerly, a reader extension triggered by `#lang` or `#reader` was
loaded in whatever namespace happens to be current. That's
unpredictable and can pollute a module build at the level of bytecode.
To help make builds deterministic, reader extensions are now loaded in
a root namespace of the current namespace.
Deterministic compilation in general relies on deterministic macros.
The two most common ways for a macro to be non-deterministic are by
using `gensym` (use `generate-temporaries`, instead) and by using an
unsorted hash-table traversal (don't do that).
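A sketch of the deterministic alternatives (the macro and the hash
traversal here are illustrative, not taken from the change):

#lang racket/base
(require (for-syntax racket/base))

;; Deterministic temporaries: generate-temporaries instead of gensym
(define-syntax (swap! stx)
  (syntax-case stx ()
    [(_ a b)
     (with-syntax ([(tmp) (generate-temporaries '(tmp))])
       #'(let ([tmp a]) (set! a b) (set! b tmp)))]))

;; Deterministic traversal: sort hash-table keys before iterating
(define (keys-in-order ht)
  (sort (hash-keys ht) symbol<?))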
At this point, bytecode generation is unlikely to be completely
deterministic, since I uncovered non-determinism mostly by iterating
attempts over the base collections. For now, the intent is not to
provide guarantees outside of the compilation of the base collections
--- but "more deterministic" is likely to be useful in the short run,
and we can improve further in the long run.
Nested splicing forms would lead to an "ambiguous binding" error
when the nested forms bind the same name, such as in
(splicing-let ([a 1])
  (splicing-let ([a 2])
    (define x a)))
The problem is that splicing is implemented by adding a scope to
everything in the form's body, but removing it back off the
identifiers of a definition (so the `x` above ends up with no new
scopes). Meanwhile, a splicing form expands to a set of definitions,
where the locally bound identifier keeps the extra scope (unlike
definitions from the body). A local identifier for a nested splicing
form would then keep the inner scope but lose the outer scope, while
a local identifier from the outer splicing form would keep the outer
scope but not have the inner one --- leading to ambiguity.
The solution in this commit is to annotate a local identifier for a
splicing form with a property that says "intended to be local", so the
nested definition will keep the scope for the outer splicing form as
well as the inner one. It's not clear that this is the right approach,
but it's the best idea I have for now.
1st is a small grammatical mistake
2nd is in a section about ->* yet mistakenly -> is referred to
3rd is about recontract-out yet contract-out is mentioned instead
4th clarifies return value for value-contract
5th replaces free-identifier? with free-identifier=?
Generating a use-site scope, instead of a macro-introduction scope,
prevents the scope's presence from triggering a #f result from
`syntax-original?`.
When a package "p" is clone-linked and the repo for "p" changes to be
a multi-package repository (e.g., with "p-lib", "p-doc", and "p"), a
`raco update` would get confused. Unfortunately, a plain `raco pkg
update p` can't work in that case, because the clone link would still
be a pathless repo URL; the repairs make `raco pkg update --lookup
--clone ..../p` work as it should.
Related: fix inference of package names in the early check for whether
a package is installed.
While `#:in-original-place? #t` provides one way to serialize
foreign calls, it acts as a single lock and requires expensive
context switches. Using an explicit lock can be more efficient
for serializing calls across different places.
For example, running "plot.scrbl" takes 70 seconds on my machine
in the original place and using `#:lock-name` in any place,
while it took 162 seconds in a non-main place with Cairo+Pango
serialization via `#:in-original-place? #t`.
Internally, the named lock combines compare-and-swap with a
place channel. That strategy gives good performance in the case
of no contention, and it cooperates properly with the Racket
scheduler where there is contention.
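A minimal sketch of the new option (the foreign library and function
are hypothetical):

(require ffi/unsafe)

(define libexample (ffi-lib "libexample")) ; hypothetical library

;; Calls through render! are serialized across places by the named lock.
(define render!
  (get-ffi-obj "render" libexample
               (_fun #:lock-name "example-render-lock"
                     _pointer -> _void)))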
- Coalesce repeated use of the same predicate.
- Fix scoring of Exact patterns, and scoring generally.
- Use `OrderedAnd` where needed.
- Guarantee that `and` patterns match in order.
- Thread bound variable information properly in GSeq compilation.
- Warn when variables are used non-linearly with `...`
(making this behave properly was not backwards compatible).
Closes #952, which now runs in <1ms, and add it as a test case.
Also add margin note about `?` patterns and multiple calls.
Adds a sealing function and an unsealing function to attach (or detach)
seals onto a class via impersonator properties. Since these
properties override, they do not accumulate wrappers.
Calling seal multiple times will still accumulate multiple seal
values inside the property.
A sealed class cannot be instantiated and a subclass may not
add class members that match any of the sealed names in its
sealed parent.
These functions are intended for use by TR's `sealing->/c`
contract, but are parameterized over checking functions and
could be used for other purposes.
An empty string provided to `raco pkg config --set catalogs` would
trigger an error previously. Instead, turn it into a `#f` in the
configuration file, which is replaced by the default search sequence.
Use the given readtable more consistently to parse
delimiters in the top-level form. This change particularly
addresses problems with trying to restore the original
`(` when parsing a hash table, but allowing nested
forms to still use a different `(` mapping.
I found I wanted this to make a `define/stub` macro that raises an error naming the defined identifier:
(define-syntax (define/stub stx)
  (syntax-case stx ()
    [(_ header)
     (let-values ([(id mk-rhs body) (normalize-definition/mk-rhs stx #'lambda #t #t #f)])
       #`(define #,id #,(mk-rhs #`(error '#,id "TODO: stub"))))]))
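A hypothetical use of that macro:

(define/stub (find-route graph start end))
;; (find-route g a b) would then raise: find-route: TODO: stub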
Closes #508.
When a structure type has `prop:impersonator-of`, follow it
when attempting to access impersonator properties.
This change fixes a problem with `impersonate-procedure` as
reported by Scott Moore.
When reporting an error during compilation, always show the path to the
module being compiled. That path was sometimes available in the error
message anyway, due to source locations for syntax errors, but often
there would be no path due to run-time errors in macros, a lack of
source locations on macro-introduced forms, etc.
The `raco setup` improvements rely on new machinery in `compiler/cm`,
and `raco make` inherits that machinery.
The parallel and non-parallel variants of `raco setup` reported
exceptions in slightly different formats, and now they're consistent.
The initial report of an exception now always shows an evaluation
context, while the summary's repeat of the error omits the context.
When using `compound-unit/infer` and similar, check the `link` clause
against each unit's static information for initialization dependencies.
Also, propagate dependency information in `define-compound-unit`.
Unlike `collapse-module-path`, it makes sense for
`collapse-module-path-index` to convert a relative module path index
to a plain module path. In other words, `collapse-module-path-index`
can convert a module path index to a module path.
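For example (a sketch; the paths and the exact result form are
illustrative):

(require syntax/modcollapse)

;; Collapsing a relative module path index against a plain module path
;; now produces a plain module path rather than a module path index:
(collapse-module-path-index
 (module-path-index-join "util.rkt" #f)
 '(lib "mypkg/main.rkt"))
;; => something like '(lib "mypkg/util.rkt")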
Part of the clarification is duplicating information about numbers
and characters in the documentation of `eqv?`. Since those two types
are the only special cases of `eqv?`, the duplication seems helpful
and manageable.
Change +nan.0 and +nan.f to be equal?-based contracts instead of =-based contracts.
Before this change, the contract (or/c 1 2 +nan.0) was the same
contract as (or/c 1 2), because +nan.0 was the same contract as
the predicate (lambda (x) (= x +nan.0)), which is the same as
(lambda (x) #f). Now, +nan.0 and +nan.f are the only numbers
that are treated as equal?-based contracts, but this means that
(or/c 1 2 +nan.0) actually accepts +nan.0.
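A sketch of the difference, using `contract-first-order-passes?` to
check acceptance:

(require racket/contract)

;; Previously #f, because +nan.0 acted like (lambda (x) (= x +nan.0)):
(contract-first-order-passes? (or/c 1 2 +nan.0) +nan.0) ; now => #t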
For GC purposes, a "prefix" (a closure frame that captures
top-level or module-level bindings) may refer to syntax objects
that are not used by any reachable closure, in which case the
syntax objects can be dropped. This pruning of syntax objects
uses the infrastructure already in place to prune variables.
Syntax objects were not included in the original pruning
implementation, because they are unlikely to create
finalization cycles in the way that global-variable
references can. A syntax object can retain a namespace's
table of module imports, however, which can be substantial
and worth releasing if a closure is only held, say, for
a low-level finalization action.
The handling of `for-template` imports by `namespace-attach-module`
didn't match the docs. The actual handling was to refrain from
attaching instances of a phase-0 module if the instance was reachable
only through a `for-template`. The rationale had to do with such
modules instances being created only through instantiation of
phase-1 modules, and phase-1 module instances aren't attached;
it doesn't work well that way, though, when different modules
are attached with intervening `namespace-require`s on the target
namespace.
The change includes a documentation correction. Previously and still,
only modules at the same phase as the attached module (as opposed to
the same phase or less) are instantiated in the target namespace.
Closes PR 14938
Document and exploit that any fragment in the Git or GitHub URL
for a package source must name a branch or tag (as opposed to a
commit) to work with clone linking.
If a file or directory delete fails, try adjusting the file or directory
permissions to allow writes, then try deleting again. This process should
provide a more Unix-like experience and make programs behave more
consistently.
A new `current-force-delete-permissions` parameter provides access to
the raw native behavior.
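A sketch of opting into the raw native behavior instead (the file name
is made up):

;; Skip the permission-adjusting retry within this dynamic extent:
(parameterize ([current-force-delete-permissions #f])
  (delete-file "some-read-only-file.txt"))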
Instead of introducing a subtype of `file-dependency` to imply one new
option, add a subtype that has an options table for easier
extensibility. (Thanks to Sam for pointing out that I shouldn't make
this mistake again.)
If module M in package P imports module N from package Q,
and if N has a `lazy-require` for a module in R that is
triggered during the compilation of M, then P doesn't really
depend on R; P depends on Q, and Q depends on R, and P
shouldn't necessarily know anything about Q. At the same time,
a change to the file in R means that M must be recompiled.
So, continue to track the compilation dependency, but mark
it as "indirect" so that the package-dependency checker can
ignore the dependency.
If the clone directory's checkout includes a target commit, then
use the clone directory directly for staging (i.e., for checking
dependencies and collisions). That way, changes made locally are
used for metadata checks.
The implemented default for `raco pkg update` actually depended on the
way that a package is installed, and it's difficult to reason about or
to implement the default that is suggested by the documentation.
Meanwhile, `search-ask` seems the most sensible always in interactive
mode (now that we have a way to specify batch mode).
If "sqlite3.dll" is installed as a foreign library but shouldn't
be, then `raco setup` cannot simply delete the file, because
starting `raco setup` opened the DLL. To avoid that problem,
rename the file to start with "raco-setup-delete-", then attempt to
delete the renamed file; the delete won't work, but the file
will be moved out of the way, and a future `raco setup` can
clean up.
The prefix "raco-setup-delete-" thus becomes special on Windows for
the directories that hold foreign libraries, shared files, and
man pages, because `raco setup` will try to delete any file
that starts with "raco-setup-delete-".
It's all very ugly, but I don't have a better idea for the
problems that I keep hitting.
Restore (but in a hopefully better way) a step that installs native
libraries before trying a full `raco setup`, since the libraries
may be needed for the setup process --- especially on Windows.
* `raco pkg show typed-racket` now shows just the "typed-racket" pkg.
* `raco pkg show --rx typed-racket` shows all packages that match the
regular expression "typed-racket".
* `raco pkg show` now only shows the first 8 characters of checksums
unless you provide the `--full-checksum` argument.
Packages that are installed as other than a link are not meant to be
edited, but work can get lost if a package is edited and then removed
or updated. Avoid that work loss by moving removed or updated packages
to a trash folder.
By default, the trash folder holds up to 512 packages for up to 48
hours. To disable the trash folder (for a given scope), use
raco pkg config --set max-trash-packages 0
(I expect that some variant of Greenspun's rule predicted the eventual
inclusion of "backup" management in the package system.)