In e59f888, new GCs no longer inherit the `avoid_collection` value set by
`PLTDISABLEGC`, and so that setting is lost as soon as the master place spawns
(due to `GC_switch_out_master_gc`).
When a multiple-binding `let-values` form is split into a single-binding
form on the grounds that the right-hand side will definitely error,
the optimizer's effect clocks were advanced incorrectly.
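A hypothetical example of the affected shape (not taken from the issue):
  ;; the right-hand side definitely errors, which is the case where the
  ;; optimizer splits the multiple-binding form into a single binding
  (let-values ([(x y) (error 'e "fail")])
    (+ x y))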
Closes #1552
Instead of duplicating a context line, show "[repeats <n> times]".
This improvement particularly helps avoid a loss of useful context
now that `for/list` creates a non-tail recursion to build up
the result list.
Specifically, when it sees these contracts:
  (and/c real? negative?)
  (and/c real? positive?)
  (and/c real? (not/c positive?))
  (and/c real? (not/c negative?))
it generates the corresponding use of >=/c, <=/c, </c, or >/c, but
those contracts have also been adjusted to report their names as
(and/c real? ...).
This is mostly an improvement for contract-stronger, but it also makes
(between/c -inf.0 +inf.0) use the real? predicate
directly, instead of a more complex function.
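A hedged sketch of the resulting names (the quoted results are
expectations based on the description above, not verified output):
  (require racket/contract)
  ;; generated as a >/c-style contract internally, but named as written:
  (contract-name (and/c real? positive?)) ; expected: '(and/c real? positive?)
  ;; per the change above, checked by the real? predicate directly:
  (between/c -inf.0 +inf.0)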
Although "macOS" is the correct name for Apple's current desktop OS,
we've decided to go with "Mac OS" to cover all of Apple's Unix-like
desktop OS versions. The label "Mac OS" is more readable, clear in
context (i.e., unlikely to be confused with the Mac OSes that
preceded Mac OS X), and as likely to match Apple's future OS names
as anything.
For example, an `unsafe-unbox` call should not be moved past a
call to an unknown function that might change the box's content.
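A hedged illustration of the kind of reordering to avoid (the names here
are made up):
  (require racket/unsafe/ops)
  (define (read-box-then-call b f)
    ;; the unbox must not be moved after the call to the unknown
    ;; procedure `f`, which might set the box
    (let ([v (unsafe-unbox b)])
      (f)
      v))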
Thanks to Sergey Pinaev for the report.
Using `--disable-jit` causes futures to be disabled, and places are
always disabled for CGC; in that case thread-local variables are
not needed. Meanwhile, the 3m build still has places, so a "gmp.c"
compiled without thread-local support is broken.
The objective of lookup_constant_proc and the first part of
optimize_for_inline was to find out whether the value of an expression is a
procedure and, if so, to get that procedure in order to analyze its
properties or try to inline it. Both were called together in a few places,
because each one had some special cases that were missing in the other.
So, move the lookup and special cases from optimize_for_inline to
lookup_constant_proc, and keep only the code relevant to inlining in
optimize_for_inline.
If an OS-level thread other than a Racket thread logs a message, then
the message needs to be queued instead of handled immediately.
If multiple places are running, then the right handler thread is not
clear, so just queue to the main place's thread.
Closes racket/gui#66
Closes racket/drracket#77
Implement POSIX.1-2001/pax and GNU extensions for long paths and links
in `untar` and `tar`. Add a `#:format` argument to `tar` to select
among POSIX.1-2001/pax, GNU, or error encoding for long paths.
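A hedged usage sketch; the exact symbols accepted by `#:format` are
assumed from the description above:
  (require file/tar file/untar)
  (tar "archive.tar" "some-directory" #:format 'pax) ; 'gnu and 'error assumed
  (untar "archive.tar")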
it doesn't apply it the second time (since we know that the
only difference for indy blame is in the negative position
and we know that flat contracts never assign negative blame)
This commit, combined with the two previous ones (2b9d855 and 003e8c7), does
not seem to have a significant effect on the performance of ->i
contract checking. In particular, I see a 50% slowdown between the
version before and the version after these commits on the third `time`
expression below, but no significant difference on the first two.
(Without the improvement to `flat-contract?`, these commits are
a significant slowdown to `g`.)
#lang racket
(require profile)
(define f
  (contract (->i ([y () integer?]
                  [x (y) integer?])
                 (values [a () integer?]
                         [b (a) integer?]))
            values
            'pos 'neg))
(define g
  (contract (->i ([y () (<=/c 10)]
                  [x (y) (>=/c y)])
                 (values [a () (<=/c 10)]
                         [b (a) (>=/c a)]))
            values
            'pos 'neg))
(define (slow-predicate n)
  (cond
    [(zero? n) #t]
    [else (slow-predicate (- n 1))]))
(define h
  (contract (->i ([y () slow-predicate]
                  [x (y) slow-predicate])
                 (values [a () slow-predicate]
                         [b (a) slow-predicate]))
            values
            'pos 'neg))
(time
  (for ([x (in-range 100000)])
    (f 1 2) (f 1 2) (f 1 2)
    (f 1 2) (f 1 2) (f 1 2)
    (f 1 2) (f 1 2) (f 1 2)))
(time
  (for ([x (in-range 100000)])
    (g 1 2) (g 1 2) (g 1 2)
    (g 1 2) (g 1 2) (g 1 2)
    (g 1 2) (g 1 2) (g 1 2)))
(time
  (for ([x (in-range 10000)])
    (h 50000 50000)))
When the main interpreter loop is called for an application where the
argument array coincides with the current runstack pointer, then the
protocol is that the callee gets to modify that space --- and it
should modify that space as arguments become unused. The interpreter
was always copying arguments to a fresh space, though.
Both functions have a similar purpose and a similar implementation, so merge
them to consider all the special cases for both uses.
In particular, detect that:
  (if x (error 'e) (void)) is single-valued
  (with-continuation-mark <chaperone-key> <val> <omittable>) is not tail sensitive.
Also, because ensure_single_value also checked that the expression has no
continuation mark in tail position, it sometimes added an unnecessary
wrapper. Now ensure_single_value checks only that the expression produces
a single value, and a new function ensure_single_value_noncm checks both
properties, like the old function did.
Adjust the handling of lists and streams as sequences so that during the body of
  (for ([i (in-list l)])
    ....)
the variable `i` and its cons cell in `l` are not implicitly retained while
the body is evaluated. A `for .... in-stream` similarly avoids
retaining the stream whose head is being used in the loop body.
The `map`, `for-each`, `andmap`, and `ormap` functions are similarly
updated.
The `make-do-sequence` protocol allows an optional extra result so
that new sequence types could have the same properties. It's not clear
that using `make-do-sequence` is any more useful than creating the new
sequence as a stream, but it was easier to expose the new
functionality than to hide it.
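For example, a hedged sketch of a new sequence defined as a stream
instead (the helper name is made up):
  (require racket/stream)
  ;; an infinite counting sequence as a stream; iterating via in-stream
  ;; does not retain the already-consumed prefix
  (define (nats-from n)
    (stream-cons n (nats-from (+ n 1))))
  (for ([i (in-stream (nats-from 0))]
        [k (in-range 3)])
    (displayln i))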
Making this work required a repair to the optimizer, which would
incorrectly move an `if` expression in a way that could affect
space complexity, as well as a few repairs to the run-time system
(especially in the vicinity of the built-in `map`, which we should
just get rid of eventually, anyway).
Compile a `for[*]/list` form to behave more like `map` by `cons`ing
onto a recursive call, instead of accumulating a list to reverse.
This style of compilation requires a different strategy than before.
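As a rough sketch (not the actual expansion), for a loop like
(for/list ([i (in-range N)]) (SEL i)) the change is approximately:
  ;; old approach: accumulate and reverse
  (let loop ([acc null] [i 0])
    (if (unsafe-fx< i N)
        (loop (cons (SEL i) acc) (unsafe-fx+ i 1))
        (reverse acc)))
  ;; new approach: `cons` onto a non-tail recursive call, like `map`
  (let loop ([i 0])
    (if (unsafe-fx< i N)
        (cons (SEL i) (loop (unsafe-fx+ i 1)))
        null))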
A form like
  (for*/fold ([v 0]) ([i (in-range M)]
                      [j (in-range N)])
    j)
compiles as nested loops, like
  (let i-loop ([v 0] [i 0])
    (if (unsafe-fx< i M)
        (i-loop (let j-loop ([v v] [j 0])
                  (if (unsafe-fx< j N)
                      (j-loop (SEL v j) (unsafe-fx+ j 1))
                      v))
                (unsafe-fx+ i 1))
        v))
instead of mutually recursive loops, like
  (let i-loop ([v 0] [i 0])
    (if (unsafe-fx< i M)
        (let j-loop ([v v] [j 0])
          (if (unsafe-fx< j N)
              (j-loop (SEL v j) (unsafe-fx+ j 1))
              (i-loop v (unsafe-fx+ i 1))))
        v))
The former runs slightly faster. It's difficult to say why for
certain, but the reason may be that the JIT can generate more direct
jumps for self-recursion than mutual recursion. (In the case of mutual
recursion, the JIT has to generate one function or the other to get a
known address to jump to.)
Nested loops don't work for `for/list`, though, since each `cons`
needs to be wrapped around the whole continuation of the computation.
So, the `for` compiler adapts, depending on the initial form. (With a
base, CPS-like approach to support `for/list`, it's easy to use the
nested mode when it works by just not fully CPSing.)
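For example, a rough sketch (not the actual expansion) for
(for*/list ([i (in-range M)] [j (in-range N)]) (SEL i j)), where each
`cons` has to wrap the rest of the iteration, so the loops take the
mutual-recursion shape:
  (let i-loop ([i 0])
    (if (unsafe-fx< i M)
        (let j-loop ([j 0])
          (if (unsafe-fx< j N)
              (cons (SEL i j) (j-loop (unsafe-fx+ j 1)))
              (i-loop (unsafe-fx+ i 1))))
        null))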
Forms that use `#:break` or `#:final` use the mutual-recursion
approach, because `#:break` and `#:final` are easier and faster that
way. Internally, that simplifies the implementation. Externally, a
`for` loop with `#:break` or `#:final` can be slightly faster than
before.