The main change is the addition of a new way to specify projections, via
the #:val-first-projection keyword argument; the goal of the new API is
that more information can be supplied to the projection on the server side
of the contract boundary. In addition, the arrow contracts now, internally
and in some cases (see define-module-boundary-contract), return functions
that accept one more argument than the original function did, and call
sites are adjusted to pass the name of the negative party as that extra
argument. The rough idea is that if you have a program like this one:
#lang racket/base
(define (f x) (+ x 1))
(provide (contract-out [f (-> integer? integer?)]))
then the contract system produces something much closer to this:
#lang racket/base
(define (f x) (+ x 1))
(provide (rename-out [external-f f]))
(define (external-f neg-party x)
  (check-integer neg-party x)
  (define ans (f x))
  (check-integer neg-party ans)
  ans)
(define local-blame-information ...)
(define (check-integer neg-party v)
  (unless (integer? v)
    (raise-blame-error local-blame-information
                       #:missing-party neg-party
                       ...)))
where 'raise-blame-error' can now cope with blame objects that don't
have negative party information in them (when the #:missing-party
argument is supplied, it essentially just reassembles the original blame
record at that point).
Then, on the client side, if we had
(f x)
it gets transformed into
(f '<<my-modules-name>> x)
and if we had
(let ([g f])
  ...)
we get a slow path:
(let ([g (lambda (x) (f '<<my-modules-name>> x))])
  ...)
(where we don't literally create a wrapper lambda, of course; we actually
create a procedure chaperone).
The performance improvements seem pretty good (see below for the
precise programs that I ran and the numbers I got):
- first-order contract microbenchmark: 6x speedup
- higher-order contract microbenchmark: 1.5x speedup
- first-order TR contract interop microbenchmark: 6x speedup
This also reduces DrRacket's memory use by about 3% (from about
236.8 MB to 231.3 MB), presumably because more of the data structures
created by the contract system can be created just once, on the server
side.
Be aware, however, that not all combinators use the new projections
yet, and the code that translates between the two APIs is slow.
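To make the shape of the new API concrete, here is a rough sketch of a flat
contract written against #:val-first-projection using make-contract (from
racket/contract/combinator). The combinator name my-even/c and its error
message are made up for illustration and are not code from this commit; the
point is just the curried argument order, where the blame object arrives once
on the server side and the value and the negative party arrive later, at each
use site:
#lang racket/base
(require racket/contract
         racket/contract/combinator)
;; my-even/c is a made-up combinator, shown only to illustrate the API.
(define my-even/c
  (make-contract
   #:name 'my-even/c
   #:first-order (lambda (v) (and (integer? v) (even? v)))
   #:val-first-projection
   (lambda (blame)         ; server side: computed once, without the negative party
     (lambda (v)           ; the value crossing the contract boundary
       (lambda (neg-party) ; the client's name, supplied at each use site
         (if (and (integer? v) (even? v))
             v
             (raise-blame-error blame #:missing-party neg-party
                                v "expected an even integer, given: ~e" v)))))))
;; (contract my-even/c 2 'server 'client)  ; => 2
;; (contract my-even/c 3 'server 'client)  ; => blame error, blaming 'server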
There are also a bunch of other changes that got made while I was
doing this. Most notably, the changes above were not designed for the
performance improvements to arrow contracts, but to make it possible
to implement class/c contracts with a (hopefully) negligible space
overhead. That work is not yet finished, but this commit includes some
changes to the class system to prepare the way for those changes.
Unfortunately, these changes slow down 'send' on microbenchmarks by
about 24%. The slowdown comes from an extra 'if' (plus a predicate
test and possibly an extra let expression) that is inserted into the
expansion. I'm not happy about that (I was shooting for 10%), but I'm
not sure we can do much about it (except perhaps double-check my
measurements; see below).
Other misc changes:
- improve arity mismatch error messages by including the arity of the
  given procedure (closes PR 14220)
- adjust case-> so it generates less code; this seems to reduce the
  size of the math library .zo files by about 3% (from 10440k to
  10156k for me)
- speed up the contract tests a bit
- move recontract-out into racket/contract (also, document it)
- add define-module-boundary-contract (see the sketch after this list)
- streamline TR's any-wrap/c and adjust it to use
  #:val-first-projection instead of #:projection
- adjust a bunch of contracts to print more nicely
- and, of course, Rackety
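Here is a minimal sketch of the basic form of define-module-boundary-contract;
the fib example and the contracted-fib name are made up for illustration:
#lang racket/base
(require racket/contract)
(define (fib n)
  (if (< n 2) n (+ (fib (- n 1)) (fib (- n 2)))))
;; Bind contracted-fib to a version of fib whose contract behaves as if it
;; sat at a module boundary, so the negative party is filled in at use sites.
(define-module-boundary-contract contracted-fib fib
  (-> exact-nonnegative-integer? exact-nonnegative-integer?))
(provide (rename-out [contracted-fib fib]))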
--------------------------------
The precise programs that I tried and their timings (the raw data
for the performance claims above):
#lang racket/base
(module m racket/base
  (require racket/contract/base)
  (define (f x) x)
  (provide (contract-out [f (-> any/c any/c)])))
(require 'm)
(time
 (for ([x (in-range 100000)])
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)))
pre-push timings:
cpu time: 3553 real time: 3552 gc time: 52
cpu time: 3548 real time: 3552 gc time: 52
cpu time: 3525 real time: 3525 gc time: 54
cpu time: 3547 real time: 3547 gc time: 47
post-push timings:
cpu time: 515 real time: 515 gc time: 15
cpu time: 522 real time: 522 gc time: 17
cpu time: 560 real time: 560 gc time: 19
cpu time: 514 real time: 515 gc time: 19
cpu time: 507 real time: 507 gc time: 17
A second-order example using vectors (note that vector/c isn't yet
updated to use #:val-first-projection, so it will be way slower):
#lang racket/base
(module m racket/base
  (require racket/contract/base)
  (define (f x) (vector-ref x 0))
  (provide (contract-out [f (-> (vectorof any/c) any/c)])))
(require 'm)
(define v (vector void))
(time
 (for ([x (in-range 10000)])
   (f v) (f v) (f v) (f v) (f v) (f v) (f v) (f v)
   (f v) (f v) (f v) (f v) (f v) (f v) (f v) (f v)
   (f v) (f v) (f v) (f v) (f v) (f v) (f v) (f v)
   (f v) (f v) (f v) (f v) (f v) (f v) (f v) (f v)
   (f v) (f v) (f v) (f v) (f v) (f v) (f v) (f v)
   (f v) (f v) (f v) (f v) (f v) (f v) (f v) (f v)
   (f v) (f v) (f v) (f v) (f v) (f v) (f v) (f v)))
pre-push timings:
cpu time: 744 real time: 745 gc time: 20
cpu time: 679 real time: 679 gc time: 18
cpu time: 695 real time: 695 gc time: 23
cpu time: 743 real time: 742 gc time: 21
cpu time: 780 real time: 786 gc time: 21
cpu time: 723 real time: 726 gc time: 25
post-push timings:
cpu time: 448 real time: 448 gc time: 18
cpu time: 470 real time: 469 gc time: 19
cpu time: 466 real time: 465 gc time: 16
cpu time: 457 real time: 456 gc time: 15
cpu time: 465 real time: 466 gc time: 24
Using contracts in TR:
#lang racket/base
(module m typed/racket/base
  (: f (Any -> Any))
  (define (f x) x)
  (provide f))
(require 'm)
(time
 (for ([x (in-range 10000)])
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)
   (f 1) (f 2) (f 3) (f 4) (f 5) (f 6) (f 7) (f 8)))
pre-push timings:
cpu time: 357 real time: 357 gc time: 6
cpu time: 446 real time: 447 gc time: 4
cpu time: 361 real time: 359 gc time: 4
cpu time: 366 real time: 366 gc time: 5
cpu time: 368 real time: 367 gc time: 6
post-push timings:
cpu time: 63 real time: 63 gc time: 7
cpu time: 64 real time: 63 gc time: 8
cpu time: 63 real time: 64 gc time: 8
cpu time: 58 real time: 58 gc time: 8
cpu time: 59 real time: 59 gc time: 7
Slowdown for 'send':
#lang racket/base
(require racket/class)
(define c% (class object% (define/public (m x) x) (super-new)))
(define o (new c%))
(time
 (for ([x (in-range 100000)])
   (send o m 1) (send o m 2) (send o m 3) (send o m 4)
   (send o m 5) (send o m 6) (send o m 7) (send o m 8)
   (send o m 1) (send o m 2) (send o m 3) (send o m 4)
   (send o m 5) (send o m 6) (send o m 7) (send o m 8)
   (send o m 1) (send o m 2) (send o m 3) (send o m 4)
   (send o m 5) (send o m 6) (send o m 7) (send o m 8)
   (send o m 1) (send o m 2) (send o m 3) (send o m 4)
   (send o m 5) (send o m 6) (send o m 7) (send o m 8)
   (send o m 1) (send o m 2) (send o m 3) (send o m 4)
   (send o m 5) (send o m 6) (send o m 7) (send o m 8)))
pre-push timings:
cpu time: 251 real time: 251 gc time: 0
cpu time: 275 real time: 275 gc time: 0
cpu time: 250 real time: 250 gc time: 0
cpu time: 246 real time: 246 gc time: 0
cpu time: 247 real time: 246 gc time: 0
post-push timings:
cpu time: 303 real time: 302 gc time: 0
cpu time: 333 real time: 333 gc time: 0
cpu time: 315 real time: 315 gc time: 0
cpu time: 317 real time: 317 gc time: 0
cpu time: 311 real time: 310 gc time: 0

pkgs/scribble-pkgs/scribble-doc/scribblings/scribble/manual.scrbl
* Changed parameter name to #:escape-id.

pkgs/scribble-pkgs/scribble-lib/scribble/comment-reader.rkt

When a GC is needed for the shared space, a GC is triggered
in all places, and each place waits until every other place
has completed. However, the places also need to wait until
all other places are ready to *start* a GC; otherwise, a
place may be modifying a shared record while some other place
traverses it for a GC.
Closes PR 14229
Merge to v6.0

Track the transformation of a text-drawing context and reset it
when the current transform changes.
There was already an update on an existing layout for a given
character, but not an update for the context used to create layouts.
Merge to v6.0
closes PR 14170

Instead of checking the free pattern variables of the first template and
using that to choose which template to use, just try the first and, on
failure, try the second.
Note: ?? still only covers absent pvars; a non-#f, non-syntax value still
causes an error; an ellipsis repetition mismatch still causes an error; etc.

Allows some `raco` tools to work (such as `raco pkg`) when loading
information about other installed tools fails, so that problems can
be more easily corrected using working tools (other than `raco setup`,
which is a special case, anyway, for bootstrapping).
Closes PR 14221
Merge to v6.0

related to PR 14222
This doesn't completely fix that PR because the
implementation says that bytes are allowed and
the docs don't allow that. My guess is that the
implementation is wrong, not the docs, but I'm not
sure, and this may be a backwards-compatibility
kind of thing, too. Also, path->collects-relative
should probably be considered (it doesn't allow
bytes? as an argument).

- edits to the docs
- adjust the #:i-th argument interpretation to allow
any natural number (using modulo for finite enumerations)
- remove unnecessary gui dependency of a few test files
Please include on the release branch