Before, they would create a crippled version in that mode, but
that is no longer necessary (it may never have been necessary,
but it certainly hasn't been necessary in a while)
In an ultimately vain attempt to make plotting fast but still allow
plots with bounds that have very few flonums in them, Plot used to try
to detect when each axis's bounds had enough flonums in it and switch to
flonum-only math. There are two problems with this:
* It was trying to determine whether *future computations* would have
enough precision, which is very difficult to know for sure, so it
only *usually* worked. (Example fail: contour-intervals3d with 0 and
+max.0 endpoints. Half the isobands weren't drawn.)
* It caused extra conversions between flonums and exact rationals,
which are very slow.
Here's the new approach:
1. Always send exact rationals to user functions (e.g. the lambda
arguments to `function' and `contour-intervals3d').
2. Immediately convert user-given plot coordinates to exact
rationals (if rational; e.g. not +nan.0)
3. Always represent normalized coordinates (e.g. in [0,1]^2 for 2D
plots and [-1/2,1/2]^3 for 3D plots) using flonums.
4. Reduce the number of exact operations as much as possible.
In other words, plot coordinates, which can be incredibly huge or tiny and don't
always fit in flonums, are always exact internally. Normalized, view,
and device coordinates, which are used only internally, always have
bounds with plenty of flonums in them, so Plot uses flonums.
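The split can be illustrated with a minimal Python sketch (the function and names are mine, not Plot's API): keep the user-facing coordinate math exact, and convert to floating point only once the value is a normalized coordinate, which by construction fits comfortably in a flonum.

```python
from fractions import Fraction

def normalize(x, x_min, x_max):
    """Map an exact plot coordinate into [0, 1] using exact arithmetic,
    converting to a float only at the very end (the analogue of step 3)."""
    # Exact subtraction and division lose no precision, even when the
    # bounds are astronomically large or tiny.
    t = (Fraction(x) - Fraction(x_min)) / (Fraction(x_max) - Fraction(x_min))
    return float(t)  # normalized coordinates always have plenty of floats

# Bounds far outside float range would overflow naive float math,
# but the exact computation is unaffected:
big = Fraction(10) ** 400
print(normalize(big / 2, 0, big))  # 0.5
```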
Doing #4 accounts for most of the changes; e.g. to the marching
squares and marching cubes implementations, and axial plane clipping.
Most plots' speeds seem to be unaffected by the changes. Surfaces and
contour intervals are sometimes faster. Isosurfaces are a tad slower
in some cases and faster in others. Points are about 50% slower,
partly undoing the speedup from a recent commit. Plots with axis
transforms can be much slower (4x), when long, straight lines are
subdivided many times. Plots with bounds outside flonum range seem
to be about 3x faster.
The `(letrec ([x x]) x)` pattern for getting the undefined value will
cease to work in a near-future version of Racket (after
v6.0.1). Instead, the initial reference to `x` will produce a "variable
used before its definition" error.
For libraries that need an "undefined" value, this new library
provides `undefined`, and the library will continue to work in future
Racket versions.
The library also provides a `check-no-undefined` operation that
normally should be wrapped around an expression to keep it from
producing `undefined`. For example, the class system initializes
object fields with `undefined`, but it could (and will, eventually)
wrap each access to a field within `class` to check that the field's
value is not `undefined`.
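The idea behind the sentinel-plus-check pattern can be sketched in Python (a loose analogue; the names here are mine, not the library's exports):

```python
# A unique sentinel standing in for Racket's `undefined` value.
class _Undefined:
    def __repr__(self):
        return "#<undefined>"

undefined = _Undefined()

def check_defined(value, name):
    """Raise if `value` is still the undefined sentinel, mirroring the
    "variable used before its definition" check described above."""
    if value is undefined:
        raise RuntimeError(f"{name}: variable used before its definition")
    return value

field = undefined              # e.g. an object field before initialization
field = 42                     # ...later initialized by the constructor
print(check_defined(field, "field"))  # 42
```

Wrapping each field access in the check is what lets a language expose the sentinel without letting it leak into user code.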
This makes plotting to a file the same (well, with slight differences)
regardless of file format. Scaling and other PS options are controlled
by a `plot-ps-setup' parameter. Not sure how useful that is, yet, so
it's undocumented.
The old depth sort hack split up groups of identical points in order
to sort them. Now the BSP tree build keeps them grouped, only splitting
groups of points as needed. Drawing parameters like color and pen
width are set only once for each group instead of once for each point.
The speedup is from reducing the number of calls to set them.
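A toy Python sketch of the idea (not Plot's actual drawing code): when depth-sorted shapes stay grouped by drawing state, the pen needs to be set once per run rather than once per shape.

```python
from itertools import groupby

def count_pen_changes(points):
    """Points are (pen_state, coordinate) pairs in depth order.  Set the
    pen once per run of identical state instead of once per point."""
    changes = 0
    for state, run in groupby(points, key=lambda p: p[0]):
        changes += 1            # one set-pen/set-color call per group
        for _state, xy in run:
            pass                # draw xy with the current pen
    return changes

pts = [("red", (0, 0)), ("red", (1, 1)), ("blue", (2, 2)), ("blue", (3, 3))]
print(count_pen_changes(pts))  # 2 state changes instead of 4
```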
Previously, the BSP type had a "negative" and "positive" subtree, and
a collection of unsorted "zero" (on the plane) shapes. Rather than "zero"
being a subtree (which would sort the shapes), it's rolled into "positive",
with corresponding changes to the BSP walk.
BSP tree build now supports scenes without polygons, though it does split
on polygonal planes first. When all such splits fail (usually because there
are no polygons, but might be because all the shapes lie on one polygon's
plane), it tries splitting on planes containing line segments. When that
fails (usually because there are no line segments), it tries splitting
disjointly on all remaining vertexes, then splitting at the midpoint of
the longest axis.
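The fallback order can be sketched in Python with a 1-D toy model (points on a line instead of 3D shapes; all names are mine): try candidate planes in priority order, keep the first one that actually separates the shapes, and fall back to the longest-axis midpoint otherwise.

```python
def side(shape, plane):
    # Toy 1-D stand-in: shapes are points on a line, planes are cut positions.
    return shape - plane

def choose_split(candidates, shapes, fallback_midplane):
    """Try candidate split planes in priority order (polygon planes first,
    then segment planes, then vertex planes, per the fallbacks above);
    keep a plane only if it actually puts shapes on both sides."""
    for plane in candidates:
        neg = [s for s in shapes if side(s, plane) < 0]
        pos = [s for s in shapes if side(s, plane) >= 0]
        if neg and pos:
            return plane
    return fallback_midplane   # last resort: midpoint of the longest axis

print(choose_split([5, 2], [1, 3], 99))  # 2: first plane that separates
print(choose_split([5], [1, 3], 99))     # 99: no candidate split works
```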
Upshot: it'll draw a double helix created by two `parametric3d' renderers
properly.
The new method finds bounding planes for polygons using their
(approximate) normals and works for any number of them instead of
just two. The algorithm is O(n^2) where n is the number of polygons,
so it is currently applied only when n <= 5; i.e. near the leaves.
errors and treats them as failing to satisfy the property
This bug disables the occurs check and, thanks to the now
tighter contracts on the uh function, this can cause it to
signal an error. Here's the example:
(term (unify (x → (list int)) ((list x) → x)))
found by in-order enumeration (so naturally it had ||
variables in it, not 'x's originally)
This leads to this trace in the uh function:
(uh ((x → (list int)) ((list x) → x) ·) ·)
(uh (x (list x) ((list int) x ·)) ·)
(uh ((list int) (list x) ·) (x (list x) ·))
(uh (int x ·) (x (list x) ·))
(uh (x int ·) (x (list x) ·))
(uh · (x int (int (list int) ·)))
The third step sets up the problem (that's where the occurs
check should have aborted the program), but instead, a few
steps later, when we get a substitution for 'x', the second
argument no longer has the form of a variables-for-types
substitution.
I don't think this can happen when the occurs check is in place
because the variable elimination step makes sure that we don't
set ourselves up to change the domain of anything in the second
argument to 'uh' (except possibly when the occurs check fails,
of course).
Here's an example that shows this:
(unify (q → q) (y → int))
(uh ((q → q) (y → int) ·) ·)
(uh (q y (q int ·)) ·)
(uh (y int ·) (q y ·))
(uh · (y int (q int ·)))
In the second to last step we don't have (q int ·), we have
(y int ·) because the variable elimination step changes the
q to a y.
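The two guards at play can be sketched in Python over terms represented as nested tuples (a rough analogue of the model, not the Redex code): the occurs check catches cycles like x = (list x), and variable elimination rewrites the accumulated substitution so later steps never change its domain.

```python
def occurs(var, term):
    """True when `var` appears anywhere in `term` (terms are nested tuples)."""
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, t) for t in term)
    return False

def eliminate(var, replacement, subst):
    """Variable elimination: rewrite `var` to `replacement` throughout the
    accumulated substitution before extending it."""
    def subst_in(term):
        if term == var:
            return replacement
        if isinstance(term, tuple):
            return tuple(subst_in(t) for t in term)
        return term
    return {v: subst_in(t) for v, t in subst.items()}

# The failing example: unifying x with (list x) must hit the occurs check.
print(occurs("x", ("list", "x")))  # True -> abort instead of misbehaving
```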
Invoking a non-composable, empty continuation during the right-hand
side of a variable definition skips the definition --- while continuing
the module body. The compiler assumes, however, that variable references
later in the module do not need a check that the variable is undefined.
Fix that mismatch by changing `module` to double-check that defined
variables are really defined before continuing the module body.
(The check and associated prompt are skipped in simple cases, such as
function definitions.)
A better choice is probably to move the prompt to the right-hand
side of a definition, both in a module and at the top level. That's
a much different language, though, so we should consider the point again
in some future variant of Racket.
Closes PR 14427
to be simpler and clearer
This caused bug #6's counterexample to change:
because the unification algorithm no longer calls
∨, we have to find a different way to call it
The following should look as expected now:
* Intersecting or adjacent surfaces, isosurfaces, lines and paths,
even if they're sampled at different intervals.
* Rectangles that aren't spaced regularly on the x,y plane (e.g.
sideways histograms; the rectangles I plot to debug Dr. Bayes).
In general, the rendering should be robust enough to draw arbitrary
3D scenes, and it still produces high-quality PDFs.
Drawing 3D plots still has the same time complexity and seems to be
about as fast as before. Thanks to the fact that sorting objects by
view distance is an O(n) BSP tree walk, rotating 3D plots in
DrRacket is probably a little faster.
This update also adds direct support for connected lines, and
includes a fix that makes contour lines easy to see, even when
plotted on top of a non-contour-interval surface. (This wasn't
possible before.) Further, the 3D plot area no longer requires a
"grid center" vertex by which to sort grid-aligned shapes, making
the rendering API friendlier, and closer to ready for public use.
Commit 92b0e86ed1 turns
out to have been the wrong approach, because there was no mysterious
performance problem with TR. Instead, TR's `define` expanded to new
code that interacted poorly with a macro used in the math docs.
I've kept the refactoring from that commit because I think it still
makes the code clearer, but I've removed the now extraneous comment.
After commit 3d177e454e
running the main `math.scrbl` file would show peak memory
usage of around 600-700MB when before it was around 400MB.
The proximal cause appears to be the expansion of TR
definitions, which added an extra `begin` in some cases,
combined with redefinitions at the top-level. I don't
know the core cause yet.
Thanks to Matthew for pointing out the issue and to
Vincent for helping with debugging.