http-proxy/ contains a suite of almost useful (but mostly useless) servers.
These can be used to test http-client and url.rkt.
git proxy is not tested yet -- I really wouldn’t know how
This patch adds https and git proxying through HTTP’s `CONNECT` method.
**Sanity Checks Needed:**
1. Is the git protocol proxying necessary?
It might be overkill, and I haven’t tested it thoroughly since `raco pkg
install` uses https as its transport anyway
2. If anyone is better clued up on HTTP `CONNECT` best practice, then
please check the headers that I pass (in `http-client.rkt`)
3. Is HTTP `CONNECT` the only/best way to proxy HTTPS? It is what *curl*
uses (which might be a good indicator)
4. Will the ports be closed properly? (does anyone see an fd leak?)
- how do I test for that? Open (and allegedly close) 1024 tunnels?
5. The `abandon-p` definitions in `http-conn-CONNECT-tunnel` could
probably be reduced, but they’re defined as they are to allow me to
put debugging hooks in
6. No tests or documentation yet
7. I edited this with *vim*, and therefore the indentation is a la vim.
I looked at doing a global reindent (of git-checkout) and so much
changed that I abandoned the idea. If the indentation is too
“off-style” then feel free to change it :-)
**git-checkout.rkt:**
- `initial-connect` now tries to use a git proxy (see `url.rkt`, below)
when *transport*=`git`
- (if *transport*=`https`, then `url.rkt`’s standard proxying will be
used)
**http-client.rkt:**
- `http-conn-open!` can now be passed a
`(list/c base-ssl?/c input-port? output-port? (-> port? void?))` to
describe:
- maybe a negotiated ssl context
- two tunnel (or other arbitrary) ports to use instead of newly
`...-connect`ed ports
- an abandon function for those ports
- `http-conn-send!` has a helper `print-to` which curries
`(fprintf to)` but leaves a hook for `eprintf` debugging
- **added `http-conn-CONNECT-tunnel`:** this opens a new `http-conn`
and arranges for CONNECT tunneling to `target-host` and `target-port`
(see the sketch after this list)
- factored contracts into `base-ssl?/c` and `base-ssl?-tnl/c`
- added contract for `http-conn-CONNECT-tunnel`
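A rough usage sketch (the exact argument order, keywords, and return values
here are my reading of the patch, not a settled API): open a tunnel through
the proxy, then hand the tunnel’s pieces to `http-conn-open!` instead of
letting it dial the target directly:

(require net/http-client)

;; hypothetical proxy and target hosts
(define-values (ssl-ctx from to abandon)
  (http-conn-CONNECT-tunnel "proxy.example.net" 8080
                            "target.example.net" 443
                            #:ssl? #t))

(define hc (http-conn))
;; the new (list/c base-ssl?/c input-port? output-port? (-> port? void?)) form
(http-conn-open! hc "target.example.net"
                 #:port 443
                 #:ssl? (list ssl-ctx from to abandon))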
**url.rkt:**
- `proxiable-url-schemes`: now includes `https` and `git`
- `env->c-p-s-entries`: the environment variable “parser” now takes a
rest-list of lists of environment variables, and the scheme that these
variables proxy is garnered from the variables’ names. As before
there are:
- `plt_http_proxy` and `http_proxy`
- `plt_https_proxy` and `https_proxy`
- `plt_git_proxy` and `git_proxy`
During the previous iteration (obtaining the proxy variables at
startup) we discussed the appropriate naming conventions for these
variables; this doesn’t seem to deviate from that (see the sketch
after this list for how a variable maps to a proxy entry)
- `env->c-p-s-entries`: a proxy URL that isn’t strictly
`http://hostname:portno` (e.g. one with a trailing slash) generates a
log warning, not an error. It was beginning to bug me
- `proxy-servers-guard`: accepts any one of the `proxiable-url-schemes`
(not just `http`)
- “no proxy” handling is agnostic to the URL scheme
- `proxy-tunneled?`: returns false for `http`, which is proxied using an
HTTP proxy. Returns true for other URL schemes -- which go through a
tunnel
- **`make-ports`:** tests whether a tunnel proxy is necessary. If so, it
creates a tunnel and plumbs the connections
- elsewhere, anywhere that tested for a proxy now tests for
`(and proxy (not (proxy-tunneled? url)))`, because tunneled HTTPS
connections are direct (once they’re through the tunnel, IYSWIM)
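As a concrete (hypothetical) example of the mapping, assuming the entries
keep the shape that `current-proxy-servers` already uses, setting

plt_https_proxy=http://proxy.example.net:3128

would contribute the entry

(list "https" "proxy.example.net" 3128)

so https requests get tunneled through that proxy, while
`plt_git_proxy`/`git_proxy` would do the same for the `git` scheme.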
Make the optimizer recognize and track `make-struct-type-property`
values, and use that information to recognize `make-struct-type`
calls that will definitely succeed because a property that has no
guard is given a value in the list of properties.
Combined with the change to required-keyword expansion, this
change allows the optimizer to inline `f` in
(define (g y)
  (f #:x y))

(define (f #:x x)
  (list x))
because the `make-struct-type` that appears between `g` and `f`
is determined to have no side-effect that would prevent `f` from
having its expected value.
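A sketch of the kind of call this covers (the property and struct names are
made up): `make-struct-type-property` with no guard means attaching a value
to the property cannot raise, so the `make-struct-type` call is known to
succeed:

(define-values (prop:p p? p-ref)
  (make-struct-type-property 'p))   ; no guard

(define-values (struct:s make-s s? s-ref s-set!)
  (make-struct-type 's #f 1 0 #f
                    (list (cons prop:p 42))))   ; property value needs no check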
Make the definition of a function with a required keyword expand in a
way that allows the optimizer to recognize it as a form that has no
errors or externally visible side effects.
The old expansion of
(define (f #:x x) ...)
included
(define lifted-constructor (make-required ....))
(define f (lifted-constructor (lambda ....) ....))
where `make-required` calls `make-struct-type` and returns just the
constructor.
The new expansion instead has
(define-values (_ lifted-constructor _ _ _)
  (make-struct-type ....))
(define f (lifted-constructor (lambda ....) ....))
In other words, `make-required` is inlined by macro expansion,
so that the optimizer will be able to see it and eventually
conclude that no side effects have taken place.
When a module defines and exports an identifier at two phases,
and another module imports both of them at the same phase,
an error was not reported as it should have been.
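A minimal sketch of the pattern (module names made up; this is my
reconstruction of the failing case, not a test from the patch):

#lang racket/base

(module m racket/base
  (require (for-syntax racket/base))
  (define x 'runtime)
  (begin-for-syntax (define x 'compile-time))
  (provide x (for-syntax x)))       ; `x` exported at phases 0 and 1

(module n racket/base
  ;; both requires put an `x` at phase 0, which should be reported
  (require (submod ".." m)
           (for-template (submod ".." m))))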
With this option, FFI calls always block until scheme_check_foreign_work
is called by the program embedding Racket.
This is needed for embedding Racket into contexts where you do not
control the event loop, you need Racket to make FFI calls, and those FFI
calls must occur on a thread within the event loop. A good example of
this is with OpenGL FFI calls that require the current thread to hold
the OpenGL/EGL context.
An important point is that each call to scheme_check_foreign_work
executes only a single FFI call. So if this is used for OpenGL rendering,
you'll want to call it frequently.
Some expressions are omittable only when the arguments have certain types.
In this case the application is marked with APPN_FLAG_OMITTABLE instead of
relying on the flags of the primitive.
Unlike other omittable expressions, the optimizer can't use this flag to move
the expression inside a lambda or across a potential continuation capture;
such expressions can be moved only under more restricted conditions.
For example, in this program
#lang racket/base
(define n 10000)
(define m 10000)
(time
  (define xs (build-list n (lambda (x) 0)))
  (length xs)
  (define ws (list->vector xs)) ; <-- omittable
  (for ([i (in-range m)])
    (vector-ref ws 0))) ; <-- ws is used once
If the optimizer moves the expression in the definition of ws inside the recursive
lambda that is created by the for, then the code is equivalent to:
#lang racket/base
(define n 10000)
(define m 10000)
(time
  (define xs (build-list n (lambda (x) 0)))
  (length xs)
  (for ([i (in-range m)])
    (vector-ref (list->vector xs) 0))) ; <-- moved here
And the new code is O(n*m) instead of O(n+m). This example is a minimized version
of the function kde from the plot package, where n=m and the bug changed the run
time from linear to quadratic.
The applications of some procedures are omittable when the arguments have
certain properties. Check the arity of the procedure before marking the
application as omittable.
The only case that appears to be relevant is the expression (-).
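For instance (a contrived sketch): an application of - with at least one
argument can be dropped when its arguments are known to be numbers, but the
zero-argument application is an arity error and has to stay:

(lambda (x) (-) x)    ; (-) raises an arity error, so it is not omittable
(lambda (x) (- x) x)  ; (- x) is droppable only when x is known to be a number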
The relevant predicates are almost disjoint. The overlap
is handled with predicate_implies and predicate_implies_not.
This is also valid considering the equivalence classes modulo
eqv? and equal?. So if the optimizer knows that two expressions
X and Y have different relevant types, then it can reduce
(equal? X Y) ==> (begin X Y #f).
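A concrete instance of the reduction (s and v are placeholder variables):
the optimizer knows (symbol->string s) produces a string and
(vector-length v) produces a fixnum, and those relevant types are disjoint,
so it can rewrite

(equal? (symbol->string s) (vector-length v))

to

(begin (symbol->string s) (vector-length v) #f)

keeping both subexpressions because they may still raise errors.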
Changes signatures in `syntax/modcode` to accept `path-string?` arguments
instead of `path?`.
Before, the docs listed `path-string?` but the contracts used `path?`.
Now they agree.
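A quick sketch of what the relaxed contract allows ("program.rkt" is a
made-up file name):

(require syntax/modcode)

;; previously only the path value satisfied the contract; now both do
(get-module-code (string->path "program.rkt"))
(get-module-code "program.rkt")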
The optimizer now makes more choices based on imported structure-type
info that the validator needs to reconstruct, so pass that
information all the way through.