GC: toward incremental collection
Make the old-generation marking process incremental on request, where `(collect-garbage 'incremental)` makes a request. Only the marking phase of an old-generation collection is incremental, so far. In exchange for slower minor collections and a larger heap, you get a major collection pause time that is roughly halved. So, this is a step forward, but not good enough for most purposes that need incremental collection. An incremental-mode request sticks until the next major GC. The idea is that any program that could benefit from incremental collection will have some sort of periodic task where it can naturally request incremental mode. (In particular, that request belongs in the program, not in some external flag to the runtime system.) Otherwise, the system should revert to non-incremental mode, given that incremental mode is slower overall and can use much more memory --- usually within a factor of two, but the factor can be much worse due to fragmentation.
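One way a program with a periodic task might make such a request (an illustrative sketch, not part of this commit; the loop shape and timing are assumptions):

```racket
#lang racket
;; Hypothetical event loop that opts into incremental collection.
;; Re-requesting on every iteration is harmless and keeps the mode
;; active, since a request lasts only until the next major collection.
(define (run-loop do-frame)
  (let loop ()
    (collect-garbage 'incremental) ; request incremental marking
    (do-frame)                     ; one frame of periodic work
    (sleep 1/60)
    (loop)))
```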
parent 7db0c3b1d4
commit c50c23c134
@@ -12,7 +12,7 @@
 (define collection 'multi)
-(define version "6.3.0.1")
+(define version "6.3.0.2")
 (define deps `("racket-lib"
                ["racket" #:version ,version]))
@@ -287,27 +287,45 @@ collection mode, the text has the format
 ]}
 
 
-@defproc[(collect-garbage [request (or/c 'major 'minor) 'major]) void?]{
+@defproc[(collect-garbage [request (or/c 'major 'minor 'incremental) 'major]) void?]{
 
-Forces an immediate @tech{garbage collection} (unless garbage
-collection is disabled by setting @envvar{PLTDISABLEGC}). Some
-effectively unreachable data may remain uncollected, because the
-collector cannot prove that it is unreachable.
+Requests an immediate @tech{garbage collection} or requests a
+garbage-collection mode, depending on @racket[request]:
 
-The @racket[collect-garbage] procedure provides some control over the
-timing of collections, but garbage will obviously be collected even if
-this procedure is never called (unless garbage collection is disabled).
+@itemlist[
 
-If @racket[request] is @racket['major], then a major collection is
-run. If @racket[request] is @racket['minor], then either a minor
-collection is run or no collection is run (and no
-collection occurs when @racket[(system-type 'gc)] returns
-@racket['cgc] or when a normally scheduled minor collection would
-cause a major collection); minor collections triggered by
-@racket[collect-garbage] do not cause major collections to run any
-sooner than they would have otherwise.
+ @item{@racket['major] --- Forces a ``major'' collection, which
+       inspects all memory. Some effectively unreachable data may
+       remain uncollected, because the collector cannot prove that it
+       is unreachable.
 
-@history[#:changed "6.3" @elem{Added the @racket[request] argument.}]}
+       This mode of the @racket[collect-garbage] procedure provides some
+       control over the timing of collections, but garbage will
+       obviously be collected even if this procedure is never
+       called---unless garbage collection is disabled by setting
+       @envvar{PLTDISABLEGC}.}
+
+ @item{@racket['minor] --- Requests a ``minor'' collection, which
+       mostly inspects only recent allocations. If minor collection is
+       not supported (e.g., when @racket[(system-type 'gc)] returns
+       @racket['cgc]) or if the next collection must be a major
+       collection, no collection is performed. More generally, minor collections
+       triggered by @racket[(collect-garbage 'minor)] do not cause
+       major collections any sooner than they would occur otherwise.}
+
+ @item{@racket['incremental] --- Requests that each minor
+       collection performs incremental work toward a major collection.
+       This incremental-mode request expires at the next major
+       collection.
+
+       The intent of incremental mode is to significantly reduce pause
+       times due to major collections, but incremental mode typically
+       implies longer minor-collection times and higher memory use.}
+
+]
+
+@history[#:changed "6.3" @elem{Added the @racket[request] argument.}
+         #:changed "6.3.0.2" @elem{Added @racket['incremental] mode.}]}
 
 
 @defproc[(current-memory-use [cust custodian? #f]) exact-nonnegative-integer?]{
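The difference between the documented request modes can be illustrated with a small timing probe (not part of the diff; actual numbers depend entirely on heap size and contents):

```racket
#lang racket
;; Rough pause-time measurement for a single collection request.
(define (gc-pause-ms request)
  (define start (current-inexact-milliseconds))
  (collect-garbage request)
  (- (current-inexact-milliseconds) start))

(gc-pause-ms 'minor)            ; small, and may do no work at all
(gc-pause-ms 'major)            ; inspects all memory
(collect-garbage 'incremental)  ; returns quickly; only sets the mode
```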
@@ -80,6 +80,17 @@ address GC_resolve returns the same address. A GC_is_marked operation
 reports whether an object at its old address has been marked; it works
 only on old addresses.
 
+Normally, the mark function for an object is called only when the
+object is first marked in a given GC pass. With generational
+collection, the mark function can be called for an old-generation
+object if it potentially changed since a previous collection; still,
+the mark function will be called only once during that collection, so
+it's as if the object were in the new generation. With incremental
+collection, however, a mark function can be called due to changes
+even if the object is already incrementally marked as an
+old-generation object. The GC_current_mode() function reports the
+current mode for a given call to the marking function.
+
 Memory Kinds
 ------------
 
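To make the re-marking behavior concrete, here is a schematic mark function in the gc2 style. `My_Pair` and `mark_my_pair` are hypothetical; `GC_current_mode`, `gcMARK2`, and `gcBYTES_TO_WORDS` are the gc2 API referred to above, so this compiles only against the gc2 headers:

```c
/* Hypothetical two-pointer record; stands in for any tagged object. */
typedef struct My_Pair { void *car, *cdr; } My_Pair;

static int mark_my_pair(void *p, struct NewGC *gc)
{
  My_Pair *pr = (My_Pair *)p;

  if (GC_current_mode(gc) == GC_CURRENT_MODE_BACKPOINTER_REMARK) {
    /* An already-marked old-generation object is being re-traversed
       because it was mutated; the marking below must be idempotent. */
  }

  gcMARK2(pr->car, gc);
  gcMARK2(pr->cdr, gc);

  return gcBYTES_TO_WORDS(sizeof(My_Pair));
}
```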
@ -20,6 +20,7 @@ typedef struct GC_Weak_Array {
|
|||
void *replace_val;
|
||||
struct GC_Weak_Array *next;
|
||||
void *data[1]; /* must be the 5th longword! */
|
||||
/* inc_next is after the array */
|
||||
} GC_Weak_Array;
|
||||
|
||||
/* The GC_Weak_Box struct is not externally visible, but
|
||||
|
@ -32,6 +33,7 @@ typedef struct GC_Weak_Box {
|
|||
void **secondary_erase;
|
||||
int soffset, is_late;
|
||||
struct GC_Weak_Box *next;
|
||||
struct GC_Weak_Box *inc_next;
|
||||
} GC_Weak_Box;
|
||||
|
||||
/* The GC_Ephemeron struct is not externally visible, but
|
||||
|
@ -43,6 +45,7 @@ typedef struct GC_Ephemeron {
|
|||
void *val;
|
||||
/* The rest is up to us: */
|
||||
struct GC_Ephemeron *next;
|
||||
struct GC_Ephemeron *inc_next;
|
||||
} GC_Ephemeron;
|
||||
|
||||
typedef struct GC_Immobile_Box {
|
||||
|
|
|
@ -119,7 +119,7 @@ void GC_set_finalizer(void *p, int tagged, int level, void (*f)(void *p, void *d
|
|||
data = gc->park[1];
|
||||
gc->park[0] = NULL;
|
||||
gc->park[1] = NULL;
|
||||
|
||||
|
||||
fnl->p = p;
|
||||
fnl->f = f;
|
||||
fnl->data = data;
|
||||
|
@ -174,9 +174,14 @@ static void reset_finalizer_tree(GCTYPE *gc)
|
|||
|
||||
for (; fnl; fnl = next) {
|
||||
next = fnl->next;
|
||||
if (is_in_gen_half(fnl, gc)
|
||||
|| is_in_gen_half(fnl->f, gc)
|
||||
|| is_in_gen_half(fnl->data, gc))
|
||||
/* Checking both `fnl` and `fnl->p` is redundant, since
|
||||
`fnl` is always allocated after `fnl->p`, but check
|
||||
both just in case the order of allocation somehow
|
||||
changes in the future. */
|
||||
if (is_in_generation_half(gc, fnl)
|
||||
|| is_in_generation_half(gc, fnl->f)
|
||||
|| is_in_generation_half(gc, fnl->p)
|
||||
|| is_in_generation_half(gc, fnl->data))
|
||||
add_finalizer(fnl, 1, gc);
|
||||
else
|
||||
add_finalizer(fnl, 0, gc);
|
||||
|
|
|
@@ -168,6 +168,10 @@ GC2_EXTERN void GC_enable_collection(int on);
 /*
    Performs an immediate (full) collection. */
 
+GC2_EXTERN void GC_request_incremental_mode(void);
+/*
+   Requests incremental mode; lasts until the next major collection. */
+
 GC2_EXTERN void GC_free_all(void);
 /*
    Releases all memory, removes all signal handlers, etc.

@@ -372,9 +376,33 @@ GC2_EXTERN int GC_is_marked2(const void *p, struct NewGC *gc);
 /*
    Reports whether p has been marked. */
 
+GC2_EXTERN int GC_current_mode(struct NewGC *gc);
+# define GC_CURRENT_MODE_MINOR 0
+# define GC_CURRENT_MODE_MAJOR 1
+# define GC_CURRENT_MODE_INCREMENTAL 2
+# define GC_CURRENT_MODE_BACKPOINTER_REMARK 3
+# define GC_CURRENT_MODE_ACCOUNTING 4
+/*
+   The mode during a mark or fixup function callback.
+   The GC_CURRENT_MODE_BACKPOINTER_REMARK mode corresponds
+   to re-traversing an old-generation object that was
+   formerly marked but has been mutated. */
+
 GC2_EXTERN int GC_is_partial(struct NewGC *gc);
 /*
-   Reports whether the current GC is a non-full collection. */
+   Reports whether the current GC is a non-full collection
+   or accounting pass. GC_current_mode() is better. */
 
+GC2_EXTERN int GC_started_incremental(struct NewGC *gc);
+/*
+   Reports whether the current GC uses incremental collection. */
+
+GC2_EXTERN void *GC_malloc_for_incremental(size_t amt);
+/*
+   Use only when GC_started_incremental(); allocates
+   atomic memory that will be released at the end of the
+   next full collection, which ends the current
+   incremental pass. */
+
 GC2_EXTERN void GC_mark_no_recur(struct NewGC *gc, int enable);
 GC2_EXTERN void GC_retract_only_mark_stack_entry(void *pf, struct NewGC *gc);
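A minimal sketch of the intended use of GC_malloc_for_incremental, assuming incremental marking has already started; `make_fixup_pair` is a hypothetical helper (the real use of this allocator appears later in the mzclpf_post.inc changes):

```c
/* Build an { object, rest-of-chain } pair in space that the collector
   reclaims automatically at the end of the next full collection. */
static void **make_fixup_pair(void *obj, void *rest)
{
  void **pr;
  /* Only valid while GC_started_incremental() reports true. */
  pr = (void **)GC_malloc_for_incremental(2 * sizeof(void *));
  pr[0] = obj;   /* object to repair later */
  pr[1] = rest;  /* previous head of the chain */
  return pr;
}
```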
@ -148,6 +148,8 @@ inline static int custodian_to_owner_set(NewGC *gc,Scheme_Custodian *cust)
|
|||
{
|
||||
int i;
|
||||
|
||||
GC_ASSERT(SAME_TYPE(SCHEME_TYPE(cust), scheme_custodian_type));
|
||||
|
||||
if (cust->gc_owner_set)
|
||||
return cust->gc_owner_set;
|
||||
|
||||
|
@ -356,7 +358,7 @@ inline static void BTC_memory_account_mark(NewGC *gc, mpage *page, void *ptr, in
|
|||
if(info->btc_mark == gc->old_btc_mark) {
|
||||
info->btc_mark = gc->new_btc_mark;
|
||||
account_memory(gc, gc->current_mark_owner, gcBYTES_TO_WORDS(page->size), is_a_master_page);
|
||||
push_ptr(gc, TAG_AS_BIG_PAGE_PTR(ptr));
|
||||
push_ptr(gc, TAG_AS_BIG_PAGE_PTR(ptr), 0);
|
||||
}
|
||||
} else {
|
||||
/* medium page */
|
||||
|
@ -366,7 +368,7 @@ inline static void BTC_memory_account_mark(NewGC *gc, mpage *page, void *ptr, in
|
|||
info->btc_mark = gc->new_btc_mark;
|
||||
account_memory(gc, gc->current_mark_owner, info->size, is_a_master_page);
|
||||
ptr = OBJHEAD_TO_OBJPTR(info);
|
||||
push_ptr(gc, ptr);
|
||||
push_ptr(gc, ptr, 0);
|
||||
}
|
||||
}
|
||||
} else {
|
||||
|
@ -375,7 +377,7 @@ inline static void BTC_memory_account_mark(NewGC *gc, mpage *page, void *ptr, in
|
|||
if(info->btc_mark == gc->old_btc_mark) {
|
||||
info->btc_mark = gc->new_btc_mark;
|
||||
account_memory(gc, gc->current_mark_owner, info->size, 0);
|
||||
push_ptr(gc, ptr);
|
||||
push_ptr(gc, ptr, 0);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -464,9 +466,9 @@ static void propagate_accounting_marks(NewGC *gc)
|
|||
{
|
||||
void *p;
|
||||
|
||||
while(pop_ptr(gc, &p) && !gc->kill_propagation_loop) {
|
||||
while(pop_ptr(gc, &p, 0) && !gc->kill_propagation_loop) {
|
||||
/* GCDEBUG((DEBUGOUTF, "btc_account: popped off page %p:%p, ptr %p\n", page, page->addr, p)); */
|
||||
propagate_marks_worker(gc, p);
|
||||
propagate_marks_worker(gc, p, 0);
|
||||
}
|
||||
if(gc->kill_propagation_loop)
|
||||
reset_pointer_stack(gc);
|
||||
|
@ -499,6 +501,8 @@ static void BTC_do_accounting(NewGC *gc)
|
|||
Scheme_Custodian_Reference *box = cur->global_next;
|
||||
int i;
|
||||
|
||||
GC_ASSERT(SAME_TYPE(SCHEME_TYPE(cur), scheme_custodian_type));
|
||||
|
||||
GCDEBUG((DEBUGOUTF, "\nBEGINNING MEMORY ACCOUNTING\n"));
|
||||
gc->doing_memory_accounting = 1;
|
||||
gc->in_unsafe_allocation_mode = 1;
|
||||
|
@ -518,6 +522,7 @@ static void BTC_do_accounting(NewGC *gc)
|
|||
/* start with root: */
|
||||
while (cur->parent && SCHEME_PTR1_VAL(cur->parent)) {
|
||||
cur = SCHEME_PTR1_VAL(cur->parent);
|
||||
GC_ASSERT(SAME_TYPE(SCHEME_TYPE(cur), scheme_custodian_type));
|
||||
}
|
||||
|
||||
/* walk forward for the order we want (blame parents instead of children) */
|
||||
|
@ -537,7 +542,8 @@ static void BTC_do_accounting(NewGC *gc)
|
|||
propagate_accounting_marks(gc);
|
||||
|
||||
last = cur;
|
||||
box = cur->global_next; cur = box ? SCHEME_PTR1_VAL(box) : NULL;
|
||||
box = cur->global_next;
|
||||
cur = box ? SCHEME_PTR1_VAL(box) : NULL;
|
||||
|
||||
owner_table = gc->owner_table;
|
||||
owner_table[owner]->memory_use = add_no_overflow(owner_table[owner]->memory_use,
|
||||
|
|
File diff suppressed because it is too large.
@ -15,6 +15,7 @@ typedef struct mpage {
|
|||
void *addr;
|
||||
void *mmu_src_block;
|
||||
struct mpage *modified_next; /* next in chain of pages for backpointers, marks, etc. */
|
||||
struct mpage *inc_modified_next; /* like modified_next, but for incrementally marked pages */
|
||||
struct mpage *reprotect_next; /* next in a chain of pages that need to be re-protected */
|
||||
#ifdef MZ_GC_BACKTRACE
|
||||
void **backtrace;
|
||||
|
@ -44,9 +45,11 @@ typedef struct mpage {
|
|||
unsigned char size_class :2;
|
||||
unsigned char page_type :3;
|
||||
unsigned char marked_on :1;
|
||||
unsigned char inc_marked_on :1;
|
||||
unsigned char marked_from :1;
|
||||
unsigned char has_new :1;
|
||||
unsigned char mprotected :1;
|
||||
unsigned char reprotect :1; /* in reprotect_next chain already */
|
||||
} mpage;
|
||||
|
||||
typedef struct Gen0 {
|
||||
|
@ -82,6 +85,11 @@ typedef struct MarkSegment {
|
|||
void **top;
|
||||
} MarkSegment;
|
||||
|
||||
typedef struct Inc_Admin_Page {
|
||||
struct Inc_Admin_Page *next;
|
||||
size_t size, pos;
|
||||
} Inc_Admin_Page;
|
||||
|
||||
typedef struct GC_Thread_Info {
|
||||
void *thread;
|
||||
int owner;
|
||||
|
@ -158,11 +166,13 @@ typedef struct NewGC {
|
|||
/* linked list of pages with back pointers to be traversed in a
|
||||
minor collection, etc.: */
|
||||
struct mpage *modified_next;
|
||||
/* pages marked incrementally: */
|
||||
struct mpage *inc_modified_next;
|
||||
/* linked list of pages that need to be given write protection at
|
||||
the end of the GC cycle: */
|
||||
struct mpage *reprotect_next;
|
||||
|
||||
MarkSegment *mark_stack;
|
||||
MarkSegment *mark_stack, *inc_mark_stack;
|
||||
|
||||
/* Finalization */
|
||||
Fnl *run_queue;
|
||||
|
@ -189,6 +199,7 @@ typedef struct NewGC {
|
|||
int avoid_collection;
|
||||
|
||||
unsigned char generations_available :1;
|
||||
unsigned char started_incremental :1; /* must stick with incremental until major GC */
|
||||
unsigned char in_unsafe_allocation_mode :1;
|
||||
unsigned char full_needed_for_finalization :1;
|
||||
unsigned char no_further_modifications :1;
|
||||
|
@ -197,6 +208,11 @@ typedef struct NewGC {
|
|||
unsigned char running_finalizers :1;
|
||||
unsigned char back_pointers :1;
|
||||
unsigned char need_fixup :1;
|
||||
unsigned char check_gen1 :1;
|
||||
unsigned char mark_gen1 :1;
|
||||
unsigned char inc_gen1 :1;
|
||||
unsigned char during_backpointer :1;
|
||||
unsigned char incremental_requested :1;
|
||||
|
||||
/* blame the child */
|
||||
unsigned int doing_memory_accounting :1;
|
||||
|
@ -215,6 +231,10 @@ typedef struct NewGC {
|
|||
uintptr_t number_of_gc_runs;
|
||||
unsigned int since_last_full;
|
||||
uintptr_t last_full_mem_use;
|
||||
uintptr_t inc_mem_use_threshold;
|
||||
|
||||
uintptr_t prop_count;
|
||||
uintptr_t inc_prop_count;
|
||||
|
||||
/* These collect information about memory usage, for use in GC_dump. */
|
||||
uintptr_t peak_memory_use;
|
||||
|
@ -227,7 +247,6 @@ typedef struct NewGC {
|
|||
uintptr_t modified_unprotects;
|
||||
|
||||
/* THREAD_LOCAL variables that need to be saved off */
|
||||
MarkSegment *saved_mark_stack;
|
||||
void *saved_GC_variable_stack;
|
||||
uintptr_t saved_GC_gen0_alloc_page_ptr;
|
||||
uintptr_t saved_GC_gen0_alloc_page_end;
|
||||
|
@ -238,7 +257,9 @@ typedef struct NewGC {
|
|||
int dont_master_gc_until_child_registers; /* :1: */
|
||||
#endif
|
||||
|
||||
struct mpage *thread_local_pages;
|
||||
Inc_Admin_Page *inc_space;
|
||||
|
||||
struct mpage *thread_local_pages;
|
||||
|
||||
/* Callbacks */
|
||||
void (*GC_collect_start_callback)(void);
|
||||
|
@ -268,13 +289,23 @@ typedef struct NewGC {
|
|||
uintptr_t phantom_count;
|
||||
uintptr_t gen0_phantom_count;
|
||||
|
||||
Roots roots;
|
||||
GC_Weak_Array *weak_arrays;
|
||||
GC_Weak_Box *weak_boxes[2];
|
||||
GC_Ephemeron *ephemerons;
|
||||
int num_last_seen_ephemerons;
|
||||
Roots roots;
|
||||
struct MMU *mmu;
|
||||
|
||||
/* The `inc_` variants hold old-generation objects discovered in
|
||||
incremental mode. If incremental mode is started, the plain
|
||||
variants for a minor collection need to be added to the `inc_`
|
||||
variants, since promoted objects from the nursery keep their mark
|
||||
bits. The `bp_` variants are old-generation objects that were
|
||||
marked as (potentially) containing backpointers; they are treated
|
||||
like the normal ones, but not added to `inc_` because they're
|
||||
either already marked or should be added when they're later
|
||||
marked. */
|
||||
GC_Weak_Array *weak_arrays, *inc_weak_arrays, *bp_weak_arrays;
|
||||
GC_Weak_Box *weak_boxes[2], *inc_weak_boxes[2], *bp_weak_boxes[2];
|
||||
GC_Ephemeron *ephemerons, *inc_ephemerons, *bp_ephemerons;
|
||||
int num_last_seen_ephemerons;
|
||||
|
||||
Allocator *saved_allocator;
|
||||
|
||||
#ifdef MZ_USE_PLACES
|
||||
|
|
|
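The comment above about the `inc_` and `bp_` list variants describes a list discipline rather than a single function; a schematic sketch of the "add the plain list to the `inc_` list" step for weak boxes, using the fields from this diff (`merge_into_inc_chain` itself is a hypothetical name, not code from the commit):

```c
/* Schematic only: after a minor collection while incremental mode is
   started, carry the weak boxes just discovered over to the inc_ chain
   so they are re-checked when the old-generation pass completes. */
static void merge_into_inc_chain(NewGC *gc, int is_late)
{
  GC_Weak_Box *wb, *next;
  for (wb = gc->weak_boxes[is_late]; wb; wb = next) {
    next = wb->next;
    wb->inc_next = gc->inc_weak_boxes[is_late];
    gc->inc_weak_boxes[is_late] = wb;
  }
  gc->weak_boxes[is_late] = NULL;
}
```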
@ -17,6 +17,7 @@
|
|||
weak_box_tag
|
||||
ephemeron_tag
|
||||
is_marked(p)
|
||||
is_in_generation_half(p)
|
||||
Type_Tag
|
||||
*/
|
||||
|
||||
|
@ -30,7 +31,7 @@ static int size_weak_array(void *p, struct NewGC *gc)
|
|||
GC_Weak_Array *a = (GC_Weak_Array *)p;
|
||||
|
||||
return gcBYTES_TO_WORDS(sizeof(GC_Weak_Array)
|
||||
+ ((a->count - 1) * sizeof(void *)));
|
||||
+ ((a->count - 1 + 1) * sizeof(void *)));
|
||||
}
|
||||
|
||||
static int mark_weak_array(void *p, struct NewGC *gc)
|
||||
|
@ -39,8 +40,24 @@ static int mark_weak_array(void *p, struct NewGC *gc)
|
|||
|
||||
gcMARK2(a->replace_val, gc);
|
||||
|
||||
a->next = gc->weak_arrays;
|
||||
gc->weak_arrays = a;
|
||||
if (gc->doing_memory_accounting) {
|
||||
/* skip */
|
||||
} else if (gc->inc_gen1) {
|
||||
/* inc_next field is at the end of the `data` array: */
|
||||
a->data[a->count] = gc->inc_weak_arrays;
|
||||
gc->inc_weak_arrays = a;
|
||||
} else if (gc->during_backpointer) {
|
||||
if (!gc->gc_full) {
|
||||
/* Keep backpointered weak arrays separate, because we
|
||||
should not merge them to the incremental list
|
||||
in incremental mode. */
|
||||
a->next = gc->bp_weak_arrays;
|
||||
gc->bp_weak_arrays = a;
|
||||
}
|
||||
} else {
|
||||
a->next = gc->weak_arrays;
|
||||
gc->weak_arrays = a;
|
||||
}
|
||||
|
||||
#if CHECKS
|
||||
/* For now, weak arrays only used for symbols, keywords, and falses: */
|
||||
|
@ -60,7 +77,7 @@ static int mark_weak_array(void *p, struct NewGC *gc)
|
|||
#endif
|
||||
|
||||
return gcBYTES_TO_WORDS(sizeof(GC_Weak_Array)
|
||||
+ ((a->count - 1) * sizeof(void *)));
|
||||
+ ((a->count - 1 + 1) * sizeof(void *)));
|
||||
}
|
||||
|
||||
static int fixup_weak_array(void *p, struct NewGC *gc)
|
||||
|
@ -93,7 +110,8 @@ void *GC_malloc_weak_array(size_t size_in_bytes, void *replace_val)
|
|||
|
||||
w = (GC_Weak_Array *)GC_malloc_one_tagged(size_in_bytes
|
||||
+ sizeof(GC_Weak_Array)
|
||||
- sizeof(void *));
|
||||
- sizeof(void *)
|
||||
+ sizeof(GC_Weak_Array *));
|
||||
|
||||
replace_val = gc->park[0];
|
||||
gc->park[0] = NULL;
|
||||
|
@ -105,16 +123,52 @@ void *GC_malloc_weak_array(size_t size_in_bytes, void *replace_val)
|
|||
return w;
|
||||
}
|
||||
|
||||
static void rechain_inc_weak_arrays(GC_Weak_Array *w)
|
||||
{
|
||||
for (; w; w = (GC_Weak_Array *)w->data[w->count]) {
|
||||
w->next = (GC_Weak_Array *)w->data[w->count];
|
||||
}
|
||||
}
|
||||
|
||||
static void init_weak_arrays(GCTYPE *gc) {
|
||||
gc->weak_arrays = NULL;
|
||||
if (gc->gc_full) {
|
||||
rechain_inc_weak_arrays(gc->inc_weak_arrays);
|
||||
gc->weak_arrays = gc->inc_weak_arrays;
|
||||
gc->inc_weak_arrays = NULL;
|
||||
} else
|
||||
gc->weak_arrays = NULL;
|
||||
gc->bp_weak_arrays = NULL;
|
||||
}
|
||||
|
||||
static GC_Weak_Array *append_weak_arrays(GC_Weak_Array *wa, GC_Weak_Array *bp_wa, int *_num_gen0)
|
||||
{
|
||||
*_num_gen0 = 0;
|
||||
|
||||
if (wa) {
|
||||
GC_Weak_Array *last_wa = wa;
|
||||
while (last_wa->next) {
|
||||
(*_num_gen0)++;
|
||||
last_wa = last_wa->next;
|
||||
}
|
||||
(*_num_gen0)++;
|
||||
last_wa->next = bp_wa;
|
||||
return wa;
|
||||
} else
|
||||
return bp_wa;
|
||||
}
|
||||
|
||||
static void zero_weak_arrays(GCTYPE *gc, int force_zero)
|
||||
{
|
||||
GC_Weak_Array *wa;
|
||||
int i;
|
||||
int i, num_gen0;
|
||||
|
||||
GC_ASSERT(!gc->bp_weak_arrays || !gc->gc_full);
|
||||
|
||||
wa = append_weak_arrays(gc->weak_arrays, gc->bp_weak_arrays, &num_gen0);
|
||||
|
||||
if (gc->gc_full || !gc->started_incremental)
|
||||
num_gen0 = 0;
|
||||
|
||||
wa = gc->weak_arrays;
|
||||
while (wa) {
|
||||
void **data;
|
||||
|
||||
|
@ -127,16 +181,47 @@ static void zero_weak_arrays(GCTYPE *gc, int force_zero)
|
|||
data[i] = GC_resolve2(p, gc);
|
||||
}
|
||||
|
||||
if (num_gen0 > 0) {
|
||||
if (!is_in_generation_half(gc, wa)) {
|
||||
/* For incremental mode, preserve this weak box
|
||||
in the incremental list for re-checking later. */
|
||||
wa->data[wa->count] = gc->inc_weak_arrays;
|
||||
gc->inc_weak_arrays = wa;
|
||||
}
|
||||
}
|
||||
|
||||
wa = wa->next;
|
||||
num_gen0--;
|
||||
}
|
||||
|
||||
gc->weak_arrays = NULL;
|
||||
gc->bp_weak_arrays = NULL;
|
||||
}
|
||||
|
||||
/******************************************************************************/
|
||||
/* weak boxes */
|
||||
/******************************************************************************/
|
||||
|
||||
#if 0
|
||||
static void check_weak_box_not_already_in_inc_chain(GC_Weak_Box *wb, GC_Weak_Box *wbc)
|
||||
{
|
||||
while (wbc) {
|
||||
GC_ASSERT(wb != wbc);
|
||||
wbc = wbc->inc_next;
|
||||
}
|
||||
}
|
||||
static void check_weak_box_not_already_in_chain(GC_Weak_Box *wb, GC_Weak_Box *wbc)
|
||||
{
|
||||
while (wbc) {
|
||||
GC_ASSERT(wb != wbc);
|
||||
wbc = wbc->next;
|
||||
}
|
||||
}
|
||||
#else
|
||||
static void check_weak_box_not_already_in_inc_chain(GC_Weak_Box *wb, GC_Weak_Box *wbc) { }
|
||||
static void check_weak_box_not_already_in_chain(GC_Weak_Box *wb, GC_Weak_Box *wbc) { }
|
||||
#endif
|
||||
|
||||
static int size_weak_box(void *p, struct NewGC *gc)
|
||||
{
|
||||
return gcBYTES_TO_WORDS(sizeof(GC_Weak_Box));
|
||||
|
@ -145,10 +230,28 @@ static int size_weak_box(void *p, struct NewGC *gc)
|
|||
static int mark_weak_box(void *p, struct NewGC *gc)
|
||||
{
|
||||
GC_Weak_Box *wb = (GC_Weak_Box *)p;
|
||||
|
||||
|
||||
gcMARK2(wb->secondary_erase, gc);
|
||||
|
||||
if (wb->val) {
|
||||
if (gc->doing_memory_accounting) {
|
||||
/* skip */
|
||||
} else if (gc->inc_gen1) {
|
||||
check_weak_box_not_already_in_inc_chain(wb, gc->inc_weak_boxes[wb->is_late]);
|
||||
wb->inc_next = gc->inc_weak_boxes[wb->is_late];
|
||||
gc->inc_weak_boxes[wb->is_late] = wb;
|
||||
} else if (gc->during_backpointer) {
|
||||
if (!gc->gc_full && (wb->val || gc->started_incremental)) {
|
||||
/* Keep backpointered weak arrays separate, because we
|
||||
should not merge them to the incremental list
|
||||
in incremental mode. */
|
||||
check_weak_box_not_already_in_chain(wb, gc->bp_weak_boxes[wb->is_late]);
|
||||
check_weak_box_not_already_in_chain(wb, gc->weak_boxes[wb->is_late]);
|
||||
wb->next = gc->bp_weak_boxes[wb->is_late];
|
||||
gc->bp_weak_boxes[wb->is_late] = wb;
|
||||
}
|
||||
} else if (wb->val || gc->started_incremental) {
|
||||
check_weak_box_not_already_in_chain(wb, gc->weak_boxes[wb->is_late]);
|
||||
check_weak_box_not_already_in_chain(wb, gc->bp_weak_boxes[wb->is_late]);
|
||||
wb->next = gc->weak_boxes[wb->is_late];
|
||||
gc->weak_boxes[wb->is_late] = wb;
|
||||
}
|
||||
|
@ -195,18 +298,64 @@ void *GC_malloc_weak_box(void *p, void **secondary, int soffset, int is_late)
|
|||
return w;
|
||||
}
|
||||
|
||||
static void rechain_inc_weak_boxes(GC_Weak_Box *wb)
|
||||
{
|
||||
for (; wb; wb = wb->inc_next) {
|
||||
wb->next = wb->inc_next;
|
||||
}
|
||||
}
|
||||
|
||||
static void init_weak_boxes(GCTYPE *gc) {
|
||||
gc->weak_boxes[0] = NULL;
|
||||
gc->weak_boxes[1] = NULL;
|
||||
if (gc->gc_full) {
|
||||
rechain_inc_weak_boxes(gc->inc_weak_boxes[0]);
|
||||
rechain_inc_weak_boxes(gc->inc_weak_boxes[1]);
|
||||
gc->weak_boxes[0] = gc->inc_weak_boxes[0];
|
||||
gc->weak_boxes[1] = gc->inc_weak_boxes[1];
|
||||
gc->inc_weak_boxes[0] = NULL;
|
||||
gc->inc_weak_boxes[1] = NULL;
|
||||
} else {
|
||||
gc->weak_boxes[0] = NULL;
|
||||
gc->weak_boxes[1] = NULL;
|
||||
}
|
||||
gc->bp_weak_boxes[0] = NULL;
|
||||
gc->bp_weak_boxes[1] = NULL;
|
||||
}
|
||||
|
||||
static GC_Weak_Box *append_weak_boxes(GC_Weak_Box *wb, GC_Weak_Box *bp_wb, int *_num_gen0)
|
||||
{
|
||||
*_num_gen0 = 0;
|
||||
|
||||
if (wb) {
|
||||
GC_Weak_Box *last_wb = wb;
|
||||
while (last_wb->next) {
|
||||
(*_num_gen0)++;
|
||||
last_wb = last_wb->next;
|
||||
}
|
||||
(*_num_gen0)++;
|
||||
last_wb->next = bp_wb;
|
||||
return wb;
|
||||
} else
|
||||
return bp_wb;
|
||||
}
|
||||
|
||||
static void zero_weak_boxes(GCTYPE *gc, int is_late, int force_zero)
|
||||
{
|
||||
GC_Weak_Box *wb;
|
||||
int num_gen0;
|
||||
|
||||
GC_ASSERT(!gc->bp_weak_boxes[is_late] || !gc->gc_full);
|
||||
|
||||
wb = append_weak_boxes(gc->weak_boxes[is_late],
|
||||
gc->bp_weak_boxes[is_late],
|
||||
&num_gen0);
|
||||
if (gc->gc_full || !gc->started_incremental)
|
||||
num_gen0 = 0;
|
||||
|
||||
wb = gc->weak_boxes[is_late];
|
||||
while (wb) {
|
||||
if (force_zero || !is_marked(gc, wb->val)) {
|
||||
GC_ASSERT(is_marked(gc, wb));
|
||||
if (!wb->val) {
|
||||
/* nothing to do */
|
||||
} else if (force_zero || !is_marked(gc, wb->val)) {
|
||||
wb->val = NULL;
|
||||
if (wb->secondary_erase) {
|
||||
void **p;
|
||||
|
@ -219,18 +368,28 @@ static void zero_weak_boxes(GCTYPE *gc, int is_late, int force_zero)
|
|||
page->mprotected = 0;
|
||||
mmu_write_unprotect_page(gc->mmu, page->addr, APAGE_SIZE, page_mmu_type(page), &page->mmu_src_block);
|
||||
}
|
||||
|
||||
p = (void **)GC_resolve2(wb->secondary_erase, gc);
|
||||
*(p + wb->soffset) = NULL;
|
||||
wb->secondary_erase = NULL;
|
||||
}
|
||||
} else
|
||||
} else {
|
||||
wb->val = GC_resolve2(wb->val, gc);
|
||||
}
|
||||
if (num_gen0 > 0) {
|
||||
if (!is_in_generation_half(gc, wb)) {
|
||||
/* For incremental mode, preserve this weak box
|
||||
in the incremental list for re-checking later. */
|
||||
wb->inc_next = gc->inc_weak_boxes[is_late];
|
||||
gc->inc_weak_boxes[is_late] = wb;
|
||||
}
|
||||
}
|
||||
wb = wb->next;
|
||||
num_gen0--;
|
||||
}
|
||||
|
||||
/* reset, in case we have a second round */
|
||||
gc->weak_boxes[is_late] = NULL;
|
||||
gc->bp_weak_boxes[is_late] = NULL;
|
||||
}
|
||||
|
||||
/******************************************************************************/
|
||||
|
@ -247,8 +406,19 @@ static int mark_ephemeron(void *p, struct NewGC *gc)
|
|||
GC_Ephemeron *eph = (GC_Ephemeron *)p;
|
||||
|
||||
if (eph->val) {
|
||||
eph->next = gc->ephemerons;
|
||||
gc->ephemerons = eph;
|
||||
GC_ASSERT(!gc->doing_memory_accounting);
|
||||
if (gc->inc_gen1) {
|
||||
eph->inc_next = gc->inc_ephemerons;
|
||||
gc->inc_ephemerons = eph;
|
||||
} else if (gc->during_backpointer) {
|
||||
if (!gc->gc_full) {
|
||||
eph->next = gc->bp_ephemerons;
|
||||
gc->bp_ephemerons = eph;
|
||||
}
|
||||
} else {
|
||||
eph->next = gc->ephemerons;
|
||||
gc->ephemerons = eph;
|
||||
}
|
||||
}
|
||||
|
||||
return gcBYTES_TO_WORDS(sizeof(GC_Ephemeron));
|
||||
|
@ -305,31 +475,79 @@ void *GC_malloc_ephemeron(void *k, void *v)
|
|||
return eph;
|
||||
}
|
||||
|
||||
static void rechain_inc_ephemerons(GC_Ephemeron *e)
|
||||
{
|
||||
for (; e; e = e->inc_next) {
|
||||
e->next = e->inc_next;
|
||||
}
|
||||
}
|
||||
|
||||
void init_ephemerons(GCTYPE *gc) {
|
||||
gc->ephemerons = NULL;
|
||||
if (gc->gc_full) {
|
||||
rechain_inc_ephemerons(gc->inc_ephemerons);
|
||||
gc->ephemerons = gc->inc_ephemerons;
|
||||
gc->inc_ephemerons = NULL;
|
||||
} else
|
||||
gc->ephemerons = NULL;
|
||||
gc->bp_ephemerons = NULL;
|
||||
gc->num_last_seen_ephemerons = 0;
|
||||
}
|
||||
|
||||
static int mark_ready_ephemerons(GCTYPE *gc)
|
||||
static int mark_ready_ephemerons(GCTYPE *gc, int inc_gen1)
|
||||
{
|
||||
GC_Ephemeron *waiting = NULL, *next, *eph;
|
||||
int did_one = 0;
|
||||
GC_Ephemeron *waiting, *next, *eph;
|
||||
int did_one = 0, j;
|
||||
|
||||
GC_mark_no_recur(gc, 1);
|
||||
|
||||
for (eph = gc->ephemerons; eph; eph = next) {
|
||||
next = eph->next;
|
||||
if (is_marked(gc, eph->key)) {
|
||||
eph->key = GC_resolve2(eph->key, gc);
|
||||
gcMARK2(eph->val, gc);
|
||||
gc->num_last_seen_ephemerons++;
|
||||
did_one = 1;
|
||||
} else {
|
||||
eph->next = waiting;
|
||||
waiting = eph;
|
||||
for (j = 0; j < (inc_gen1 ? 1 : 2); j++) {
|
||||
if (inc_gen1)
|
||||
eph = gc->inc_ephemerons;
|
||||
else if (j == 0)
|
||||
eph = gc->ephemerons;
|
||||
else
|
||||
eph = gc->bp_ephemerons;
|
||||
|
||||
waiting = NULL;
|
||||
|
||||
for (; eph; eph = next) {
|
||||
if (inc_gen1)
|
||||
next = eph->inc_next;
|
||||
else
|
||||
next = eph->next;
|
||||
if (is_marked(gc, eph->key)) {
|
||||
if (!inc_gen1)
|
||||
eph->key = GC_resolve2(eph->key, gc);
|
||||
gcMARK2(eph->val, gc);
|
||||
gc->num_last_seen_ephemerons++;
|
||||
did_one = 1;
|
||||
if (!inc_gen1 && (j == 0) && !gc->gc_full && gc->started_incremental) {
|
||||
/* Need to preserve the ephemeron in the incremental list,
|
||||
unless it's kept in generation 1/2 instead of being promoted to
|
||||
generation 1. */
|
||||
if (!is_in_generation_half(gc, eph)) {
|
||||
eph->inc_next = gc->inc_ephemerons;
|
||||
gc->inc_ephemerons = eph;
|
||||
}
|
||||
}
|
||||
} else {
|
||||
if (inc_gen1) {
|
||||
/* Ensure that we can write to the page containing the ephemeron: */
|
||||
check_incremental_unprotect(gc, pagemap_find_page(gc->page_maps, eph));
|
||||
eph->inc_next = waiting;
|
||||
} else
|
||||
eph->next = waiting;
|
||||
waiting = eph;
|
||||
}
|
||||
}
|
||||
|
||||
if (inc_gen1)
|
||||
gc->inc_ephemerons = waiting;
|
||||
else if (j == 0)
|
||||
gc->ephemerons = waiting;
|
||||
else
|
||||
gc->bp_ephemerons = waiting;
|
||||
}
|
||||
gc->ephemerons = waiting;
|
||||
|
||||
GC_mark_no_recur(gc, 0);
|
||||
|
||||
|
|
|
@ -358,6 +358,7 @@ typedef struct Thread_Local_Variables {
|
|||
struct Scheme_Bucket_Table *scheme_module_code_cache_;
|
||||
struct Scheme_Object *group_member_cache_;
|
||||
struct Scheme_Prefix *scheme_prefix_finalize_;
|
||||
struct Scheme_Prefix *scheme_inc_prefix_finalize_;
|
||||
struct Scheme_Hash_Table *loaded_extensions_;
|
||||
struct Scheme_Hash_Table *fullpath_loaded_extensions_;
|
||||
Scheme_Sleep_Proc scheme_place_sleep_;
|
||||
|
@ -750,6 +751,7 @@ XFORM_GC_VARIABLE_STACK_THROUGH_THREAD_LOCAL;
|
|||
#define scheme_module_code_cache XOA (scheme_get_thread_local_variables()->scheme_module_code_cache_)
|
||||
#define group_member_cache XOA (scheme_get_thread_local_variables()->group_member_cache_)
|
||||
#define scheme_prefix_finalize XOA (scheme_get_thread_local_variables()->scheme_prefix_finalize_)
|
||||
#define scheme_inc_prefix_finalize XOA (scheme_get_thread_local_variables()->scheme_inc_prefix_finalize_)
|
||||
#define loaded_extensions XOA (scheme_get_thread_local_variables()->loaded_extensions_)
|
||||
#define fullpath_loaded_extensions XOA (scheme_get_thread_local_variables()->fullpath_loaded_extensions_)
|
||||
#define scheme_place_sleep XOA (scheme_get_thread_local_variables()->scheme_place_sleep_)
|
||||
|
|
|
@ -206,6 +206,7 @@ THREAD_LOCAL_DECL(int scheme_continuation_application_count);
|
|||
THREAD_LOCAL_DECL(static int generate_lifts_count);
|
||||
THREAD_LOCAL_DECL(int scheme_overflow_count);
|
||||
THREAD_LOCAL_DECL(Scheme_Prefix *scheme_prefix_finalize);
|
||||
THREAD_LOCAL_DECL(Scheme_Prefix *scheme_inc_prefix_finalize);
|
||||
int scheme_get_overflow_count() { return scheme_overflow_count; }
|
||||
|
||||
/* read-only globals */
|
||||
|
@ -411,6 +412,7 @@ void scheme_init_eval_places()
|
|||
{
|
||||
#ifdef MZ_PRECISE_GC
|
||||
scheme_prefix_finalize = (Scheme_Prefix *)0x1; /* 0x1 acts as a sentinel */
|
||||
scheme_inc_prefix_finalize = (Scheme_Prefix *)0x1;
|
||||
GC_set_post_propagate_hook(mark_pruned_prefixes);
|
||||
#endif
|
||||
#ifdef DEBUG_CHECK_STACK_FRAME_SIZE
|
||||
|
@ -5965,7 +5967,7 @@ Scheme_Object **scheme_push_prefix(Scheme_Env *genv, Resolve_Prefix *rp,
|
|||
pf = scheme_malloc_tagged(sizeof(Scheme_Prefix)
|
||||
+ ((i-mzFLEX_DELTA) * sizeof(Scheme_Object *))
|
||||
+ (tl_map_len * sizeof(int)));
|
||||
pf->so.type = scheme_prefix_type;
|
||||
pf->iso.so.type = scheme_prefix_type;
|
||||
pf->num_slots = i;
|
||||
pf->num_toplevels = rp->num_toplevels;
|
||||
pf->num_stxes = rp->num_stxes;
|
||||
|
@ -6057,10 +6059,22 @@ Scheme_Object **scheme_resume_prefix(Scheme_Object *v)
|
|||
#ifdef MZ_PRECISE_GC
|
||||
static void mark_pruned_prefixes(struct NewGC *gc) XFORM_SKIP_PROC
|
||||
{
|
||||
if (!GC_is_partial(gc)) {
|
||||
if (scheme_inc_prefix_finalize != (Scheme_Prefix *)0x1) {
|
||||
Scheme_Prefix *pf = scheme_inc_prefix_finalize;
|
||||
while (pf->next_final != (Scheme_Prefix *)0x1) {
|
||||
pf = pf->next_final;
|
||||
}
|
||||
pf->next_final = scheme_prefix_finalize;
|
||||
scheme_prefix_finalize = scheme_inc_prefix_finalize;
|
||||
scheme_inc_prefix_finalize = (Scheme_Prefix *)0x1;
|
||||
}
|
||||
}
|
||||
|
||||
if (scheme_prefix_finalize != (Scheme_Prefix *)0x1) {
|
||||
Scheme_Prefix *pf = scheme_prefix_finalize, *next;
|
||||
Scheme_Object *clo;
|
||||
int i, *use_bits, maxpos;
|
||||
int i, *use_bits, maxpos, inc_fixup_mode;
|
||||
|
||||
scheme_prefix_finalize = (Scheme_Prefix *)0x1;
|
||||
while (pf != (Scheme_Prefix *)0x1) {
|
||||
|
@ -6115,23 +6129,32 @@ static void mark_pruned_prefixes(struct NewGC *gc) XFORM_SKIP_PROC
|
|||
/* Fix up closures that reference this prefix: */
|
||||
clo = (Scheme_Object *)GC_resolve2(pf->fixup_chain, gc);
|
||||
pf->fixup_chain = NULL;
|
||||
inc_fixup_mode = SCHEME_PREFIX_FLAGS(pf) & 0x1;
|
||||
while (clo) {
|
||||
Scheme_Object *next;
|
||||
if (inc_fixup_mode) {
|
||||
next = ((Scheme_Object **)clo)[1];
|
||||
clo = ((Scheme_Object **)clo)[0];
|
||||
}
|
||||
if (SCHEME_TYPE(clo) == scheme_closure_type) {
|
||||
Scheme_Closure *cl = (Scheme_Closure *)clo;
|
||||
int closure_size = ((Scheme_Closure_Data *)GC_resolve2(cl->code, gc))->closure_size;
|
||||
next = cl->vals[closure_size - 1];
|
||||
if (!inc_fixup_mode)
|
||||
next = cl->vals[closure_size - 1];
|
||||
cl->vals[closure_size-1] = (Scheme_Object *)pf;
|
||||
} else if (SCHEME_TYPE(clo) == scheme_native_closure_type) {
|
||||
Scheme_Native_Closure *cl = (Scheme_Native_Closure *)clo;
|
||||
int closure_size = ((Scheme_Native_Closure_Data *)GC_resolve2(cl->code, gc))->closure_size;
|
||||
next = cl->vals[closure_size - 1];
|
||||
if (!inc_fixup_mode)
|
||||
next = cl->vals[closure_size - 1];
|
||||
cl->vals[closure_size-1] = (Scheme_Object *)pf;
|
||||
} else {
|
||||
abort();
|
||||
MZ_ASSERT(0);
|
||||
}
|
||||
clo = (Scheme_Object *)GC_resolve2(next, gc);
|
||||
}
|
||||
if (inc_fixup_mode)
|
||||
SCHEME_PREFIX_FLAGS(pf) -= 0x1;
|
||||
|
||||
/* Next */
|
||||
next = pf->next_final;
|
||||
|
|
|
@ -1,2 +1,3 @@
|
|||
/* setup for mzclpf_post.inc */
|
||||
CLOSURE_DATA_TYPE *data;
|
||||
int gc_mode;
|
||||
|
|
|
@ -6,8 +6,11 @@
|
|||
mark_pruned_prefixes() in "eval.c" --- NULLs out unused
|
||||
fields before finally marking the prefix. If the prefix
|
||||
is ever marked through some other reference, then
|
||||
mark_pruned_prefixes() doesn't actually prune. */
|
||||
if (data) {
|
||||
mark_pruned_prefixes() doesn't actually prune.
|
||||
To support incremental collection, we rely on the fact that
|
||||
an old-generation closure cannot point to a new-generation
|
||||
prefix; the prefix is always allocated before the closure. */
|
||||
if (data && (gc_mode != GC_CURRENT_MODE_BACKPOINTER_REMARK)) {
|
||||
/* GLOBAL ASSUMPTION: prefix is at the end of a closure */
|
||||
Scheme_Prefix *pf = (Scheme_Prefix *)c->vals[closure_size - 1];
|
||||
|
||||
|
@ -24,8 +27,13 @@
|
|||
/* We're the first to look at this prefix... */
|
||||
/* Add it to the chain of prefixes to finish after
|
||||
all other marking: */
|
||||
pf->next_final = scheme_prefix_finalize;
|
||||
scheme_prefix_finalize = pf;
|
||||
if (gc_mode == GC_CURRENT_MODE_INCREMENTAL) {
|
||||
pf->next_final = scheme_inc_prefix_finalize;
|
||||
scheme_inc_prefix_finalize = pf;
|
||||
} else {
|
||||
pf->next_final = scheme_prefix_finalize;
|
||||
scheme_prefix_finalize = pf;
|
||||
}
|
||||
#ifdef MZ_GC_BACKTRACE
|
||||
pf->backpointer = (Scheme_Object *)c;
|
||||
#endif
|
||||
|
@ -34,8 +42,19 @@
|
|||
|
||||
/* Add this closure to the chain to be repaired when the
|
||||
prefix is marked (and potentially moved): */
|
||||
c->vals[closure_size - 1] = pf->fixup_chain;
|
||||
pf->fixup_chain = (Scheme_Object *)c;
|
||||
if ((gc_mode == GC_CURRENT_MODE_INCREMENTAL) || (SCHEME_PREFIX_FLAGS(pf) & 0x1)) {
|
||||
/* Can't steal closure slot for this purpose, since the
|
||||
slot is still in use until a full collection finishes */
|
||||
Scheme_Object **pr;
|
||||
pr = (Scheme_Object **)GC_malloc_for_incremental(2 * sizeof(Scheme_Object *));
|
||||
pr[0] = (Scheme_Object *)c;
|
||||
pr[1] = (Scheme_Object *)pf->fixup_chain;
|
||||
pf->fixup_chain = (Scheme_Object *)pr;
|
||||
SCHEME_PREFIX_FLAGS(pf) |= 0x1;
|
||||
} else {
|
||||
c->vals[closure_size - 1] = pf->fixup_chain;
|
||||
pf->fixup_chain = (Scheme_Object *)c;
|
||||
}
|
||||
|
||||
/* Mark just the elements of the prefix that are (newly) used: */
|
||||
if ((uintptr_t)data->tl_map & 0x1) {
|
||||
|
|
|
@ -1,8 +1,18 @@
|
|||
/* setup for mzclpf_post.inc */
|
||||
if (!GC_is_partial(gc) && c->code) {
|
||||
/* setup for mzclpf_post.inc */
|
||||
gc_mode = GC_current_mode(gc);
|
||||
if ((gc_mode != GC_CURRENT_MODE_ACCOUNTING)
|
||||
&& c->code) {
|
||||
data = (CLOSURE_DATA_TYPE *)GC_resolve2(c->code, gc);
|
||||
if (data->tl_map) {
|
||||
if (!GC_is_marked2(c->vals[closure_size - 1], gc)) {
|
||||
/* In GC_CURRENT_MODE_BACKPOINTER_REMARK mode, we can
|
||||
ignore the prefix, because it must be at least
|
||||
as old as the closure.
|
||||
In GC_CURRENT_MODE_MINOR, if the prefix is in an
|
||||
old collection, then GC_is_marked() will return 1;
|
||||
in incremental mode, we'll mark the prefix and
|
||||
effectively disable unused-variable clearing. */
|
||||
if ((gc_mode == GC_CURRENT_MODE_BACKPOINTER_REMARK)
|
||||
|| !GC_is_marked2(c->vals[closure_size - 1], gc)) {
|
||||
/* don't mark last item, which is a prefix */
|
||||
i--;
|
||||
} else
|
||||
|
|
|
@ -2473,7 +2473,7 @@ long_double scheme_long_double_expt(long_double x, long_double y);
|
|||
by any closure, when the prefix is accessed only by closures. */
|
||||
typedef struct Scheme_Prefix
|
||||
{
|
||||
Scheme_Object so; /* scheme_prefix_type */
|
||||
Scheme_Inclhash_Object iso; /* scheme_prefix_type; 0x1 => incremental-mode fixup chain */
|
||||
int num_slots, num_toplevels, num_stxes;
|
||||
#ifdef MZ_PRECISE_GC
|
||||
struct Scheme_Prefix *next_final; /* for special GC handling */
|
||||
|
@ -2486,6 +2486,8 @@ typedef struct Scheme_Prefix
|
|||
/* followed by an array of `int's for tl_map uses */
|
||||
} Scheme_Prefix;
|
||||
|
||||
#define SCHEME_PREFIX_FLAGS(obj) MZ_OPT_HASH_KEY(&(obj)->iso)
|
||||
|
||||
#define PREFIX_TO_USE_BITS(pf) \
|
||||
(int *)((char *)pf + sizeof(Scheme_Prefix) + ((pf->num_slots - mzFLEX_DELTA) * sizeof(Scheme_Object *)))
|
||||
|
||||
|
|
|
@@ -13,12 +13,12 @@
    consistently.)
 */
 
-#define MZSCHEME_VERSION "6.3.0.1"
+#define MZSCHEME_VERSION "6.3.0.2"
 
 #define MZSCHEME_VERSION_X 6
 #define MZSCHEME_VERSION_Y 3
 #define MZSCHEME_VERSION_Z 0
-#define MZSCHEME_VERSION_W 1
+#define MZSCHEME_VERSION_W 2
 
 #define MZSCHEME_VERSION_MAJOR ((MZSCHEME_VERSION_X * 100) + MZSCHEME_VERSION_Y)
 #define MZSCHEME_VERSION_MINOR ((MZSCHEME_VERSION_Z * 1000) + MZSCHEME_VERSION_W)
@ -243,7 +243,7 @@ THREAD_LOCAL_DECL(struct Scheme_GC_Pre_Post_Callback_Desc *gc_prepost_callback_d
|
|||
|
||||
ROSYM static Scheme_Object *read_symbol, *write_symbol, *execute_symbol, *delete_symbol, *exists_symbol;
|
||||
ROSYM static Scheme_Object *client_symbol, *server_symbol;
|
||||
ROSYM static Scheme_Object *major_symbol, *minor_symbol;
|
||||
ROSYM static Scheme_Object *major_symbol, *minor_symbol, *incremental_symbol;
|
||||
|
||||
THREAD_LOCAL_DECL(static int do_atomic = 0);
|
||||
THREAD_LOCAL_DECL(static int missed_context_switch = 0);
|
||||
|
@ -524,8 +524,10 @@ void scheme_init_thread(Scheme_Env *env)
|
|||
|
||||
REGISTER_SO(major_symbol);
|
||||
REGISTER_SO(minor_symbol);
|
||||
REGISTER_SO(incremental_symbol);
|
||||
major_symbol = scheme_intern_symbol("major");
|
||||
minor_symbol = scheme_intern_symbol("minor");
|
||||
incremental_symbol = scheme_intern_symbol("incremental");
|
||||
|
||||
GLOBAL_PRIM_W_ARITY("dump-memory-stats" , scheme_dump_gc_stats, 0, -1, env);
|
||||
GLOBAL_PRIM_W_ARITY("vector-set-performance-stats!", current_stats , 1, 2, env);
|
||||
|
@ -723,9 +725,13 @@ static Scheme_Object *collect_garbage(int argc, Scheme_Object *argv[])
|
|||
scheme_collect_garbage_minor();
|
||||
} else if ((argc < 1) || SAME_OBJ(major_symbol, argv[0])) {
|
||||
scheme_collect_garbage();
|
||||
} else if ((argc < 1) || SAME_OBJ(incremental_symbol, argv[0])) {
|
||||
#ifdef MZ_PRECISE_GC
|
||||
GC_request_incremental_mode();
|
||||
#endif
|
||||
} else {
|
||||
scheme_wrong_contract("collect-garbage",
|
||||
"(or/c 'major 'minor)",
|
||||
"(or/c 'major 'minor 'incremental)",
|
||||
0, argc, argv);
|
||||
}