
and functionality improvements (including support for measuring coverage), primitive argument-checking fixes, and object-file changes resulting in reduced load times (and some backward incompatibility):
- annotations are now preserved in object files for debug only, for profiling only, for both, or not at all, depending on the settings of generate-inspector-information and compile-profile. in particular, when inspector information is not enabled but profiling is, source information does not leak into error messages and inspector output, though it is still available via the profile tools. The mechanics of this involved repurposing the fasl a? parameter to hold an annotation flags value when it is not #f and remaking annotations with new flags if necessary before emitting them.
  compile.ss, fasl.ss, misc.ms
- altered a number of mats to produce correct results even when the 's' directory is profiled.
  misc.ms, cp0.ms, record.ms
- profile-release-counters is now generation-friendly; that is, it doesn't look for dropped code objects in generations that have not been collected since the last call to profile-release-counters. also, it no longer allocates memory when it releases counters.
  pdhtml.ss, gc.c, gcwrapper.c, globals.h, prim5.c
- removed unused entry points S_ifile, S_ofile, and S_iofile.
  alloc.c, externs.h
- mats that test loading profile info into the compiler's database to guide optimization now weed out preexisting entries, in case the 's' directory is profiled.
  4.ms, mat.ss, misc.ms, primvars.ms
- counters for dropped code objects are now released at the start of each mat group.
  mat.ss
- replaced ehc (enable-heap-check) option with hci (heap-check-interval) option that allows heap checks to be performed periodically rather than on each collection. hci=0 is equivalent to ehc=f (disabling heap checks) and hci=1 is equivalent to ehc=t (enabling heap checks every collection), while hci=100 enables heap checks only every 100th collection.
  allx and bullyx mats use this feature to reduce heap-checking overhead to a more reasonable level. this is particularly important when the 's' directory is profiled, since the amount of static memory to be checked is greatly increased due to the counters.
  mats/Mf-base, mat.ss, primvars.ms
- added a mat that calls #%show-allocation, which was otherwise not being tested.
  misc.ms
- removed a broken primvars mat and updated two others. in each case, the mat was looking for information about primitives in the wrong (i.e., old) place and silently succeeding when it didn't find any primitives to test. the revised mats (along with a few others) now check to make sure at least one identifier has the information they look for. the removed mat was checking for library information that is now compiled in, so the mat is now unnecessary. the others were (not) doing argument-error checks. fixing these turned up a handful of problems that have also been fixed: a couple of unbound variables in the mat driver, two broken primdata declarations, a tardy argument check by profile-load-data, and a bug in char-ready?, which was requiring an argument rather than defaulting it to the current input port.
  primdata.ss, pdhtml.ss, io.ms, primvars.ms, 4.ms, 6.ms, misc.ms, patch*
- added initial support for recording coverage information. when the new parameter generate-covin-files is set, the compiler generates .covin files containing the universe of all source objects for which profile forms are present in the expander output. when profiling and generation of covin files are enabled in the 's' directory, the mats optionally generate .covout files for each mat file giving the subset of the universe covered by the mat file, along with an all.covout in each mat output directory aggregating the coverage for the directory and another all.covout in the top-level mat directory aggregating the coverage for all directories.
  back.ss, compile.ss, cprep.ss, primdata.ss, s/Mf-base, mat.ss, mats/Mf-base, mats/primvars.ms
- support for generating covout files is now built in. with-coverage-output gathers and dumps coverage information, and aggregate-coverage-output combines (aggregates) covout files.
  pdhtml.ss, primdata.ss, compile.ss, mat.ss, mats/Mf-base, primvars.ms
- profile-clear now adjusts active coverage trackers to avoid losing coverage information.
  pdhtml.ss, prim5.c
- nested with-coverage calls are now supported.
  pdhtml.ss
- switched to a more compact representation for covin and covout files; reduces disk space (compressed or not) by about a factor of four and read time by about a factor of two with no increase in write time.
  primdata.ss, pdhtml.ss, cprep.ss, compile.ss, mat.ss, mats/Mf-base
- added support for determining coverage for an entire run, including coverage for expressions hit during boot time. 'all' mats now produce run.covout files in each output directory, and 'allx' mats produce an aggregate run.covout file in the mat directory.
  pdhtml.ss, mat.ss, mats/Mf-base
- profile-release-counters now adjusts active coverage trackers to account for the counters that have been released.
  pdhtml.ss, prim5.c
- replaced the artificial "examples" target with a real "build-examples" target so make won't think it always has to rebuild mats that depend upon the examples directory having been compiled. mats make clean now runs make clean in the examples directory.
  mats/Mf-base
- importing a library from an object file now just visits the object file rather than doing a full load so that the run-time code for the library is not retained. The run-time code is still read because the current fasl format forces the entire file to be read, but not retaining the code can lower heap size and garbage-collection cost, particularly when many object-code libraries are imported. The downside is that the file must be revisited if the run-time code turns out to be required.
  This change exposed several places where the code was failing to check if a revisit is needed.
  syntax.ss, 7.ms, 8.ms, misc.ms, root-experr*
- fixed typos: was passing unquoted load rather than quoted load to $load-library along one path (where it is loading source code and therefore irrelevant), and was reporting src-path rather than obj-path in a message about failing to define a library.
  syntax.ss
- compile-file and friends now put all recompile information in the first fasl object after the header so the library manager can find it without loading the entire fasl file. The library manager now does so. It also now checks to see if library object files need to be recreated before loading them rather than loading them and possibly recompiling them after discovering they are out of date, since the latter requires loading the full object file even if it's out of date, while the former takes advantage of the ability to extract just recompile information. as well as reducing overhead, this eliminates possibly undesirable side effects, such as creation and registration of out-of-date nongenerative record-type descriptors. because the library manager expects to find recompile information at the front of an object file, it will not find all recompile information if object files are "catted" together. also, compile-file has to hold in memory the object code for all expressions in the file so that it can emit the unified recompile information, rather than writing to the object file incrementally, which can significantly increase the memory required to compile a large file full of individual top-level forms. This does not affect top-level programs, which were already handled as a whole, or a typical library file that contains just a single library form.
  compile.ss, syntax.ss
- the library manager now checks include files before library dependencies when compile-imported-libraries is false (as it already did when compile-imported-libraries is true) in case a source change affects the set of imported libraries. (A library change can affect the set of include files as well, but checking dependencies before include files can cause unneeded libraries to be loaded.) The include-file check is based on recompile-info rather than dependencies, but the library checks are still based on dependencies.
  syntax.ss
- fixed check for binding of scheme-version. (the check prevents premature treatment of recompile-info records as Lexpand forms to be passed to $interpret-backend.)
  scheme.c
- strip-fasl-file now preserves recompile-info when compile-time info is stripped.
  strip.ss
- removed include-req* from library/ct-info and ctdesc records; it is no longer needed now that all recompile information is maintained separately.
  expand-lang.ss, syntax.ss, compile.ss, cprep.ss
- changed the fasl format and reworked a lot of code in the expander, compiler, fasl writer, and fasl reader to allow the fasl reader to skip past run-time information when it isn't needed and compile-time information when it isn't needed. Skipping past still involves reading and decoding when the file is compressed, but the fasl reader no longer parses or allocates code and data in the portions to be skipped. Side effects of associating record uids with rtds are also avoided, as are the side effects of interning symbols present only in the skipped data. Skipping past code objects also reduces or eliminates the need to synchronize data and instruction caches. Since the fasl reader no longer returns compile-time (visit) or run-time (revisit) code and data when not needed, the fasl reader no longer wraps these objects in a pair with a 0 or 1 visit or revisit marker.
  To support this change, the fasl writer generates separate top-level fasl entries (and graphs) for separate forms in the same top-level source form (e.g., begin or library). This reliably breaks eq-ness of shared structure across these forms, which was previously broken only when visit or revisit code was loaded at different times (this is an incompatible change). Because of the change, fasl "groups" are no longer needed, so they are no longer handled.
  7.ss, cmacros.ss, compile.ss, expand-lang.ss, strip.ss, externs.h, fasl.c, scheme.c, hash.ms
- the change above is surfaced in an optional fasl-read "situation" argument (visit, revisit, or load). The default is load. visit causes it to skip past revisit code and data; revisit causes it to skip past visit code and data; and load causes it not to skip past either. visit-revisit data produced by (eval-when (visit revisit) ---) is never skipped.
  7.ss, primdata.ss, io.stex
- to improve compile-time and run-time error checking, the Lexpand recompile-info, library/rt-info, library-ct-info, and program-info forms have been replaced with list-structured forms, e.g., (recompile-info ,rcinfo).
  expand-lang.ss, compile.ss, cprep.ss, interpret.ss, syntax.ss
- added visit-compiled-from-port and revisit-compiled-from-port to complement the existing load-compiled-from-port.
  7.ss, primdata.ss, 7.ms, system.stex
- increased the amount read when seeking an lz4-compressed input file from 32 to 1024 bytes at a time.
  compress-io.c
- replaced the fasl a? parameter value #t with an "all" flag value so its value is consistently a mask.
  cmacros.ss, fasl.ss, compile.ss
- split off profile mats into a separate file.
  misc.ms, profile.ms (new), root-experr*, mats/Mf-base
- added coverage percent computations to mat allx/bullyx output.
  mat.ss, mats/Mf-base, primvars.ms
- replaced coverage tables with more generic and generally useful source tables, which map source objects to arbitrary values.
  pdhtml.ss, compile.ss, cprep.ss, primdata.ss, mat.ss, mats/Mf-base, primvars.ms, profile.ms, syntax.stex
- reduced profile counting overhead by using calls to fold-left instead of calls to apply and map and by using fixnum operations for profile counts on 64-bit machines.
  pdhtml.ss
- used a critical section to fix a race condition in the calculations of profile counts that sometimes resulted in bogus (including negative) counts, especially when the 's' directory is profiled.
  pdhtml.ss
- added discard flag to declaration for hashtable-size.
  primdata.ss
- redesigned the printed representation of source tables and rewrote get-source-table! to read and store incrementally to reduce memory overhead.
  compile.ss
- added generate-covin-files to the set of parameters preserved by compile-file, etc.
  compile.ss, system.stex
- moved covop argument before the undocumented machine and hostop arguments to compile-port and compile-to-port. removed the undocumented ofn argument from compile-to-port; using (port-name ip) instead.
  compile.ss, primdata.ss, 7.ms, system.stex
- compile-port now tries to come up with a file position to supply to make-read, which it can do if the port's positions are character positions (presently string ports) or if the port is positioned at zero.
  compile.ss
- audited the argument-type-error fuzz mat exceptions and fixed a host of problems this turned up (entries follow). added #f as an invalid argument for every type for which #f is indeed invalid to catch places where the maybe- prefix was missing on the argument type. the mat tries hard to determine if the condition raised (if any) as the result of an invalid argument is appropriate and redirects the remainder to the mat-output (.mo) file prefixed with 'Expected error', causing them to show up in the expected error output so developers will be encouraged to audit them in the future.
  primvars.ms, mat.ss
- added an initial symbol?
  test on machine type names so we produce an invalid machine type error message rather than something confusing like "machine type #f is not supported".
  compile.ss
- fixed declarations for many primitives that were specified as accepting arguments of more general types than they actually accept, such as number -> real for various numeric operations, symbol -> endianness for various bytevector operations, time -> time-utc for time-utc->date, and list -> list-of-string-pairs for default-library-search-handler. also replaced some of the sub-xxxx types with specific types such as sub-symbol -> endianness in utf16->string, but only where they were causing issues with the primvars argument-type-error fuzz mat. (this should be done more generally.)
  primdata.ss
- fixed incorrect who arguments (was map instead of fold-right, current-date instead of time-utc->date); switched to using define-who/set-who! generally.
  4.ss, date.ss
- append! now checks all arguments before any mutation.
  5_2.ss
- with-source-path now properly supplies itself as who for the string? argument check; callers like load now do their own checks.
  7.ss
- added missing integer? check to $fold-bytevector-native-ref whose lack could have resulted in a compile-time error.
  cp0.ss
- fixed typo in output-port-buffer-mode error message.
  io.ss
- fixed who argument (was fx< rather than fx<?).
  library.ss
- fixed declaration of first source-file-descriptor argument (was sfd, now string).
  primdata.ss
- added missing article 'a' in a few error messages.
  prims.ss
- fixed the copy-environment argument-type error message for the list of symbols argument.
  syntax.ss
- the environment procedure now catches exceptions that occur and reraises the exception with itself as who if the condition isn't already a who condition.
  syntax.ss
- updated experr and allx patch files for changes to argument-count fuzz mat and fixes for problems turned up by them.
  root-experr*, patch*
- fixed a couple of issues setting port sizes: string and bytevector output port put handlers don't need room to store the character or byte, so they now set the size to the buffer length rather than one less. binary-file-port-clear-output now sets the index rather than size to zero; setting the size to zero is inappropriate for some types of ports and could result in loss of buffering and even suppression of future output. removed a couple of redundant sets of the size that occur immediately after setting the buffer.
  io.ss
- it is now possible to return from a call to with-profile-tracker multiple times and not double-count (or worse) any counts.
  pdhtml.ss, profile.ms
- read-token now requires a file position when it is handed a source-file descriptor (since the source-file descriptor isn't otherwise useful), and the source-file descriptor argument can no longer be #f. the input file position plays the same role as the input file position in get-datum/annotations. these extra read-token arguments are now documented.
  read.ss, 6.ms, io.stex
- the source-file descriptor argument to get-datum/annotations can no longer be #f. it was already documented that way.
  read.ss
- read-token and do-read now look for the character-positions port flag before asking if the port has port-position, since the latter is slightly more expensive.
  read.ss
- rd-error now reports the current port position if it can be determined when fp isn't already set, i.e., when reading from a port without character positions (presently any non-string port) and fp has not been passed in explicitly (to read-token or get-datum/annotations). the port position might not be a character position, but it should be better than nothing.
  read.ss
- added comment noting an invariant for s_profile_release_counters.
  prim5.c
- restored accidentally dropped fasl-write formdef and dropped duplicate fasl-read formdef.
  io.stex
- added a 'coverage' target that tests the coverage of the Scheme-code portions of Chez Scheme by the mats.
  Makefile.in, Makefile-workarea.in
- added .PHONY declarations for all of the targets in the top-level and workarea make files, and renamed the create-bintar, create-rpm, and create-pkg targets bintar, rpm, and pkg.
  Makefile.in, Makefile-workarea.in
- added missing --retain-static-relocation command-line argument and updated the date.
  scheme.1.in
- removed a few redundant conditional variable settings.
  configure
- fixed declaration of condition wait (timeout -> maybe-timeout).
  primdata.ss

original commit: 88501743001393fa82e89c90da9185fc0086fbcb
/* gcwrapper.c
 * Copyright 1984-2017 Cisco Systems, Inc.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *   http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

#include "system.h"
/* locally defined functions */
static IBOOL memqp PROTO((ptr x, ptr ls));
static IBOOL remove_first_nomorep PROTO((ptr x, ptr *pls, IBOOL look));
static void segment_tell PROTO((uptr seg));
static void check_heap_dirty_msg PROTO((char *msg, ptr *x));
static IBOOL dirty_listedp PROTO((seginfo *x, IGEN from_g, IGEN to_g));
static void check_dirty_space PROTO((ISPC s));
static void check_dirty PROTO((void));

static IBOOL checkheap_noisy;
void S_gc_init() {
  IGEN g; INT i;

  S_checkheap = 0; /* 0 for disabled, 1 for enabled */
  S_checkheap_errors = 0; /* count of errors detected by checkheap */
  checkheap_noisy = 0; /* 0 for error output only; 1 for more noisy output */
  S_G.prcgeneration = static_generation;

  if (S_checkheap) {
    printf(checkheap_noisy ? "NB: check_heap is enabled and noisy\n" : "NB: check_heap is enabled\n");
    fflush(stdout);
  }

#ifndef WIN32
  for (g = 0; g <= static_generation; g++) {
    S_child_processes[g] = Snil;
  }
#endif /* WIN32 */

  if (!S_boot_time) return;

  for (g = 0; g <= static_generation; g++) {
    S_G.guardians[g] = Snil;
    S_G.locked_objects[g] = Snil;
    S_G.unlocked_objects[g] = Snil;
  }
  S_G.max_nonstatic_generation =
    S_G.new_max_nonstatic_generation =
    S_G.min_free_gen =
    S_G.new_min_free_gen = default_max_nonstatic_generation;

  for (g = 0; g <= static_generation; g += 1) {
    for (i = 0; i < countof_types; i += 1) {
      S_G.countof[g][i] = 0;
      S_G.bytesof[g][i] = 0;
    }
    S_G.gctimestamp[g] = 0;
    S_G.rtds_with_counts[g] = Snil;
  }

  S_G.countof[static_generation][countof_oblist] += 1;
  S_G.bytesof[static_generation][countof_oblist] += S_G.oblist_length * sizeof(bucket *);

  S_protect(&S_G.static_id);
  S_G.static_id = S_intern((const unsigned char *)"static");

  S_protect(&S_G.countof_names);
  S_G.countof_names = S_vector(countof_types);
  for (i = 0; i < countof_types; i += 1) {
    INITVECTIT(S_G.countof_names, i) = FIX(0);
    S_G.countof_size[i] = 0;
  }
  INITVECTIT(S_G.countof_names, countof_pair) = S_intern((const unsigned char *)"pair");
  S_G.countof_size[countof_pair] = size_pair;
  INITVECTIT(S_G.countof_names, countof_symbol) = S_intern((const unsigned char *)"symbol");
  S_G.countof_size[countof_symbol] = size_symbol;
  INITVECTIT(S_G.countof_names, countof_flonum) = S_intern((const unsigned char *)"flonum");
  S_G.countof_size[countof_flonum] = size_flonum;
  INITVECTIT(S_G.countof_names, countof_closure) = S_intern((const unsigned char *)"procedure");
  S_G.countof_size[countof_closure] = 0;
  INITVECTIT(S_G.countof_names, countof_continuation) = S_intern((const unsigned char *)"continuation");
  S_G.countof_size[countof_continuation] = size_continuation;
  INITVECTIT(S_G.countof_names, countof_bignum) = S_intern((const unsigned char *)"bignum");
  S_G.countof_size[countof_bignum] = 0;
  INITVECTIT(S_G.countof_names, countof_ratnum) = S_intern((const unsigned char *)"ratnum");
  S_G.countof_size[countof_ratnum] = size_ratnum;
  INITVECTIT(S_G.countof_names, countof_inexactnum) = S_intern((const unsigned char *)"inexactnum");
  S_G.countof_size[countof_inexactnum] = size_inexactnum;
  INITVECTIT(S_G.countof_names, countof_exactnum) = S_intern((const unsigned char *)"exactnum");
  S_G.countof_size[countof_exactnum] = size_exactnum;
  INITVECTIT(S_G.countof_names, countof_box) = S_intern((const unsigned char *)"box");
  S_G.countof_size[countof_box] = size_box;
  INITVECTIT(S_G.countof_names, countof_port) = S_intern((const unsigned char *)"port");
  S_G.countof_size[countof_port] = size_port;
  INITVECTIT(S_G.countof_names, countof_code) = S_intern((const unsigned char *)"code");
  S_G.countof_size[countof_code] = 0;
  INITVECTIT(S_G.countof_names, countof_thread) = S_intern((const unsigned char *)"thread");
  S_G.countof_size[countof_thread] = size_thread;
  INITVECTIT(S_G.countof_names, countof_tlc) = S_intern((const unsigned char *)"tlc");
  S_G.countof_size[countof_tlc] = size_tlc;
  INITVECTIT(S_G.countof_names, countof_rtd_counts) = S_intern((const unsigned char *)"rtd-counts");
  S_G.countof_size[countof_rtd_counts] = size_rtd_counts;
  INITVECTIT(S_G.countof_names, countof_stack) = S_intern((const unsigned char *)"stack");
  S_G.countof_size[countof_stack] = 0;
  INITVECTIT(S_G.countof_names, countof_relocation_table) = S_intern((const unsigned char *)"reloc-table");
  S_G.countof_size[countof_relocation_table] = 0;
  INITVECTIT(S_G.countof_names, countof_weakpair) = S_intern((const unsigned char *)"weakpair");
  S_G.countof_size[countof_weakpair] = size_pair;
  INITVECTIT(S_G.countof_names, countof_vector) = S_intern((const unsigned char *)"vector");
  S_G.countof_size[countof_vector] = 0;
  INITVECTIT(S_G.countof_names, countof_string) = S_intern((const unsigned char *)"string");
  S_G.countof_size[countof_string] = 0;
  INITVECTIT(S_G.countof_names, countof_fxvector) = S_intern((const unsigned char *)"fxvector");
  S_G.countof_size[countof_fxvector] = 0;
  INITVECTIT(S_G.countof_names, countof_bytevector) = S_intern((const unsigned char *)"bytevector");
  S_G.countof_size[countof_bytevector] = 0;
  INITVECTIT(S_G.countof_names, countof_locked) = S_intern((const unsigned char *)"locked");
  S_G.countof_size[countof_locked] = 0;
  INITVECTIT(S_G.countof_names, countof_guardian) = S_intern((const unsigned char *)"guardian");
  S_G.countof_size[countof_guardian] = size_guardian_entry;
  INITVECTIT(S_G.countof_names, countof_oblist) = S_intern((const unsigned char *)"oblist");
  S_G.countof_size[countof_oblist] = 0;
  INITVECTIT(S_G.countof_names, countof_ephemeron) = S_intern((const unsigned char *)"ephemeron");
  S_G.countof_size[countof_ephemeron] = 0;
  for (i = 0; i < countof_types; i += 1) {
    if (Svector_ref(S_G.countof_names, i) == FIX(0)) {
      fprintf(stderr, "uninitialized countof_name at index %d\n", i);
      S_abnormal_exit();
    }
  }
}
IGEN S_maxgen(void) {
  return S_G.new_max_nonstatic_generation;
}

void S_set_maxgen(IGEN g) {
  if (g < 0 || g >= static_generation) {
    fprintf(stderr, "invalid maxgen %d\n", g);
    S_abnormal_exit();
  }
  if (S_G.new_min_free_gen == S_G.new_max_nonstatic_generation || S_G.new_min_free_gen > g) {
    S_G.new_min_free_gen = g;
  }
  S_G.new_max_nonstatic_generation = g;
}

IGEN S_minfreegen(void) {
  return S_G.new_min_free_gen;
}

void S_set_minfreegen(IGEN g) {
  S_G.new_min_free_gen = g;
  if (S_G.new_max_nonstatic_generation == S_G.max_nonstatic_generation) {
    S_G.min_free_gen = g;
  }
}
static IBOOL memqp(x, ls) ptr x, ls; {
  for (;;) {
    if (ls == Snil) return 0;
    if (Scar(ls) == x) return 1;
    ls = Scdr(ls);
  }
}

static IBOOL remove_first_nomorep(x, pls, look) ptr x, *pls; IBOOL look; {
  ptr ls;

  for (;;) {
    ls = *pls;
    if (ls == Snil) break;
    if (Scar(ls) == x) {
      ls = Scdr(ls);
      *pls = ls;
      if (look) return !memqp(x, ls);
      break;
    }
    pls = &Scdr(ls);
  }

  /* must return 0 if we don't look for more */
  return 0;
}
IBOOL Slocked_objectp(x) ptr x; {
  seginfo *si; IGEN g; IBOOL ans; ptr ls;

  if (IMMEDIATE(x) || (si = MaybeSegInfo(ptr_get_segment(x))) == NULL || (g = si->generation) == static_generation) return 1;

  tc_mutex_acquire()

  ans = 0;
  for (ls = S_G.locked_objects[g]; ls != Snil; ls = Scdr(ls)) {
    if (x == Scar(ls)) {
      ans = 1;
      break;
    }
  }

  tc_mutex_release()

  return ans;
}

ptr S_locked_objects(void) {
  IGEN g; ptr ans; ptr ls;

  tc_mutex_acquire()

  ans = Snil;
  for (g = 0; g <= static_generation; INCRGEN(g)) {
    for (ls = S_G.locked_objects[g]; ls != Snil; ls = Scdr(ls)) {
      ans = Scons(Scar(ls), ans);
    }
  }

  tc_mutex_release()

  return ans;
}
void Slock_object(x) ptr x; {
  seginfo *si; IGEN g;

  tc_mutex_acquire()

  /* weed out pointers that won't be relocated */
  if (!IMMEDIATE(x) && (si = MaybeSegInfo(ptr_get_segment(x))) != NULL && (g = si->generation) != static_generation) {
    S_pants_down += 1;
    /* add x to locked list. remove from unlocked list */
    S_G.locked_objects[g] = S_cons_in((g == 0 ? space_new : space_impure), g, x, S_G.locked_objects[g]);
    if (S_G.enable_object_counts) {
      if (g != 0) S_G.countof[g][countof_pair] += 1;
    }
    if (si->space & space_locked)
      (void)remove_first_nomorep(x, &S_G.unlocked_objects[g], 0);
    S_pants_down -= 1;
  }

  tc_mutex_release()
}

void Sunlock_object(x) ptr x; {
  seginfo *si; IGEN g;

  tc_mutex_acquire()

  if (!IMMEDIATE(x) && (si = MaybeSegInfo(ptr_get_segment(x))) != NULL && (g = si->generation) != static_generation) {
    S_pants_down += 1;
    /* remove first occurrence of x from locked list. if there are no
       others, add x to unlocked list */
    if (remove_first_nomorep(x, &S_G.locked_objects[g], si->space & space_locked)) {
      S_G.unlocked_objects[g] = S_cons_in((g == 0 ? space_new : space_impure), g, x, S_G.unlocked_objects[g]);
      if (S_G.enable_object_counts) {
        if (g != 0) S_G.countof[g][countof_pair] += 1;
      }
    }
    S_pants_down -= 1;
  }

  tc_mutex_release()
}
#ifndef WIN32
void S_register_child_process(INT child) {
  tc_mutex_acquire()
  S_child_processes[0] = Scons(FIX(child), S_child_processes[0]);
  tc_mutex_release()
}
#endif /* WIN32 */

IBOOL S_enable_object_counts(void) {
  return S_G.enable_object_counts;
}

void S_set_enable_object_counts(IBOOL eoc) {
  S_G.enable_object_counts = eoc;
}
ptr S_object_counts(void) {
  IGEN grtd, g; ptr ls; iptr i; ptr outer_alist;

  tc_mutex_acquire()

  outer_alist = Snil;

  /* add rtds w/nonzero counts to the alist */
  for (grtd = 0; grtd <= static_generation; INCRGEN(grtd)) {
    for (ls = S_G.rtds_with_counts[grtd]; ls != Snil; ls = Scdr(ls)) {
      ptr rtd = Scar(ls);
      ptr counts = RECORDDESCCOUNTS(rtd);
      IGEN g;
      uptr size = size_record_inst(UNFIX(RECORDDESCSIZE(rtd)));
      ptr inner_alist = Snil;

      S_fixup_counts(counts);
      for (g = 0; g <= static_generation; INCRGEN(g)) {
        uptr count = RTDCOUNTSIT(counts, g); IGEN gcurrent = g;
        if (g == S_G.new_max_nonstatic_generation) {
          while (g < S_G.max_nonstatic_generation) {
            g += 1;
            count += RTDCOUNTSIT(counts, g);
          }
        }
        if (count != 0) inner_alist = Scons(Scons((gcurrent == static_generation ? S_G.static_id : FIX(gcurrent)), Scons(Sunsigned(count), Sunsigned(count * size))), inner_alist);
      }
      if (inner_alist != Snil) outer_alist = Scons(Scons(rtd, inner_alist), outer_alist);
    }
  }

  /* add primary types w/nonzero counts to the alist */
  for (i = 0; i < countof_types; i += 1) {
    ptr inner_alist = Snil;
    for (g = 0; g <= static_generation; INCRGEN(g)) {
      IGEN gcurrent = g;
      uptr count = S_G.countof[g][i];
      uptr bytes = S_G.bytesof[g][i];

      if (g == S_G.new_max_nonstatic_generation) {
        while (g < S_G.max_nonstatic_generation) {
          g += 1;
          /* NB: S_G.max_nonstatic_generation + 1 <= static_generation, but coverity complains about overrun */
          /* coverity[overrun-buffer-val] */
          count += S_G.countof[g][i];
          /* coverity[overrun-buffer-val] */
          bytes += S_G.bytesof[g][i];
        }
      }

      if (count != 0) {
        if (bytes == 0) bytes = count * S_G.countof_size[i];
        inner_alist = Scons(Scons((gcurrent == static_generation ? S_G.static_id : FIX(gcurrent)), Scons(Sunsigned(count), Sunsigned(bytes))), inner_alist);
      }
    }
    if (inner_alist != Snil) outer_alist = Scons(Scons(Svector_ref(S_G.countof_names, i), inner_alist), outer_alist);
  }

  tc_mutex_release()

  return outer_alist;
}
/* Scompact_heap(). Compact into as few O/S chunks as possible and
 * move objects into static generation
 */
void Scompact_heap() {
  ptr tc = get_thread_context();
  S_pants_down += 1;
  S_gc_oce(tc, S_G.max_nonstatic_generation, static_generation);
  S_pants_down -= 1;
}
/* S_check_heap checks for various kinds of heap consistency
   It currently checks for:
       dangling references in space_impure (generation > 0) and space_pure
       extra dirty bits
       missing dirty bits

   Some additional things it should check for but doesn't:
       correct dirty bytes, following sweep_dirty conventions
       dangling references in space_code and space_continuation
       dirty bits set for non-impure segments outside of generation zero
       proper chaining of segments of a space and generation:
         chains contain all and only the appropriate segments

   If noisy is nonzero, additional comments may be included in the output
*/
static void segment_tell(seg) uptr seg; {
|
|
seginfo *si;
|
|
ISPC s, s1;
|
|
static char *spacename[max_space+1] = { alloc_space_names };
|
|
|
|
printf("segment %#tx", (ptrdiff_t)seg);
|
|
if ((si = MaybeSegInfo(seg)) == NULL) {
|
|
printf(" out of heap bounds\n");
|
|
} else {
|
|
printf(" generation=%d", si->generation);
|
|
s = si->space;
|
|
s1 = si->space & ~(space_old|space_locked);
|
|
if (s1 < 0 || s1 > max_space)
|
|
printf(" space-bogus (%d)", s);
|
|
else {
|
|
printf(" space-%s", spacename[s1]);
|
|
if (s & space_old) printf(" oldspace");
|
|
if (s & space_locked) printf(" locked");
|
|
}
|
|
printf("\n");
|
|
}
|
|
fflush(stdout);
|
|
}

void S_ptr_tell(ptr p) {
  segment_tell(ptr_get_segment(p));
}

void S_addr_tell(ptr p) {
  segment_tell(addr_get_segment(p));
}

static void check_heap_dirty_msg(msg, x) char *msg; ptr *x; {
  INT d; seginfo *si;

  si = SegInfo(addr_get_segment(x));
  d = (INT)(((uptr)x >> card_offset_bits) & ((1 << segment_card_offset_bits) - 1));
  printf("%s dirty byte %d found in segment %#tx, card %d at %#tx\n", msg, si->dirty_bytes[d], (ptrdiff_t)(si->number), d, (ptrdiff_t)x);
  printf("from "); segment_tell(addr_get_segment(x));
  printf("to "); segment_tell(addr_get_segment(*x));
}
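The card computation above can be exercised in isolation. A minimal sketch, using hypothetical stand-in values for `card_offset_bits` and `segment_card_offset_bits` (the real values come from the build's segment-layout parameters):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in layout parameters; the real card_offset_bits and
   segment_card_offset_bits are defined by the build configuration. */
#define CARD_OFFSET_BITS 8
#define SEGMENT_CARD_OFFSET_BITS 6
#define CARDS_PER_SEGMENT (1 << SEGMENT_CARD_OFFSET_BITS)

/* Mirrors d = (INT)(((uptr)x >> card_offset_bits) &
                     ((1 << segment_card_offset_bits) - 1)):
   shift off the byte offset within the card, then mask down to the
   card index within the segment. */
static int card_index(uintptr_t addr) {
  return (int)((addr >> CARD_OFFSET_BITS) & ((1 << SEGMENT_CARD_OFFSET_BITS) - 1));
}
```

Addresses exactly one card apart map to adjacent indices, and the index wraps to 0 at the next segment boundary.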

void S_check_heap(aftergc) IBOOL aftergc; {
  uptr seg; INT d; ISPC s; IGEN g; IDIRTYBYTE dirty; IBOOL found_eos; IGEN pg;
  ptr p, *pp1, *pp2, *nl;
  iptr i;
  uptr empty_segments = 0;
  uptr used_segments = 0;
  uptr static_segments = 0;
  uptr nonstatic_segments = 0;

  check_dirty();

  for (i = PARTIAL_CHUNK_POOLS; i >= -1; i -= 1) {
    chunkinfo *chunk = i == -1 ? S_chunks_full : S_chunks[i];
    while (chunk != NULL) {
      seginfo *si = chunk->unused_segs;
      iptr count = 0;
      while(si) {
        count += 1;
        if (si->space != space_empty) {
          S_checkheap_errors += 1;
          printf("!!! unused segment has unexpected space\n");
        }
        si = si->next;
      }
      if ((chunk->segs - count) != chunk->nused_segs) {
        S_checkheap_errors += 1;
        printf("!!! unexpected used segs count %td with %td total segs and %td segs on the unused list\n",
               (ptrdiff_t)chunk->nused_segs, (ptrdiff_t)chunk->segs, (ptrdiff_t)count);
      }
      used_segments += chunk->nused_segs;
      empty_segments += count;
      chunk = chunk->next;
    }
  }

  for (s = 0; s <= max_real_space; s += 1) {
    seginfo *si;
    for (g = 0; g <= S_G.max_nonstatic_generation; INCRGEN(g)) {
      for (si = S_G.occupied_segments[s][g]; si != NULL; si = si->next) {
        nonstatic_segments += 1;
      }
    }
    for (si = S_G.occupied_segments[s][static_generation]; si != NULL; si = si->next) {
      static_segments += 1;
    }
  }

  if (used_segments != nonstatic_segments + static_segments) {
    S_checkheap_errors += 1;
    printf("!!! found %#tx used segments and %#tx occupied segments\n",
           (ptrdiff_t)used_segments,
           (ptrdiff_t)(nonstatic_segments + static_segments));
  }

  if (S_G.number_of_nonstatic_segments != nonstatic_segments) {
    S_checkheap_errors += 1;
    printf("!!! S_G.number_of_nonstatic_segments %#tx is different from occupied number %#tx\n",
           (ptrdiff_t)S_G.number_of_nonstatic_segments,
           (ptrdiff_t)nonstatic_segments);
  }

  if (S_G.number_of_empty_segments != empty_segments) {
    S_checkheap_errors += 1;
    printf("!!! S_G.number_of_empty_segments %#tx is different from unused number %#tx\n",
           (ptrdiff_t)S_G.number_of_empty_segments,
           (ptrdiff_t)empty_segments);
  }

  for (i = PARTIAL_CHUNK_POOLS; i >= -1; i -= 1) {
    chunkinfo *chunk = i == -1 ? S_chunks_full : S_chunks[i];
    while (chunk != NULL) {
      uptr nsegs; seginfo *si;
      for (si = &chunk->sis[0], nsegs = chunk->segs; nsegs != 0; nsegs -= 1, si += 1) {
        seginfo *recorded_si; uptr recorded_seg;
        if ((seg = si->number) != (recorded_seg = (chunk->base + chunk->segs - nsegs))) {
          S_checkheap_errors += 1;
          printf("!!! recorded segment number %#tx differs from actual segment number %#tx", (ptrdiff_t)seg, (ptrdiff_t)recorded_seg);
        }
        if ((recorded_si = SegInfo(seg)) != si) {
          S_checkheap_errors += 1;
          printf("!!! recorded segment %#tx seginfo %#tx differs from actual seginfo %#tx", (ptrdiff_t)seg, (ptrdiff_t)recorded_si, (ptrdiff_t)si);
        }
        s = si->space;
        g = si->generation;

        if (s == space_new) {
          if (g != 0) {
            S_checkheap_errors += 1;
            printf("!!! unexpected generation %d segment %#tx in space_new\n", g, (ptrdiff_t)seg);
          }
        } else if (s == space_impure || s == space_symbol || s == space_pure || s == space_weakpair /* || s == space_ephemeron */) {
          /* out of date: doesn't handle space_port, space_continuation, space_code, space_pure_typed_object, space_impure_record */
          nl = (ptr *)S_G.next_loc[s][g];

          /* check for dangling references */
          pp1 = (ptr *)build_ptr(seg, 0);
          pp2 = (ptr *)build_ptr(seg + 1, 0);
          if (pp1 <= nl && nl < pp2) pp2 = nl;

          while (pp1 != pp2) {
            seginfo *psi; ISPC ps;
            p = *pp1;
            if (p == forward_marker) break;
            if (!IMMEDIATE(p) && (psi = MaybeSegInfo(ptr_get_segment(p))) != NULL && ((ps = psi->space) & space_old || ps == space_empty)) {
              S_checkheap_errors += 1;
              printf("!!! dangling reference at %#tx to %#tx\n", (ptrdiff_t)pp1, (ptrdiff_t)p);
              printf("from: "); segment_tell(seg);
              printf("to: "); segment_tell(ptr_get_segment(p));
            }
            pp1 += 1;
          }

          /* verify that dirty bits are set appropriately */
          /* out of date: doesn't handle space_impure_record, space_port, and maybe others */
          /* also doesn't check the SYMCODE for symbols */
          if (s == space_impure || s == space_symbol || s == space_weakpair /* || s == space_ephemeron */) {
            found_eos = 0;
            pp2 = pp1 = build_ptr(seg, 0);
            for (d = 0; d < cards_per_segment; d += 1) {
              if (found_eos) {
                if (si->dirty_bytes[d] != 0xff) {
                  S_checkheap_errors += 1;
                  printf("!!! Dirty byte set past end-of-segment for segment %#tx, card %d\n", (ptrdiff_t)seg, d);
                  segment_tell(seg);
                }
                continue;
              }

              pp2 += bytes_per_card / sizeof(ptr);
              if (pp1 <= nl && nl < pp2) {
                found_eos = 1;
                pp2 = nl;
              }

#ifdef DEBUG
              printf("pp1 = %#tx, pp2 = %#tx, nl = %#tx\n", (ptrdiff_t)pp1, (ptrdiff_t)pp2, (ptrdiff_t)nl);
              fflush(stdout);
#endif

              dirty = 0xff;
              while (pp1 != pp2) {
                seginfo *psi;
                p = *pp1;

                if (p == forward_marker) {
                  found_eos = 1;
                  break;
                }
                if (!IMMEDIATE(p) && (psi = MaybeSegInfo(ptr_get_segment(p))) != NULL && (pg = psi->generation) < g) {
                  if (pg < dirty) dirty = pg;
                  if (si->dirty_bytes[d] > pg) {
                    S_checkheap_errors += 1;
                    check_heap_dirty_msg("!!! INVALID", pp1);
                  }
                  else if (checkheap_noisy)
                    check_heap_dirty_msg("... ", pp1);
                }
                pp1 += 1;
              }
              if (checkheap_noisy && si->dirty_bytes[d] < dirty) {
                /* sweep_dirty won't sweep, and update dirty byte, for
                   cards with dirty pointers to segments older than the
                   maximum copied generation, so we can get legitimate
                   conservative dirty bytes even after gc */
                printf("... Conservative dirty byte %x (%x) %sfor segment %#tx card %d ",
                       si->dirty_bytes[d], dirty,
                       (aftergc ? "after gc " : ""),
                       (ptrdiff_t)seg, d);
                segment_tell(seg);
              }
            }
          }
        }
        if (aftergc && s != space_empty && !(s & space_locked) && (g == 0 || (s != space_impure && s != space_symbol && s != space_port && s != space_weakpair && s != space_ephemeron && s != space_impure_record))) {
          for (d = 0; d < cards_per_segment; d += 1) {
            if (si->dirty_bytes[d] != 0xff) {
              S_checkheap_errors += 1;
              printf("!!! Unnecessary dirty byte %x (%x) after gc for segment %#tx card %d ",
                     si->dirty_bytes[d], 0xff, (ptrdiff_t)(si->number), d);
              segment_tell(seg);
            }
          }
        }
      }
      chunk = chunk->next;
    }
  }
}

static IBOOL dirty_listedp(seginfo *x, IGEN from_g, IGEN to_g) {
  seginfo *si = DirtySegments(from_g, to_g);
  while (si != NULL) {
    if (si == x) return 1;
    si = si->dirty_next;
  }
  return 0;
}

static void check_dirty_space(ISPC s) {
  IGEN from_g, to_g, min_to_g; INT d; seginfo *si;

  for (from_g = 0; from_g <= static_generation; from_g += 1) {
    for (si = S_G.occupied_segments[s][from_g]; si != NULL; si = si->next) {
      if (si->space & space_locked) continue;
      min_to_g = 0xff;
      for (d = 0; d < cards_per_segment; d += 1) {
        to_g = si->dirty_bytes[d];
        if (to_g != 0xff) {
          if (to_g < min_to_g) min_to_g = to_g;
          if (from_g == 0) {
            S_checkheap_errors += 1;
            printf("!!! (check_dirty): space %d, generation %d segment %#tx card %d is marked dirty\n", s, from_g, (ptrdiff_t)(si->number), d);
          }
        }
      }
      if (min_to_g != si->min_dirty_byte) {
        S_checkheap_errors += 1;
        printf("!!! (check_dirty): space %d, generation %d segment %#tx min_dirty_byte is %d while actual min is %d\n", s, from_g, (ptrdiff_t)(si->number), si->min_dirty_byte, min_to_g);
        segment_tell(si->number);
      } else if (min_to_g != 0xff) {
        if (!dirty_listedp(si, from_g, min_to_g)) {
          S_checkheap_errors += 1;
          printf("!!! (check_dirty): space %d, generation %d segment %#tx is marked dirty but not in dirty-segment list\n", s, from_g, (ptrdiff_t)(si->number));
          segment_tell(si->number);
        }
      }
    }
  }
}

static void check_dirty() {
  IGEN from_g, to_g; seginfo *si;

  for (from_g = 1; from_g <= static_generation; from_g = from_g == S_G.max_nonstatic_generation ? static_generation : from_g + 1) {
    for (to_g = 0; (from_g == static_generation) ? (to_g <= S_G.max_nonstatic_generation) : (to_g < from_g); to_g += 1) {
      si = DirtySegments(from_g, to_g);
      if (from_g > S_G.max_nonstatic_generation && from_g != static_generation) {
        if (si != NULL) {
          S_checkheap_errors += 1;
          printf("!!! (check_dirty): unexpected nonempty from-generation %d, to-generation %d dirty segment list\n", from_g, to_g);
        }
      } else {
        while (si != NULL) {
          ISPC s = si->space & ~space_locked;
          IGEN g = si->generation;
          IGEN mingval = si->min_dirty_byte;
          if (g != from_g) {
            S_checkheap_errors += 1;
            printf("!!! (check_dirty): generation %d segment %#tx in %d -> %d dirty list\n", g, (ptrdiff_t)(si->number), from_g, to_g);
          }
          if (mingval != to_g) {
            S_checkheap_errors += 1;
            printf("!!! (check_dirty): dirty byte = %d for segment %#tx in %d -> %d dirty list\n", mingval, (ptrdiff_t)(si->number), from_g, to_g);
          }
          if (s != space_new && s != space_impure && s != space_symbol && s != space_port && s != space_impure_record && s != space_weakpair && s != space_ephemeron) {
            S_checkheap_errors += 1;
            printf("!!! (check_dirty): unexpected space %d for dirty segment %#tx\n", s, (ptrdiff_t)(si->number));
          }
          si = si->dirty_next;
        }
      }
    }
  }

  check_dirty_space(space_impure);
  check_dirty_space(space_symbol);
  check_dirty_space(space_port);
  check_dirty_space(space_impure_record);
  check_dirty_space(space_weakpair);
  check_dirty_space(space_ephemeron);

  fflush(stdout);
}

void S_fixup_counts(ptr counts) {
  IGEN g; U64 timestamp;

  timestamp = RTDCOUNTSTIMESTAMP(counts);
  for (g = 0; g <= static_generation; INCRGEN(g)) {
    if (timestamp >= S_G.gctimestamp[g]) break;
    RTDCOUNTSIT(counts, g) = 0;
  }
  RTDCOUNTSTIMESTAMP(counts) = S_G.gctimestamp[0];
}

void S_do_gc(IGEN mcg, IGEN tg) {
  ptr tc = get_thread_context();
  ptr code;

  code = CP(tc);
  if (Sprocedurep(code)) code = CLOSCODE(code);
  Slock_object(code);

  /* Scheme side grabs mutex before calling S_do_gc */
  S_pants_down += 1;

  if (S_G.new_max_nonstatic_generation > S_G.max_nonstatic_generation) {
    S_G.min_free_gen = S_G.new_min_free_gen;
    S_G.max_nonstatic_generation = S_G.new_max_nonstatic_generation;
  }

  if (tg == mcg && mcg == S_G.new_max_nonstatic_generation && mcg < S_G.max_nonstatic_generation) {
    IGEN new_g, old_g, from_g, to_g; ISPC s; seginfo *si, *nextsi, *tail;
    /* reducing max_nonstatic_generation */
    new_g = S_G.new_max_nonstatic_generation;
    old_g = S_G.max_nonstatic_generation;
    /* first, collect everything to old_g */
    S_gc(tc, old_g, old_g);
    /* now transfer old_g info to new_g, and clear old_g info */
    for (s = 0; s <= max_real_space; s += 1) {
      S_G.first_loc[s][new_g] = S_G.first_loc[s][old_g]; S_G.first_loc[s][old_g] = FIX(0);
      S_G.base_loc[s][new_g] = S_G.base_loc[s][old_g]; S_G.base_loc[s][old_g] = FIX(0);
      S_G.next_loc[s][new_g] = S_G.next_loc[s][old_g]; S_G.next_loc[s][old_g] = FIX(0);
      S_G.bytes_left[s][new_g] = S_G.bytes_left[s][old_g]; S_G.bytes_left[s][old_g] = 0;
      S_G.bytes_of_space[s][new_g] = S_G.bytes_of_space[s][old_g]; S_G.bytes_of_space[s][old_g] = 0;
      S_G.occupied_segments[s][new_g] = S_G.occupied_segments[s][old_g]; S_G.occupied_segments[s][old_g] = NULL;
      for (si = S_G.occupied_segments[s][new_g]; si != NULL; si = si->next) {
        si->generation = new_g;
      }
    }
    S_G.guardians[new_g] = S_G.guardians[old_g]; S_G.guardians[old_g] = Snil;
    S_G.locked_objects[new_g] = S_G.locked_objects[old_g]; S_G.locked_objects[old_g] = Snil;
    S_G.unlocked_objects[new_g] = S_G.unlocked_objects[old_g]; S_G.unlocked_objects[old_g] = Snil;
    S_G.buckets_of_generation[new_g] = S_G.buckets_of_generation[old_g]; S_G.buckets_of_generation[old_g] = NULL;
    if (S_G.enable_object_counts) {
      INT i; ptr ls;
      for (i = 0; i < countof_types; i += 1) {
        S_G.countof[new_g][i] = S_G.countof[old_g][i]; S_G.countof[old_g][i] = 0;
        S_G.bytesof[new_g][i] = S_G.bytesof[old_g][i]; S_G.bytesof[old_g][i] = 0;
      }
      S_G.rtds_with_counts[new_g] = S_G.rtds_with_counts[old_g]; S_G.rtds_with_counts[old_g] = Snil;
      for (ls = S_G.rtds_with_counts[new_g]; ls != Snil; ls = Scdr(ls)) {
        ptr counts = RECORDDESCCOUNTS(Scar(ls));
        RTDCOUNTSIT(counts, new_g) = RTDCOUNTSIT(counts, old_g); RTDCOUNTSIT(counts, old_g) = 0;
      }
      for (ls = S_G.rtds_with_counts[static_generation]; ls != Snil; ls = Scdr(ls)) {
        ptr counts = RECORDDESCCOUNTS(Scar(ls));
        RTDCOUNTSIT(counts, new_g) = RTDCOUNTSIT(counts, old_g); RTDCOUNTSIT(counts, old_g) = 0;
      }
    }

    /* change old_g dirty bytes in static generation to new_g; splice list of old_g
       seginfos onto front of new_g seginfos */
    for (from_g = 1; from_g <= static_generation; INCRGEN(from_g)) {
      for (to_g = 0; (from_g == static_generation) ? (to_g <= S_G.max_nonstatic_generation) : (to_g < from_g); to_g += 1) {
        if ((si = DirtySegments(from_g, to_g)) != NULL) {
          if (from_g == old_g) {
            DirtySegments(from_g, to_g) = NULL;
            DirtySegments(new_g, to_g) = si;
            si->dirty_prev = &DirtySegments(new_g, to_g);
          } else if (from_g == static_generation) {
            if (to_g == old_g) {
              DirtySegments(from_g, to_g) = NULL;
              tail = DirtySegments(from_g, new_g);
              DirtySegments(from_g, new_g) = si;
              si->dirty_prev = &DirtySegments(from_g, new_g);
              for (;;) {
                INT d;
                si->min_dirty_byte = new_g;
                for (d = 0; d < cards_per_segment; d += 1) {
                  if (si->dirty_bytes[d] == old_g) si->dirty_bytes[d] = new_g;
                }
                nextsi = si->dirty_next;
                if (nextsi == NULL) break;
                si = nextsi;
              }
              if (tail != NULL) tail->dirty_prev = &si->dirty_next;
              si->dirty_next = tail;
            } else {
              do {
                INT d;
                for (d = 0; d < cards_per_segment; d += 1) {
                  if (si->dirty_bytes[d] == old_g) si->dirty_bytes[d] = new_g;
                }
                si = si->dirty_next;
              } while (si != NULL);
            }
          } else {
            S_error_abort("S_do_gc(gc): unexpected nonempty dirty segment list");
          }
        }
      }
    }

    /* tell profile_release_counters to scan only through new_g */
    if (S_G.prcgeneration == old_g) S_G.prcgeneration = new_g;

    /* finally reset max_nonstatic_generation */
    S_G.min_free_gen = S_G.new_min_free_gen;
    S_G.max_nonstatic_generation = new_g;
  } else {
    S_gc(tc, mcg, tg);
  }
  S_pants_down -= 1;

  /* eagerly give collecting thread, the only one guaranteed to be
     active, a fresh allocation area. the other threads have to trap
     to get_more_room if and when they awake and try to allocate */
  S_reset_allocation_pointer(tc);

  Sunlock_object(code);
}
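The dirty_prev fields used in the splicing above are pointer-to-pointer back references: a node's dirty_prev aims at whichever field (list head or a predecessor's dirty_next) currently points to it, which is what lets S_do_gc move a whole old_g list onto the front of a new_g list by rewriting a couple of links. A self-contained sketch of that splice, with a simplified node type (illustrative, not the real seginfo; interior prev links are left untouched, as relative order is preserved):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified dirty-segment node. dirty_prev points at the field that
   points at this node, so head insertion fixes it in O(1). */
typedef struct dseg {
  struct dseg *dirty_next;
  struct dseg **dirty_prev;
} dseg;

/* Splice list src onto the front of *headp, mirroring the pattern
   S_do_gc uses to move an old_g dirty list onto the new_g list head. */
static void splice_front(dseg **headp, dseg *src) {
  dseg *tail = *headp, *si = src;
  if (src == NULL) return;
  *headp = src;
  src->dirty_prev = headp;
  while (si->dirty_next != NULL) si = si->dirty_next; /* find src's tail */
  if (tail != NULL) tail->dirty_prev = &si->dirty_next;
  si->dirty_next = tail;
}

static int splice_demo(void) {
  static dseg a, b, c;
  dseg *head = NULL;
  a.dirty_next = NULL;
  splice_front(&head, &a);            /* head: a */
  b.dirty_next = NULL;
  c.dirty_next = &b;                  /* src list: c -> b */
  splice_front(&head, &c);            /* head: c -> b -> a */
  return head == &c && c.dirty_next == &b && b.dirty_next == &a
      && a.dirty_next == NULL && *(a.dirty_prev) == &a && *(c.dirty_prev) == &c;
}
```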

void S_gc(ptr tc, IGEN mcg, IGEN tg) {
  if (tg == static_generation || S_G.enable_object_counts)
    S_gc_oce(tc, mcg, tg);
  else
    S_gc_ocd(tc, mcg, tg);
}