We now generate a qcow2 image to prevent hitting Hydra's output size
limit. Also updated /root/user-data -> /etc/ec2-metadata/user-data.
http://hydra.nixos.org/build/33843133
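The conversion itself is a one-liner; a minimal sketch, assuming the image builder exposes a post-build hook (hook and variable names are illustrative, not the exact ones in the EC2 image expression):

    postVM = ''
      # -c compresses the qcow2, keeping the Hydra output well below the size limit
      ${pkgs.qemu}/bin/qemu-img convert -c -O qcow2 $diskImage $out/nixos.qcow2
      rm $diskImage
    '';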
(cherry picked from commit 0d3738cdcc)
Commit 9f358f809d removed
$SSL_CERT_FILE, which is fine for binaries linking against the current
OpenSSL package, but not for old binaries (e.g. those installed via
nix-env). So let's keep $SSL_CERT_FILE for a while longer.
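For illustration only (the module may wire this up differently), keeping the variable amounts to continuing to export it globally; the CA bundle path below is the standard NixOS one:

    # old binaries linked against a previous OpenSSL keep finding the CA bundle
    # through this variable
    environment.variables.SSL_CERT_FILE = "/etc/ssl/certs/ca-certificates.crt";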
Systemd 229 sets kernel.core_pattern to "|/bin/false" by default,
unless systemd-coredump is enabled. Revert to the previous default of
writing "core" in the current directory.
(cherry picked from commit 54ca7e9f75)
This module adds support for defining a flexget service.
Because flexget insists on being able to write to the directory that
holds its configuration file, we use an ExecStartPre hook to copy the
generated configuration file into place under the user's home
directory. It's fairly ugly, and I'm very open to suggestions.
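Roughly, the unit ends up looking like the sketch below; cfg.user, cfg.homeDir and configFile are illustrative names, not necessarily the ones the module uses:

    systemd.services.flexget.serviceConfig = {
      User = cfg.user;
      # copy the store-generated config to a place flexget is allowed to write to
      ExecStartPre = "${pkgs.coreutils}/bin/install -D -m600 ${configFile} ${cfg.homeDir}/.flexget/config.yml";
      ExecStart = "${pkgs.flexget}/bin/flexget daemon start";
    };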
When iodined tries to start before any interface other than loopback
has an IP address, it fails. Wait for ip-up.target instead.
The reason is the following: in iodined's code (src/common.c, line 157)
the flag AI_ADDRCONFIG is passed to getaddrinfo().
Iodine uses the function

    get_addr(char *host, int port, int addr_family, int flags,
             struct sockaddr_storage *out);

to get address information via getaddrinfo().
Within get_addr, the flag AI_ADDRCONFIG is forced. This flag causes
getaddrinfo() to return the error "Name or service not known" if no IP
address has been assigned to the machine; see getaddrinfo(3).
Wait for an IP address before starting iodined.
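The ordering fix boils down to something like the following (unit name abridged; the real module may generate the unit differently):

    systemd.services.iodined = {
      # don't call getaddrinfo() with AI_ADDRCONFIG before an address exists
      wants = [ "ip-up.target" ];
      after = [ "ip-up.target" ];
      # ...rest of the unit unchanged
    };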
(cherry picked from commit 927aaecbcb)
Assigning channelMap from the function's attribute-set argument at the
top level of the test expression file may reference a different
architecture than the one we need for the tests.
So if the pkgs attribute is filled in by auto-calling, the tests fail
because the test is built for a different architecture than the
browser.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
(cherry picked from commit e047d79279)
This was the case before e45c211, but it turns out that it's very
useful to override the channel packages so we can run tests with
different Chromium build options.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
(cherry picked from commit 3bd71b135b)
The docker service is socket activated by default; thus,
`waitForUnit("docker.service")` before any docker command causes the
unit test to time out.
Instead, do `waitForUnit("sockets.target")` to ensure that the sockets
are set up before running docker commands.
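A sketch of the adjusted test script (the machine name is illustrative):

    testScript = ''
      # docker.socket is up once sockets.target is reached; the first client
      # call then socket-activates docker.service
      $docker->waitForUnit("sockets.target");
      $docker->succeed("docker ps");
    '';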
(cherry picked from commit ece457c62f)
This patch fixes https://github.com/NixOS/nixpkgs/issues/12927.
It would be great to configure good rate-limiting defaults for this via
/proc/sys/net/ipv4/icmp_ratelimit and /proc/sys/net/ipv6/icmp/ratelimit,
too, but I didn't since I don't know what a "good default" would be.
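If someone does settle on sensible values, it could be expressed like this (the numbers are placeholders, not recommendations):

    boot.kernel.sysctl = {
      "net.ipv4.icmp_ratelimit" = 1000;
      "net.ipv6.icmp.ratelimit" = 1000;
    };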
(cherry picked from commit a0ab4587b7)
- fix `enable` option description
  using `mkEnableOption longDescription` is incorrect; override
  `description` instead (see the sketch after this list)
- additional details for proper usage of the service, including
  an example of the recommended configuration
- clarify `localAddress` option description
- clarify `localPort` option description
- clarify `customResolver` option description
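For the first point, the pattern is roughly the following (service name and wording illustrative):

    enable = mkEnableOption "the proxy service" // {
      description = ''
        Whether to enable the proxy service. See the example below for the
        recommended configuration.
      '';
    };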
(cherry picked from commit a0663e3709)
This is failing because it exceeds the hydra-queue-runner size limit.
http://hydra.nixos.org/build/33303819
(cherry picked from commit 3135af2511)
Signed-off-by: Domen Kožar <domen@dev.si>
Probably not many people care about i686-linux any more, but building
all these images is fairly expensive (e.g. in the worst case, every
Nixpkgs commit would trigger a few gigabytes of uploads to S3).
(cherry picked from commit daa093bf3c)
Signed-off-by: Domen Kožar <domen@dev.si>
This folds adding hydra-build-products into the actual ISO generation,
preventing an unnecessary download of the ISO.
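The hydra-build-products file is just a line under $out/nix-support, so it can be emitted by the same derivation that builds the ISO; a sketch (hook and variable names illustrative):

    postInstall = ''
      mkdir -p $out/nix-support
      echo "file iso $out/iso/$isoName" >> $out/nix-support/hydra-build-products
    '';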
(cherry picked from commit 10293b87a9)
Signed-off-by: Domen Kožar <domen@dev.si>
Previously this was done in three derivations (one to build the raw
disk image, one to convert to OVA, one to add a hydra-build-products
file). Now it's done in one step to reduce the amount of copying
to/from S3. In particular, not uploading the raw disk image prevents
us from hitting hydra-queue-runner's size limit of 2 GiB.
(cherry picked from commit 5cc7bcda30)
Signed-off-by: Domen Kožar <domen@dev.si>
- gitlab-sidekiq was being started with a misspelled argument name,
  which caused the mailer queue to never run and never send mail
(cherry picked from commit 10198b586e)
Aliases are not the same as programs. They won't work in subshells.
It's better to just use which as it's only 88K.
(cherry picked from commit 73ba0ae2de)
Signed-off-by: Domen Kožar <domen@dev.si>
Accidentally broken by 4fede53c09
("nixos manuals: bring back package references").
Without this fix, grafana won't start:
$ systemctl status grafana
...
systemd[1]: Starting Grafana Service Daemon...
systemd[1]: Started Grafana Service Daemon.
grafana[666]: 2016/03/06 19:57:32 [log.go:75 Fatal()] [E] Failed to detect generated css or javascript files in static root (%!s(MISSING)), have you executed default grunt task?
systemd[1]: grafana.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: grafana.service: Unit entered failed state.
systemd[1]: grafana.service: Failed with result 'exit-code'.
(cherry picked from commit d99033beb9)
This splits a few NixOS tests (namely Chromium, VirtualBox and the
networking tests) into several subtests that are exposed via attributes.
The networking tests were already split up, but they didn't expose an
attribute set of available tests; instead, they used a function
argument to specify the resulting test.
A new function callSubTests in nixos/release.nix is now responsible for
gathering subtests; it is also used for the installer and boot tests.
The latter are now placed in a tests.boot.* namespace rather than
"polluting" the tests attribute set with their subtests.
As @bobvanderlinden suggests in #13585:
"Looks like that cleans things up quite a bit! Just one aesthetics note,
the boot tests could now be renamed from boot.bootBiosCdrom to
boot.biosCdrom in nixos/tests/boot.nix:L33.
That makes them more consistent with the other tests."
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This makes it easier to test just a specific channel rather than
forcing all builds down the users'/testers' throats. In particular,
this makes it easier to test NixOS channel upgrades against only the
Chromium stable channel instead of just removing the beta/dev channels
from the tests entirely (as done in 69ec09f38a).
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Should clean up a lot of these redundant lines for various sub-tests.
Note that the tests.boot* are now called tests.boot.boot*, but otherwise
all the test attribute names should stay the same.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @edolstra
Cc: @wkennington
Cc: @bobvanderlinden
So far the networking test expression only generated a single test
depending on the passed "test" attribute. This makes it difficult to
autodiscover the subtests with our shiny new callSubTests function.
This change essentially doesn't change the behaviour of the subtests but
rather exposes them as an attribute set instead of relying on a
particular input argument.
The useNetworkd argument still exists, however.
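An abridged sketch of the new shape of nixos/tests/networking.nix (subtest names and bodies are illustrative/elided):

    { system ? builtins.currentSystem, useNetworkd ? false }:

    with import ../lib/testing.nix { inherit system; };

    {
      # one attribute per subtest instead of a single "test" argument
      static = makeTest {
        name = "networking-static";
        nodes = { };      # machine definitions elided
        testScript = "";  # real script elided
      };
      # ...further subtests as sibling attributes
    }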
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @wkennington
This should de-clutter the various redundant callTest lines for
subtests, so that every main test file needs only one callSubTests line
instead.
Overrides work the same way as callTest, except that if the system
attribute is explicitly specified we do not generate attributes for all
available systems.
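Usage then collapses to one line per test file, for example (paths and arguments illustrative):

    tests.chromium = callSubTests tests/chromium.nix {};
    # pinning system here skips the per-architecture attributes
    tests.boot = callSubTests tests/boot.nix { system = "x86_64-linux"; };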
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Cc: @edolstra
Now subtests are separate derivations, because the individual tests do
not depend on state from previous test runs.
This has the advantage that it's easier to run individual tests and
it's also easier to pinpoint individual tests that randomly fail.
I ran all of these tests locally and they still succeed.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Both Qt and GTK load plugins from the active profiles
automatically, so it is sufficient to install input methods
system-wide. Overriding the plugin paths may interfere with correct
operation of other plugins.
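So a plain system-wide installation is enough, along the lines of (package choice purely illustrative):

    environment.systemPackages = [ pkgs.fcitx ];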
Updates gitlab to the current stable version and fixes a lot of features that
were broken, at least with the current version and our configuration.
Quite a lot of sweat and tears has gone into testing nearly all features and
reading/patching the Gitlab source as we're about to deploy gitlab for our
whole company.
Things to note:
* The gitlab config is now written as a Nix attribute set and will be
converted to JSON. Gitlab uses YAML, but JSON is a subset of YAML.
The `extraConfig` option is also an attribute set that will be merged
with the default config. This way *all* Gitlab options are supported;
see the sketch after this list.
* Some paths like uploads and configs are hardcoded in Rails (at least
after my study of the Gitlab source). This is why they are linked from
the Gitlab root to /run/gitlab and then linked to the configurable
`statePath`.
* Backup & restore should work out of the box from another Gitlab instance.
* gitlab-git-http-server has been replaced by gitlab-workhorse upstream.
Push & pull over HTTPS works perfectly. Communication to gitlab is done
over unix sockets. An HTTP server is required to proxy requests to
gitlab-workhorse over another unix socket at
`/run/gitlab/gitlab-workhorse.socket`.
* The user & group running gitlab are now configurable. These can even be
changed for live instances.
* The initial email address & password of the root user can be configured.
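The config handling described in the first point above amounts to something like this (names illustrative, with the lib helpers in scope):

    gitlabConfig = recursiveUpdate defaultConfig cfg.extraConfig;

    # JSON is a subset of YAML, so Gitlab reads this just fine as gitlab.yml
    gitlabYml = pkgs.writeText "gitlab.yml" (builtins.toJSON gitlabConfig);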
Fixes #8598.
NetworkManager needs an additional avahi-user to use link-local
IPv4 (and probably IPv6) addresses. avahi-autoipd also needs to be
patched to the right path.
Add a module to make the options of the pam_oath module configurable.
These are:
- enable - enable the OATH pam module
- window - number of OTPs to check
- digits - length of the OTP (adds support for two-factor auth)
- usersFile - filename to store OATH credentials in
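Example usage of the new options (values illustrative):

    security.pam.oath = {
      enable = true;
      window = 20;
      digits = 6;
      usersFile = "/etc/users.oath";
    };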
It looks like the queue is no longer immediately cleared of cancelled
jobs. Instead, files like "c00001" are left alongside "d00001-001", and
cleanup happens at some later point in time. Also, all new jobs are now
assigned consecutive numbers (00002, 00003, etc.). So when the original
d00001 file is finally cleaned up, it breaks the test. Fixed by checking
for any "d*" file inside the queue and cleaning it up ourselves to
ensure that each job works correctly.
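The check amounts to something like the following in the test script (machine name and spool path are illustrative):

    # wait for the job's data file to show up, then remove it ourselves so the
    # next job starts from a clean queue regardless of when cupsd cleans up
    $client->waitUntilSucceeds("ls /var/spool/cups/d*");
    $client->succeed("rm /var/spool/cups/d*");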