Introduction
About Hydra
Hydra is a tool for continuous integration testing and software release that uses a purely functional language to describe build jobs and their dependencies. Continuous integration is a simple technique to improve the quality of the software development process. An automated system continuously or periodically checks out the source code of a project, builds it, runs tests, and produces reports for the developers. Thus, various errors that might accidentally be committed into the code base are automatically caught. Such a system allows more in-depth testing than what developers could feasibly do manually:
- Portability testing : The software may need to be built and tested on many different platforms. It is infeasible for each developer to do this before every commit.
- Likewise, many projects have very large test sets (e.g., regression tests in a compiler, or stress tests in a DBMS) that can take hours or days to run to completion.
- Many kinds of static and dynamic analyses can be performed as part of the tests, such as code coverage runs and static analyses.
- It may also be necessary to build many different variants of the software. For instance, it may be necessary to verify that the component builds with various versions of a compiler.
- Developers typically use incremental building to test their changes (since a full build may take too long), but this is unreliable with many build management tools (such as Make), i.e., the result of the incremental build might differ from a full build.
- It ensures that the software can be built from the sources under revision control. Users of version management systems such as CVS and Subversion often forget to place source files under revision control.
- The machines on which the continuous integration system runs ideally provides a clean, well-defined build environment. If this environment is administered through proper SCM techniques, then builds produced by the system can be reproduced. In contrast, developer work environments are typically not under any kind of SCM control.
- In large projects, developers often work on a particular component of the project, and do not build and test the composition of those components (again since this is likely to take too long). To prevent the phenomenon of ``big bang integration'', where components are only tested together near the end of the development process, it is important to test components together as soon as possible (hence continuous integration ).
- It allows software to be released by automatically creating packages that users can download and install. To do this manually represents an often prohibitive amount of work, as one may want to produce releases for many different platforms: e.g., installers for Windows and Mac OS X, RPM or Debian packages for certain Linux distributions, and so on.
In its simplest form, a continuous integration tool sits in a loop building and releasing software components from a version management system. For each component, it performs the following tasks:
- It obtains the latest version of the component's source code from the version management system.
- It runs the component's build process (which presumably includes the execution of the component's test set).
- It presents the results of the build (such as error logs and releases) to the developers, e.g., by producing a web page.
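The loop above can be sketched as a single iteration in shell; the three helper names are hypothetical placeholders for the real checkout, build, and reporting commands, not actual tools:

```shell
# One iteration of the basic CI loop; the helpers are hypothetical
# stand-ins for the real checkout, build, and reporting commands.
ci_iteration() {
  checkout_sources &&  # obtain the latest version from the VCS
  run_build &&         # run the build process, including the test set
  publish_report       # present logs and releases, e.g. as a web page
}
```

A real system wraps this in a polling loop or triggers it from VCS hooks.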
Examples of continuous integration tools include Jenkins, CruiseControl, Tinderbox, Sisyphus, Anthill and BuildBot. These tools have various limitations.
- They do not manage the build environment . The build environment consists of the dependencies necessary to perform a build action, e.g., compilers, libraries, etc. Setting up the environment is typically done manually, and without proper SCM control (so it may be hard to reproduce a build at a later time). Manual management of the environment scales poorly in the number of configurations that must be supported. For instance, suppose that we want to build a component that requires a certain compiler X. We then have to go to each machine and install X. If we later need a newer version of X, the process must be repeated all over again. An even worse problem occurs if there are conflicting, mutually exclusive versions of the dependencies. Thus, simply installing the latest version is not an option. Of course, we can install these components in different directories and manually pass the appropriate paths to the build processes of the various components. But this is a rather tiresome and error-prone process.
- They do not easily support variability in software systems . A system may have a great deal of build-time variability: optional functionality, whether to build a debug or production version, different versions of dependencies, and so on. (For instance, the Linux kernel now has over 2,600 build-time configuration switches.) It is therefore important that a continuous integration tool can easily select and test different instances from the configuration space of the system to reveal problems, such as erroneous interactions between features. In a continuous integration setting, it is also useful to test different combinations of versions of subsystems, e.g., the head revision of a component against stable releases of its dependencies, and vice versa, as this can reveal various integration problems.
Hydra is a continuous integration tool that solves these problems. It is built on top of the Nix package manager, which has a purely functional language for describing package build actions and their dependencies. This allows the build environment for projects to be produced automatically and deterministically, and variability in components to be expressed naturally using functions; as such, it is an ideal fit for a continuous build system.
About Us
Hydra is the successor of the Nix Buildfarm, which was developed in tandem with the Nix software deployment system. Nix was originally developed at the Department of Information and Computing Sciences, Utrecht University by the TraCE project (2003-2008). The project was funded by the Software Engineering Research Program Jacquard to improve the support for variability in software systems. Funding for the development of Nix and Hydra is now provided by the NIRICT LaQuSo Build Farm project.
About this Manual
This manual tells you how to install the Hydra buildfarm software on your own server and how to operate that server using its web interface.
License
Hydra is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
Hydra is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
Hydra at nixos.org
The nixos.org installation of Hydra runs at http://hydra.nixos.org/. That installation is used to build software components from the Nix, NixOS, GNU, Stratego/XT, and related projects.
If you are one of the developers on those projects, it is likely that you will be using the NixOS Hydra server in some way. If you need to administer automatic builds for your project, you should pull the right strings to get an account on the server. This manual will tell you how to set up new projects and build jobs within those projects and write a release.nix file to describe the build process of your project to Hydra. You can skip the next chapter.
If your project does not yet have automatic builds within the NixOS Hydra server, it may actually be eligible. We are in the process of setting up a large buildfarm that should be able to support open source and academic software projects. Get in touch.
Hydra on your own buildfarm
If you need to run your own Hydra installation, the Installation chapter explains how to download and install the system on your own server.
Installation
This chapter explains how to install Hydra on your own build farm server.
Prerequisites
To install and use Hydra you need to have installed the following dependencies:
-
Nix
-
PostgreSQL
-
many Perl packages, notably Catalyst, EmailSender, and NixPerl (see the Hydra expression in Nixpkgs for the complete list)
At the moment, Hydra runs only on GNU/Linux (i686-linux and x86_64-linux).
For small projects, Hydra can be run on any reasonably modern machine. For individual projects you can even run Hydra on a laptop. However, the charm of a buildfarm server is usually that it operates without disturbing the developer's working environment and can serve releases over the internet. You will typically also have your source code administered in a version management system, such as Subversion. Therefore, you will probably want to install a server that is connected to the internet. To scale up to large and/or many projects, you will need a considerable amount of disk space to store builds. Since Hydra can schedule multiple simultaneous build jobs, it can be useful to have a multi-core machine, and/or to attach multiple build machines in a network to the central Hydra server.
Of course we think it is a good idea to use the NixOS GNU/Linux distribution for your buildfarm server. But this is not a requirement. The Nix software deployment system can be installed on any GNU/Linux distribution in parallel to the regular package management system. Thus, you can use Hydra on a Debian, Fedora, SuSE, or Ubuntu system.
Getting Nix
If your server runs NixOS you are all set to continue with installation of Hydra. Otherwise you first need to install Nix. The latest stable version can be found on the Nix web site, along with a manual, which includes installation instructions.
Installation
The latest development snapshot of Hydra can be installed by visiting the URL http://hydra.nixos.org/view/hydra/unstable and using the one-click install available on one of the build pages. You can also install Hydra through the channel by performing the following commands:

nix-channel --add http://hydra.nixos.org/jobset/hydra/master/channel/latest
nix-channel --update
nix-env -i hydra

Command completion should reveal a number of command-line tools from Hydra, such as hydra-queue-runner.
Creating the database
Hydra stores its results in a PostgreSQL database.
To setup a PostgreSQL database with hydra as database name and user name, issue the following commands on the PostgreSQL server:
createuser -S -D -R -P hydra
createdb -O hydra hydra
Hydra uses an environment variable to know which database should be used, and a variable which points to a location that holds some state. To set these variables for a PostgreSQL database, add the following to the file ~/.profile of the user running the Hydra services.
export HYDRA_DBI="dbi:Pg:dbname=hydra;host=dbserver.example.org;user=hydra;"
export HYDRA_DATA=/var/lib/hydra
You can provide the username and password in the file ~/.pgpass, e.g.
dbserver.example.org:*:hydra:hydra:password
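Note that libpq ignores a ~/.pgpass file that is readable by the group or by other users, so restrict its permissions:

```shell
# PostgreSQL client libraries only honor ~/.pgpass when it is
# accessible by the owner alone (mode 0600 or stricter).
chmod 0600 ~/.pgpass
```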
Make sure that the HYDRA_DATA directory exists and is writable for the user which will run the Hydra services.
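For example, assuming the services run under a user named hydra (adjust the names and run as root if your setup differs):

```shell
# Create the state directory and make it writable for the (assumed)
# hydra service user.
mkdir -p /var/lib/hydra
chown hydra:hydra /var/lib/hydra
chmod 0750 /var/lib/hydra
```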
Having set these environment variables, you can now initialise the database by doing:
hydra-init
To create projects, you need to create a user with admin privileges. This can be done using the command hydra-create-user:
$ hydra-create-user alice --full-name 'Alice Q. User' \
--email-address 'alice@example.org' --password-prompt --role admin
Additional users can be created through the web interface.
Upgrading
If you're upgrading Hydra from a previous version, you should do the following to perform any necessary database schema migrations:
hydra-init
Getting Started
To start the Hydra web server, execute:
hydra-server
When the server is started, you can browse to http://localhost:3000/ to start configuring your Hydra instance.
The hydra-server command launches the web server. There are two other processes that come into play:
- The evaluator is responsible for periodically evaluating jobsets, checking out their dependencies from their version control systems (VCS), and queueing new builds if the result of the evaluation changed. It is launched by the hydra-evaluator command.
- The queue runner launches builds (using Nix) as they are queued by the evaluator, scheduling them onto the configured Nix hosts. It is launched using the hydra-queue-runner command.
All three processes must be running for Hydra to be fully functional, though it's possible to temporarily stop any one of them for maintenance purposes, for instance.
Configuration
This chapter is a collection of configuration snippets for different scenarios.
The configuration is parsed by Config::General, which has fairly thorough documentation of its file format. Hydra calls the parser with the following options:
-UseApacheInclude => 1
-IncludeAgain => 1
-IncludeRelative => 1
Including files
hydra.conf supports Apache-style includes. This is IMPORTANT because that is how you keep your secrets out of the Nix store. Hopefully this got your attention 😌
This:
<github_authorization>
NixOS = Bearer gha-secret😱secret😱secret😱
</github_authorization>
should NOT be in hydra.conf, because hydra.conf is rendered in the Nix store and is therefore world-readable.
Instead, the above should be written to a file outside the Nix store by other means (manually, using Nixops' secrets feature, etc) and included like so:
Include /run/keys/hydra/github_authorizations.conf
Serving behind reverse proxy
To serve the Hydra web server behind a reverse proxy such as nginx or httpd, some additional configuration is required.
Edit your hydra.conf
file in a similar way to this example:
using_frontend_proxy 1
base_uri example.com
base_uri should be your Hydra server's proxied URL. If you are using the Hydra NixOS module, then setting the hydraURL option should be enough.
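With the NixOS module, a minimal sketch of that setting looks like this (option names are from the NixOS Hydra module; the URL is an example):

```nix
{
  services.hydra = {
    enable = true;
    hydraURL = "https://example.com";  # the proxied URL, as above
  };
}
```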
If you want to serve Hydra under a prefix path, for example http://example.com/hydra, then you need to configure your reverse proxy to pass X-Request-Base to Hydra, with the prefix path as its value. For example, if you are using nginx, use a configuration similar to the following:
server {
listen 443 ssl;
server_name example.com;
.. other configuration ..
location /hydra/ {
proxy_pass http://127.0.0.1:3000;
proxy_redirect http://127.0.0.1:3000 https://example.com/hydra;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header X-Request-Base /hydra;
}
}
Statsd Configuration
By default, Hydra will send stats to statsd at localhost:8125. Point Hydra to a different server via:
<statsd>
host = alternative.host
port = 18125
</statsd>
hydra-notify's Prometheus service
hydra-notify supports running a Prometheus webserver for metrics. The exporter does not run unless a listen address and port are specified in the hydra configuration file, as below:
<hydra_notify>
<prometheus>
listen_address = 127.0.0.1
port = 9199
</prometheus>
</hydra_notify>
hydra-queue-runner's Prometheus service
hydra-queue-runner supports running a Prometheus webserver for metrics. The exporter's address defaults to 127.0.0.1:9198, but it is also configurable through the Hydra configuration file and a command line argument, as below. A port of :0 will make the exporter choose a random, available port.
queue_runner_metrics_address = 127.0.0.1:9198
# or
queue_runner_metrics_address = [::]:9198
$ hydra-queue-runner --prometheus-address 127.0.0.1:9198
# or
$ hydra-queue-runner --prometheus-address [::]:9198
Using LDAP as authentication backend (optional)
Instead of using Hydra's built-in user management you can optionally use LDAP to manage roles and users.
This is configured by defining the <ldap> block in the configuration file. In this block it's possible to configure the authentication plugin in the <config> block. All options are directly passed to Catalyst::Authentication::Store::LDAP. The documentation for the available settings can be found [here](https://metacpan.org/pod/Catalyst::Authentication::Store::LDAP#CONFIGURATION-OPTIONS).
Note that the bind password (if needed) should be supplied as an included file to prevent it from leaking to the Nix store.
Roles can be assigned to users based on their LDAP group membership. For this to work, use_roles = 1 needs to be defined for the authentication plugin. LDAP groups can then be mapped to Hydra roles using the <role_mapping> block.
Example configuration:
<ldap>
<config>
<credential>
class = Password
password_field = password
password_type = self_check
</credential>
<store>
class = LDAP
ldap_server = localhost
<ldap_server_options>
timeout = 30
</ldap_server_options>
binddn = "cn=root,dc=example"
include ldap-password.conf
start_tls = 0
<start_tls_options>
verify = none
</start_tls_options>
user_basedn = "ou=users,dc=example"
user_filter = "(&(objectClass=inetOrgPerson)(cn=%s))"
user_scope = one
user_field = cn
<user_search_options>
deref = always
</user_search_options>
# Important for role mappings to work:
use_roles = 1
role_basedn = "ou=groups,dc=example"
role_filter = "(&(objectClass=groupOfNames)(member=%s))"
role_scope = one
role_field = cn
role_value = dn
<role_search_options>
deref = always
</role_search_options>
</config>
<role_mapping>
# Make all users in the hydra_admin group Hydra admins
hydra_admin = admin
# Allow all users in the dev group to restart jobs and cancel builds
dev = restart-jobs
dev = cancel-build
</role_mapping>
</ldap>
Then, place the password for your LDAP server in /var/lib/hydra/ldap-password.conf:
bindpw = the-ldap-password
Debugging LDAP
Set the debug parameter under ldap.config.ldap_server_options.debug:
<ldap>
<config>
<store>
<ldap_server_options>
debug = 2
</ldap_server_options>
</store>
</config>
</ldap>
Legacy LDAP Configuration
Hydra used to load the LDAP configuration from a YAML file given in the HYDRA_LDAP_CONFIG environment variable. This behavior is deprecated and will be removed.
When Hydra uses the deprecated YAML file, Hydra applies the following default role mapping:
<ldap>
<role_mapping>
hydra_admin = admin
hydra_bump-to-front = bump-to-front
hydra_cancel-build = cancel-build
hydra_create-projects = create-projects
hydra_restart-jobs = restart-jobs
</role_mapping>
</ldap>
Note that configuring the LDAP parameters both in hydra.conf and via the environment variable is a fatal error.
Embedding Extra HTML
Embed an analytics widget or other HTML in the <head> of each HTML document via:
tracker = <script src="...">
Creating and Managing Projects
Once Hydra is installed and running, the next step is to add projects to the build farm. We follow the example of the Patchelf project, a software tool written in C and using the GNU Build System (GNU Autoconf and GNU Automake).
Log in to the web interface of your Hydra installation using the user name and password you inserted in the database (by default, Hydra's web server listens on localhost:3000). Then follow the "Create Project" link to create a new project.
Project Information
A project definition consists of some general information and a set of job sets. The general information identifies a project, its owner, and current state of activity. Here's what we fill in for the patchelf project:
Identifier: patchelf
The identifier is the identity of the project. It is used in URLs and in the names of build results.
The identifier should be a unique name (it is the primary database key for the project table in the database). If you try to create a project with an already existing identifier, you will get an error message from the database. So try creating the project after entering just the general information, to check whether you have chosen a unique name. Jobsets can be added once the project has been created.
Display name: Patchelf
The display name is used in menus.
Description: A tool for modifying ELF binaries
The description is used as short documentation of the nature of the project.
Owner: eelco
The owner of a project can create and edit job sets.
Enabled: Yes
Only if the project is enabled are builds performed.
Once created there should be an entry for the project in the sidebar. Go to the project page for the Patchelf project.
Job Sets
A project can consist of multiple job sets (hereafter jobsets), separate tasks that can be built separately, but may depend on each other (without cyclic dependencies, of course). Go to the Edit page of the Patchelf project and "Add a new jobset" by providing the following "Information":
Identifier: trunk
Description: Trunk
Nix expression: release.nix in input patchelfSrc
This states that in order to build the trunk jobset, the Nix expression in the file release.nix, which can be obtained from input patchelfSrc, should be evaluated. (We'll have a look at release.nix later.)
To realize a job we probably need a number of inputs, which can be declared in the table below. As many inputs as required can be added. For patchelf we declare the following inputs.
| Name | Type | Value |
|------|------|-------|
| patchelfSrc | Git checkout | https://github.com/NixOS/patchelf |
| nixpkgs | Git checkout | https://github.com/NixOS/nixpkgs |
| officialRelease | Boolean | false |
| system | String value | "i686-linux" |
Building Jobs
Build Recipes
Build jobs and build recipes for a jobset are specified in a text file written in the Nix language. The recipe is actually called a Nix expression in Nix parlance. By convention this file is often called release.nix.
The release.nix file is typically kept under version control, and the repository that contains it is one of the build inputs of the corresponding jobset--often called hydraConfig by convention. The repository for that file and the actual file name are specified on the web interface of Hydra under the Setup tab of the jobset's overview page, under the Nix expression heading. See, for example, the jobset overview page of the PatchELF project, and the corresponding Nix file.
Knowledge of the Nix language is recommended, but the example below should already give a good idea of how it works:
let
  pkgs = import <nixpkgs> {}; ①

  jobs = rec { ②

    tarball = ③
      pkgs.releaseTools.sourceTarball { ④
        name = "hello-tarball";
        src = <hello>; ⑤
        buildInputs = (with pkgs; [ gettext texLive texinfo ]);
      };

    build = ⑥
      { system ? builtins.currentSystem }: ⑦

      let pkgs = import <nixpkgs> { inherit system; }; in
      pkgs.releaseTools.nixBuild { ⑧
        name = "hello";
        src = jobs.tarball;
        configureFlags = [ "--disable-silent-rules" ];
      };
  };
in
jobs ⑨
This file shows what a release.nix file for GNU Hello would look like. GNU Hello is representative of many GNU and non-GNU free software projects:
- it uses the GNU Build System, namely GNU Autoconf and GNU Automake; for users, it means it can be installed using the usual ./configure && make install procedure;
- it uses Gettext for internationalization;
- it has a Texinfo manual, which can be rendered as PDF with TeX.
The file defines a jobset consisting of two jobs: tarball and build. It contains the following elements (referenced from the figure by numbers):
1. This defines a variable pkgs holding the set of packages provided by Nixpkgs. Since nixpkgs appears in angle brackets, there must be a build input of that name in the Nix search path. In this case, the web interface should show a nixpkgs build input, which is a checkout of the Nixpkgs source code repository; Hydra then adds this and other build inputs to the Nix search path when evaluating release.nix.

2. This defines a variable holding the two Hydra jobs--an attribute set in Nix.

3. This is the definition of the first job, named tarball. The purpose of this job is to produce a usable source code tarball.

4. The tarball job calls the sourceTarball function, which (roughly) runs autoreconf && ./configure && make dist on the checkout. The buildInputs attribute specifies additional software dependencies for the job. The package names used in buildInputs--e.g., texLive--are the names of the attributes corresponding to these packages in Nixpkgs, specifically in the all-packages.nix file. See the section entitled "Package Naming" in the Nixpkgs manual for more information.

5. The tarball job expects a hello build input to be available in the Nix search path. Again, this input is passed by Hydra and is meant to be a checkout of GNU Hello's source code repository.

6. This is the definition of the build job, whose purpose is to build Hello from the tarball produced above.

7. The build function takes one parameter, system, which should be a string defining the Nix system type--e.g., "x86_64-linux". Additionally, it refers to jobs.tarball, seen above. Hydra inspects the formal argument list of the function (here, the system argument) and passes it the corresponding parameter specified as a build input on Hydra's web interface. Thus, system must be defined as a build input of type string in Hydra, which could take one of several values. The question mark after system defines the default value for this argument, and is only useful when debugging locally.

8. The build job calls the nixBuild function, which unpacks the tarball, then runs ./configure && make && make check && make install.

9. Finally, the set of jobs is returned to Hydra, as a Nix attribute set.
Building from the Command Line
It is often useful to test a build recipe, for instance before it is actually used by Hydra, when testing changes, or when debugging a build issue. Since build recipes for Hydra jobsets are just plain Nix expressions, they can be evaluated using the standard Nix tools.
To evaluate the tarball job of the above example, just run:
$ nix-build release.nix -A tarball
However, doing this with the example as is will probably yield an error like this:
error: user-thrown exception: file `hello' was not found in the Nix search path (add it using $NIX_PATH or -I)
The error is self-explanatory. Assuming $HOME/src/hello points to a checkout of Hello, this can be fixed this way:
$ nix-build -I ~/src release.nix -A tarball
Similarly, the build job can be evaluated:
$ nix-build -I ~/src release.nix -A build
The build job reuses the result of the tarball job, rebuilding it only if it needs to.
Adding More Jobs
The example illustrates how to write the most basic jobs, tarball and build. In practice, much more can be done by using features readily provided by Nixpkgs or by creating new jobs as customizations of existing jobs.
For instance, a test coverage report for projects compiled with GCC can be automatically generated using the coverageAnalysis function provided by Nixpkgs instead of nixBuild. Back to our GNU Hello example, we can define a coverage job that produces an HTML code coverage report directly readable from the corresponding Hydra build page:
coverage =
{ system ? builtins.currentSystem }:
let pkgs = import nixpkgs { inherit system; }; in
pkgs.releaseTools.coverageAnalysis {
name = "hello";
src = jobs.tarball;
configureFlags = [ "--disable-silent-rules" ];
};
As can be seen, the only difference compared to build is the use of coverageAnalysis.
Nixpkgs provides many more build tools, including the ability to run builds in virtual machines, which can themselves run another GNU/Linux distribution, which allows for the creation of packages for these distributions. Please see the pkgs/build-support/release directory of Nixpkgs for more. The NixOS manual also contains information about whole-system testing in virtual machines.
Now, assume we want to build Hello with an old version of GCC, and with different configure flags. A new build_exotic job can be written that simply overrides the relevant arguments passed to nixBuild:
build_exotic =
{ system ? builtins.currentSystem }:
let
pkgs = import nixpkgs { inherit system; };
build = jobs.build { inherit system; };
in
pkgs.lib.overrideDerivation build (attrs: {
buildInputs = [ pkgs.gcc33 ];
preConfigure = "gcc --version";
configureFlags =
attrs.configureFlags ++ [ "--disable-nls" ];
});
The build_exotic job reuses build and overrides some of its arguments: it adds a dependency on GCC 3.3, a pre-configure phase that runs gcc --version, and adds the --disable-nls configure flag.
This customization mechanism is very powerful. For instance, it can be used to change the way Hello and all its dependencies--including the C library and compiler used to build it--are built. See the Nixpkgs manual for more.
Declarative Projects
see this chapter
Email Notifications
Hydra can send email notifications when the status of a build changes. This provides immediate feedback to maintainers or committers when a change causes build failures.
The feature can be turned on by adding the following line to hydra.conf:
email_notification = 1
By default, Hydra only sends email notifications if a previously successful build starts to fail. In order to force Hydra to send an email for each build (including e.g. successful or cancelled ones), the environment variable HYDRA_FORCE_SEND_MAIL can be declared:
services.hydra-dev.extraEnv.HYDRA_FORCE_SEND_MAIL = "1";
SASL authentication for the email address that's used to send notifications can be configured like this:
EMAIL_SENDER_TRANSPORT_sasl_username=hydra@example.org
EMAIL_SENDER_TRANSPORT_sasl_password=verysecret
EMAIL_SENDER_TRANSPORT_port=587
EMAIL_SENDER_TRANSPORT_ssl=starttls
Further information about these environment variables can be found in the MetaCPAN documentation of Email::Sender::Manual::QuickStart.
It's recommended not to put this in services.hydra-dev.extraEnv, as this would leak the secrets into the Nix store. Instead, it should be written into an environment file and configured like this:
{ systemd.services.hydra-notify = {
serviceConfig.EnvironmentFile = "/etc/secrets/hydra-mail-cfg";
};
}
The simplest approach to enable email notifications is to use the ssmtp package, which simply hands off the emails to another SMTP server. For details on how to configure ssmtp, see the documentation for the networking.defaultMailServer option. To use ssmtp for the Hydra email notifications, add it to the path option of the Hydra services in your /etc/nixos/configuration.nix file:
systemd.services.hydra-queue-runner.path = [ pkgs.ssmtp ];
systemd.services.hydra-server.path = [ pkgs.ssmtp ];
Gitea Integration
Hydra can notify Git servers (such as GitLab, GitHub or Gitea) about the result of a build from a Git checkout.
This section describes how it can be implemented for gitea, but the approach for gitlab is analogous:
- Add the authorization token to a file which only users in the hydra group can read (see the section on including files for more information):

  <gitea_authorization>
  your_username=your_token
  </gitea_authorization>
- Include the file in your hydra.conf like this:

  services.hydra-dev.extraConfig = ''
    Include /path/to/secret/file
  '';
- For a jobset with a Git input which points to a gitea instance, add the following additional inputs:

  | Type | Name | Value |
  |------|------|-------|
  | String value | gitea_repo_name | Name of the repository to build |
  | String value | gitea_repo_owner | Owner of the repository |
  | String value | gitea_status_repo | Name of the Git checkout input |
  | String value | gitea_http_url | Public URL of gitea (optional) |
Hydra Jobs
Derivation Attributes
Hydra stores the following job attributes in its database:
- nixName - the Derivation's name attribute
- system - the Derivation's system attribute
- drvPath - the Derivation's path in the Nix store
- outputs - a JSON dictionary of output names and their store paths
Meta fields
description
-meta.description
, a stringlicense
- a comma separated list of license names frommeta.license
, expected to be a list of attribute sets with an attribute namedshortName
, ex:[ { shortName = "licensename"} ]
.homepage
-meta.homepage
, a stringmaintainers
- a comma separated list of maintainer email addresses frommeta.maintainers
, expected to be a list of attribute sets with an attribute namedemail
, ex:[ { email = "alice@example.com"; } ]
.schedulingPriority
-meta.schedulingPriority
, an integer. Default: 100. Slightly prioritizes this job over other jobs within this jobset.timeout
-meta.timeout
, an integer. Default: 36000. Number of seconds this job must complete within.maxSilent
-meta.maxSilent
, an integer. Default: 7200. Number of seconds of no output on stderr / stdout before considering the job failed.isChannel
-meta.isHydraChannel
, bool. Default: false. Deprecated.
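The flattening of `meta.license` and `meta.maintainers` into comma-separated strings can be sketched as follows. This is a hypothetical Python illustration of the stated behavior, not Hydra's actual (Perl) implementation, and the exact separator is an assumption:

```python
def flatten_meta(meta):
    """Flatten list-valued meta fields the way the manual describes:
    licenses become a comma-separated list of shortNames, and
    maintainers a comma-separated list of email addresses."""
    licenses = ",".join(l["shortName"] for l in meta.get("license", []))
    maintainers = ",".join(m["email"] for m in meta.get("maintainers", []))
    return {"license": licenses, "maintainers": maintainers}

# Hypothetical example values mirroring the attribute-set shapes above.
meta = {
    "license": [{"shortName": "mit"}, {"shortName": "bsd3"}],
    "maintainers": [{"email": "alice@example.com"}],
}
print(flatten_meta(meta))
```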
Plugins
This chapter describes all plugins present in Hydra.
Inputs
Hydra supports the following inputs:
- Bazaar input
- Darcs input
- Git input
- Mercurial input
- Path input
Bitbucket pull requests
Create jobs based on open Bitbucket pull requests.
Configuration options
bitbucket_authorization.<owner>
Bitbucket status
Sets Bitbucket CI status.
Configuration options
enable_bitbucket_status
bitbucket.username
bitbucket.password
CircleCI Notification
Sets CircleCI status.
Configuration options
circleci.[].jobs
circleci.[].vcstype
circleci.[].token
Compress build logs
Compresses build logs after a build with bzip2.
Configuration options
compress_build_logs
Enable log compression
Example
compress_build_logs = 1
Coverity Scan
Uploads source code to Coverity Scan.
Configuration options
coverityscan.[].jobs
coverityscan.[].project
coverityscan.[].email
coverityscan.[].token
coverityscan.[].scanurl
Email notification
Sends email notification if build status changes.
Configuration options
email_notification
Gitea status
Sets Gitea CI status.
Configuration options
gitea_authorization.<repo-owner>
GitHub pulls
Create jobs based on open GitHub pull requests.
Configuration options
github_authorization.<repo-owner>
Github refs
Hydra plugin for retrieving the list of references (branches or tags) from GitHub following a certain naming scheme.
Configuration options
github_endpoint
(defaults to https://api.github.com)
github_authorization.<repo-owner>
Github status
Sets GitHub CI status.
Configuration options
githubstatus.[].jobs
Regular expression for jobs to match in the format `project:jobset:job`. This field is required and has no default value.
githubstatus.[].excludeBuildFromContext
Don't include the build's ID in the status.
githubstatus.[].context
Context shown in the status
githubstatus.[].useShortContext
Renames `continuous-integration/hydra` to `ci/hydra` and removes the PR suffix from the name. Useful to see the full path in GitHub for long job names.
githubstatus.[].description
Description shown in the status. Defaults to `Hydra build #<build-id> of <jobname>`.
githubstatus.[].inputs
The input which corresponds to the github repo/rev whose status we want to report. Can be repeated.
githubstatus.[].authorization
Verbatim contents of the Authorization header. See GitHub documentation for details. This field is only used if `github_authorization.<repo-owner>` is not set.
Example
<githubstatus>
jobs = test:pr:build
## This example will match all jobs
#jobs = .*
inputs = src
authorization = Bearer gha-secret😱secret😱secret😱
excludeBuildFromContext = 1
</githubstatus>
GitLab pulls
Create jobs based on open GitLab pull requests.
Configuration options
gitlab_authorization.<projectId>
Gitlab status
Sets Gitlab CI status.
Configuration options
gitlab_authorization.<projectId>
InfluxDB notification
Writes InfluxDB events when a build finishes.
Configuration options
influxdb.url
influxdb.db
RunCommand
Runs a shell command when the build is finished.
See The RunCommand Plugin for more information.
Configuration options:
runcommand.[].job
Matcher for jobs in the format `project:jobset:job` (not a regular expression; see The RunCommand Plugin). Defaults to `*:*:*`.
runcommand.[].command
Command to run. Can use the `$HYDRA_JSON` environment variable to access information about the build.
Example
<runcommand>
job = myProject:*:*
command = cat $HYDRA_JSON > /tmp/hydra-output
</runcommand>
S3 backup
Upload nars and narinfos to S3 storage.
Configuration options
s3backup.[].jobs
s3backup.[].compression_type
s3backup.[].name
s3backup.[].prefix
Slack notification
Sends Slack notifications about build results.
Configuration options
slack.[].jobs
slack.[].force
slack.[].url
SoTest
Schedules hardware tests on a SoTest controller.
This plugin submits tests to a SoTest controller for all builds that contain two products matching the subtypes "sotest-binaries" and "sotest-config".
Build products are declared by the file "nix-support/hydra-build-products" relative to the root of a build, in the following format:
file sotest-binaries /nix/store/…/binaries.zip
file sotest-config /nix/store/…/config.yaml
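Each line of this file declares one build product: a type (here `file`), a subtype, and a store path. A minimal parser sketch of that line format (hypothetical Python, for illustration only; real product lines may carry additional fields not shown here):

```python
def parse_build_products(text):
    """Parse nix-support/hydra-build-products lines of the form
    "<type> <subtype> <path>", e.g.
    "file sotest-binaries /nix/store/.../binaries.zip"."""
    products = []
    for line in text.splitlines():
        if not line.strip():
            continue
        # Split into exactly three fields; the path may contain no spaces here.
        type_, subtype, path = line.split(None, 2)
        products.append({"type": type_, "subtype": subtype, "path": path})
    return products

# Hypothetical store paths, standing in for real build outputs.
example = """\
file sotest-binaries /nix/store/aaaa-example/binaries.zip
file sotest-config /nix/store/aaaa-example/config.yaml
"""
for p in parse_build_products(example):
    print(p["subtype"], p["path"])
```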
Configuration options
sotest.[].uri
URL of the controller, defaults to https://opensource.sotest.io
sotest.[].authfile
File containing username:password
sotest.[].priority
Optional priority setting.
Example
<sotest>
uri = https://sotest.example
authfile = /var/lib/hydra/sotest.auth
priority = 1
</sotest>
Declarative Projects
Hydra supports declaratively configuring a project's jobsets. This configuration can be done statically, or generated by a build job.
Note
Hydra will treat the project's declarative input as a static definition if and only if the spec file contains a dictionary of dictionaries. If the value of any key in the spec is not a dictionary, it will treat the spec as a generated declarative spec.
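The rule above can be expressed as a small predicate. This is a Python sketch of the stated decision rule, not Hydra's actual code:

```python
def is_static_spec(spec):
    """A declarative spec is treated as static iff it is a dictionary
    whose values are all dictionaries; otherwise Hydra treats it as a
    generated declarative spec."""
    return isinstance(spec, dict) and all(
        isinstance(v, dict) for v in spec.values()
    )

static = {"nixpkgs": {"enabled": 1}, "nixos": {"enabled": 1}}
generated = {"enabled": 1, "nixexprpath": "release.nix"}
print(is_static_spec(static))     # True: a dictionary of dictionaries
print(is_static_spec(generated))  # False: top-level values are not dictionaries
```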
Static, Declarative Projects
Hydra supports declarative projects, where jobsets are configured from a static JSON document in a repository.
To configure a static declarative project, take the following steps:
-
Create a Hydra-fetchable source like a Git repository or local path.
-
In that source, create a file called `spec.json`, and add the specification for all of the jobsets. Each key is a jobset name and each value is that jobset's specification. For example:

{
  "nixpkgs": {
    "enabled": 1,
    "hidden": false,
    "description": "Nixpkgs",
    "nixexprinput": "nixpkgs",
    "nixexprpath": "pkgs/top-level/release.nix",
    "checkinterval": 300,
    "schedulingshares": 100,
    "enableemail": false,
    "enable_dynamic_run_command": false,
    "emailoverride": "",
    "keepnr": 3,
    "inputs": {
      "nixpkgs": {
        "type": "git",
        "value": "git://github.com/NixOS/nixpkgs.git master",
        "emailresponsible": false
      }
    }
  },
  "nixos": {
    "enabled": 1,
    "hidden": false,
    "description": "NixOS: Small Evaluation",
    "nixexprinput": "nixpkgs",
    "nixexprpath": "nixos/release-small.nix",
    "checkinterval": 300,
    "schedulingshares": 100,
    "enableemail": false,
    "enable_dynamic_run_command": false,
    "emailoverride": "",
    "keepnr": 3,
    "inputs": {
      "nixpkgs": {
        "type": "git",
        "value": "git://github.com/NixOS/nixpkgs.git master",
        "emailresponsible": false
      }
    }
  }
}
-
Create a new project, and set the project's declarative input type, declarative input value, and declarative spec file to point to the source and JSON file you created in step 2.
Hydra will create a special jobset named `.jobsets`. When the `.jobsets` jobset is evaluated, this static specification will be used for configuring the rest of the project's jobsets.
Generated, Declarative Projects
Hydra also supports generated declarative projects, where jobsets are configured automatically from specification files instead of being managed through the UI. A jobset specification is a JSON object containing the configuration of the jobset, for example:
{
"enabled": 1,
"hidden": false,
"description": "js",
"nixexprinput": "src",
"nixexprpath": "release.nix",
"checkinterval": 300,
"schedulingshares": 100,
"enableemail": false,
"enable_dynamic_run_command": false,
"emailoverride": "",
"keepnr": 3,
"inputs": {
"src": { "type": "git", "value": "git://github.com/shlevy/declarative-hydra-example.git", "emailresponsible": false },
"nixpkgs": { "type": "git", "value": "git://github.com/NixOS/nixpkgs.git release-16.03", "emailresponsible": false }
}
}
To configure a declarative project, take the following steps:
-
Create a jobset repository in the normal way (e.g. a git repo with a `release.nix` file, any other needed helper files, and taking any kind of hydra input), but without adding it to the UI. The nix expression of this repository should contain a single job, named `jobsets`. The output of the `jobsets` job should be a JSON file containing an object of jobset specifications. Each member of the object will become a jobset of the project, configured by the corresponding jobset specification.
-
In some hydra-fetchable source (potentially, but not necessarily, the same repo you created in step 1), create a JSON file containing a jobset specification that points to the jobset repository you created in the first step, specifying any needed inputs (e.g. nixpkgs) as necessary.
-
In the project creation/edit page, set declarative input type, declarative input value, and declarative spec file to point to the source and JSON file you created in step 2.
Hydra will create a special jobset named `.jobsets`, which whenever evaluated will go through the steps above in reverse order:
-
Hydra will fetch the input specified by the declarative input type and value.
-
Hydra will use the configuration given in the declarative spec file as the jobset configuration for this evaluation. In addition to any inputs specified in the spec file, hydra will also pass the `declInput` argument corresponding to the input fetched in step 1 and the `projectName` argument containing the project's name.
-
As normal, hydra will build the jobs specified in the jobset repository, which in this case is the single `jobsets` job. When that job completes, hydra will read the created jobset specifications and create corresponding jobsets in the project, disabling any jobsets that used to exist but are not present in the current spec.
The RunCommand Plugin
Hydra supports executing a program after certain builds finish. This behavior is disabled by default.
Hydra executes these commands under the `hydra-notify` service.
Static Commands
Configure specific commands to execute after the specified matching job finishes.
Configuration
runcommand.[].job
A matcher for jobs to match in the format `project:jobset:job`. Defaults to `*:*:*`.
Note: This matcher format is not a regular expression. The `*` is a wildcard for that entire section; partial matches are not supported.
runcommand.[].command
Command to run. Can use the `$HYDRA_JSON` environment variable to access information about the build.
Example
<runcommand>
job = myProject:*:*
command = cat $HYDRA_JSON > /tmp/hydra-output
</runcommand>
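The matcher semantics described above (whole-section wildcards, no partial matches) can be sketched as follows. This is a hypothetical Python illustration of the documented rule, not the plugin's actual implementation:

```python
def job_matches(matcher, job):
    """Match a job name like "myProject:master:build" against a matcher
    like "myProject:*:*". Each of the three colon-separated sections
    must be equal, or the matcher's section must be exactly "*";
    partial matches such as "my*" are not supported."""
    m_parts = matcher.split(":")
    j_parts = job.split(":")
    if len(m_parts) != 3 or len(j_parts) != 3:
        return False
    return all(m == "*" or m == j for m, j in zip(m_parts, j_parts))

print(job_matches("myProject:*:*", "myProject:master:build"))  # True
print(job_matches("my*:*:*", "myProject:master:build"))        # False: "my*" is not a wildcard
```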
Dynamic Commands
Hydra can optionally run RunCommand hooks defined dynamically by the jobset. In order to enable dynamic commands, you must enable this feature in your `hydra.conf`, as well as in the parent project and jobset configuration.
Behavior
Hydra will execute any program defined under the `runCommandHook` attribute set. These jobs must have a single output named `out`, and that output must be an executable file located directly at `$out`.
Security Properties
Safely deploying dynamic commands requires careful design of your Hydra jobs. Allowing arbitrary users to define attributes in your top level attribute set will allow that user to execute code on your Hydra.
If a jobset has dynamic commands enabled, you must ensure only trusted users can define top level attributes.
Configuration
dynamicruncommand.enable
Set to 1 to enable dynamic RunCommand program execution.
Example
In your Hydra configuration, specify:
<dynamicruncommand>
enable = 1
</dynamicruncommand>
Then create a job named `runCommandHook.example` in your jobset:
{ pkgs, ... }: {
runCommandHook = {
recurseForDerivations = true;
example = pkgs.writeScript "run-me" ''
#!${pkgs.runtimeShell}
${pkgs.jq}/bin/jq . "$HYDRA_JSON"
'';
};
}
After the `runCommandHook.example` build finishes, that script will execute.
Using the external API
To be able to create integrations with other services, Hydra exposes an external API that you can manage projects with.
The API is accessed over HTTP(s) where all data is sent and received as JSON.
Creating resources requires the caller to be authenticated, while retrieving resources does not.
The API does not have a separate URL structure for its endpoints. Instead you request the pages of the web interface as `application/json` to use the API.
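For example, the same page that serves the web interface can be requested as JSON simply by setting the `Accept` header. A sketch using Python's standard library, which constructs (but does not send) such a request:

```python
import urllib.request

# Build a GET request for a Hydra page, asking for JSON instead of HTML.
req = urllib.request.Request(
    "https://hydra.nixos.org/project/hydra",
    headers={"Accept": "application/json"},
)

print(req.get_method())          # GET
print(req.get_header("Accept"))  # application/json

# To actually perform the call (requires network access):
# import json
# with urllib.request.urlopen(req) as resp:
#     data = json.load(resp)
```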
List projects
To list all the projects of the Hydra install:
GET /
Accept: application/json
This will give you a list of projects, where each project contains general information and a list of its job sets.
Example
curl -i -H 'Accept: application/json' \
https://hydra.nixos.org
Note: this response is truncated
GET https://hydra.nixos.org/
HTTP/1.1 200 OK
Content-Type: application/json
[
{
"displayname": "Acoda",
"name": "acoda",
"description": "Acoda is a tool set for automatic data migration along an evolving data model",
"enabled": 0,
"owner": "sander",
"hidden": 1,
"jobsets": [
"trunk"
]
},
{
"displayname": "cabal2nix",
"name": "cabal2nix",
"description": "Convert Cabal files into Nix build instructions",
"enabled": 0,
"owner": "simons@cryp.to",
"hidden": 1,
"jobsets": [
"master"
]
}
]
Get a single project
To get a single project by identifier:
GET /project/:project-identifier
Accept: application/json
Example
curl -i -H 'Accept: application/json' \
https://hydra.nixos.org/project/hydra
GET https://hydra.nixos.org/project/hydra
HTTP/1.1 200 OK
Content-Type: application/json
{
"description": "Hydra, the Nix-based continuous build system",
"hidden": 0,
"displayname": "Hydra",
"jobsets": [
"hydra-master",
"hydra-ant-logger-trunk",
"master",
"build-ng"
],
"name": "hydra",
"enabled": 1,
"owner": "eelco"
}
Get a single job set
To get a single job set by identifier:
GET /jobset/:project-identifier/:jobset-identifier
Content-Type: application/json
Example
curl -i -H 'Accept: application/json' \
https://hydra.nixos.org/jobset/hydra/build-ng
GET https://hydra.nixos.org/jobset/hydra/build-ng
HTTP/1.1 200 OK
Content-Type: application/json
{
"errormsg": "evaluation failed due to signal 9 (Killed)",
"fetcherrormsg": null,
"nixexprpath": "release.nix",
"nixexprinput": "hydraSrc",
"emailoverride": "rob.vermaas@gmail.com, eelco.dolstra@logicblox.com",
"jobsetinputs": {
"officialRelease": {
"jobsetinputalts": [
"false"
]
},
"hydraSrc": {
"jobsetinputalts": [
"https://github.com/NixOS/hydra.git build-ng"
]
},
"nixpkgs": {
"jobsetinputalts": [
"https://github.com/NixOS/nixpkgs.git release-14.12"
]
}
},
"enabled": 0
}
List evaluations
To list the evaluations of a job set by identifier:
GET /jobset/:project-identifier/:jobset-identifier/evals
Content-Type: application/json
Example
curl -i -H 'Accept: application/json' \
https://hydra.nixos.org/jobset/hydra/build-ng/evals
Note: this response is truncated
GET https://hydra.nixos.org/jobset/hydra/build-ng/evals
HTTP/1.1 200 OK
Content-Type: application/json
{
"evals": [
{
"jobsetevalinputs": {
"nixpkgs": {
"dependency": null,
"type": "git",
"value": null,
"uri": "https://github.com/NixOS/nixpkgs.git",
"revision": "f60e48ce81b6f428d072d3c148f6f2e59f1dfd7a"
},
"hydraSrc": {
"dependency": null,
"type": "git",
"value": null,
"uri": "https://github.com/NixOS/hydra.git",
"revision": "48d6f0de2ab94f728d287b9c9670c4d237e7c0f6"
},
"officialRelease": {
"dependency": null,
"value": "false",
"type": "boolean",
"uri": null,
"revision": null
}
},
"hasnewbuilds": 1,
"builds": [
24670686,
24670684,
24670685,
24670687
],
"id": 1213758
}
],
"first": "?page=1",
"last": "?page=1"
}
Get a single build
To get a single build by its id:
GET /build/:build-id
Content-Type: application/json
Example
curl -i -H 'Accept: application/json' \
https://hydra.nixos.org/build/24670686
GET /build/24670686
HTTP/1.1 200 OK
Content-Type: application/json
{
"job": "tests.api.x86_64-linux",
"jobsetevals": [
1213758
],
"buildstatus": 0,
"buildmetrics": null,
"project": "hydra",
"system": "x86_64-linux",
"priority": 100,
"releasename": null,
"starttime": 1439402853,
"nixname": "vm-test-run-unnamed",
"timestamp": 1439388618,
"id": 24670686,
"stoptime": 1439403403,
"jobset": "build-ng",
"buildoutputs": {
"out": {
"path": "/nix/store/lzrxkjc35mhp8w7r8h82g0ljyizfchma-vm-test-run-unnamed"
}
},
"buildproducts": {
"1": {
"path": "/nix/store/lzrxkjc35mhp8w7r8h82g0ljyizfchma-vm-test-run-unnamed",
"defaultpath": "log.html",
"type": "report",
"sha256hash": null,
"filesize": null,
"name": "",
"subtype": "testlog"
}
},
"finished": 1
}
Webhooks
Hydra can be notified by GitHub's webhooks to trigger a new evaluation when a jobset has a GitHub repo in its input.
To set up a GitHub webhook, go to `https://github.com/<yourhandle>/<yourrepo>/settings` and in the `Webhooks` tab click on `Add webhook`.
- In `Payload URL` fill in `https://<your-hydra-domain>/api/push-github`.
- In `Content type` switch to `application/json`.
- The `Secret` field can stay empty.
- For `Which events would you like to trigger this webhook?` keep the default option `Just the push event.`.
Then add the hook with `Add webhook`.
Monitoring Hydra
Webserver
The webserver exposes Prometheus metrics for the webserver itself at `/metrics`.
Queue Runner
The queue runner's status is exposed at `/queue-runner-status`:
$ curl --header "Accept: application/json" http://localhost:63333/queue-runner-status
... JSON payload ...
Notification Daemon
The `hydra-notify` process can expose Prometheus metrics for plugin execution. See hydra-notify's Prometheus service for details on enabling and configuring the exporter.
The notification exporter exposes metrics on a per-plugin, per-event-type basis: execution durations, frequency, successes, and failures.
Diagnostic Dump
The notification daemon can also dump its metrics to stderr whether or not the exporter is configured. This is particularly useful for cases where metrics data is needed but the exporter was not enabled.
To trigger this diagnostic dump, send a Postgres notification on the `hydra_notify_dump_metrics` channel with no payload. See Re-sending a notification.
Hacking
This section provides some notes on how to hack on Hydra. To get the latest version of Hydra from GitHub:
$ git clone git://github.com/NixOS/hydra.git
$ cd hydra
To enter a shell in which all environment variables (such as `PERL5LIB`) and dependencies can be found:
$ nix-shell
To build Hydra, you should then do:
[nix-shell]$ ./bootstrap
[nix-shell]$ configurePhase
[nix-shell]$ make
You can start a local database, the webserver, and other components with foreman:
$ foreman start
You can run just the Hydra web server in your source tree as follows:
$ ./src/script/hydra-server
You can run Hydra's test suite with the following:
[nix-shell]$ make check
[nix-shell]$ # to run as many tests as you have cores:
[nix-shell]$ make check YATH_JOB_COUNT=$NIX_BUILD_CORES
[nix-shell]$ # or run yath directly:
[nix-shell]$ yath test
[nix-shell]$ # to run as many tests as you have cores:
[nix-shell]$ yath test -j $NIX_BUILD_CORES
When using `yath` instead of `make check`, ensure you have run `make` in the root of the repository at least once.
Warning: Currently, the tests can fail if run with high parallelism due to an issue in `Test::PostgreSQL` causing database ports to collide.
Working on the Manual
By default, `foreman start` runs mdbook in "watch" mode. mdbook listens at http://localhost:63332/, and will reload the page every time you save.
Building
To build Hydra and its dependencies:
$ nix-build release.nix -A build.x86_64-linux
Development Tasks
Connecting to the database
Assuming you're running the default configuration with `foreman start`, open an interactive session with Postgres via:
$ psql --host localhost --port 64444 hydra
Running the builder locally
For `hydra-queue-runner` to successfully build locally, your development user will need to be "trusted" by your Nix store. Add yourself to the `trusted_users` option of `/etc/nix/nix.conf`.
On NixOS:
{
nix.settings.trusted-users = [ "YOURUSER" ];
}
Off NixOS, change `/etc/nix/nix.conf`:
trusted-users = root YOURUSERNAME
hydra-notify and Hydra's Notifications
Hydra uses a notification-based subsystem to implement some features and support plugin development. Notifications are sent to `hydra-notify`, which is responsible for dispatching each notification to each plugin.
Notifications are passed from `hydra-queue-runner` to `hydra-notify` through Postgres's `NOTIFY` and `LISTEN` feature.
Notification Types
Note that the notification format is subject to change and should not be considered an API. Integrate with `hydra-notify` instead of listening directly.
cached_build_finished
- Payload: Exactly two values, tab separated: The ID of the evaluation which contains the finished build, followed by the ID of the finished build.
- When: Issued directly after an evaluation completes, when that evaluation includes this finished build.
- Delivery Semantics: At most once per evaluation.
cached_build_queued
- Payload: Exactly two values, tab separated: The ID of the evaluation which contains the queued build, followed by the ID of the queued build.
- When: Issued directly after an evaluation completes, when that evaluation includes this queued build.
- Delivery Semantics: At most once per evaluation.
build_queued
- Payload: Exactly one value, the ID of the build.
- When: Issued after the transaction inserting the build into the database is committed. One notification is sent per new build.
- Delivery Semantics: Ephemeral. `hydra-notify` must be running to react to this event. No record of this event is stored.
build_started
- Payload: Exactly one value, the ID of the build.
- When: Issued directly before building happens, and only if the derivation's outputs cannot be substituted.
- Delivery Semantics: Ephemeral. `hydra-notify` must be running to react to this event. No record of this event is stored.
step_finished
- Payload: Three values, tab separated: The ID of the build which the step is part of, the step number, and the path on disk to the log file.
- When: Issued directly after a step completes, regardless of success. Is not issued if the step's derivation's outputs can be substituted.
- Delivery Semantics: Ephemeral. `hydra-notify` must be running to react to this event. No record of this event is stored.
build_finished
- Payload: At least one value, tab separated: The ID of the build which finished, followed by IDs of all of the builds which also depended upon this build.
- When: Issued directly after a build completes, regardless of success and substitutability.
- Delivery Semantics: At least once.
`hydra-notify` will call `buildFinished` for each plugin in two ways:
-
The `builds` table's `notificationspendingsince` column stores when the build finished. On startup, `hydra-notify` will query all builds with a non-null `notificationspendingsince` value and treat each row as a received `build_finished` event.
-
Additionally, `hydra-notify` subscribes to `build_finished` events and processes them in real time.

After processing, the row's `notificationspendingsince` column is set to null.
It is possible for subsequent deliveries of the same `build_finished` data to imply different outcomes: for example, if the build fails, is restarted, and then succeeds, the `build_finished` events will be delivered at least twice, once for the failure and then once for the success.
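Since payloads are tab-separated values, a `build_finished` payload can be unpacked as sketched below (hypothetical listener-side Python, for illustration; plugins should integrate with `hydra-notify` rather than parse payloads themselves):

```python
def parse_build_finished(payload):
    """Split a build_finished payload: the first tab-separated field is
    the ID of the finished build, and any remaining fields are the IDs
    of builds that also depended on it."""
    ids = [int(field) for field in payload.split("\t")]
    return {"build": ids[0], "dependents": ids[1:]}

# Build IDs taken from the API examples earlier in this manual.
print(parse_build_finished("24670686\t24670684\t24670685"))
```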
eval_started
- Payload: Exactly two values, tab separated: an opaque trace ID representing this evaluation, and the ID of the jobset.
- When: At the beginning of the evaluation phase for the jobset, before any work is done.
- Delivery Semantics: Ephemeral. `hydra-notify` must be running to react to this event. No record of this event is stored.
eval_added
- Payload: Exactly three values, tab separated: an opaque trace ID representing this evaluation, the ID of the jobset, and the ID of the JobsetEval record.
- When: After the evaluator fetches inputs and completes the evaluation successfully.
- Delivery Semantics: Ephemeral. `hydra-notify` must be running to react to this event. No record of this event is stored.
eval_cached
- Payload: Exactly three values: an opaque trace ID representing this evaluation, the ID of the jobset, and the ID of the previous identical evaluation.
- When: After the evaluator fetches inputs, if none of the inputs changed.
- Delivery Semantics: Ephemeral. `hydra-notify` must be running to react to this event. No record of this event is stored.
eval_failed
- Payload: Exactly two values: an opaque trace ID representing this evaluation, and the ID of the jobset.
- When: After fetching any input fails, or any other evaluation error occurs.
- Delivery Semantics: Ephemeral. `hydra-notify` must be running to react to this event. No record of this event is stored.
Development Notes
Re-sending a notification
Notifications can be experimentally re-sent on the command line with `psql`, with `NOTIFY $notificationname, '$payload'`.
Authors
- Eelco Dolstra, Delft University of Technology, Department of Software Technology
- Rob Vermaas, Delft University of Technology, Department of Software Technology
- Eelco Visser, Delft University of Technology, Department of Software Technology
- Ludovic Courtès