DevOS

⚠ Advisory ⚠

DevOS requires the flakes feature available via an experimental branch of nix. Until nix 2.4 is released, this project should be considered unstable.


The goal: make an awesome template for NixOS users, with consideration for common tools like home-manager, devshell, and more.

Why flakes?

Flakes are a part of an explicit push to improve Nix's UX, and have become an integral part of that effort.

They also make Nix expressions easier to distribute and reuse, with convenient flake references for building or using packages, modules, and whole systems.

Getting Started

Check out the guide to get up and running. Also, have a look at flake.nix. If anything is not immediately discoverable via digga's mkFlake, please file a bug report.

Status: Beta

Although this project has already matured quite a bit, especially through the recent factoring out of digga, a fair amount of API polishing is still expected. There are unstable versions (0.x.x) to help users keep track of changes and progress, and a develop branch for the brave 😜

In the Wild


This work does not reinvent the wheel. It stands on the shoulders of the following giants:

:onion: — like the layers of an onion

:family: — like family


Inspiration & Art


The divnix org is an open space that spontaneously formed out of "the Nix". It is really just a place where otherwise unrelated people a) get together and b) get stuff done.

It's a place to stop "geeking out in isolation" (or within company boundaries), to experiment and learn together, and to iterate quickly on best practices. That's what it is.

It might eventually become a non-profit, if that's not too complicated, or be dissolved once those goals are sufficiently upstreamed into "the Nix".


DevOS is licensed under the MIT License.

Quick Start

The only dependency is nix, so make sure you have it installed.

Get the Template

Here is a snippet that will get you the template without the git history:

nix-shell -p cachix --run "cachix use nrdxp"

nix-shell -A shell \
  --run "bud get main"

cd devos


git init
git add .
git commit -m init

This will place you in a new folder named devos with git initialized, and a nix-shell that provides all the dependencies, including the unstable nix version required.

In addition, the binary cache is added for faster deployment.

  • Flakes ignore files that have not been added to git, so be sure to stage new files before building the system.
  • You can choose to simply clone the repo with git if you want to follow upstream changes.
  • If the nix-shell -p cachix --run "cachix use nrdxp" line doesn't work you can try with sudo: sudo nix-shell -p cachix --run "cachix use nrdxp"

Next Steps:


Building and burning an installable iso for hosts/bootstrap.nix is as simple as:

bud build bootstrap bootstrapIso
sudo -E $(which bud) burn

This works for any host.

ISO image nix store & cache

The iso image holds the store of the live environment and also acts as a binary cache to the installer. To speed things up considerably, the image already includes all flake inputs as well as the devshell closures.

While you could provision any machine with a single stick, a custom-made iso for the host you want to install DevOS to maximises those local cache hits.

For hosts that don't differ too much, a single usb stick might be fine, whereas when there are bigger differences, a custom-made usb stick will be considerably faster.


This will help you bootstrap a bare host with the help of the bespoke iso live installer.

Note: nothing prevents you from executing the bootstrapping process remotely. See below.

Once your target host has booted into the live iso, you need to partition and format your disk according to the official manual.

Mount partitions

Then properly mount the formatted partitions at /mnt, so that you can install your system to those new partitions.

Mount nixos partition to /mnt and — for UEFI — boot partition to /mnt/boot:

$ mount /dev/disk/by-label/nixos /mnt
$ mkdir -p /mnt/boot && mount /dev/disk/by-label/boot /mnt/boot # UEFI only
$ swapon /dev/disk/by-label/swap

Add some extra space to the store. In the iso, it's running on a tmpfs off your RAM:

$ mkdir -p /mnt/tmpstore/{work,store}
$ mount -t overlay overlay -olowerdir=/nix/store,upperdir=/mnt/tmpstore/store,workdir=/mnt/tmpstore/work /nix/store


Install off of a copy of devos from the time the iso was built:

$ cd /iso/devos
$ nixos-install --flake .#NixOS

Notes of interest

Remote access to the live installer

The iso live installer comes preconfigured with a network configuration which announces its hostname via MulticastDNS as hostname.local, that is bootstrap.local in the iso example.

In the rare case that MulticastDNS is not available or is turned off in your network, there is a static link-local IPv6 address configured: fe80::47 (a mnemonic from each letter's position in the English alphabet: n=14, i=9, x=24; 47 = n+i+x).

Provided that you have added your public key to the authorized keys of the root user (hint: deploy-rs needs passwordless sudo access):

{ ... }: {
  users.users.root.openssh.authorizedKeys.keyFiles = [
    # path to your public key file goes here
  ];
}

You can then ssh into the live installer through one of the following options:

ssh root@bootstrap.local

ssh root@fe80::47%eno1  # where eno1 is your network interface on which you are linked to the target

Note: the static link-local IPv6 address and MulticastDNS is only configured on the live installer. If you wish to enable MulticastDNS for your environment, you ought to configure that in a regular profile.

EUI-64 LLA & Host Identity

The iso's IPv6 Link Local Address (LLA) is configured with a static 64-bit Extended Unique Identifier (EUI-64) that is derived from the host interface's Media Access Control (MAC) address.

After a little while (a few seconds), you can remotely discover this unique, host-specific address over NDP, for example with:

ip -6 neigh show # also shows fe80::47

This LLA is stable for the host, unless you need to swap that particular network card. Under this reservation, though, you may use this EUI-64 to wire up a specific (cryptographic) host identity.

From NixOS

Generate Configuration

Assuming you're happy with your existing partition layout, you can generate a basic NixOS configuration for your system using:

bud up

This will make a new file hosts/up-$(hostname).nix, which you can edit to your liking.

You must then add a host to nixos.hosts in flake.nix:

  nixos.hosts = {
    NixOS = {
      modules = ./hosts/NixOS.nix;
    };
  };

Make sure your i18n.defaultLocale and time.timeZone are set properly for your region. Keep in mind that networking.hostName will be automatically set to the name of your host.
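For example (the values below are placeholders, pick your own):

```nix
{ ... }: {
  time.timeZone = "Europe/Berlin";      # placeholder, set your region
  i18n.defaultLocale = "en_US.UTF-8";   # placeholder

  # networking.hostName is derived from the host file's name,
  # so there is no need to set it here.
}
```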

Now might be a good time to read the docs on suites and profiles and add or create any that you need.


While the up sub-command is provided as a convenience to quickly set up and install a "fresh" NixOS system on current hardware, committing these files is discouraged.

They are placed in the git staging area automatically because they would be invisible to the flake otherwise, but it is best to move what you need from them directly into a host module of your own making, and commit that instead.


Once you're ready to deploy hosts/my-host.nix:

bud my-host switch

This calls nixos-rebuild with sudo to build and install your configuration.

  • Instead of switch, you can pass build, test, boot, etc just as with nixos-rebuild.

Key Concepts

Key concepts are derived from digga. Please refer to its docs for more details.

This section is dedicated to helping you develop a more hands-on understanding of them.


Nix flakes contain an output called nixosConfigurations declaring an attribute set of valid NixOS systems. To simplify the management and creation of these hosts, devos automatically imports every .nix file inside this directory into the mentioned attribute set, applying the project's defaults to each. The only hard requirement is that each file contains a valid NixOS module.

As an example, a file hosts/system.nix or hosts/system/default.nix will be available via the flake output nixosConfigurations.system. You can have as many hosts as you want and all of them will be automatically imported based on their name.

For each host, the configuration automatically sets the networking.hostName attribute to the folder name, or the name of the file minus the .nix extension. This is for convenience, since nixos-rebuild automatically searches for a configuration matching the current system's hostname if one is not specified explicitly.
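As an illustration (librem is just an example host name), this defaulting is equivalent to writing the following in the host module yourself:

```nix
# hosts/librem.nix — devos effectively sets this for you:
{ ... }: {
  networking.hostName = "librem";
}
```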

You can set channels, systems, and add extra modules to each host by editing the nixos.hosts argument in flake.nix. This is the perfect place to import host specific modules from external sources, such as the nixos-hardware repository.

It is recommended that the host modules only contain configuration information specific to a particular piece of hardware. Anything reusable across machines is best saved for profile modules.

This is a good place to import sets of profiles, called suites, that you intend to use on your machine.



  nixos = {
    imports = [ (devos.lib.importHosts ./hosts) ];
    hosts = {
      librem = {
        channelName = "latest";
        modules = [ nixos-hardware.nixosModules.purism-librem-13v3 ];
      };
    };
  };


{ suites, ... }: {
  imports = suites.laptop;

  boot.loader.systemd-boot.enable = true;
  boot.loader.efi.canTouchEfiVariables = true;

  fileSystems."/" = { device = "/dev/disk/by-label/nixos"; };
}


Each NixOS host follows one channel. But many times it is useful to get packages or modules from different channels.


You can make use of overlays/overrides.nix to override specific packages in the default channel to be pulled from other channels. That file is simply an example of how any overlay can get channels as their first argument.

You can add overlays to any channel to override packages from other channels.

Pulling the manix package from the latest channel:

channels: final: prev: {
  __dontExport = true;
  inherit (channels.latest) manix;
}

It is recommended to set the __dontExport property for override-specific overlays. overlays/overrides.nix is the best place to consolidate all package overrides, and the property is already set for you.


You can also pull modules from other channels. All modules have access to the modulesPath for each channel as <channelName>ModulesPath. And you can use disabledModules to remove modules from the current channel.

To pull the zsh module from the latest channel, this code can be placed in any module, whether it's your host file, a profile, or a module in ./modules, etc.:

{ latestModulesPath, ... }: {
  imports = [ "${latestModulesPath}/programs/zsh/zsh.nix" ];
  disabledModules = [ "programs/zsh/zsh.nix" ];
}

Sometimes a module's name will change from one branch to another.


Profiles are a convenient shorthand for the definition of options in contrast to their declaration. They're built into the NixOS module system for a reason: to elegantly provide a clear separation of concerns.


Profiles are created with the rakeLeaves function, which recursively collects .nix files from within a folder. The recursion stops at folders containing a default.nix. You end up with an attribute set of leaves (paths to profiles) or nodes (attrsets leading to more nodes or leaves).

A profile is used for quick modularization of interrelated bits.

  • For declaring module options, there's the modules directory.
  • This directory takes inspiration from upstream.

Nested profiles

Profiles can be nested in attribute sets due to the recursive nature of rakeLeaves. This can be useful to have a set of profiles created for a specific purpose. It is sometimes useful to have a common profile that has high level concerns related to all its sister profiles.



{
  imports = [ ./zsh.nix ];
  # some generic development concerns ...
}


{ ... }: {
  programs.zsh.enable = true;
  # zsh specific options ...
}

The examples above will end up with a profiles set like this:

  develop = {
    common = ./profiles/develop/common.nix;
    zsh = ./profiles/develop/zsh.nix;
  };


Profiles are the most important concept in DevOS. They allow us to keep our Nix expressions self contained and modular. This way we can maximize reuse across hosts while minimizing boilerplate. Remember, anything machine specific belongs in your host files instead.


Suites provide a mechanism for users to easily combine and name collections of profiles.

Suites are defined in the importables argument in either the home or nixos namespace. They are a special case of an importable, which is passed as a special argument (one that can be used in an imports line) to your hosts. All lists defined in suites are flattened and type-checked as paths.


rec {
  workstation = [ profiles.develop profiles.graphical users.nixos ];
  mobileWS = workstation ++ [ profiles.laptop ];
}
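As a sketch of where this definition lives (assuming digga's mkFlake interface; the profile names are illustrative):

```nix
# flake.nix (fragment) — a sketch assuming digga's mkFlake interface
nixos = {
  importables = rec {
    # collect profiles from the ./profiles folder
    profiles = digga.lib.rakeLeaves ./profiles;
    # name collections of profiles as suites
    suites = with profiles; rec {
      workstation = [ develop graphical ];
      mobileWS = workstation ++ [ laptop ];
    };
  };
};
```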



{ suites, ... }: {
  imports = suites.mobileWS;
}

This section and its semantics need a conceptual rework, since portable home configurations that are not bound to any specific host have recently become a thing.


Users are a special case of profiles that define system users and home-manager configurations. For your convenience, home-manager is wired in by default, so all you have to worry about is declaring your users. For a fully fleshed out example, check out the developer's personal branch.

Basic Usage


{ ... }: {
  users.users.myuser = {
    isNormalUser = true;
  };

  home-manager.users.myuser = {
    programs.mpv.enable = true;
  };
}

Home Manager

Home Manager support follows the same principles as regular nixos configurations; it even gets its own namespace in your flake.nix as home.

All modules defined in user modules will be imported to Home Manager. User profiles can be collected in a similar fashion as system ones into a suites argument that gets passed to your home-manager users.


  home-manager.users.nixos = { suites, ... }: {
    imports = suites.base;
  };

External Usage

You can easily use the defined home-manager configurations outside of NixOS using the homeConfigurations flake output. The bud helper script makes this even easier.

This is great for keeping your environment consistent across Unix systems, including macOS.

From within the project's devshell:

# builds the nixos user defined in the NixOS host
bud home NixOS nixos

# build and activate
bud home NixOS nixos switch

Manually from outside the project:

# build
nix build "github:divnix/devos#homeConfigurations.nixos@NixOS.home.activationPackage"

# activate
./result/activate && unlink result


Each of the following sections is a directory whose contents are output to the outside world via the flake's outputs. Check each chapter for details.


The modules directory is a replica of nixpkgs' NixOS modules, and follows the same semantics. This allows for trivial upstreaming into nixpkgs proper once your module is sufficiently stable.

All modules linked in module-list.nix are automatically exported via nixosModules.<file-basename>, and imported into all hosts.
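For illustration, module-list.nix is simply a list of module paths (my-service.nix is a hypothetical file):

```nix
# modules/module-list.nix — a sketch; my-service.nix is hypothetical
[
  ./services/my-service.nix
]
```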


This is reserved for declaring brand new module options. If you just want to declare a coherent configuration of already existing and related NixOS options, use profiles instead.


In case you've never written a module for nixpkgs before, here is a brief outline of the process.



{ config, lib, ... }:
let
  cfg = config.services.MyService;
in
{
  options.services.MyService = {
    enable = lib.mkEnableOption "Description of my new service.";

    # additional options ...
  };

  config = lib.mkIf cfg.enable {
    # implementation ...
  };
}







{ ... }: {
  services.MyService.enable = true;
}



{
  # inputs omitted

  outputs = { self, devos, nixpkgs, ... }: {
    nixosConfigurations.myConfig = nixpkgs.lib.nixosSystem {
      system = "...";

      modules = [
        ({ ... }: {
          services.MyService.enable = true;
        })
      ];
    };
  };
}

Writing overlays is a common occurrence when using a NixOS system. Therefore, we want to keep the process as simple and straightforward as possible.

Any .nix files declared in this directory will be assumed to be a valid overlay, and will be automatically imported into all hosts, and exported via overlays.<channel>/<pkgName> as well as packages.<system>.<pkgName> (for valid systems), so all you have to do is write it.



final: prev: {
  kakoune = prev.kakoune.override {
    configure.plugins = with final.kakounePlugins; [
      (kak-fzf.override { fzf = final.skim; })
    ];
  };
}


Similar to modules, the pkgs directory mirrors the upstream nixpkgs/pkgs, and for the same reason; if you ever want to upstream your package, it's as simple as dropping it into the nixpkgs/pkgs directory.

The only minor difference is that, instead of adding the callPackage call to all-packages.nix, you just add it to the default.nix in this directory, which is defined as a simple overlay.

All the packages are exported via packages.<system>.<pkg-name>, for all the supported systems listed in the package's meta.platforms attribute.
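For example, a fragment inside a package's derivation declaring its platforms (a sketch; the description and license values are placeholders):

```nix
# meta.platforms determines which packages.<system>.<pkg-name>
# outputs get generated for this package
meta = with lib; {
  description = "placeholder description";
  license = licenses.mit;       # placeholder
  platforms = platforms.linux;  # only exported for linux systems
};
```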

And, as usual, every package in the overlay is also available to any NixOS host.

Another convenient difference is that it is possible to use nvfetcher to keep packages up to date. This is best understood by the simple example below.


It is possible to specify sources separately to keep them up to date semi-automatically. The basic rules are specified in pkgs/sources.toml:

# pkgs/sources.toml
[libinih]
src.github = "benhoyt/inih"
fetch.github = "benhoyt/inih"

After changing this file, or to update the packages specified in it, run nvfetcher (for more details see nvfetcher).

The pkgs overlay is managed in pkgs/default.nix:

final: prev: {
  # keep sources first, this makes sources available to the pkgs
  sources = prev.callPackage (import ./_sources/generated.nix) { };

  # then, call packages with `final.callPackage`
  libinih = final.callPackage ./development/libraries/libinih { };
}

Lastly, the example package is in pkgs/development/libraries/libinih/default.nix:

{ stdenv, meson, ninja, lib, sources, ... }:
stdenv.mkDerivation {
  pname = "libinih";

  # version will resolve to the latest available on GitHub
  inherit (sources.libinih) version src;

  buildInputs = [ meson ninja ];

  # ...
}

Migration from flake based approach

Prior to nvfetcher, sources could be managed via a pkgs/flake.nix. The main changes since then are that sources used to live in the attribute srcs (now sources), and that the contents of the sources were slightly different. To switch to the new system, rewrite pkgs/flake.nix as a pkgs/sources.toml file using the nvfetcher documentation, add the line that calls the sources at the beginning of pkgs/default.nix, and accommodate the small changes in the packages, as can be seen from the example.

The example package looked like:


{
  description = "Package sources";

  inputs = {
    libinih.url = "github:benhoyt/inih/r53";
    libinih.flake = false;
  };
}


final: prev: {
  # then, call packages with `final.callPackage`
  libinih = final.callPackage ./development/libraries/libinih { };
}


{ stdenv, meson, ninja, lib, srcs, ... }:
let inherit (srcs) libinih; in
stdenv.mkDerivation {
  pname = "libinih";

  # version will resolve to 53, as specified in the flake.nix file
  inherit (libinih) version;

  src = libinih;

  buildInputs = [ meson ninja ];

  # ...
}


Secrets are managed using agenix so you can keep your flake in a public repository like GitHub without exposing your password or other sensitive data.


Currently, there is no mechanism in nix itself to deploy secrets within the nix store because it is world-readable.

Most NixOS modules have the ability to set options to files in the system, outside the nix store, that contain sensitive information. You can use agenix to easily set up those secret files declaratively.

agenix encrypts secrets and stores them as .age files in your repository. Age files are encrypted with multiple ssh public keys, so any host or user with a matching ssh private key can read the data. The age module will add those encrypted files to the nix store and decrypt them on activation to /run/secrets.


All hosts must have openssh enabled; this is done by default in the core profile.

You need to populate your secrets/secrets.nix with the proper ssh public keys. Be extra careful to make sure you only add public keys; you should never share a private key!


let
  system = "<system ssh key>";
  user = "<user ssh key>";
  allKeys = [ system user ];
in
{
  # secret definitions go here, e.g. "secret.age".publicKeys = allKeys;
}

On most systems, you can get your system's ssh public key from /etc/ssh/ If this file doesn't exist, you likely need to enable openssh and rebuild your system.

Your user's ssh public key is probably stored in ~/.ssh/ or ~/.ssh/ If you haven't generated a ssh key yet, be sure to do so:

ssh-keygen -t ed25519

The underlying tool used by agenix, rage, doesn't work well with password protected ssh keys. So if you have lots of secrets you might have to type in your password many times.


You will need the agenix command to create secrets. DevOS conveniently provides that in the devShell, so just run nix develop whenever you want to edit secrets. Make sure to always run agenix while in the secrets/ folder, so it can pick up your secrets.nix.

To create secrets, simply add lines to your secrets/secrets.nix:

  allKeys = [ system user ];
  "secret.age".publicKeys = allKeys;

That would tell agenix to create a secret.age file that is encrypted with the system and user ssh public key.

Then go into the secrets folder and run:

agenix -e secret.age

This will create the secret.age, if it doesn't already exist, and allow you to edit it.

If you ever change the publicKeys entry of any secret make sure to rekey the secrets:

agenix --rekey


Once you have your secret file encrypted and ready to use, you can utilize the age module to ensure that your secrets end up in /run/secrets.

In any profile that uses a NixOS module that requires a secret you can enable a particular secret like so:

{ self, ... }: {
  age.secrets.mysecret.file = "${self}/secrets/mysecret.age";
}

Then you can just pass the path /run/secrets/mysecret to the module.
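For instance, wiring the decrypted path into a module option (services.myDaemon and its passwordFile option are hypothetical):

```nix
{ self, ... }: {
  # declare the secret; it will be decrypted to /run/secrets/mysecret
  age.secrets.mysecret.file = "${self}/secrets/mysecret.age";

  # hypothetical NixOS module consuming the decrypted file
  services.myDaemon.passwordFile = "/run/secrets/mysecret";
}
```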

You can make use of the many options provided by the age module to customize where and how secrets get decrypted. You can learn about them by looking at the age module.


You can take a look at the agenix repository for more information about the tool.


Testing is always an important aspect of any software development project, and NixOS offers some incredibly powerful tools to write tests for your configuration, and, optionally, run them in CI.

Unit Tests

Unit tests can be created from regular derivations, and they can do almost anything you can imagine. By convention, it is best to test your packages during their check phase. All packages and their tests will be built during CI.
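A sketch of that convention (the package name, source, and test runner are all hypothetical):

```nix
{ stdenv }:

stdenv.mkDerivation {
  pname = "my-pkg";    # hypothetical package
  version = "0.1.0";
  src = ./.;

  # tests run during the build's check phase
  doCheck = true;
  checkPhase = ''
    ./run-tests        # hypothetical test runner
  '';
}
```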

Integration Tests

All your profiles defined in suites will be tested in a NixOS VM.

You can write integration tests for one or more NixOS VMs that can, optionally, be networked together, and yes, it's as awesome as it sounds!

Be sure to use the mkTest function from digga, digga.lib.pkgs-lib.mkTest which wraps the official testing-python function to ensure that the system is setup exactly as it is for a bare DevOS system. There are already great resources for learning how to use these tests effectively, including the official docs, a fantastic blog post, and the examples in nixpkgs.
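A minimal sketch, assuming mkTest accepts the same arguments as upstream's testing-python makeTest (name, nodes, testScript):

```nix
digga.lib.pkgs-lib.mkTest {
  name = "example-vm-test";

  nodes.machine = { ... }: {
    # import the profiles under test here
  };

  testScript = ''
    machine.wait_for_unit("multi-user.target")
  '';
}
```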

bud command

The template includes a convenient script for managing your system called bud.

It is a portable and highly composable system control tool that works anywhere on your host or in the flake's devshell.

Although it comes with some predefined standard helpers, it is very extensible and you are encouraged to write your own script snippets to ease your workflows. An example is the bud module for a get command that comes included with devos.

While writing scripts, you can conveniently access smart environment variables that tell you the current architecture, user, or host name, among others, regardless of whether you invoke bud within the devshell or as the system-wide installed bud.

For details, please review the bud repo.


bud help


The get subcommand is useful for getting a bare copy of devos without the git history.


bud get DEST-DIR

If DEST-DIR is omitted, it defaults to ./devos.


This section explores some of the optional tools included with devos to provide a solution to common concerns such as CI and remote deployment. An effort is made to choose tools that treat nix, and where possible flakes, as first-class citizens.


The system will automatically pull a cachix.nix at the root if one exists. This is usually created automatically by a sudo cachix use. If you're more inclined to keep the root clean, you can drop any generated files in the cachix directory into the profiles/cachix directory without further modification.

For example, to add your own cache, assuming the template lives in /etc/nixos, simply run sudo cachix use yourcache. Then, optionally, move cachix/yourcache.nix to profiles/cachix/yourcache.nix.

These caches are only added to the system after a nixos-rebuild switch, so it is recommended to call cachix use nrdxp before the initial deployment, as it will save a lot of build time.

In the future, users will be able to skip this step once the ability to define the nix.conf within the flake is fully fleshed out upstream.


Deploy-rs is a tool for managing NixOS remote machines. It was chosen for devos after the author experienced some frustrations with the stateful nature of nixops' db. It was also designed from scratch to support flake based deployments, and so is an excellent tool for the job.

By default, all the hosts are also available as deploy-rs nodes, configured with the hostname set to networking.hostName; overridable via the command line.


Just add your ssh key to the host:

{ ... }: {
  users.users.${sshUser}.openssh.authorizedKeys.keyFiles = [
    # path to your public key file goes here
  ];
}

And the private key to your user:

{ ... }: {
  home-manager.users.${sshUser}.programs.ssh = {
    enable = true;

    matchBlocks = {
      ${host} = {
        host = hostName;
        identityFile = ../secrets/path/to/key;
        extraOptions = { AddKeysToAgent = "yes"; };
      };
    };
  };
}

And run the deployment:

deploy '.#hostName' --hostname

Your user will need passwordless sudo access.

Home Manager

Digga's lib.mkDeployNodes provides only the system profile. In order to deploy your home-manager configuration(s), you should provide additional profiles to the deploy-rs config:

# Initially, this line looks like this: deploy.nodes = digga.lib.mkDeployNodes self.nixosConfigurations { };
deploy.nodes = digga.lib.mkDeployNodes self.nixosConfigurations
  {
    <HOSTNAME> = {
      profilesOrder = [ "system" "<HM_PROFILE>" "<ANOTHER_HM_PROFILE>" ];
      profiles.<HM_PROFILE> = {
        user = "<YOUR_USERNAME>";
        path = deploy.lib.x86_64-linux.activate.home-manager self.homeConfigurationsPortable.x86_64-linux.<YOUR_USERNAME>;
      };
      profiles.<ANOTHER_HM_PROFILE> = {
        user = "<ANOTHER_USERNAME>";
        path = deploy.lib.x86_64-linux.activate.home-manager self.homeConfigurationsPortable.x86_64-linux.<ANOTHER_USERNAME>;
      };
    };
  };

Substitute <HOSTNAME>, <HM_PROFILE> and <YOUR_USERNAME> placeholders (omitting the <>).

<ANOTHER_HM_PROFILE> is there to illustrate deploying multiple home-manager configurations. Either substitute those as well, or remove them altogether. Don't forget the profilesOrder variable.


NvFetcher is a workflow companion for updating nix sources.

You can specify an origin source and an update configuration, and nvfetcher can for example track updates to a specific branch and automatically update your nix sources configuration on each run to the tip of that branch.

All package source declaration is done in sources.toml.

From within the devshell of this repo, run nvfetcher, a wrapped version of nvfetcher that knows where to find and place its files and commit the results.


Statically fetching (not tracking) a particular tag from a github repo:

src.manual = "v0.6.3"
fetch.github = "mlvzk/manix"

Tracking the latest github release from a github repo:

src.github = "mlvzk/manix" # responsible for tracking
fetch.github = "mlvzk/manix" # responsible for fetching

Tracking the latest commit of a git repository and fetch from a git repo:

src.git = "" # responsible for tracking
fetch.git = "" # responsible for fetching

Please refer to the NvFetcher Readme for more options.

Hercules CI

If you start adding your own packages and configurations, you'll probably have at least a few binary artifacts. With hercules we can build every package in our configuration automatically, on every commit. Additionally, we can have it upload all our build artifacts to a binary cache like cachix.

This will work whether your copy is a fork, or a bare template, as long as your repo is hosted on GitHub.


Just head over to Hercules CI to make an account.

Then follow the docs to set up an agent. If you want to deploy to a binary cache (and of course you do), be sure not to skip the binary-caches.json.

Ready to Use

The repo is already set up with the proper default.nix file, building all declared packages, checks, profiles and shells. So you can see if something breaks, and never build the same package twice!

If you want to get fancy, you could even have hercules deploy your configuration!


Hercules doesn't have access to anything encrypted in the secrets folder, so none of your secrets will accidentally get pushed to a cache by mistake.

You could pull all your secrets via your user, and then exclude it from allUsers to keep checks passing.

Pull Requests


  • Target Branch: main
  • Merge Policy: bors is always right (→ bors try)
  • Docs: every changeset is expected to contain doc updates
  • Commit Msg: be a poet! Comprehensive and explanatory commit messages should cover the motivation and use case in an easily understandable manner even when read after a few months.
  • Test Driven Development: please default to test driven development where possible.

Within the Devshell (nix develop)

  • Hooks: please git commit within the devshell
  • Fail Early: please run from within the devshell on your local machine:
    • nix flake check