Installing NixOS on an OVH dedicated server

This article proposes a way to install NixOS onto an OVH server.

The server will be installed with a ZFS file system, with separate datasets for /, /var, /home, and so on. ZFS makes it possible to keep several days of snapshots easily and without loss of performance. Recovering a file deleted by mistake then takes a few seconds by accessing the special .zfs directory. Backups will also be easier in the future, by sending ZFS snapshots to another server.
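
As a sketch of what such a recovery could look like (the snapshot and file names here are hypothetical; autosnap_..._hourly is the naming scheme used by Sanoid, which we configure later in this article):

```shell
# List the snapshots available for the dataset mounted on /home.
ls /home/.zfs/snapshot/

# Copy the deleted file back from an hourly snapshot.
cp /home/.zfs/snapshot/autosnap_2025-01-01_12:00:00_hourly/alice/notes.txt \
   /home/alice/notes.txt
```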

To automate the installation as much as possible, we will use Disko and Nixos-anywhere. Nixos-anywhere will generate a hardware.nix configuration module suited to the machine's hardware and install the system, using Disko to partition the disks and format the file systems.

This article was written for the installation of a KS-5 Kimsufi server with two 450GB SSD drives, but it should be easy to adapt to other models. It could also be adapted to other hosting providers, but the server preparation may differ if their rescue environment doesn't provide the same tools.

Prerequisite

This article is for people who know some NixOS and would like to install it remotely on an OVH server without juggling virtual ISO mounts.

To use this procedure, you need a machine running NixOS, or at least one with Nix installed. "Flakes" must be enabled in your environment; see the procedure on the NixOS website for directions.

Tested on x86_64 only

The installation was done from a NixOS Linux machine with an AMD CPU. On another architecture, such as a Mac, you may need to adapt the flake.nix file.

Customizing this article's code examples

Throughout the article, you can enter values corresponding to your situation in form boxes. The article's code examples will then use your values, so that you can use them directly. The substitution is done locally with JavaScript; your data is safe and isn't sent anywhere.

Otherwise, you can simply edit the examples in your favorite text editor.

Preparing your server

All of the data present on the remote server will be deleted!

Only perform the operations described in this article on a machine that does not contain any data you'd like to keep.

Starting the server in OVH rescue mode

  1. In the OVH interface, switch the machine to rescue mode, with authentication via your SSH key.
  2. Disable OVH monitoring. NixOS is not one of the OSes that OVH supports, so they would not be able to fix it in case of problems.
  3. Reboot the machine in rescue mode.
  4. Note its DNS address; you can enter it below.

Server verification and information retrieval

Once the server has booted, connect to it via SSH, using options that prevent SSH from adding the server's key to your known hosts file (this is rescue mode, so the key is temporary).

ssh -o UserKnownHostsFile=/dev/null \
    -o StrictHostKeyChecking=no \
    root@addrmachine

Note down the identifiers for the disks:

ls /dev/disk/by-id/*

Take the identifiers of the disks, not of the partitions. Example:

/dev/disk/by-id/nvme-INTEL_SSDPE2MX450G7_AZERTYAZERTYAZERTY
/dev/disk/by-id/nvme-INTEL_SSDPE2MX450G7_QWERTYQWERTYQWERTY

You can note them down here.

Preparing the server

This section is just the usual preparation before installing Linux on an OVH server.

Check disk health

smartctl -H /dev/disk/by-id/iddisque1
smartctl -H /dev/disk/by-id/iddisque2

You should see the status as "PASSED".

NVMe format

Reminder: The following commands will erase all data from your server!

Only perform the operations described in this article on a machine that does not contain any data you'd like to keep.

If your server uses NVMe drives, you can format them for best performance. NVMe drives support several LBA (logical block addressing) formats, each offering a different metadata size and sector size. List the supported LBA formats:

nvme id-ns -H /dev/disk/by-id/iddisque1 | grep LBA
echo
nvme id-ns -H /dev/disk/by-id/iddisque2 | grep LBA

You'll get a list with the LBA number and parameters:

LBA Format  0 : Metadata Size: 0   bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good (in use)
LBA Format  1 : Metadata Size: 8   bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good 
LBA Format  2 : Metadata Size: 16  bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good 
LBA Format  3 : Metadata Size: 0   bytes - Data Size: 4096 bytes - Relative Performance: 0 Best 
LBA Format  4 : Metadata Size: 8   bytes - Data Size: 4096 bytes - Relative Performance: 0 Best 
LBA Format  5 : Metadata Size: 64  bytes - Data Size: 4096 bytes - Relative Performance: 0 Best 
LBA Format  6 : Metadata Size: 128 bytes - Data Size: 4096 bytes - Relative Performance: 0 Best

LBA Format  0 : Metadata Size: 0   bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good (in use)
LBA Format  1 : Metadata Size: 8   bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good 
LBA Format  2 : Metadata Size: 16  bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good 
LBA Format  3 : Metadata Size: 0   bytes - Data Size: 4096 bytes - Relative Performance: 0 Best 
LBA Format  4 : Metadata Size: 8   bytes - Data Size: 4096 bytes - Relative Performance: 0 Best 
LBA Format  5 : Metadata Size: 64  bytes - Data Size: 4096 bytes - Relative Performance: 0 Best 
LBA Format  6 : Metadata Size: 128 bytes - Data Size: 4096 bytes - Relative Performance: 0 Best

Metadata is useless for us; it is used by software that checks data integrity. The ZFS documentation explains that it is of no use for ZFS.

In this example, the disk is formatted with an LBA format that uses 512-byte sectors. This isn't optimal: if you write 4 KiB, 8 writes of 512 bytes are performed, while the drive could do it in a single 4 KiB write. The ZFS documentation advises using 4 KiB sectors.

Reformat your drives with the best LBA format. In this example, we choose LBA format number 3:

nvme format --lbaf=3 /dev/disk/by-id/iddisque1 
nvme format --lbaf=3 /dev/disk/by-id/iddisque2

Check that they now use the best LBA format:

nvme id-ns -H /dev/disk/by-id/iddisque1 | grep "in use"
nvme id-ns -H /dev/disk/by-id/iddisque2 | grep "in use"

Things you can do when returning your OVH server

When you return your server, you might want to format it back to 512-byte sectors and erase its data. You can use these commands:

nvme format --lbaf 0 /dev/disk/by-id/iddisque1
nvme format --lbaf 0 /dev/disk/by-id/iddisque2

Then erase data with:

nvme format -s 1 /dev/disk/by-id/iddisque1
nvme format -s 1 /dev/disk/by-id/iddisque2

Generating a ZFS hostId

While you are in your server's rescue environment, you can generate a unique identifier for ZFS. This id needs to be unique across your servers, so that ZFS knows which server owns which ZFS pool.

The identifier is a 4-byte value in hexadecimal. You can generate it with this command:

zgenhostid -o /dev/stdout | od -t x1 -An | tr -d ' '

You can note it down here.
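
If zgenhostid isn't available in your environment, a rough equivalent (a sketch assuming coreutils' od is present) is to read 4 random bytes and print them as 8 hexadecimal digits:

```shell
# Generate a random 4-byte hostId as 8 hex digits.
hostid_hex=$(head -c4 /dev/urandom | od -An -tx1 | tr -d ' \n')
echo "$hostid_hex"
```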

Preparing a NixOS configuration

Swap space

ZFS configuration

Choose a name for the ZFS pool; tank is the most popular. You then need to calculate the free space to reserve for ZFS: for good performance, 20% of the disk space should be kept unused.
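
As a worked example, here is one way to compute that reservation (a sketch with assumed sizes: a 450GB disk minus the 1GB ESP and an 8GB swap partition; adjust the values to your own layout):

```shell
DISK_GB=450   # raw disk size
SWAP_GB=8     # your chosen swap size
# Space left for the ZFS partition after the ESP and swap.
POOL_GB=$((DISK_GB - 1 - SWAP_GB))
# Keep 20% of it unused for good ZFS performance.
RESERVE_GB=$((POOL_GB * 20 / 100))
echo "reservation: ${RESERVE_GB}G"
```

With these numbers, you would set options.reservation to "88G" in disk.nix.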

User account

Enter the name of the user account you want on the server. This user will have password-less sudo rights, so protect its SSH key securely on your computer. Also enter your public SSH key below (the content of .ssh/id_rsa.pub) to be able to connect to your server once it is installed.

Nix files

Several files need to be created. Feel free to copy and paste the following code examples, or download a ZIP file containing them. (The ZIP is generated by JavaScript in your browser; your data does not leave your browser.)

  • The flake.nix file. It imports tools like Disko, defines a shell environment with Nixos-anywhere in the PATH, and defines the server configuration. More info on flakes.

    flake.nix
    {
      description = "My OVH server";
    
      inputs = {
        nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.11";
    
        # Disko, for disk partitioning.
        disko = {
          url = "github:nix-community/disko";
          inputs.nixpkgs.follows = "nixpkgs";
        };
      };
    
      # Flake outputs are a shell and the server configuration.
      outputs =
        {
          self,
          nixpkgs,
          disko,
          ...
        }:
        let
          system = "x86_64-linux";
          pkgs = import nixpkgs {
            inherit system;
          };
        in
        {
          formatter.x86_64-linux = pkgs.nixfmt-tree;
    
          # Shell with nixos-anywhere available.
          devShells.x86_64-linux.default = pkgs.mkShell {
            packages = with pkgs; [
              nixos-anywhere
            ];
          };
    
          # Your server definition.
          nixosConfigurations.nommachine = nixpkgs.lib.nixosSystem {
            inherit system;
            inherit pkgs;
    
            modules = [
              # Make Disko options available in the configuration.
              disko.nixosModules.disko
    
              # File with the hardware configuration, it will be generated by nixos-anywhere.
              ./hardware.nix
    
              # File with partitions and filesystems.
              ./disk.nix
    
              # Server configuration.
              ./configuration.nix
            ];
          };
        };
    }
    
  • The disk.nix file. It defines the disk partitions and file systems to create and then mount on the server. If you have only one disk, remove one disk from disko.devices.disk and one path from boot.loader.grub.mirroredBoots, then remove zpool.nompool.mode entirely.

    disk.nix
    {
      disko,
      pkgs,
      lib,
      ...
    }: {
    
      networking.hostId = "valeurHostId";
    
      # Disk partitions. This example is for two 450GB NVMe drives.
      disko.devices = {
        disk = {
          disk1 = {
            type = "disk";
            device = "/dev/disk/by-id/iddisque1";
            content = {
              type = "gpt";
    
              partitions = {
                ESP = {
                  size = "1G";
                  type = "EF00";
                  content = {
                    type = "filesystem";
                    format = "vfat";
                    mountpoint = "/boot/efis/disk1";
                    mountOptions = [ "umask=0077" ];
                  };
                };
                swap = {
                  size = "tailleswap";
                  content = {
                    type = "swap";
                    # Discard the swap at boot and whenever a sector is freed.
                    discardPolicy = "both";
                  };
                };
                "zfs_nompool" = {
                  size = "100%";
                  content = {
                    type = "zfs";
                    pool = "nompool";
                  };
                };
              };
            };
          };
          # Second disk, just remove this if you have only one.
          disk2 = {
            type = "disk";
            device = "/dev/disk/by-id/iddisque2";
            content = {
              type = "gpt";
              partitions = {
                ESP = {
                  size = "1G";
                  type = "EF00";
                  content = {
                    type = "filesystem";
                    format = "vfat";
                    mountpoint = "/boot/efis/disk2";
                    mountOptions = [ "umask=0077" ];
                  };
                };
                swap = {
                  size = "tailleswap";
                  content = {
                    type = "swap";
                    discardPolicy = "both";
                  };
                };
                "zfs_nompool" = {
                  size = "100%";
                  content = {
                    type = "zfs";
                    pool = "nompool";
                  };
                };
              };
            };
          };
        };
    
        # ZFS pool.
        zpool = {
          nompool = {
            type = "zpool";
    
            # Create a ZFS mirror with our two drives. If you only have one, remove "mode" completely.
            mode = {
              topology = {
                type = "topology";
                vdev = [
                  {
                    mode = "mirror";
                    members = [
                      "/dev/disk/by-partlabel/disk-disk1-zfs_nompool"
                      "/dev/disk/by-partlabel/disk-disk2-zfs_nompool"
                    ];
                  }
                ];
              };
            };
    
            rootFsOptions = {
              acltype = "posixacl";
              atime = "off"; # Don't ruin performance writing access time to files.
              compression = "zstd"; # ZFS compression level. "zstd" is a good middle ground between speed and compression.
              xattr = "sa";
              # By default, don't mark datasets as needing automatic snapshots;
              # Sanoid will be used for snapshots.
              "com.sun:auto-snapshot" = "false";
            };
    
            # Disk sector size. Usually 12 (4 KiB sectors). If your drive has 512-byte sectors, choose 9.
            options.ashift = "12";
    
            # ZFS dataset. Feel free to change according to your preferences.
            datasets = {
    
              # Local, non system dataset used by the server itself (not shared or used by a VM).
              "local" = {
                type = "zfs_fs";
                options.mountpoint = "none";
              };
    
          # 20% free space for good ZFS performance.
              "local/reserved" = {
                type = "zfs_fs";
                options.mountpoint = "none";
                options.reservation = "reserveZFS";
              };
    
              # System datasets.
              "system" = {
                type = "zfs_fs";
                options.mountpoint = "none";
              };
    
              "system/root" = {
                type = "zfs_fs";
                mountpoint = "/";
                options.mountpoint = "legacy";
              };
    
              "system/nix" = {
                type = "zfs_fs";
                mountpoint = "/nix";
                options.mountpoint = "legacy";
              };
    
              "system/var" = {
                type = "zfs_fs";
                mountpoint = "/var";
                options.mountpoint = "legacy";
              };
    
              "system/var/lib" = {
                type = "zfs_fs";
                mountpoint = "/var/lib";
                options.mountpoint = "legacy";
                # Faster compression for /var/lib files since they change a lot.
                options.compression = "lz4";
              };
    
              "system/var/log" = {
                type = "zfs_fs";
                mountpoint = "/var/log";
                options.mountpoint = "legacy";
              };
    
          # User datasets.
              "user" = {
                type = "zfs_fs";
                options.mountpoint = "none";
              };
    
              "user/home" = {
                type = "zfs_fs";
                mountpoint = "/home";
                options.mountpoint = "legacy";
              };
    
              "user/root" = {
                type = "zfs_fs";
                mountpoint = "/root";
                options.mountpoint = "legacy";
              };
            };
          };
        };
      };
    
      # Grub boot with options needed to start on ZFS.
      boot.loader.systemd-boot.enable = false;
      boot.loader.grub = {
        enable = true;
        zfsSupport = true;
        efiSupport = true;
    
        # Tell GRUB that the 2 EFI partitions need to be mirrors of each other.
        mirroredBoots = [
          {
            devices = [ "nodev" ];
            path = "/boot/efis/disk1";
          }
          {
            devices = [ "nodev" ];
            path = "/boot/efis/disk2";
          }
        ];
      };
    
      boot.loader.grub.efiInstallAsRemovable = true;
      boot.loader.efi.canTouchEfiVariables = false;
      boot.loader.efi.efiSysMountPoint = "/boot/efis/disk1";
    
      # ZFS auto scrub.
      services.zfs.autoScrub.enable = true;
    
      # ZFS should send trim commands to SSD.
      services.zfs.trim.enable = true;
    
      # Automatic snapshots.
      services.sanoid = {
        enable = true;
        interval = "hourly";
    
        # For these datasets, take:
        # 24 hourly snapshots.
        # 3 daily snapshots.
        datasets =
          (lib.genAttrs
            [
              "nompool/local"
              "nompool/system"
              "nompool/user"
            ]
            (vol: {
              autoprune = true;
              autosnap = true;
              daily = 3;
              hourly = 24;
              recursive = true;
            })
          )
          // {
            # Disable snapshots for these datasets.
            # No need to snapshot the Nix store.
            "nompool/system/nix" = {
              autosnap = false;
            };
            # No need to snapshot the logs.
            "nompool/system/var/log" = {
              autosnap = false;
            };
          };
      };
    
      # Make sanoid available to command line.
      environment.systemPackages = with pkgs; [ sanoid ];
    }
    
  • The configuration.nix file. It defines the server's configuration. In our example, we create a user that can connect via your SSH key, with password-less sudo rights. You can replace this with your desired configuration.

    configuration.nix
    {
      pkgs,
      lib,
      ...
    }:
    {
    
    
      # Network with DHCP
      networking.hostName = "nommachine";
      networking.useDHCP = false;
    
      # We use Systemd for network configuration.
      systemd.network = {
        enable = true;
        networks = {
          # Internet
          "10-eno1" = {
            matchConfig.Name = "eno1";
            networkConfig.DHCP = true;
            linkConfig.RequiredForOnline = "yes";
          };
        };
      };
    
      # Basic configuration for a French server in France.
      time.timeZone = "Europe/Paris";
    
      # Replace with your own language.
      i18n.defaultLocale = "fr_FR.UTF-8";
      console = {
        font = "Lat2-Terminus16";
        keyMap = lib.mkForce "fr";
        useXkbConfig = true;
      };
    
      # Your user.
      users.users.nomutilisateur = {
        isNormalUser = true;
        extraGroups = [ "wheel" ];
        openssh.authorizedKeys.keys = [
          "cleSSH"
        ];
      };
    
      # Give the user sudo rights without a password.
      # Consider using a dedicated user for NixOS configuration deployments, and protect the SSH key well.
      security.sudo.extraRules = [
        {
          users = [ "nomutilisateur" ];
          commands = [
            {
              command = "ALL";
              options = [ "NOPASSWD" ];
            }
          ];
        }
      ];
    
      # Your favorite packages here.
      environment.systemPackages = with pkgs; [
        vim
        wget
        cowsay
      ];
    
      # OpenSSH.
      services.openssh.enable = true;
    
      # Protect SSH from brute forcing.
      services.fail2ban.enable = true;
    
      nix.settings.experimental-features = "nix-command flakes";
    
      # Clean up the Nix store regularly.
      nix.gc = {
        automatic = true;
        dates = "weekly";
        options = "--delete-older-than 30d";
      };
    
      # NixOS version used during the first server installation.
      # Never change this after installation.
      system.stateVersion = "25.11";
    }
    

Installation

The next step is installing the server. Nixos-anywhere will connect to it, gather information and create the hardware.nix file, then install the system. First, enter the flake's shell environment, where Nixos-anywhere is available as a command.

nix develop

Run Nixos-anywhere.

Reminder: This command will erase your server's data!

Do not run this command if your server still has data you want to keep (i.e. if you haven't wiped the disks yet).

nixos-anywhere --flake .#nommachine --generate-hardware-config nixos-generate-config ./hardware.nix --target-host root@addrmachine

The command in detail.
--flake .#nommachine
Install the server defined in nixosConfigurations.nommachine. You can add more servers to the flake if you want, but be careful not to deploy one server's configuration onto another!
--generate-hardware-config nixos-generate-config ./hardware.nix
Generate a hardware.nix file containing the configuration needed for the server's hardware.
--target-host root@addrmachine
The SSH username and address used to connect to the server. Nixos-anywhere will connect to it, start NixOS on it (with the "kexec" functionality, which replaces a running OS with another), then install NixOS onto the disks.

Exit rescue mode

Once NixOS is installed, the server will reboot, but it will end up in the rescue OS again.

In the OVH interface, set the server to boot from its hard drives, then reboot it. After a few minutes, you should be able to connect to your new NixOS server.

ssh nomutilisateur@addrmachine

Things to do after install

Don't forget to set a password for the root account. It will be useful for connecting via KVM in case of serious trouble.

You can also take ZFS snapshots of your system, so that you can roll back to them if you want to experiment without risk.

sudo zfs snapshot nompool/system/root@installation
sudo zfs snapshot nompool/system/var@installation
sudo zfs snapshot nompool/system/var/lib@installation
sudo zfs snapshot nompool/user/home@installation
sudo zfs snapshot nompool/user/root@installation
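
If a later experiment goes wrong, a dataset can be rolled back to its snapshot. For example (careful: this discards every change made to that dataset since the snapshot was taken):

```shell
sudo zfs rollback nompool/user/home@installation
```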

Changing your server configuration

Once the server is installed, you can push changes made to its configuration with the nixos-rebuild command:

nixos-rebuild switch --flake .#nommachine --sudo --target-host nomutilisateur@addrmachine --build-host nomutilisateur@addrmachine

The command in detail.
switch
Switch to the new configuration and make it the default for subsequent boots too. Alternatively, you can use "test" so that the configuration is pushed to the server, but the previous one is still used on boot. Very useful when you fiddle with the server's network configuration: if you lose network access, you are just one reboot away from getting it back.
--flake .#nommachine
The server's configuration, i.e. nixosConfigurations.nommachine in the flake. You can add more servers to the flake, but always be careful not to push one server's configuration onto the wrong server.
--sudo
Tell the command to use sudo to gain privileges. If you use the root account to connect to the server, this is not needed.
--target-host nomutilisateur@addrmachine
Username and address to connect to via SSH.
--build-host nomutilisateur@addrmachine
Build the configuration on the server, instead of building it on your computer and then uploading it.
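
Putting it together, a cautious deployment with "test" instead of "switch" could look like this (same flags as above, only the subcommand changes):

```shell
nixos-rebuild test --flake .#nommachine --sudo \
  --target-host nomutilisateur@addrmachine \
  --build-host nomutilisateur@addrmachine
```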

Conclusion

You now have NixOS running on your OVH server, all done without manual partitioning, formatting, or mounting virtual ISO files.