In this article we will automate the DC/OS installation procedure specifically for CoreOS VMs running on KVM (libvirt).

I have been toying with the idea of setting up a home lab server where I can freely break things and abuse the hardware to my liking, without having to pay a monstrous AWS bill. I currently have my eyes set on two different boxes and have decided not to pull the trigger until I find their WAF.

Meanwhile, I am going to make use of a relatively old NUC with an i5 processor and 16GB of memory and see if I can experiment with DC/OS on the cheap. It has a fresh installation of headless Fedora Server with the KVM virtualization group installed. Everything is going to be automated, so there is no need for a graphical environment. I will run all steps of the setup from that machine.


Prerequisites

  • A box with at least 16GB of memory that you can SSH into and manipulate
  • A wired connection to the internet (wireless would be much harder to set up)
  • The latest Fedora Server. Anything else from the RHEL family should work too, as would Debian / Ubuntu Server (if you adjust the commands from dnf to apt)
  • The latest KVM packages
sudo dnf install -y @virtualization
sudo systemctl enable libvirtd
sudo systemctl start libvirtd
  • tmux and git
sudo dnf install -y tmux git
  • Make sure you have a public/private key pair in your ~/.ssh/ directory. If you do not, generate one with ssh-keygen.
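
If you do not have a key pair yet, something along these lines will generate one at the default location (RSA is assumed here only for illustration; any key type ssh-keygen supports will do):

```shell
# Create ~/.ssh if missing, then generate a 4096-bit RSA key pair
# with an empty passphrase (acceptable for a throwaway home lab)
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
```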

Networking setup

Our DC/OS cluster will operate on the default internal NAT-ed network bridge (virbr0) created by the KVM installer. This is usually the 192.168.122.0/24 range.

We will also need to expose our DC/OS UI and public node to our home network so they can be accessed without having to create any tunnels. My home network is in a different, non-overlapping range. In order for our VMs to be able to get an IP address on the home network, we will create a network bridge (br0).

The wired network adapter on my NUC is named enp0s25. I am going to use it to set up the br0 bridge connection.
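
Your adapter will almost certainly be named differently; a quick way to list the interfaces the kernel knows about (loopback included, and later virbr0 and br0 will show up here too):

```shell
# List all network interfaces known to the kernel
ls /sys/class/net
```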

sudo nmcli connection add ifname br0 type bridge con-name br0
sudo nmcli connection add type bridge-slave ifname enp0s25 master br0
sudo nmcli connection up br0


Plan of action

  • Create an automation script that downloads a disk image of the correct CoreOS version
  • Configure and spin up CoreOS VMs
  • Run a script to automate the installation of the DC/OS bits on those VMs
  • Verify the installation

Get the code from GitHub

git clone https://github.com/dobriak/kvm-coreos-dcos.git
cd kvm-coreos-dcos

Explanation of setup.sh

  • Download the correct CoreOS version. As of the current DC/OS version (1.10.0), the recommended CoreOS version is 1235.9.0; the download happens in the initialSetup() function.

  • Edit cluster.conf and set the USER variable to your user (or the user whose public and private keys will be used to authenticate against your CoreOS VMs)

  • The following VMs will be created:

    • b - bootstrap node, used only to set up the cluster. It can be removed later if you are not planning to run any upgrades.
    • m1 - master node 1. This will be a single-master cluster since resources are limited. The master node gets an “external” facing leg on the home network so we can access the DC/OS GUI.
    • a1 and a2 - agent nodes 1 and 2, this is where our services will be running
    • p1 - public node 1, this is our “external” facing node for anything that you may want to expose outside of your cluster
  • All network MAC and IP addresses are hardcoded in an array, so if any of those IPs collide with yours, feel free to change them (don’t forget to do so in the cluster.conf file too).

  • Same with the directories under /var/lib/libvirt - the standard naming scheme is followed, but if you have a different setup, feel free to edit the domain_dir and image_dir variables in setup.sh.
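
Conceptually, the hardcoded table looks something like the sketch below. The node names match the list above, but every MAC and IP here is a placeholder of my own; the real values live in setup.sh and cluster.conf:

```shell
# Placeholder node table: name, MAC, IP (NOT the real values from setup.sh)
node_table="
b  52:54:00:aa:bb:10 192.168.122.10
m1 52:54:00:aa:bb:11 192.168.122.11
a1 52:54:00:aa:bb:21 192.168.122.21
a2 52:54:00:aa:bb:22 192.168.122.22
p1 52:54:00:aa:bb:31 192.168.122.31
"

# Look up a node's IP by name, e.g.: node_ip m1
node_ip() {
  echo "$node_table" | awk -v n="$1" '$1 == n { print $3 }'
}
```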

Explanation of dcos_parallel_install.sh

  • This script is based on a previous article, adapted to work on CoreOS.

  • If you need to change any IP addresses, please make sure to also update cluster.conf, as this is what all installation scripts use to connect to the CoreOS VMs.

Run the installation

sudo ./setup.sh && sleep 3m && ./dcos_parallel_install.sh

The installation will take a few minutes. All logs of the installation can be found under /tmp/tmp.${RANDOM}.log.${PID}.
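
To keep an eye on progress, you can peek at the most recently written log. The sketch below guards against the case where no logs exist yet (the file names contain random numbers, hence the glob):

```shell
# Find the newest matching log file and print its last 20 lines, if any
latest_log=$(ls -t /tmp/tmp.*.log.* 2>/dev/null | head -n 1)
if [ -n "$latest_log" ]; then
  tail -n 20 "$latest_log"
fi
```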

That is it, you are done! You should be able to access the DC/OS UI at the master node's home-network IP (or whichever IP you have decided to use).

To clean up everything created by the above scripts, just re-run setup.sh with clean as the only parameter:

sudo ./setup.sh clean