Install and configure the Ansible automation tool on one of the servers.
After installing Ansible, write and execute an Ansible playbook that performs the following on all servers:
  Reconfigures the DNS server from the current one to
  Installs and starts the latest version of Docker CE as well as its dependencies
  Adds a systemd.timer configuration to run the shell command "docker system prune -af" as root every day at 11:00 UTC+0
Create a Docker Swarm cluster between all 3 nodes.
Launch a Docker container in swarm mode on all 3 nodes that runs an Nginx service and listens on port 8080.

Some theoretical questions that you should think about and provide your thoughts:

  What is the purpose of the "docker system prune -af" systemd timer in this context, and do you think it is needed? Why?
  What is the difference, if any, between using a systemd timer and cron?
  For what purposes can this setup be used, and how would you improve it?

Node roles:
  c1 - ansible, docker manager
  c2 - docker worker1
  c3 - docker worker2

Personalization part, not needed in prod:

sudo yum install wget nano git zsh tmux bind-utils -y
sudo useradd -m -G wheel -s /bin/zsh c0rn3j
sudo mkdir -p /home/c0rn3j/.ssh
echo "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN0UrYQJE+udiy4LldhUIzfuaKM6F3wBUV/CjQwMaksF c0rn3j@c0rn3jDesktop" | sudo tee /home/c0rn3j/.ssh/authorized_keys
sudo chown -R c0rn3j:c0rn3j /home/c0rn3j
echo "c0rn3j:asdf" | sudo chpasswd
# log in as c0rn3j
wget -O - | zsh
sed -i 's/robbyrussell/mortalscumbag/g' ~/.zshrc
echo "c2" | sudo tee /etc/hostname
git clone
echo "source /home/c0rn3j/zsh-autosuggestions/zsh-autosuggestions.zsh" >> ~/.zshrc
sudo reboot

Install and set up Ansible:

c1 # sudo yum install -y ansible
c1 # nano /etc/ansible/hosts
c1 # ansible
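The contents of the inventory edited above aren't shown in the original; a minimal sketch of what /etc/ansible/hosts might contain, assuming the group name `aws` that the playbook targets and placeholder addresses:

```
[aws]
c1 ansible_host=<server-1-ip>
c2 ansible_host=<server-2-ip>
c3 ansible_host=<server-3-ip>
```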

c1 # # Connect to each server from root or use an ugly hack to not validate keys on connect
c1 $ ssh-keygen -t ed25519
c1 c2 c3 # copy pubkey to root
c1 c2 c3 # - I chose adding repos
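The key steps above can be sketched as follows. This is a hedged example, not the exact commands used: the key is written to a scratch directory rather than ~/.ssh, and the `root@c2`/`root@c3` targets are assumptions based on the node names used elsewhere on this page.

```shell
# Generate a dedicated ed25519 keypair for Ansible (no passphrase), in a scratch dir
tmpdir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -f "$tmpdir/ansible_ed25519" -C 'ansible@c1' >/dev/null
cat "$tmpdir/ansible_ed25519.pub"

# Then push the public key to root on every node (requires password auth once):
# ssh-copy-id -i "$tmpdir/ansible_ed25519.pub" root@c2
# ssh-copy-id -i "$tmpdir/ansible_ed25519.pub" root@c3
```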

Following is not needed as it is handled by an ansible job:

#sudo yum install -y yum-utils device-mapper-persistent-data lvm2
#sudo yum-config-manager --add-repo
#sudo yum install -y docker-ce
#sudo systemctl enable --now docker # Is the docker.service meant as the "dependencies" in the given task?

#sudo usermod -aG docker c0rn3j # Add to group docker so `sudo` isn't needed for every command
# Re-log to get the new group

# docker run hello-world # Test that docker works fine

Set up the Docker cluster:

c1 $ docker swarm init --advertise-addr <MANAGER-IP>

# Token is echoed out after swarm init on manager node
c2 c3 $ docker swarm join --token SWMTKN-1-2zxpv50or60geswi0le6vj5vtnxnuoopxr778eedlrusgu3dp8-a7qeawm439twfh7wb7lcz6y7v <MANAGER-IP>:2377
c1 $ docker service create --replicas 1 --name helloworld -p 8080:80 nginx
# Note: Worker nodes cannot see the service running, only the manager can (?!)
# Test that it is running:
c1 $ docker container ls 
c1 $ docker service ls
c1 $ curl && curl && curl

Save the following two files on the Ansible server, since the playbook needs them:

[0] % cat /etc/systemd/system/dockerprune.service
[Unit]
Description=prune docker images daily

[Service]
Type=oneshot
ExecStart=/usr/bin/docker system prune -af

[0] % cat /etc/systemd/system/dockerprune.timer
[Unit]
Description=Run docker prune daily at 11:00 UTC+0

[Timer]
OnCalendar=*-*-* 11:00:00

[Install]
WantedBy=timers.target

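As a sanity check on the schedule, the next 11:00 UTC trigger can be computed with plain date(1). This is only an illustration of what the OnCalendar line encodes (GNU date assumed, as on CentOS 7):

```shell
# Compute the next daily 11:00 UTC trigger, mirroring OnCalendar=*-*-* 11:00:00
now=$(date -u +%s)
today_11=$(date -u -d "$(date -u +%F) 11:00:00" +%s)
if [ "$now" -lt "$today_11" ]; then
    next=$today_11                  # still before 11:00 UTC today
else
    next=$(( today_11 + 86400 ))    # already past 11:00, fire tomorrow
fi
date -u -d "@$next"                 # when the timer would next fire
```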
Save the following playbook as a file:

- hosts: aws
  remote_user: root
  tasks:
    - name: test connection
      ping:
    - lineinfile:
        path: /etc/sysconfig/network-scripts/ifcfg-eth0
        regexp: '^DNS1='
        line: 'DNS1='
        attr: i  # make the file immutable so cloud-init can't overwrite it
    - name: Add docker-ce repositories
      get_url:
        dest: /etc/yum.repos.d/docker-ce.repo
    - name: install docker-ce and its prereqs
      action: >
        {{ ansible_pkg_mgr }} name={{ item }} state=present update_cache=yes
      with_items:
        - yum-utils
        - device-mapper-persistent-data
        - lvm2
        - docker-ce
    - name: Enable and start docker-ce.service
      systemd:
        name: docker
        enabled: yes
        state: started
    - name: write dockerprune service
      copy:
        src: ./dockerprune.service
        dest: /etc/systemd/system/dockerprune.service
    - name: write dockerprune timer
      copy:
        src: ./dockerprune.timer
        dest: /etc/systemd/system/dockerprune.timer
    - name: reload systemd daemon to pick up the new units
      systemd:
        daemon_reload: yes
    - name: Enable and start dockerprune.timer
      systemd:
        name: dockerprune.timer
        enabled: yes
        state: started

What is the purpose of the "docker system prune -af" systemd timer in this context and do you think this is needed? Why?

To always get the latest images (though I imagine there's a docker command for this that would be a better solution for that).

It can also help if the images somehow become corrupted (a crash or hard reboot damaged some files), by simply forcing them to be redownloaded periodically.

In this context, though, its real fit is cleaning up test/dev images daily.

What is the difference, if any, between using a systemd timer and cron?

Cron jobs are harder to debug and test than systemd units: you can't just 'systemctl start test.service' a cron job, you need to temporarily rewrite the schedule to '* * * * *' or similar, and that's just not elegant or convenient.

Cron has simplicity going for it, though: a script can be put on a timer with a single crontab line, whereas systemd needs a pair of unit files.
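For comparison, a one-line system-crontab equivalent of the two units above would look like this (cron interprets the time in the system's local timezone):

```
0 11 * * * root /usr/bin/docker system prune -af
```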

Note: The task specified that the command be run at 11:00 UTC+0. This works here because the provided servers' local timezone is UTC+0, but if it weren't, the job would run at a different wall-clock time. Timezone-qualified timestamps are supported since systemd 228, but CentOS 7 runs a much older version of systemd (219).
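On a distro shipping systemd ≥ 228, the timezone could instead be pinned directly in the timer, e.g.:

```
[Timer]
OnCalendar=*-*-* 11:00:00 UTC
```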

For what purposes can this setup be used and how would you improve it?

Since the cluster prunes all docker images daily, I imagine this would serve as a development server (cluster) where devs could create and test their images, which would automatically get cleaned up daily.

Ansible could work with private IPv4 addresses instead of public ones, since everything is on the same LAN; I just used the public IPv4 addresses for simplicity.

A secondary DNS server could be added. DNS is set by making the config file immutable to work around a bug in cloud-init.

The Ansible playbook seems okay, but since I first installed everything manually before turning it into a playbook, I don't have a clean-state server to verify it on.

intwtask.txt · Last modified: 2018/10/26 19:46 by c0rn3j