intwtask
Install and configure the Ansible automation tool on one of the servers
After installing Ansible, write and execute an Ansible playbook that performs the following on all servers:
  Reconfigure the DNS server from the current one to 8.8.8.8
  Install and start the latest version of Docker CE as well as its dependencies
  Add a systemd.timer configuration to run the shell command "docker system prune -af" as root every day at 11:00 UTC+0
Create a Docker Swarm cluster between all 3 nodes.
Launch a Docker container in swarm mode on all 3 nodes that runs an Nginx service and listens on port 8080.


Some theoretical questions that you should think about and provide your thoughts on:

  What is the purpose of the "docker system prune -af" systemd timer in this context, and do you think it is needed? Why?
  What is the difference, if any, between using a systemd timer and cron?
  For what purposes can this setup be used, and how would you improve it?

c1.rys.pw - ansible, docker manager
c2.rys.pw - docker worker1
c3.rys.pw - docker worker2

Personalization part, not needed in prod

sudo yum install wget nano git zsh tmux bind-utils -y
sudo useradd -m -G wheel -s /bin/zsh c0rn3j
sudo mkdir -p /home/c0rn3j/.ssh
echo "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN0UrYQJE+udiy4LldhUIzfuaKM6F3wBUV/CjQwMaksF c0rn3j@c0rn3jDesktop" | sudo tee /home/c0rn3j/.ssh/authorized_keys
sudo chown -R c0rn3j:c0rn3j /home/c0rn3j
echo "c0rn3j:asdf" | sudo chpasswd
# login to c0rn3j
wget https://github.com/robbyrussell/oh-my-zsh/raw/master/tools/install.sh -O - | zsh
sed -i 's/robbyrussell/mortalscumbag/g' ~/.zshrc
echo "c2" | sudo tee /etc/hostname
git clone https://github.com/zsh-users/zsh-autosuggestions
echo "source /home/c0rn3j/zsh-autosuggestions/zsh-autosuggestions.zsh" >> ~/.zshrc
sudo reboot

Install and set up Ansible:

c1 # sudo yum install -y ansible
c1 # nano /etc/ansible/hosts
[aws]
c1 ansible_host=c1.rys.pw #ansible
c2 ansible_host=c2.rys.pw
c3 ansible_host=c3.rys.pw

c1 # # Connect to each server as root once, or use an ugly hack to skip host key validation on connect
c1 $ ssh-keygen -t ed25519
c1 c2 c3 # copy the pubkey to root's authorized_keys - see the sketch below
c1 c2 c3 # https://docs.docker.com/install/linux/docker-ce/centos/ - I chose adding the repos
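
A minimal sketch of that key distribution, assuming root password logins are allowed (ssh-copy-id prompts once per host); the ANSIBLE_HOST_KEY_CHECKING line is the "ugly hack":

c1 $ for h in c1.rys.pw c2.rys.pw c3.rys.pw; do ssh-copy-id -i ~/.ssh/id_ed25519.pub root@$h; done
c1 $ export ANSIBLE_HOST_KEY_CHECKING=False  # skip host key validation instead of connecting to each host first
c1 $ ansible aws -m ping                     # verify Ansible can reach every host in the [aws] group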

The following is not needed, as it is handled by an Ansible task:

#sudo yum install -y yum-utils device-mapper-persistent-data lvm2
#sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
#sudo yum install -y docker-ce
#sudo systemctl enable --now docker # Is the docker.service meant as the "dependencies" in the given task?

#sudo usermod -aG docker c0rn3j # Add to group docker so `sudo` isn't needed for every command
# Re-log to get the new group

# docker run hello-world # Test that docker works fine

Set up the Docker cluster:

## https://docs.docker.com/engine/swarm/swarm-tutorial/deploy-service/
c1 $ docker swarm init --advertise-addr 172.16.0.112  ##<MANAGER-IP>

# Token is echoed out after swarm init on manager node
c2 c3 $ docker swarm join --token SWMTKN-1-2zxpv50or60geswi0le6vj5vtnxnuoopxr778eedlrusgu3dp8-a7qeawm439twfh7wb7lcz6y7v 172.16.0.112:2377
c1 $ docker service create --replicas 1 --name helloworld -p 8080:80 nginx
# Note: Worker nodes cannot see the service running, only the manager can (?!) - turns out swarm management commands like 'docker service ls' only work on manager nodes
# Test that it is running:
c1 $ docker container ls 
c1 $ docker service ls
c1 $ curl http://c1.rys.pw:8080 && curl http://c2.rys.pw:8080 && curl http://c3.rys.pw:8080
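
Note that --replicas 1 starts only one container; the curls above succeed on all three hosts because of swarm's ingress routing mesh, not because every node runs a container. To literally run one task per node, the service could be scaled up or created in global mode, e.g.:

c1 $ docker service scale helloworld=3
# or alternatively, one task on every node at creation time:
c1 $ docker service create --mode global --name helloworld -p 8080:80 nginx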

Save the following two files on the Ansible server, since the playbook needs them:

[0] % cat /etc/systemd/system/dockerprune.service
[Unit]
Description=Prune Docker images daily

[Service]
# oneshot: the command runs to completion each time the timer fires;
# no [Install] section is needed since only the timer starts this unit
Type=oneshot
ExecStart=/usr/bin/docker system prune -af



[0] % cat /etc/systemd/system/dockerprune.timer
[Unit]
Description=Run docker prune daily at 11:00 UTC+0

[Timer]
OnCalendar=*-*-* 11:00:00

[Install]
WantedBy=timers.target
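
Once deployed, the timer can be verified on any node with standard systemctl commands:

$ systemctl list-timers dockerprune.timer      # shows the next and last trigger time
$ sudo systemctl start dockerprune.service     # trigger the prune manually to test it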

Save the following playbook as a file:

---
- hosts: aws
  remote_user: root
  tasks:
    - name: test connection
      ping:
    - name: Set DNS1 to 8.8.8.8 and make the file immutable (cloud-init workaround)
      lineinfile:
        path: /etc/sysconfig/network-scripts/ifcfg-eth0
        regexp: '^DNS1='
        line: 'DNS1=8.8.8.8'
        attr: i # immutable attribute, so cloud-init can't overwrite the file
    - name: Add docker-ce repositories
      get_url:
        url: https://download.docker.com/linux/centos/docker-ce.repo
        dest: /etc/yum.repos.d/docker-ce.repo
    - name: install docker-ce and its prereqs
      action: >
        {{ ansible_pkg_mgr }} name={{ item }} state=present update_cache=yes
      with_items:
        - yum-utils
        - device-mapper-persistent-data
        - lvm2
        - docker-ce
    - name: Enable and start docker-ce.service
      service:
        name: docker
        enabled: yes
        state: started
    - name: write dockerprune service
      template:
        src: ./dockerprune.service
        dest: /etc/systemd/system/dockerprune.service
    - name: write dockerprune timer
      template:
        src: ./dockerprune.timer
        dest: /etc/systemd/system/dockerprune.timer
    - name: reload systemd daemon to notice the new services
      systemd:
        daemon_reload: yes
    - name: Enable and start timer dockerprune
      service:
        name: dockerprune.timer
        enabled: yes
        state: started
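
Run it from c1 (the filename playbook.yml is my choice, save it as anything):

c1 $ ansible-playbook playbook.yml --check   # dry run first
c1 $ ansible-playbook playbook.yml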

        
What is the purpose of the "docker system prune -af" systemd timer in this context, and do you think it is needed? Why?

To always get the latest images (though I imagine there's a Docker command for this that would be a better solution).

It can also help if the images are somehow broken (a crash or hard reboot damaged some files), since they get deleted and redownloaded periodically.

In this context, though, it fits as a daily cleanup of test/dev images.
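
For reference, the command I had in mind is probably something like the following; re-pulling an image (or forcing a service update) refreshes it without wiping everything else:

$ docker pull nginx:latest                                 # refresh a single image
$ docker service update --image nginx:latest helloworld    # roll the swarm service onto the new image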

What is the difference, if any, between using a systemd timer and cron?

Cron jobs are harder to debug and test than systemd units: it's not possible to just 'systemctl start test.service' a cron job, you need to temporarily rewrite the schedule to '* * * * *' or similar, and that's just not elegant or convenient.

Cron has simplicity going for it though, as with systemd units you can't just create a command that launches on a timer in one line.
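
To illustrate the one-liner point, the same daily prune as a single entry in root's crontab:

0 11 * * * /usr/bin/docker system prune -af

With systemd this takes the two unit files above plus enabling the timer.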

Note: The task specified that the job should run at UTC+0. This works here since the provided servers' local timezone is +0, but if it weren't, the job would run at a different time. Specifying UTC in calendar timestamps is supported since systemd 228, but CentOS 7 runs a much older version of systemd - https://github.com/systemd/systemd/blob/dd050decb6ad131ebdeabb71c4f9ecb4733269c0/NEWS#L23-L30
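
On a distribution with systemd >= 228, the timer could pin the timezone explicitly instead of relying on the host clock:

[Timer]
OnCalendar=*-*-* 11:00:00 UTC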

For what purposes can this setup be used, and how would you improve it?

Since the cluster prunes all Docker images daily, I imagine this would serve as a development server (cluster) where devs could create and test their images, which would automatically get cleaned up daily.

Ansible could work with private IPv4 addresses instead of public ones, since everything is on the same LAN. I just used the public IPv4 addresses for simplicity.

The secondary DNS could be 1.1.1.1. DNS is set by making the config file immutable to work around a bug in cloud-init: https://bugs.launchpad.net/cloud-init/+bug/1712680
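
The immutable flag set by the playbook's 'attr: i' can be checked, and removed when the file needs editing again:

$ lsattr /etc/sysconfig/network-scripts/ifcfg-eth0           # the 'i' flag should be listed
$ sudo chattr -i /etc/sysconfig/network-scripts/ifcfg-eth0   # drop the flag to make the file editable again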

The Ansible playbook seems okay, but since I first installed everything manually before turning it into a playbook, I don't have a clean-state server to verify it on.
