Introduction

OpenShift is Red Hat’s cloud computing Platform as a Service (PaaS), which offers developers several ways to build, test, and run applications quickly, simply, and scalably.

OpenShift “takes care” of all the complexity related to infrastructure, middleware, and management, keeping the developer focused on application design and coding.

The latest version is currently 4.x, which brought numerous conveniences such as Installer-Provisioned Infrastructure (IPI), making installation much easier even for less experienced users. In this article we will focus on version 3.x, because it is an established release that has been widely used over the last four years.

Explaining some terms:

  • OpenShift Enterprise - Available only to customers who have purchased a Red Hat subscription. This version can be installed and configured on Amazon, Azure, or any company’s internal infrastructure.

  • OpenShift OKD - The community version, which any developer or sysadmin can install and configure on Amazon, Azure, or internal infrastructure. It is not officially supported and no SLA is offered.

  • OpenShift Online - Online version where the developer pays for “usage” (CPU, Memory, etc.). Interaction is performed through the command line.

  • OpenShift Dedicated - A version used by large companies that do not want to handle installation, configuration, upgrades, and so on, leaving those tasks to the Red Hat engineering team.

  • OpenShift Minishift - A version intended for local execution only. Developers use it to observe application behavior before shipping to a full OpenShift cluster.

Using OpenShift you can quickly see the following advantages:

  • Self-service platform - Developers can create their own applications in different languages (polyglot, multi-language support), and resource limits and quotas can be set for each project (see the example after this list).

  • Scalable - You can scale to dozens of pods in just a few minutes, and this can happen automatically.

  • Open source / choice of cloud - In addition to no vendor lock-in, you can deploy across multiple platforms: Amazon, Microsoft Azure, Google, and even on-premise.
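
For example, quotas can be applied per project from the command line. A minimal sketch, assuming a hypothetical project named “myproject” (the limits are illustrative):

[root@bastion ~]# oc create quota my-quota -n myproject --hard=cpu=2,memory=4Gi,pods=10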

OpenShift Platform Key Concepts

  • Containers - A running instance of an application such as JBoss, WordPress, MySQL, etc.

  • Pods - A set of one or more containers that are deployed and managed together.

  • Nodes - A RHEL/CentOS or CoreOS instance where the applications actually run.

  • Masters - A RHEL/CentOS or CoreOS instance responsible for managing everything that happens in the cluster.

Briefly, we can visualize the following flow (see the command after this list):

  • Applications are packaged in Docker images;
  • Applications are executed in containers and grouped into pods;
  • Pods/containers run on nodes;
  • Nodes are managed by the master.
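
Once a cluster is running, this mapping is easy to observe: a single command lists the pods in a project and the node each one runs on (“myproject” is a hypothetical project name):

[root@bastion ~]# oc get pods -o wide -n myproject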

Installing OpenShift OKD 3.11

As I said earlier, OpenShift can be installed on many platforms; for this article’s tests we will use oVirt, a free, open-source virtualization platform.

As I want to show how to create a lab environment, I will not configure HA for the management layer (master), but I will configure “HA” for the application layer (routers) for learning purposes.

Prerequisites:

  • The servers must have CentOS 7.7 installed.
  • DNS needs to be working; the servers must be able to reach each other by name (see the quick check after the server list below).
  • An extra disk attached to each server.

I will use the following servers:

  • master-01.mmagnani.lab - 10.0.0.80
  • node-01.mmagnani.lab - 10.0.0.81
  • node-02.mmagnani.lab - 10.0.0.82
  • infra-01.mmagnani.lab - 10.0.0.83
  • infra-02.mmagnani.lab - 10.0.0.84
  • bastion.mmagnani.lab - 10.0.0.116
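
You can quickly confirm that DNS resolves these names from the bastion. A minimal sketch, assuming the host utility (from bind-utils) is installed:

[root@bastion ~]# for h in master-01 node-01 node-02 infra-01 infra-02 bastion; do host $h.mmagnani.lab; done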

We will execute all commands from the “bastion.mmagnani.lab” server; as the name suggests, it will be our central execution point. The first step, of course, is to install Ansible and create our inventory so that commands can be executed on all nodes:

[root@bastion ~]# yum update -y
[root@bastion ~]# yum install https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.6.9-1.el7.ans.noarch.rpm -y
[root@bastion ~]# mkdir -p /root/workspace/okd
[root@bastion ~]# cd /root/workspace/okd
[root@bastion ~]# vi inventory

[all]
master-01.mmagnani.lab
node-01.mmagnani.lab
node-02.mmagnani.lab
infra-01.mmagnani.lab
infra-02.mmagnani.lab

[root@bastion ~]# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
[root@bastion ~]# ssh-copy-id master-01.mmagnani.lab
[root@bastion ~]# ssh-copy-id node-01.mmagnani.lab 
[root@bastion ~]# ssh-copy-id node-02.mmagnani.lab 
[root@bastion ~]# ssh-copy-id infra-01.mmagnani.lab 
[root@bastion ~]# ssh-copy-id infra-02.mmagnani.lab

We now have access to execute commands on all nodes using Ansible. You can test by pinging:

[root@bastion ~]# ansible all -i inventory -m ping
node-01.mmagnani.lab | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
infra-01.mmagnani.lab | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
infra-02.mmagnani.lab | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
node-02.mmagnani.lab | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
master-01.mmagnani.lab | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

To update all nodes, run:

[root@bastion ~]# ansible all -i inventory -m shell -a "yum update -y"

SELinux must be in enforcing mode on all nodes; verify with:

[root@bastion ~]# ansible all -i inventory -m shell -a "getenforce"
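
Every node should print “Enforcing”. If any node reports Permissive, you can switch it back and make the change persistent across reboots (a sketch; setenforce only works when SELinux is not fully disabled):

[root@bastion ~]# ansible all -i inventory -m shell -a "setenforce 1"
[root@bastion ~]# ansible all -i inventory -m shell -a "sed -i 's/^SELINUX=.*/SELINUX=enforcing/' /etc/selinux/config"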

In this setup, firewalld must be disabled on all nodes; run:

[root@bastion ~]# ansible all -i inventory -m shell -a "systemctl stop firewalld"
[root@bastion ~]# ansible all -i inventory -m shell -a "systemctl disable firewalld"
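
To double-check the result (is-enabled exits non-zero for a disabled unit, so || true keeps the ad-hoc run from being reported as failed):

[root@bastion ~]# ansible all -i inventory -m shell -a "systemctl is-enabled firewalld || true"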

Install the EPEL repository on all nodes and then disable it by default (we will enable it explicitly with --enablerepo when needed):

[root@bastion ~]# ansible all -i inventory -m shell -a "yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm -y"
[root@bastion ~]# ansible all -i inventory -m shell -a "sed -i -e 's/^enabled=1/enabled=0/' /etc/yum.repos.d/epel.repo"

Install the following packages on all nodes:

[root@bastion ~]# ansible all -i inventory -m shell -a "yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct -y"
[root@bastion ~]# ansible all -i inventory -m shell -a "yum --enablerepo=epel install pyOpenSSL -y"

On all nodes there is an extra disk, “/dev/sdb”, that will be used by Docker:

[root@bastion ~]# ansible all -i inventory -m shell -a "yum install docker-1.13.1 -y"
[root@bastion ~]# ansible all -i inventory -m shell -a "pvcreate /dev/sdb"
[root@bastion ~]# ansible all -i inventory -m shell -a "vgcreate docker_vol /dev/sdb"
[root@bastion ~]# ansible all -i inventory -m shell -a 'echo VG=docker_vol > /etc/sysconfig/docker-storage-setup'
[root@bastion ~]# ansible all -i inventory -m shell -a "docker-storage-setup"
[root@bastion ~]# ansible all -i inventory -m shell -a "systemctl enable docker"
[root@bastion ~]# ansible all -i inventory -m shell -a "systemctl start docker"
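
To confirm that docker-storage-setup actually built the thin pool on /dev/sdb, you can inspect the volume group and Docker’s storage driver (a quick sanity check; exact output varies):

[root@bastion ~]# ansible all -i inventory -m shell -a "lvs docker_vol"
[root@bastion ~]# ansible all -i inventory -m shell -a "docker info | grep -i storage"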

Now clone the openshift-ansible repository:

[root@bastion ~]# pwd
/root/workspace/okd
[root@bastion ~]# git clone https://github.com/openshift/openshift-ansible.git
[root@bastion ~]# cd openshift-ansible
[root@bastion ~]# git checkout release-3.11

Now comes one of the most important steps: configuring “/etc/ansible/hosts”, which defines how OpenShift will be installed:

[root@bastion ~]# vi /etc/ansible/hosts
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]

ansible_become=yes
ansible_ssh_user=root

openshift_deployment_type=origin
deployment_type=origin

openshift_master_default_subdomain=cloudapps.mmagnani.lab

openshift_master_cluster_hostname=master-01.mmagnani.lab

osm_use_cockpit=true
openshift_clock_enabled=true

openshift_docker_options='--selinux-enabled --insecure-registry 172.30.0.0/16'

osm_default_node_selector='node-role.kubernetes.io/compute=true'
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={'admin': '$apr1$LDUvvylp$sOP3cx4rTgyNpkPGtaULx.'}

openshift_disable_check=disk_availability,docker_storage,memory_availability,package_version
openshift_enable_unsupported_configurations=True
openshift_cluster_monitoring_operator_install=false

openshift_hosted_router_replicas=2
openshift_router_selector='node-role.kubernetes.io/infra=true'

[masters]
master-01.mmagnani.lab

[etcd]
master-01.mmagnani.lab

[nodes]
master-01.mmagnani.lab openshift_node_group_name='node-config-master'
node-01.mmagnani.lab openshift_node_group_name='node-config-compute'
node-02.mmagnani.lab openshift_node_group_name='node-config-compute'
infra-01.mmagnani.lab openshift_node_group_name='node-config-infra'
infra-02.mmagnani.lab openshift_node_group_name='node-config-infra'

If you want to know the details of each parameter, check the documentation: OpenShift Docs
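
A note on openshift_master_htpasswd_users: the value is an htpasswd-style hash, which you can generate with the htpasswd tool from the httpd-tools package. Replace YOUR_PASSWORD below with the password you want for the admin user; the command prints “admin:” followed by the hash to paste into the inventory:

[root@bastion ~]# yum install httpd-tools -y
[root@bastion ~]# htpasswd -nb admin YOUR_PASSWORD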

Now create the DNS entries; this is very important.

For the parameter openshift_master_default_subdomain=cloudapps.mmagnani.lab, you should create a “wildcard” record pointing to the hosts where the routers will be deployed, which in this case are infra-01 and infra-02:

  • *.cloudapps.mmagnani.lab - 10.0.0.83 and 10.0.0.84
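
In a BIND zone file for mmagnani.lab, that wildcard could look like this (a sketch; adapt the syntax to whatever DNS server you use):

*.cloudapps    IN A    10.0.0.83
*.cloudapps    IN A    10.0.0.84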

You are now ready to begin the installation of OpenShift. Run the “prerequisites” playbook:

[root@bastion ~]# pwd
/root/workspace/okd/openshift-ansible
[root@bastion ~]#  ansible-playbook playbooks/prerequisites.yml

If all went well, run the installation playbook:

[root@bastion ~]# pwd
/root/workspace/okd/openshift-ansible
[root@bastion ~]#  ansible-playbook playbooks/deploy_cluster.yml

At the end, the playbook summary should report that the deployment completed with no failed tasks.

Fetch the cluster admin kubeconfig to the bastion and make sure all nodes are in “Ready” status:

[root@bastion ~]# ansible 'masters[0]' -b -m fetch -a "src=/root/.kube/config dest=/root/.kube/config flat=yes"
[root@bastion ~]# oc whoami
[root@bastion ~]# oc get nodes
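
Since we set openshift_hosted_router_replicas=2, it is also worth confirming that both router pods landed on the infra nodes; in OpenShift 3.x the router runs in the default project (an illustrative check):

[root@bastion ~]# oc -n default get pods -o wide | grep router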

OpenShift OKD has been successfully installed! In the next part, we will look at how to add persistence and check the health of key components.

Best,