OpenShift OKD 3.11

Want to learn OpenShift? Here's how to install and set up OpenShift OKD to get you started.

OKD: The Origin Community Distribution of Kubernetes
OKD is the Origin community distribution of Kubernetes, optimized for continuous application development and multi-tenant deployment. OKD adds developer- and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. OKD is also referred to as Origin on GitHub and in the documentation.

Features:

Easily build applications with integrated service discovery and persistent storage.
Quickly and easily scale applications to handle periods of increased demand.
Support for automatic high availability, load balancing, health checking, and failover.
Push source code to your Git repository and automatically deploy containerized applications.
Web console and command-line client for building and monitoring applications.
Centralized administration and management of an entire stack, team, or organization.
Create reusable templates for components of your system, and iteratively deploy them over time.
Roll out modifications to software stacks to your entire organization in a controlled fashion.
Integration with your existing authentication mechanisms, including LDAP, Active Directory, and public OAuth providers such as GitHub.
Multi-tenancy support, including team and user isolation of containers, builds, and network communication.
Allow developers to run containers securely with fine-grained controls in production.
Limit, track, and manage the developers and teams on the platform.
Integrated Docker registry, automatic edge load balancing, cluster logging, and integrated metrics.

Overview:

OpenShift v3 is a layered system designed to expose underlying Docker-formatted container image and Kubernetes concepts as accurately as possible, with a focus on easy composition of applications by a developer. For example, install Ruby, push code, and add MySQL.
Unlike OpenShift v2, OpenShift v3 exposes more configuration flexibility after creation in all aspects of the model. The concept of an application as a separate object is removed in favor of more flexible composition of "services", allowing two web containers to reuse a database or expose a database directly to the edge of the network.

What Are the Layers?
The Docker service provides the abstraction for packaging and creating Linux-based, lightweight container images. Kubernetes provides the cluster management and orchestrates containers on multiple hosts.

OKD adds:
Source code management, builds, and deployments for developers
Managing and promoting images at scale as they flow through your system
Application management at scale
Team and user tracking for organizing a large developer organization
Networking infrastructure that supports the cluster.

OKD has a microservices-based architecture of smaller, decoupled units that work together. It runs on top of a Kubernetes cluster, with data about the objects stored in etcd, a reliable clustered key-value store. Those services are broken down by function:
REST APIs, which expose each of the core objects.
Controllers, which read those APIs, apply changes to other objects, and report status or write back to the object.

Users make calls to the REST API to change the state of the system. Controllers use the REST API to read the user’s desired state, and then try to bring the other parts of the system into sync. For example, when a user requests a build they create a "build" object. The build controller sees that a new build has been created, and runs a process on the cluster to perform that build. When the build completes, the controller updates the build object via the REST API and the user sees that their build is complete.
The controller pattern means that much of the functionality in OKD is extensible. The way that builds are run and launched can be customized independently of how images are managed, or how deployments happen. The controllers are performing the "business logic" of the system, taking user actions and transforming them into reality. By customizing those controllers or replacing them with your own logic, different behaviors can be implemented. From a system administration perspective, this also means the API can be used to script common administrative actions on a repeating schedule. Those scripts are also controllers that watch for changes and take action. OKD makes the ability to customize the cluster in this way a first-class behavior.
To make this possible, controllers leverage a reliable stream of changes to the system to sync their view of the system with what users are doing. This event stream pushes changes from etcd to the REST API and then to the controllers as soon as changes occur, so changes can ripple out through the system very quickly and efficiently.
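A small illustration of this pattern, assuming a running cluster and the oc client: any script that watches objects through the API and reacts to changes is, in effect, a little controller of its own. For example, the following command streams build events as they happen (the resource is just an example; any object type can be watched):

oc get builds --watch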

Concepts:

OKD builds a developer-centric workflow around Docker containers and Kubernetes runtime concepts. An Image Stream lets you easily tag, import, and publish Docker images from the integrated registry. A Build Config allows you to launch Docker builds, build directly from source code, or trigger Jenkins Pipeline jobs whenever an image stream tag is updated. A Deployment Config allows you to use custom deployment logic to roll out your application, and Kubernetes workflow objects like DaemonSets, Deployments, or StatefulSets are upgraded to automatically trigger when new images are available. Routes make it trivial to expose your Kubernetes services via a public DNS name. As an administrator, you can enable your developers to request new Projects, which come with predefined roles, quotas, and security controls to fairly divide access.
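Each of the concepts above maps to an API object you can inspect with the oc client once a cluster is up. The commands below are only meant to connect the terms to their resource names; they assume you are logged in to a working cluster:

oc get imagestreams
oc get buildconfigs
oc get deploymentconfigs
oc get routes
oc get projects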

What can we run on OKD?

OKD is designed to run any existing Docker images. Additionally, you can define builds that will produce new Docker images using a Dockerfile.
For an easier experience running your source code, Source-to-Image (S2I) allows developers to simply provide an application source repository containing code to build and run. It works by combining an existing S2I-enabled Docker image with application source to produce a new runnable image for your application.

Some of the available images include: Ruby, Python, Node.js, PHP, Perl, WildFly.

Your application image can be easily extended with a database service with our database images: MySQL, MongoDB, PostgreSQL.
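As a rough sketch of the S2I flow described above, a single oc new-app invocation can build a runnable image straight from a source repository, and a database image can be added next to it. The builder image, sample repository, and names below are illustrative placeholders, not part of this install:

oc new-app centos/python-36-centos7~https://github.com/sclorg/django-ex.git --name=web
oc new-app centos/postgresql-96-centos7 --name=db
oc expose svc/web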

Single master and node on one system

You can install OKD on a single system, for a development environment only.

Check the yum repo list
[root@openshift ~]# yum repolist

Install epel-release
################
[root@openshift ~]# yum install epel-release -y

Install required packages
####################
[root@openshift ~]# yum install wget git net-tools bind-utils yum-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct -y

Install ansible
#############
[root@openshift ~]# yum install ansible pyOpenSSL -y

Note: The Ansible version must be below 2.8.0 for OKD 3.11.
If you run yum install ansible -y, it will automatically install the latest 2.8.x version, which is not supported for OKD 3.11.

Check Ansible version
#################
[root@openshift ~]# ansible --version
ansible 2.8.0


Note: If Ansible 2.8.0 is installed as above, downgrade it to 2.7.9

Downgrade Ansible version
##################
[root@openshift openshift-ansible]# yum install https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.7.9-1.el7.ans.noarch.rpm -y

Check Ansible version now
##################
[root@openshift openshift-ansible]# ansible --version
ansible 2.7.9
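
Optionally, you can lock the Ansible package so the yum update in the next step does not pull 2.8.x back in. This uses the yum versionlock plugin, an extra safeguard that is not part of the original steps:

[root@openshift openshift-ansible]# yum install yum-plugin-versionlock -y
[root@openshift openshift-ansible]# yum versionlock ansible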
 

Update your packages
################
[root@openshift ~]# yum update -y

Install docker
###############
[root@openshift ~]# yum -y install docker

Start and enable docker
##################
[root@openshift ~]# systemctl restart docker && systemctl enable docker

Edit the /etc/sysconfig/docker file and add the line below
###################################
[root@openshift ~]# vim /etc/sysconfig/docker

" OPTIONS='--insecure-registry=172.30.0.0/16 --selinux-enabled --log-opt max-size=1M --log-opt max-file=3' "

Restart docker
############
[root@openshift ~]# systemctl restart docker
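
To confirm the option was picked up after the restart, you can check the Docker daemon info (the grep is just a convenience; the exact output layout varies between Docker versions):

[root@openshift ~]# docker info | grep -A 2 -i insecure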

Generate SSH key with RSA
###################
[root@openshift ~]# ssh-keygen

Copy the key to your localhost/IP
#####################
[root@openshift ~]# ssh-copy-id 127.0.0.1

Log in to localhost to verify that passwordless SSH is set up
#########################################
[root@openshift ~]# ssh 127.0.0.1
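
A quick non-interactive check also works; BatchMode makes ssh fail instead of prompting if the key is not set up:

[root@openshift ~]# ssh -o BatchMode=yes 127.0.0.1 hostname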

Clone the openshift-ansible Git repository
########################
[root@openshift ~]# git clone https://github.com/openshift/openshift-ansible
[root@openshift ~]# cd openshift-ansible

Check out the 3.11 release branch
#######################
[root@openshift openshift-ansible]# git checkout release-3.11

Create inventory file
########################
[root@openshift ~]# vim openshift_inventory
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root

# General Cluster Variables
openshift_deployment_type=origin
openshift_disable_check=disk_availability,docker_storage,memory_availability

# Cluster Authentication Variables
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

# Service Catalog Variables
openshift_enable_service_catalog=True
openshift_service_catalog_image_version=v3.11
ansible_service_broker_install=false
openshift_install_examples=true

# OpenShift Networking Variables
os_firewall_use_firewalld=true
openshift_master_cluster_hostname=openshift.example.com
openshift_master_default_subdomain=openshift.example.com

# Additional Repos
openshift_additional_repos=[{'id': 'centos-paas', 'name': 'centos-paas', 'baseurl': 'https://buildlogs.centos.org/centos/7/paas/x86_64/openshift-origin311', 'gpgcheck': '0', 'enabled': '1'}]

# Defining Nodes
[masters]
openshift.example.com openshift_schedulable=true

[etcd]
openshift.example.com

[nodes]
openshift.example.com openshift_schedulable=true openshift_node_group_name='node-config-master-infra'
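
Before running the playbooks, a quick connectivity check against the inventory can save a long failed run. This assumes the hostname in the inventory already resolves (for example via the /etc/hosts entry shown near the end of this post):

[root@openshift openshift-ansible]# ansible -i /root/openshift_inventory all -m ping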


Make sure your firewalld service is running 
#############################
[root@openshift openshift-ansible]# systemctl status firewalld.service
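
If it is not running, start and enable it before the install (same pattern used for docker earlier):

[root@openshift openshift-ansible]# systemctl start firewalld && systemctl enable firewalld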

Run the prerequisites playbook
############################## 
[root@openshift openshift-ansible]# ansible-playbook -i /root/openshift_inventory playbooks/prerequisites.yml -vvv

Run the deploy_cluster playbook
################################
[root@openshift openshift-ansible]# ansible-playbook -i /root/openshift_inventory playbooks/deploy_cluster.yml -vvv

You will see output like the below once the cluster is created
########################################
PLAY RECAP **********************************************************************************************
192.168.0.25               : ok=573  changed=171  unreachable=0    failed=0  
localhost                  : ok=11   changed=0    unreachable=0    failed=0  

INSTALLER STATUS ****************************************************************************************
Initialization               : Complete (0:00:11)
Health Check                 : Complete (0:00:05)
Node Bootstrap Preparation   : Complete (0:02:41)
etcd Install                 : Complete (0:00:54)
Master Install               : Complete (0:04:36)
Master Additional Install    : Complete (0:00:59)
Node Join                    : Complete (0:00:15)
Hosted Install               : Complete (0:00:52)
Cluster Monitoring Operator  : Complete (0:02:59)
Web Console Install          : Complete (0:01:01)
Console Install              : Complete (0:00:34)
metrics-server Install       : Complete (0:00:00)

Check the firewall ports that were opened
########################
[root@openshift openshift-ansible]# firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: wlp2s0
  sources:
  services: ssh dhcpv6-client
  ports: 10250/tcp 10256/tcp 80/tcp 443/tcp 4789/udp 9000-10000/tcp 1936/tcp 2379/tcp 2380/tcp 8443/tcp 8444/tcp 8053/tcp 8053/udp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
 

Now check the nodes, pods, services, and routes with oc
################################
[root@openshift openshift-ansible]# oc get nodes
NAME                    STATUS    ROLES          AGE       VERSION
openshift.example.com   Ready     infra,master   15m       v1.11.0+d4cacc0

[root@openshift openshift-ansible]# oc get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-1-l74ns    1/1       Running   0          5m
registry-console-1-2jc5l   1/1       Running   0          5m
router-1-km8vg             1/1       Running   0          5m

[root@openshift openshift-ansible]# oc get svc
NAME               TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry    ClusterIP   172.30.186.212   <none>        5000/TCP                  5m
kubernetes         ClusterIP   172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     16m
registry-console   ClusterIP   172.30.128.249   <none>        9000/TCP                  5m
router             ClusterIP   172.30.37.246    <none>        80/TCP,443/TCP,1936/TCP   5m

[root@openshift openshift-ansible]# oc get route
NAME               HOST/PORT                                                   PATH      SERVICES           PORT      TERMINATION   WILDCARD
docker-registry    docker-registry-default.router.default.svc.cluster.local              docker-registry    <all>     passthrough   None
registry-console   registry-console-default.router.default.svc.cluster.local             registry-console   <all>     passthrough   None

Create the admin user and password
####################
[root@openshift openshift-ansible]# htpasswd -c /etc/origin/master/htpasswd admin
New password:
Re-type new password:
Adding password for user admin

Check whoami
###########
[root@openshift openshift-ansible]# oc whoami
system:admin
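
Note that oc whoami still reports system:admin because the master's local kubeconfig is in use. The htpasswd user created above has no cluster roles yet; if you want it to administer the cluster, a typical (optional) follow-up is to grant it cluster-admin and then log in as that user:

[root@openshift openshift-ansible]# oc adm policy add-cluster-role-to-user cluster-admin admin
[root@openshift openshift-ansible]# oc login -u admin https://openshift.example.com:8443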

Check the listening ports
##################
[root@openshift openshift-ansible]# netstat -tnlp | grep openshift
tcp        0      0 0.0.0.0:8053            0.0.0.0:*               LISTEN      68048/openshift    
tcp        0      0 127.0.0.1:53            0.0.0.0:*               LISTEN      7768/openshift     
tcp        0      0 127.0.0.1:11256         0.0.0.0:*               LISTEN      7768/openshift     
tcp        0      0 0.0.0.0:8443            0.0.0.0:*               LISTEN      68048/openshift    
tcp        0      0 0.0.0.0:8444            0.0.0.0:*               LISTEN      68382/openshift    
tcp6       0      0 :::1936                 :::*                    LISTEN      14223/openshift-rou
tcp6       0      0 :::10256                :::*                    LISTEN      7768/openshift   

Add the host entry to the /etc/hosts file
############################
[root@openshift openshift-ansible]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.0.25 openshift.example.com

Now open the OKD dashboard/console in your browser at https://openshift.example.com:8443/ and log in as admin with the password set above.
#################################################
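
As a final smoke test from the command line, you can deploy a trivial application and expose it. The hello-openshift image is a small upstream test image and the project name is just an example:

[root@openshift openshift-ansible]# oc new-project smoke-test
[root@openshift openshift-ansible]# oc new-app openshift/hello-openshift
[root@openshift openshift-ansible]# oc expose svc/hello-openshift
[root@openshift openshift-ansible]# oc get route hello-openshift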
