Run containerized python app in kubernetes

First of all, we need a Docker image that will run inside the Kubernetes cluster, so I assume you already have a Kubernetes cluster. The next thing to do is build the Docker image, or you can use an image of your own.

In this tutorial, I will show you how to run a containerized Python app using my version of the image, from the start.

What we need

These applications should be installed on your machine before getting started. In my case, I use a remote server running Ubuntu 16.04.

1. Docker
2. Kubernetes

If you don't have a Kubernetes cluster yet, see the "Setup Kubernetes on Ubuntu 16.04" section below.

Build docker image

Let's begin by cloning my repo, which contains the Dockerfile to build the image:

$ git clone https://github.com/muffat/docker-images.git
$ cd docker-images/simple-python-app/
~/docker-images/simple-python-app$ sudo docker build -t simple .

Wait until the build finishes successfully. Then you'll see a new Docker image when you type this command:

$ docker images
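
For context, the image just wraps a small Flask app that answers with a JSON greeting on port 5002. This is a rough sketch of what such an app looks like (an assumption for illustration, not the exact source from the repo):

# app.py -- minimal Flask service (hypothetical sketch, not the exact repo source)
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/')
def index():
    # Matches the JSON we'll see from curl later in this post
    return jsonify(message='welcome', status='ok')

if __name__ == '__main__':
    # Listen on all interfaces on port 5002, the port we expose in Kubernetes later
    app.run(host='0.0.0.0', port=5002)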

Push docker image to repository (docker hub)

Before pushing the image to Docker Hub, we need to tag the successfully built image, using the image ID shown by docker images:

$ docker tag fbd064597ae4 cerpin/simple:1.0

Push the image

$ docker push cerpin/simple
The push refers to a repository [docker.io/cerpin/simple]
bc69ee44ef1a: Pushed 
7957c9ab59bb: Pushed 
2366fc011ccb: Pushed 
b18f9eea2de6: Pushed 
6213b3fcd974: Pushed 
fa767832af66: Pushed 
bcff331e13e3: Mounted from cerpin/test 
2166dba7c95b: Mounted from cerpin/test 
5e95929b2798: Mounted from cerpin/test 
c2af38e6b250: Mounted from cerpin/test 
0a42ee6ceccb: Mounted from cerpin/test

Once the push finishes, you will have the Docker image in the repository, ready to use:

cerpin/simple:1.0

Run the image in kubernetes

First of all, I'm not a big fan of typing the kubectl command, so I usually make a symlink to give it a shorter name:

$ sudo ln -s /usr/bin/kubectl /usr/bin/cap

Run the docker image in kubernetes

$ cap run simple --image=cerpin/simple:1.0

Then the pod will be created. Just wait a moment until its state becomes Running:

$ cap get pods
simple-79d85db8b9-466kd 1/1 Running 0 26m

Once it's ready, expose the deployment on port 5002 as a LoadBalancer service, so the app becomes accessible from the outside world:

$ cap expose deployment simple --type=LoadBalancer --port=5002

Check the service that has been exposed:

$ cap get services
simple LoadBalancer 10.105.115.251 <pending> 5002:31969/TCP 21m

You will see that port 5002 has been mapped to node port 31969. Because there is no cloud load balancer behind this cluster, the external IP stays <pending>, so we reach the app through the server's own IP and the node port.

If you open up a browser and navigate to http://<external IP>:31969, you'll see that the app is running.

Or, just use a curl command instead:

$ curl http://167.xxx.xxx.xxx:31969
{
"message": "welcome", 
"status": "ok"
}

Setup Kubernetes on Ubuntu 16.04

Summary

This guide installs Kubernetes on an Ubuntu 16.04 (64-bit) machine. I did this on a cloud server and it worked perfectly.

$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https
$ sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
$ sudo apt-get update -y
$ sudo apt install docker.io
$ sudo apt-get install -y kubelet kubeadm kubernetes-cni
$ cat /proc/swaps
$ sudo swapoff -a
$ sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=<private IP>
$ sudo useradd kube -G sudo -m
$ sudo passwd kube
$ sudo su - kube
$ sudo cp /etc/kubernetes/admin.conf $HOME/
$ sudo chown $(id -u):$(id -g) $HOME/admin.conf
$ export KUBECONFIG=$HOME/admin.conf
$ echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc

Check the pod status and wait until everything is running, then install a pod network add-on:

$ kubectl get pods --all-namespaces
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

# or

$ kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml
$ kubectl taint nodes --all node-role.kubernetes.io/master-

Install kubernetes dashboard

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Create dashboard user

create-user.yml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system

create-role.yml

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

$ kubectl create -f create-user.yml
$ kubectl create -f create-role.yml
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

The last command prints the bearer token you use to log in to the dashboard.

Create AWS codebuild project with Terraform

Summary

AWS CodeBuild is a fully managed build service that compiles source code, runs tests, and produces software packages that are ready to deploy. To make it easier, we can create its infrastructure using Terraform.

Setup directory structure

Before we begin, we create a directory structure for the infrastructure. Why is this important? Because whenever we set something up and later want to change it, we can revisit these files and change only what's necessary. To do this, simply create the structure like this:

$ mkdir test-codebuild
$ cd test-codebuild
~test-codebuild$ touch main.tf vars.tf terraform.tfvars buildspec.yml

Write some Terraform code

Let's write the code! Fill in each of the files we created:

main.tf

provider "aws" {
  region = "ap-southeast-1"
}

terraform {
  backend "s3" {
    bucket = "terraform-state-test-pulpn"
    key    = "test-codebuild-project"
    region = "ap-southeast-1"
  }
}

module "codebuild" {
  source       = "git::ssh://git@github.com/muffat/tf-codebuild-module.git?ref=master"
  project_name = "${var.project_name}"
  description  = "${var.description}"
  bucket_name  = "${var.bucket_name}"
  repo_type    = "${var.repo_type}"
  repo_url     = "${var.repo_url}"
  team         = "${var.team}"
  image_name   = "${var.image_name}"
  buildspec    = "${file("buildspec.yml")}"
}

terraform.tfvars

In this file, we define our project based on what we need. You might need to change each variable according to what fits your needs.

project_name = "test-project"
description  = "test python project"
bucket_name  = "python-artifact"
repo_type    = "GITHUB"
repo_url     = "https://github.com/muffat/test-python-pulpn"
team         = "pulpn"
image_name   = "aws/codebuild/python:3.6.5"

vars.tf

variable "project_name" {}
variable "description" {}
variable "bucket_name" {}
variable "repo_type" {}
variable "repo_url" {}
variable "team" {}
variable "image_name" {}

buildspec.yml

A buildspec is the list of steps that run during the build process.

version: 0.1

phases:
  build:
    commands:
      - pip install flask

Deploy the code

$ cd test-codebuild
~test-codebuild$ terraform init
~test-codebuild$ terraform plan
......................
TL;DR
......................
Plan: 4 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.

You should see output like the above. Terraform is going to create the infrastructure we defined in the code earlier.

~test-codebuild$ terraform apply
...............
TL;DR
...............
Plan: 4 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: 

After running terraform apply, we are prompted to approve the actions Terraform described. Enter yes to proceed, or anything else to cancel.

Once we accept with yes, Terraform creates the CodeBuild infrastructure in AWS.

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Create docker image and push to AWS ECR

Image tag: test-image

Log in to ECR (aws ecr get-login prints a docker login command, which you then run), then build, tag, and push the image:

awsudo -u aws-profile aws ecr get-login --no-include-email --region ap-southeast-1
sudo docker build -t codebuild:test-image .
sudo docker tag codebuild:test-image 743977200366.dkr.ecr.ap-southeast-1.amazonaws.com/codebuild:test-image
sudo docker push 743977200366.dkr.ecr.ap-southeast-1.amazonaws.com/codebuild:test-image

Python migration with Alembic

$ pip install alembic
$ alembic init --template generic alembic

Edit alembic.ini and point sqlalchemy.url at your database:

sqlalchemy.url = mysql://root:@localhost/database_name

$ alembic current
$ alembic revision -m "Init"
$ alembic upgrade head
INFO  [alembic.migration] Context impl MySQLImpl.
INFO  [alembic.migration] Will assume non-transactional DDL.
INFO  [alembic.migration] Running upgrade None -> 174f01a0ar12, Init
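
The revision command generates a file under alembic/versions/ with empty upgrade() and downgrade() functions that you fill in by hand. Roughly, such a revision file looks like this (the example table is just an illustration, not from the original migration):

# alembic/versions/174f01a0ar12_init.py -- rough sketch of a generated revision
from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic
revision = '174f01a0ar12'
down_revision = None

def upgrade():
    # Example schema change; replace with your own DDL
    op.create_table(
        'example',
        sa.Column('id', sa.Integer, primary_key=True),
        sa.Column('name', sa.String(50), nullable=False),
    )

def downgrade():
    op.drop_table('example')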

Get rid of “Another update is currently in progress” in WordPress

You're updating one of your resources in WordPress and you accidentally close the window (that's what I did).

Then when you try to update it again, WordPress insists that another update is still in progress. You need to clear this lock before you can continue.

Use wp-cli:

wp option delete core_updater.lock

Setup python app in centos from scratch (centos 6.9+uwsgi+nginx+flask+mysql)

Initial setup

$ sudo yum update
$ sudo yum install epel-release
$ sudo yum groupinstall "Development tools"
$ sudo yum install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel telnet htop
$ sudo yum install python-devel python-virtualenv
$ sudo yum install mysql-connector-python mysql-devel mysql-server

Install Python

Download the Python 2.7 source from https://www.python.org/, extract it, then build and install it:

./configure && make && make altinstall

Install uWSGI

$ wget https://bootstrap.pypa.io/get-pip.py
$ which python2.7
$ sudo /usr/local/bin/python2.7 get-pip.py
$ which pip2.7
$ sudo /usr/local/bin/pip2.7 install uWSGI
$ which uwsgi
$ uwsgi --version

Setup vassals

$ sudo mkdir -p /etc/uwsgi/vassals

Setup Emperor service

$ sudo vim /etc/init.d/emperor
#!/bin/sh
# chkconfig: 2345 99 10
# Description: Starts and stops the emperor-uwsgi
# See how we were called.

RUNEMPEROR="/usr/local/bin/uwsgi --emperor=/etc/uwsgi/vassals"

PIDFILE=/var/run/emperor-uwsgi.pid
LOGFILE=/var/log/uwsgi/emperor.log

start() {
  if [ -f "$PIDFILE" ] && kill -0 $(cat "$PIDFILE"); then
    echo 'Service emperor-uwsgi already running' >&2
    return 1
  fi
  echo 'Starting Emperor...' >&2
  local CMD="$RUNEMPEROR &> \"$LOGFILE\" & echo $!"
  su -c "$CMD" > "$PIDFILE"
  echo 'Service started' >&2
}

stop() {
  if [ ! -f "$PIDFILE" ] || ! kill -0 $(cat "$PIDFILE"); then
    echo 'Service emperor-uwsgi not running' >&2
    return 1
  fi
  echo 'Stopping emperor-uwsgi' >&2
  kill -7 $(cat "$PIDFILE") && rm -f "$PIDFILE"
  echo 'Service stopped' >&2
}

status() {
    if [ ! -f "$PIDFILE" ]; then
	echo "Emperor is not running." >&2
	return 1
    else
    	echo "Emperor (pid  `cat ${PIDFILE}`) is running..."
    	ps -ef |grep `cat $PIDFILE`| grep -v grep
    fi
}

case "" in
start)
      start
      ;;
stop)
      stop
      ;;
status)
      status
      ;;
restart)
      stop
      start
      ;;
*)
    echo "Usage: emperor {start|stop|restart}"
    exit 1
esac

Setup app user & environment

$ useradd foobar
$ usermod -md /srv/foobar foobar
$ chmod 755 /srv/foobar
$ sudo su - foobar
foobar@local~$ virtualenv --python=python2.7 ~/venv
foobar@local~$ mkdir www
foobar@local~$ mkdir logs
foobar@local~$ touch logs/uwsgi.log
foobar@local~$ touch uwsgi.ini
foobar@local~$ echo "source ~/venv/bin/activate" >> ~/.bashrc
foobar@local~$ source ~/venv/bin/activate
(venv)foobar@local~$ vim uwsgi.ini
[uwsgi]
master = true
processes = 2
socket = /tmp/foobar.sock

chdir = /srv/foobar/www
virtualenv = /srv/foobar/venv
module = app:app

uid = foobar
chown-socket = foobar:nginx
chmod-socket = 660
vacuum = true

die-on-term = true
py-autoreload = 1
logger = file:/srv/foobar/logs/uwsgi.log
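
The line module = app:app tells uWSGI to import the callable named app from app.py inside the chdir directory. A minimal placeholder for /srv/foobar/www/app.py (assuming Flask has been pip-installed into the virtualenv) could be:

# /srv/foobar/www/app.py -- placeholder app loaded by uWSGI as "app:app"
from flask import Flask

app = Flask(__name__)  # the "app" callable referenced in uwsgi.ini

@app.route('/')
def index():
    return 'Hello from foobar'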

Exit from foobar user & create uwsgi symlink

(venv)foobar@local~$ exit
$ sudo ln -s /srv/foobar/uwsgi.ini /etc/uwsgi/vassals/foobar.ini

Start emperor service & set it to start on boot

$ sudo service emperor start
$ sudo chkconfig emperor on