CoreOS Kubernetes with Elasticsearch Cluster

April 10th, 2017

If you are new to Kubernetes, this post will give you a quick overview of how it works. We will build a multi-node Kubernetes cluster using Vagrant and CoreOS.

First, install the prerequisites. You will need at least 16GB of RAM. Grab the appropriate Vagrant package for your system; the Vagrantfile below uses the VirtualBox provider, so you will need VirtualBox installed as well.

Then install kubectl, the main program for interacting with the Kubernetes API.

The Linux kubectl binary can be fetched with a command like:

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.5.4/bin/linux/amd64/kubectl

On an OS X workstation, replace linux in the URL above with darwin:

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.5.4/bin/darwin/amd64/kubectl

After downloading the binary, make it executable and move it into your PATH (you may need sudo for the move):

chmod +x kubectl
mv kubectl /usr/local/bin/kubectl
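
To confirm the binary is installed and on your PATH, print its client version:

kubectl version --client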

Clone the Repository

git clone https://github.com/coreos/coreos-kubernetes.git
cd coreos-kubernetes/multi-node/vagrant

Edit multi-node/vagrant/Vagrantfile to give the Kubernetes workers 2 CPUs by adding vb.cpus = 2 to the worker provider block:

      worker.vm.provider :virtualbox do |vb|
        vb.memory = $worker_vm_memory
        vb.cpus = 2
      end

Start the Vagrant Cluster

Copy config.rb.sample to config.rb:

cp config.rb.sample config.rb

Then modify the config.rb file with:

$update_channel = "stable"
$controller_count = 1
$controller_vm_memory = 1024
$worker_count = 1
$worker_vm_memory = 4096
$etcd_count = 1
$etcd_vm_memory = 512

and run vagrant up.

vagrant up
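
Provisioning takes a while; once it finishes, you can check that the etcd, controller, and worker machines are all running:

vagrant status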

Configure kubectl

export KUBECONFIG="${KUBECONFIG}:$(pwd)/kubeconfig"
kubectl config use-context vagrant-multi
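
You can confirm the active context before going further:

kubectl config current-context
vagrant-multi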

Check that kubectl is configured properly by inspecting the cluster:

kubectl get nodes
NAME           STATUS                     AGE
172.17.4.101   Ready,SchedulingDisabled   14h
172.17.4.201   Ready                      14h

Note that the controller node (172.17.4.101) is marked SchedulingDisabled, so your pods will only be scheduled on the worker. If you are unable to connect, wait a little: Kubernetes is still pulling its images, which may take some time depending on your internet connection.

We are now going to connect to the Dashboard:

kubectl cluster-info
Kubernetes master is running at https://172.17.4.101:443
Heapster is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

To access the Dashboard and other services, you have to start the kubectl proxy.

kubectl proxy
Starting to serve on 127.0.0.1:8001

You can now access the Dashboard by pointing your browser to: http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard or http://localhost:8001/ui/
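
The proxy also exposes the raw Kubernetes API on the same port; for example, listing the nodes as JSON gives the same information as kubectl get nodes:

curl http://localhost:8001/api/v1/nodes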

Now open a new terminal; don't forget to set the KUBECONFIG environment variable again.

Let’s get the Pods:

kubectl get pods
No resources found.

A pod is a group of containers that are deployed together on the same host. If you frequently deploy single containers, you can generally replace the word “pod” with “container” and accurately understand the concept.
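
To make this concrete, here is a minimal sketch of a pod manifest, piped straight into kubectl; the pod name, label, and nginx image are placeholders for illustration:

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  labels:
    app: hello
spec:
  containers:
  - name: web            # a single nginx container in this pod
    image: nginx
    ports:
    - containerPort: 80
EOF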

And now the services.

kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.3.0.1     <none>        443/TCP   14h

A service is a grouping of pods that are running on the cluster. Services are “cheap” and you can have many services within the cluster. Kubernetes services can efficiently power a microservice architecture.
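
As an illustration, a minimal service grouping the pods labelled app: hello from the sketch above might look like this (again, the names are hypothetical):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  selector:
    app: hello           # matches every pod carrying this label
  ports:
  - port: 80
    targetPort: 80
EOF

Kubernetes then gives hello-svc a stable cluster IP that load-balances across all pods matching the selector.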

Let's create our Elasticsearch cluster:

kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/service-account.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-discovery-svc.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-svc.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-master-rc.yaml

Wait until es-master is provisioned; you can check its status with:

kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-master-d64th   1/1       Running   0          19s

Then create the es-client and wait until it is provisioned.

kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-client-rc.yaml
kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-client-glgt2   1/1       Running   0          2s
es-master-d64th   1/1       Running   0          2m

And lastly, the es-data.

kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-data-rc.yaml

The first step is to wait for all containers to reach the Running state.

kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-client-glgt2   1/1       Running   0          2m
es-data-cx6pt     1/1       Running   0          1s
es-master-d64th   1/1       Running   0          5m
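
Instead of re-running kubectl get pods, you can also watch the pods change state as they come up:

kubectl get pods -w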

Let’s check the Elasticsearch master logs.

kubectl logs es-master-d64th
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
[2017-04-07 17:58:25,554][INFO ][node                     ] [Ezekiel Sims] version[1.7.1], pid[5], build[b88f43f/2015-07-29T09:54:16Z]
[2017-04-07 17:58:25,555][INFO ][node                     ] [Ezekiel Sims] initializing ...
[2017-04-07 17:58:25,686][INFO ][plugins                  ] [Ezekiel Sims] loaded [cloud-kubernetes], sites []
[2017-04-07 17:58:25,743][INFO ][env                      ] [Ezekiel Sims] using [1] data paths, mounts [[/data (/dev/sda9)]], net usable_space [11.9gb], net total_space [15.5gb], types [ext4]
[2017-04-07 17:58:28,777][INFO ][node                     ] [Ezekiel Sims] initialized
[2017-04-07 17:58:28,777][INFO ][node                     ] [Ezekiel Sims] starting ...
[2017-04-07 17:58:28,982][INFO ][transport                ] [Ezekiel Sims] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.2.19.7:9300]}
[2017-04-07 17:58:29,011][INFO ][discovery                ] [Ezekiel Sims] myesdb/1tvZJi2rTu6yeT9QrDR2CQ
[2017-04-07 17:58:34,237][INFO ][cluster.service          ] [Ezekiel Sims] new_master [Ezekiel Sims]

You can also have a look at the pods in the Dashboard.

Let's Scale!

Scale each replication controller to two replicas:

kubectl scale --replicas=2 rc es-master
replicationcontroller "es-master" scaled
kubectl scale --replicas=2 rc es-client
replicationcontroller "es-client" scaled
kubectl scale --replicas=2 rc es-data
replicationcontroller "es-data" scaled
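
The replication controllers now report desired versus current replica counts, which is a quick way to confirm that the scaling took effect:

kubectl get rc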

Looking at the pods again:

kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-client-fsksw   1/1       Running   0          2m
es-client-glgt2   1/1       Running   0          16m
es-data-4qlrg     1/1       Running   0          1m
es-data-cx6pt     1/1       Running   0          13m
es-master-4dn7l   1/1       Running   0          2m
es-master-d64th   1/1       Running   0          18m

Accessing the Service

The service is reachable from the host through its node port (the second number in the PORT(S) column) on any node IP:

kubectl get service elasticsearch
NAME            CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
elasticsearch   10.3.0.112   <pending>     9200:32536/TCP   33m
curl http://172.17.4.101:32536
{
  "status" : 200,
  "name" : "Silverclaw",
  "cluster_name" : "myesdb",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
curl http://172.17.4.101:32536/_cluster/health?pretty
{
  "cluster_name" : "myesdb",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}
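
With the cluster green, you can sanity-check Elasticsearch end to end by indexing a document and searching for it; the test index and doc type names here are arbitrary examples:

curl -XPOST http://172.17.4.101:32536/test/doc -d '{"message": "hello from kubernetes"}'
curl http://172.17.4.101:32536/test/doc/_search?pretty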

That's it! I hope this short tutorial has shown you some of the cool stuff you can do with Kubernetes. I will write a new post about persistent storage in the coming weeks, if time permits 🙂
