Posts in DevOps

Amazon Aurora relational database engine

September 14th, 2017 · Posted in Blog, DevOps

Today Pierre Tomasina presented the Amazon Aurora relational database engine: a fully managed, MySQL-compatible relational database engine that combines the speed and reliability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. It delivers up to five times the performance of MySQL without requiring changes to most existing applications.

Visit the AWS User Group Luxembourg meetup group for upcoming talks.

Or, if you would rather discuss DevOps over a fresh beverage, check when the next DevOps Café Luxembourg takes place!

Continuous is now a Registered Education Partner of the DevOps Institute

September 4th, 2017 · Posted in Blog, DevOps

Continuous is now a Registered Education Partner of the DevOps Institute!

The DevOps Institute (DOI) is the global learning community around emerging DevOps practices.   

Working with recognized thought leaders and the strategic examination partner PEOPLECERT, the DevOps Institute is setting the quality standard for DevOps competency-based qualifications.

Through the DOI, Continuous provides several certification courses: an introductory DevOps Foundation course as well as an entire series of DevOps Practitioner courses geared toward modern IT roles:

  • DevOps Foundation
  • Certified Agile Service Manager
  • Certified Agile Process Owner
  • DevOps Test Engineering
  • Continuous Delivery Architecture
  • DevOps Leader
  • DevSecOps Engineering

The DOI board consists of thought leaders within the DevOps community, such as Gene Kim, author of the highly recommended book The Phoenix Project, as well as members of the larger IT training community.

If you are starting a DevOps transformation, contact us to start on solid ground with our quality training and certification courses in French, English or German!


Continuous is now an AWS Consulting and Reseller Partner!

August 7th, 2017 · Posted in Blog, DevOps

We are proud to announce that Continuous is now an AWS Consulting and Reseller Partner!

With more than four years of extensive experience with Amazon Web Services, we support and accelerate our customers and partners with their business-critical cloud infrastructure.

We have worked with startups and large organizations on:

  • Highly Scalable / High Availability infrastructure
  • DevOps Automation/Tool Chains
  • Disaster Recovery Solutions
  • IoT (including Hardware)
  • API Gateways
  • Serverless Apps

As an AWS reseller, we can now provide you with a complete solution supporting your entire value stream through a single point of contact. Don’t hesitate to contact us to find out what we can do for you!


WannaCry vulnerability detection with Metasploit

May 22nd, 2017 · Posted in Blog, DevOps

Follow the instructions to install Metasploit, or create a Kali Linux virtual machine, which ships with it pre-installed.

Let’s start the Metasploit console.

msfconsole
=[ metasploit v4.14.17-dev                         ]
+ --- --=[ 1648 exploits -- 946 auxiliary -- 293 post        ]
+ --- --=[ 486 payloads -- 40 encoders -- 9 nops             ]
+ --- --=[ Free Metasploit Pro trial: http://r-7.co/trymsp ]

Next, scan the network with db_nmap, which runs Nmap against our targets and automatically stores the scan results in the Metasploit database.

msf > db_nmap -v -A 192.168.99.0/24
[*] Nmap: Starting Nmap 7.40 ( https://nmap.org ) at 2017-05-21 17:30 CEST
[*] Nmap: NSE: Loaded 143 scripts for scanning.
[*] Nmap: NSE: Script Pre-scanning.
[*] Nmap: Initiating NSE at 17:30
[*] Nmap: Completed NSE at 17:30, 0.00s elapsed
[*] Nmap: Initiating NSE at 17:30

Let’s look at the hosts found with the hosts command.

msf > hosts
Hosts
=====
address         name                  os_name            os_flavor  purpose
-------         ----                  -------            ---------  -------
192.168.99.43   kali.local            Linux                         server
192.168.99.53   metasploitable.local  Linux              8.04       server
192.168.99.54                         Microsoft Windows  8          client
192.168.99.55                         Windows 10                    client
192.168.99.66                         Mac OS X           10.7.X     device

Let’s use the MS17-010 SMB vulnerability auxiliary scanner.

msf > use auxiliary/scanner/smb/smb_ms17_010

Let’s see the options of the scan.

msf auxiliary(smb_ms17_010) > show options
Module options (auxiliary/scanner/smb/smb_ms17_010):
Name       Current Setting  Required  Description
----       ---------------  --------  -----------
RHOSTS                      yes       The target address range or CIDR identifier
RPORT      445              yes       The SMB service port (TCP)
SMBDomain  .                no        The Windows domain to use for authentication
SMBPass                     no        The password for the specified username
SMBUser                     no        The username to authenticate as
THREADS    1                yes       The number of concurrent threads

As we can see, the RHOSTS option is required; let’s set it to the IP addresses of our Windows hosts.

msf auxiliary(smb_ms17_010) > set RHOSTS 192.168.99.55, 192.168.99.54
RHOSTS => 192.168.99.55, 192.168.99.54

Let’s run the scan

msf auxiliary(smb_ms17_010) > run
[-] 192.168.99.55:445    -- Host does NOT appear vulnerable.
[*] Scanned 1 of 2 hosts (50% complete)
[+] 192.168.99.54:445    -- Host is likely VULNERABLE to MS17-010!  (Windows 10 Enterprise Evaluation 14393)
[*] Scanned 2 of 2 hosts (100% complete)
[*] Auxiliary module execution completed

One of our Windows hosts is vulnerable!
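
To repeat this check regularly, the whole sequence can be scripted as a Metasploit resource file; here is a minimal sketch, with the file name being an illustrative assumption:

# scan_ms17_010.rc -- automate the MS17-010 scan above
use auxiliary/scanner/smb/smb_ms17_010
set RHOSTS 192.168.99.0/24
set THREADS 10
run
exit

Run it with msfconsole -q -r scan_ms17_010.rc.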

Minikube and Helm, the Kubernetes Package Manager

April 28th, 2017 · Posted in Blog, DevOps, Tips

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.

Helm is the Kubernetes package manager: a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources.
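
For reference, a chart is simply a directory with a conventional layout; a minimal sketch, where the mychart name is an illustrative assumption:

mychart/
  Chart.yaml        # chart metadata: name, version, description
  values.yaml       # default configuration values
  templates/        # templated Kubernetes manifests
    deployment.yaml
    service.yaml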

Let’s install minikube.

For OSX:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

For Linux:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

Let’s start minikube.

minikube start
Starting local Kubernetes cluster…
Starting VM…
SSH-ing files into VM…
Setting up certs…
Starting cluster components…
Connecting to cluster…
Setting up kubeconfig…
Kubectl is now configured to use the cluster.

To work with the Docker daemon inside Minikube from your Mac/Linux host, use the docker-env command to get the required environment variables for your shell:

minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/oswaldderiemaecker/.minikube/certs"
export DOCKER_API_VERSION="1.23"
# Run this command to configure your shell:
# eval $(minikube docker-env)

Let’s run the docker-env command.

eval $(minikube docker-env)

To access the Kubernetes Dashboard, run:

minikube dashboard

We now have Minikube running; let’s install Helm, the Kubernetes package manager.

For OSX:

brew install kubernetes-helm

For Linux:

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

Initialize the local CLI and also install Tiller into your Kubernetes cluster.

helm init
$HELM_HOME has been configured at /Users/oswaldderiemaecker/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!

Using the search command without arguments lets us fetch all the available packages.

helm search
NAME                          VERSION DESCRIPTION
stable/aws-cluster-autoscaler 0.2.1   Scales worker nodes within autoscaling groups.
stable/chaoskube              0.5.0   Chaoskube periodically kills random pods in you…
stable/chronograf             0.2.0   Open-source web application written in Go and R…
stable/cockroachdb            0.2.2   CockroachDB is a scalable, survivable, strongly…
stable/concourse              0.1.3   Concourse is a simple and scalable CI system.
stable/consul                 0.2.0   Highly available and distributed service discov…
stable/coredns                0.1.0   CoreDNS is a DNS server that chains middleware …
stable/datadog                0.2.1   DataDog Agent

Let’s install the Jenkins package; Helm displays information about the installed release.

helm install stable/jenkins
NAME:   original-llama
LAST DEPLOYED: Fri Apr 28 08:32:59 2017
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME                    DATA  AGE
original-llama-jenkins  2     0s
==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
original-llama-jenkins  Bound   pvc-863abb19-2bdc-11e7-ab97-080027de986e  8Gi       RWO          0s
==> v1/Service
NAME                    CLUSTER-IP  EXTERNAL-IP  PORT(S)                         AGE
original-llama-jenkins  10.0.0.175  <pending>    8080:30924/TCP,50000:30321/TCP  0s
==> extensions/v1beta1/Deployment
NAME                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
original-llama-jenkins  1        1        1           0          0s
==> v1/Secret
NAME                    TYPE    DATA  AGE
original-llama-jenkins  Opaque  2     0s
NOTES:
1. Get your 'admin' user password by running:
printf $(kubectl get secret --namespace default original-llama-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get svc --namespace default -w original-llama-jenkins'
export SERVICE_IP=$(kubectl get svc original-llama-jenkins --namespace default --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo http://$SERVICE_IP:8080/login
3. Login with the password from step 1 and the username: admin
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
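
Note that Helm generated a random release name, original-llama. If you prefer a predictable name, helm install accepts a --name flag; my-jenkins below is an arbitrary example:

helm install --name my-jenkins stable/jenkins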

Check the status of the Jenkins service and its pod on the Minikube dashboard.

When it is running, note the internal endpoint; here it is 8080.

Let’s get the Jenkins service name.

kubectl get services
NAME                     CLUSTER-IP   EXTERNAL-IP   PORT(S)                          AGE
kubernetes               10.0.0.1     <none>        443/TCP                          1d
original-llama-jenkins   10.0.0.175   <pending>     8080:30924/TCP,50000:30321/TCP   2h

And use minikube to open the service.

minikube service original-llama-jenkins

We now have our Jenkins; it still needs to be configured, but that is out of the scope of this post.

Let’s clean up everything by first getting the Helm release name:

helm ls
NAME           REVISION UPDATED                  STATUS   CHART         NAMESPACE
original-llama 1        Fri Apr 28 08:32:59 2017 DEPLOYED jenkins-0.3.1 default

and delete it.

helm delete original-llama
release "original-llama" deleted

That’s all!

Docker Bench for Security

April 24th, 2017 · Posted in Blog, DevOps, Tips

The Docker Bench for Security is a script that checks for dozens of common best-practices around deploying Docker containers in production. The tests are all automated, and are inspired by the CIS Docker 1.13 Benchmark.

Clone the Docker Bench for Security repository:

git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security

Build the Docker Bench for Security image:

docker build -t docker-bench-security .

Run Docker Bench for Security on your system:

docker run -it --net host --pid host --cap-add audit_control \
-e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
-v /var/lib:/var/lib \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/lib/systemd:/usr/lib/systemd \
-v /etc:/etc --label docker_bench_security \
docker-bench-security

Based on the Docker Bench for Security report, check the CIS Docker 1.13 Benchmark for remediation.
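
For example, one family of checks concerns Docker Content Trust; a typical remediation sketch is to enable it in your shell:

# Require signed images for docker pull, push, build, etc.
export DOCKER_CONTENT_TRUST=1

Re-run the benchmark afterwards to confirm the corresponding check passes.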


Chef 13: Testing for Deprecations

April 14th, 2017 · Posted in Blog, DevOps, Tips

Chef 13 is out. To test your code for deprecations, you can put Test Kitchen in a mode where any deprecation causes the Chef run to fail.

Ensure your .kitchen.yml includes:

provisioner:
  deprecations_as_errors: true

and then run Test Kitchen as usual. Test Kitchen will fail if any deprecation errors are issued.

This feature was added in Test Kitchen 1.13, which shipped in ChefDK 1.0.
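
For context, a complete .kitchen.yml with the flag in place might look like this; the driver, platform, and cookbook names are illustrative assumptions:

driver:
  name: vagrant

provisioner:
  name: chef_zero
  deprecations_as_errors: true

platforms:
  - name: ubuntu-16.04

suites:
  - name: default
    run_list:
      - recipe[my_cookbook::default]   # your cookbook here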

Connecting to a running container with bash

April 13th, 2017 · Posted in Blog, DevOps, Tips
# Start a Solr container in the foreground
docker run -ti solr:latest
Starting Solr on port 8983 from /opt/solr/server

# In another terminal, get the ID of the running container
docker ps -q
6d4f4b262dc3

# Open an interactive bash shell inside the container
docker exec -it 6d4f4b262dc3 bash
solr@6d4f4b262dc3:/opt/solr$
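
For convenience, the lookup and the exec can be combined into a single line, assuming only one container was started from that image:

# Open a shell in the (single) container running the solr:latest image
docker exec -it $(docker ps -q --filter ancestor=solr:latest) bash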

CoreOS Kubernetes with Elasticsearch Cluster

April 10th, 2017 · Posted in Blog, DevOps, Tips

If you are new to Kubernetes, this post will give you a quick overview of how it works. We will set up a multi-node Kubernetes cluster using Vagrant and CoreOS.

First install the prerequisites: Vagrant and VirtualBox (grab the appropriate Vagrant package for your system). You will need at least 16GB of RAM.

Then install kubectl, the main program for interacting with the Kubernetes API.

The Linux kubectl binary can be fetched with a command like:

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.5.4/bin/linux/amd64/kubectl

On an OS X workstation, replace linux in the URL above with darwin:

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.5.4/bin/darwin/amd64/kubectl

After downloading the binary, ensure it is executable and move it into your PATH:

chmod +x kubectl
mv kubectl /usr/local/bin/kubectl
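
A quick sanity check that the binary is on your PATH and runs:

kubectl version --client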

Clone the Repository

git clone https://github.com/coreos/coreos-kubernetes.git
cd coreos-kubernetes/multi-node/vagrant

Edit the multi-node/vagrant/Vagrantfile to use 2 CPUs for the Kubernetes workers by adding vb.cpus = 2 to the worker provider block.

      worker.vm.provider :virtualbox do |vb|
        vb.memory = $worker_vm_memory
        vb.cpus = 2
      end

Start the Vagrant cluster

Copy config.rb.sample to config.rb:

cp config.rb.sample config.rb

Then modify the config.rb file with:

$update_channel = "stable"
$controller_count = 1
$controller_vm_memory = 1024
$worker_count = 1
$worker_vm_memory = 4098
$etcd_count = 1
$etcd_vm_memory = 512

and run vagrant up.

vagrant up

Configure kubectl


export KUBECONFIG="${KUBECONFIG}:$(pwd)/kubeconfig"
kubectl config use-context vagrant-multi

Check that kubectl is configured properly by inspecting the cluster:

kubectl get nodes
NAME           STATUS                     AGE
172.17.4.101   Ready,SchedulingDisabled   14h
172.17.4.201   Ready                      14h

If you are unable to connect, wait a little: Kubernetes is pulling its images, which may take some time depending on your internet connection.

We are now going to connect to the Dashboard:

kubectl cluster-info
Kubernetes master is running at https://172.17.4.101:443
Heapster is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

To access the Dashboard and other services, you have to start the kubectl proxy.

kubectl proxy
Starting to serve on 127.0.0.1:8001

You can now access the Dashboard by pointing your browser to: http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard or http://localhost:8001/ui/

Now open a new terminal; don’t forget to set the KUBECONFIG environment variable again.

Let’s get the Pods:

kubectl get pods
No resources found.

A pod is a group of containers that are deployed together on the same host. If you frequently deploy single containers, you can generally replace the word “pod” with “container” and accurately understand the concept.
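
To make this concrete, here is a minimal pod manifest sketch; the name, label, and image are illustrative assumptions, not part of this tutorial. It could be created with kubectl create -f pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: my-nginx            # illustrative name
  labels:
    app: my-nginx           # label used by the service sketch below
spec:
  containers:
    - name: nginx
      image: nginx:1.11     # any container image works here
      ports:
        - containerPort: 80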

And now the services.

kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.3.0.1     <none>        443/TCP   14h

A service is a grouping of pods that are running on the cluster. Services are “cheap” and you can have many services within the cluster. Kubernetes services can efficiently power a microservice architecture.
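
A matching service sketch, again with illustrative names, that selects the pod above via its app label and exposes it inside the cluster:

apiVersion: v1
kind: Service
metadata:
  name: my-nginx
spec:
  selector:
    app: my-nginx           # routes traffic to pods with this label
  ports:
    - port: 80              # service port inside the cluster
      targetPort: 80        # container port to forward to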

Let’s create our Elasticsearch cluster:

kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/service-account.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-discovery-svc.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-svc.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-master-rc.yaml

Wait until es-master is provisioned; you can check its status with:

kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-master-d64th   1/1       Running   0          19s

Then run the es-client and wait till it is provisioned.

kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-client-rc.yaml
kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-client-glgt2   1/1       Running   0          2s
es-master-d64th   1/1       Running   0          2m

And lastly, the es-data.

kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-data-rc.yaml

As before, wait for the containers to reach the Running state.

kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-client-glgt2   1/1       Running   0          2m
es-data-cx6pt     1/1       Running   0          1s
es-master-d64th   1/1       Running   0          5m

Let’s check the Elasticsearch master logs.

kubectl logs es-master-d64th
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
[2017-04-07 17:58:25,554][INFO ][node                     ] [Ezekiel Sims] version[1.7.1], pid[5], build[b88f43f/2015-07-29T09:54:16Z]
[2017-04-07 17:58:25,555][INFO ][node                     ] [Ezekiel Sims] initializing …
[2017-04-07 17:58:25,686][INFO ][plugins                  ] [Ezekiel Sims] loaded [cloud-kubernetes], sites []
[2017-04-07 17:58:25,743][INFO ][env                      ] [Ezekiel Sims] using [1] data paths, mounts [[/data (/dev/sda9)]], net usable_space [11.9gb], net total_space [15.5gb], types [ext4]
[2017-04-07 17:58:28,777][INFO ][node                     ] [Ezekiel Sims] initialized
[2017-04-07 17:58:28,777][INFO ][node                     ] [Ezekiel Sims] starting …
[2017-04-07 17:58:28,982][INFO ][transport                ] [Ezekiel Sims] bound_address
{inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.2.19.7:9300]}
[2017-04-07 17:58:29,011][INFO ][discovery                ] [Ezekiel Sims] myesdb/1tvZJi2rTu6yeT9QrDR2CQ
[2017-04-07 17:58:34,237][INFO ][cluster.service          ] [Ezekiel Sims] new_master [Ezekiel Sims]

Let’s have a look at the Dashboard Pods.

Let’s Scale!


kubectl scale --replicas=2 rc es-master
replicationcontroller "es-master" scaled
kubectl scale --replicas=2 rc es-client
replicationcontroller "es-client" scaled
kubectl scale --replicas=2 rc es-data
replicationcontroller "es-data" scaled

Looking at the pods.

kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-client-fsksw   1/1       Running   0          2m
es-client-glgt2   1/1       Running   0          16m
es-data-4qlrg     1/1       Running   0          1m
es-data-cx6pt     1/1       Running   0          13m
es-master-4dn7l   1/1       Running   0          2m
es-master-d64th   1/1       Running   0          18m

Accessing the service


kubectl get service elasticsearch
NAME            CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
elasticsearch   10.3.0.112   <nodes>       9200:32536/TCP   33m
curl http://172.17.4.101:32536
{
"status" : 200,
"name" : "Silverclaw",
"cluster_name" : "myesdb",
"version" : {
"number" : "1.7.1",
"build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
"build_timestamp" : "2015-07-29T09:54:16Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
curl http://172.17.4.101:32536/_cluster/health?pretty
{
"cluster_name" : "myesdb",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 6,
"number_of_data_nodes" : 2,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0
}

That’s it! I hope this short tutorial has shown you some of the cool stuff you can do with Kubernetes. I will write a new post about persistent storage in the coming weeks, if time permits 🙂

Docker CMD vs ENTRYPOINT

April 7th, 2017 · Posted in Blog, DevOps, Tips

CMD can be overridden by arguments passed to docker run:

Dockerfile:

FROM centos:7
CMD ["echo"]
docker build .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM centos:7
---> bd3d4369aebc
Step 2 : CMD ["echo"]
---> Running in 7951a71e0c69
---> d8eedc1c5380
Removing intermediate container 7951a71e0c69
Successfully built d8eedc1c5380
docker run d8eedc1c5380 ls
bin
boot
dev
etc

The ENTRYPOINT is NOT overridden by docker run arguments; they are appended to it as arguments instead (unless you pass --entrypoint explicitly):

Dockerfile:

FROM centos:7
ENTRYPOINT ["echo"]
docker build .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM centos:7
---> bd3d4369aebc
Step 2 : ENTRYPOINT ["echo"]
---> Running in 96697aa9cfd9
---> e1397282b7cf
Removing intermediate container 96697aa9cfd9
Successfully built e1397282b7cf
docker run e1397282b7cf ls
ls
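
The two instructions are commonly combined: ENTRYPOINT fixes the executable, while CMD supplies default arguments that docker run can override. A minimal sketch:

FROM centos:7
ENTRYPOINT ["echo"]
CMD ["hello"]

With this image, docker run IMAGE prints hello, while docker run IMAGE world prints world.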

Continuous S.A.
11 boulevard du Jazz
L-4370 Belvaux
Luxembourg

© Continuous S.A. 2017