Posts in Tips

Minikube and Helm the Kubernetes Package Manager

April 28th, 2017 · Posted in Blog, DevOps, Tips

Minikube is a tool that makes it easy to run Kubernetes locally. Minikube runs a single-node Kubernetes cluster inside a VM on your laptop for users looking to try out Kubernetes or develop with it day-to-day.

Helm is the Kubernetes package manager: a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources.
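To give an idea of what a chart contains, here is the typical layout of a simple chart (a minimal sketch; the chart name is hypothetical, the file names follow Helm's chart conventions):

mychart/
  Chart.yaml          # chart name, version and description
  values.yaml         # default configuration values
  templates/          # Kubernetes manifests rendered with the values
    deployment.yaml
    service.yaml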

Let’s install minikube.

For OSX:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-darwin-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

For Linux:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

Let’s start minikube.

minikube start
Starting local Kubernetes cluster…
Starting VM…
SSH-ing files into VM…
Setting up certs…
Starting cluster components…
Connecting to cluster…
Setting up kubeconfig…
Kubectl is now configured to use the cluster.

To work with the Docker daemon running inside the Minikube VM from your Mac/Linux host, use the docker-env command in your shell:

minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/oswaldderiemaecker/.minikube/certs"
export DOCKER_API_VERSION="1.23"
# Run this command to configure your shell:
# eval $(minikube docker-env)

Let’s run the docker-env command.

eval $(minikube docker-env)
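With these variables set, docker commands on the host talk to the daemon inside the Minikube VM, so images you build are immediately available to your pods (a quick sketch; my-app:dev is a hypothetical image tag):

# images built here land in the Minikube VM's Docker daemon
docker build -t my-app:dev .

# docker ps now lists the containers running inside Minikube
docker ps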

To access the Kubernetes Dashboard, run:

minikube dashboard

Now that Minikube is running, let's install Helm, the Kubernetes package manager.

For OSX:

brew install kubernetes-helm

For Linux:

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

Initialize the local CLI and also install Tiller into your Kubernetes cluster.

helm init
$HELM_HOME has been configured at /Users/oswaldderiemaecker/.helm.
Warning: Tiller is already installed in the cluster.
(Use --client-only to suppress this message, or --upgrade to upgrade Tiller to the current version.)
Happy Helming!

Using the search command without arguments lists all the available packages.

helm search
NAME                          VERSION DESCRIPTION
stable/aws-cluster-autoscaler 0.2.1   Scales worker nodes within autoscaling groups.
stable/chaoskube              0.5.0   Chaoskube periodically kills random pods in you…
stable/chronograf             0.2.0   Open-source web application written in Go and R…
stable/cockroachdb            0.2.2   CockroachDB is a scalable, survivable, strongly…
stable/concourse              0.1.3   Concourse is a simple and scalable CI system.
stable/consul                 0.2.0   Highly available and distributed service discov…
stable/coredns                0.1.0   CoreDNS is a DNS server that chains middleware …
stable/datadog                0.2.1   DataDog Agent

Let's install the Jenkins chart; Helm displays information about the release it has installed.

helm install stable/jenkins
NAME:   original-llama
LAST DEPLOYED: Fri Apr 28 08:32:59 2017
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME                    DATA  AGE
original-llama-jenkins  2     0s
==> v1/PersistentVolumeClaim
NAME                    STATUS  VOLUME                                    CAPACITY  ACCESSMODES  AGE
original-llama-jenkins  Bound   pvc-863abb19-2bdc-11e7-ab97-080027de986e  8Gi       RWO          0s
==> v1/Service
NAME                    CLUSTER-IP  EXTERNAL-IP  PORT(S)                         AGE
original-llama-jenkins  10.0.0.175  <pending>    8080:30924/TCP,50000:30321/TCP  0s
==> extensions/v1beta1/Deployment
NAME                    DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
original-llama-jenkins  1        1        1           0          0s
==> v1/Secret
NAME                    TYPE    DATA  AGE
original-llama-jenkins  Opaque  2     0s
NOTES:
1. Get your 'admin' user password by running:
printf $(kubectl get secret --namespace default original-llama-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
2. Get the Jenkins URL to visit by running these commands in the same shell:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status of it by running 'kubectl get svc --namespace default -w original-llama-jenkins'
export SERVICE_IP=$(kubectl get svc original-llama-jenkins --namespace default --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo http://$SERVICE_IP:8080/login
3. Login with the password from step 1 and the username: admin
For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine
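The chart ships with sensible defaults, but you can inspect and override them at install time (a hedged sketch using Helm 2 commands; the exact value keys depend on the chart's values.yaml, so Persistence.Size here is only illustrative):

# show the default configuration values of the chart
helm inspect values stable/jenkins

# install with a custom release name and override a value
helm install stable/jenkins --name my-jenkins --set Persistence.Size=4Gi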

On the Minikube dashboard, check the status of the Jenkins service and its pod.

When it's running, note the internal endpoint; here it is 8080.

Let’s get the Jenkins service name.

kubectl get services
NAME                     CLUSTER-IP   EXTERNAL-IP   PORT(S)                          AGE
kubernetes               10.0.0.1     <none>        443/TCP                          1d
original-llama-jenkins   10.0.0.175   <pending>     8080:30924/TCP,50000:30321/TCP   2h

And use minikube to open the service.

minikube service original-llama-jenkins

We now have our Jenkins; configuring it is out of the scope of this post.

Let's clean everything up by getting the Helm release name.

helm ls
NAME           REVISION UPDATED                  STATUS   CHART         NAMESPACE
original-llama 1        Fri Apr 28 08:32:59 2017 DEPLOYED jenkins-0.3.1 default

and delete it.

helm delete original-llama
release "original-llama" deleted

That’s all!

Docker Bench for Security

April 24th, 2017 · Posted in Blog, DevOps, Tips

The Docker Bench for Security is a script that checks for dozens of common best-practices around deploying Docker containers in production. The tests are all automated, and are inspired by the CIS Docker 1.13 Benchmark.

Clone the Docker Bench for Security repository:

git clone https://github.com/docker/docker-bench-security.git
cd docker-bench-security

Build the Docker Bench for Security image:

docker build -t docker-bench-security .

Run Docker Bench for Security on your system:

docker run -it --net host --pid host --cap-add audit_control \
-e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
-v /var/lib:/var/lib \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/lib/systemd:/usr/lib/systemd \
-v /etc:/etc --label docker_bench_security \
docker-bench-security

Based on the Docker Bench for Security report, check the CIS Docker 1.13 Benchmark for remediation.
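If you want to keep the report for later review, you can simply capture the output to a file (a small sketch; the command is the same run as above and the report file name is arbitrary):

docker run -it --net host --pid host --cap-add audit_control \
-e DOCKER_CONTENT_TRUST=$DOCKER_CONTENT_TRUST \
-v /var/lib:/var/lib \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/lib/systemd:/usr/lib/systemd \
-v /etc:/etc --label docker_bench_security \
docker-bench-security | tee docker-bench-report.txt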

Pareto principle

April 19th, 2017 · Posted in Agile, Blog, Tips

The Pareto principle (also known as the 80/20 rule, the law of the vital few, or the principle of factor sparsity) states that, for many events, roughly 80% of the effects come from 20% of the causes.

  • 80% of problems can be attributed to 20% of causes.
  • 80% of a company’s profits come from 20% of its customers.
  • 80% of a company’s complaints come from 20% of its customers.
  • 80% of a company’s profits come from 20% of the time its staff spends.
  • 80% of a company’s revenue comes from 20% of its products.

But also:

  • 80% of a product’s value comes from 20% of its features.

Think about it when you define your Minimum Viable Product (MVP).


Chef 13: Testing for Deprecations

April 14th, 2017 · Posted in Blog, DevOps, Tips

Chef 13 is out. To test your code for deprecations, you can put Test Kitchen into a mode where any deprecation causes the Chef run to fail.

Ensure your .kitchen.yml includes:

provisioner:
  deprecations_as_errors: true

and then run Test Kitchen as usual. Test Kitchen will fail if any deprecation errors are issued.
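For context, here is what that setting looks like in a minimal .kitchen.yml (a sketch only; the driver, platform and cookbook names are placeholders for your own setup):

---
driver:
  name: vagrant

provisioner:
  name: chef_zero
  # fail the converge whenever Chef emits a deprecation warning
  deprecations_as_errors: true

platforms:
  - name: ubuntu-16.04

suites:
  - name: default
    run_list:
      - recipe[my_cookbook::default]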

This feature was added in Test Kitchen 1.13, which shipped in ChefDK 1.0.

Connecting to a running container with bash

April 13th, 2017 · Posted in Blog, DevOps, Tips
docker run -ti solr:latest
Starting Solr on port 8983 from /opt/solr/server
docker ps -q
6d4f4b262dc3
docker exec -it 6d4f4b262dc3 bash
solr@6d4f4b262dc3:/opt/solr$
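If you prefer not to look up the container ID, you can give the container a name when starting it and use that name with docker exec (a small sketch; the container name solr1 is arbitrary):

# start the container in the background with a fixed name
docker run -d --name solr1 solr:latest

# open a shell in it using the name instead of the ID
docker exec -it solr1 bash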

Deleting a very large number of files inside a folder

April 12th, 2017 · Posted in Blog, Tips

Sometimes you have to delete a very large number of files inside a folder.

When the number of files exceeds the shell's argument-list limit, ls and rm become unusable. Perl comes to the rescue.

perl -e 'for(<*>){((stat)[9]<(unlink))}'
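An alternative, if GNU find is available, is to let find delete the files itself; it does not expand the file list through the shell, so it never hits the argument-length limit (a sketch, to be run inside the folder):

# delete regular files in the current directory without globbing them through the shell
find . -maxdepth 1 -type f -delete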

CoreOs Kubernetes with Elasticsearch Cluster

April 10th, 2017 · Posted in Blog, DevOps, Tips

If you are new to Kubernetes, this post will give you a quick overview of how it works. We will use a multi-node Kubernetes cluster using Vagrant and CoreOS.

First install the prerequisites: Vagrant, VirtualBox, and kubectl.

You will need at least 16GB of RAM.

Grab the appropriate Vagrant package for your system.

Then install kubectl, the main program for interacting with the Kubernetes API.

The linux kubectl binary can be fetched with a command like:

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.5.4/bin/linux/amd64/kubectl

On an OS X workstation, replace linux in the URL above with darwin:

curl -O https://storage.googleapis.com/kubernetes-release/release/v1.5.4/bin/darwin/amd64/kubectl

After downloading the binary, ensure it is executable and move it into your PATH:

chmod +x kubectl
mv kubectl /usr/local/bin/kubectl

Clone the Repository

git clone https://github.com/coreos/coreos-kubernetes.git
cd coreos-kubernetes/multi-node/vagrant

Edit the multi-node/vagrant/Vagrantfile file to use 2 CPUs for the Kubernetes workers by adding vb.cpus = 2 in the worker provider.

      worker.vm.provider :virtualbox do |vb|
        vb.memory = $worker_vm_memory
        vb.cpus = 2
      end

Start the Vagrant cluster

Copy the config.rb.sample to config.rb

cp config.rb.sample config.rb

Then modify the config.rb file with:

$update_channel = "stable"
$controller_count = 1
$controller_vm_memory = 1024
$worker_count = 1
$worker_vm_memory = 4098
$etcd_count = 1
$etcd_vm_memory = 512

and run vagrant up.

vagrant up

Configure kubectl

export KUBECONFIG="${KUBECONFIG}:$(pwd)/kubeconfig"
kubectl config use-context vagrant-multi

Check that kubectl is configured properly by inspecting the cluster:

kubectl get nodes
NAME           STATUS                     AGE
172.17.4.101   Ready,SchedulingDisabled   14h
172.17.4.201   Ready                      14h

If you are unable to connect, wait a little; Kubernetes is pulling its images, which may take some time depending on your internet connection.

We are now going to connect to the Dashboard:

kubectl cluster-info
Kubernetes master is running at https://172.17.4.101:443
Heapster is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/heapster
KubeDNS is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://172.17.4.101:443/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

To access the Dashboard and other services, you have to start the kubectl proxy.

kubectl proxy
Starting to serve on 127.0.0.1:8001

You can now access the Dashboard by pointing your browser to: http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard or http://localhost:8001/ui/

Now open a new terminal; don't forget to set the KUBECONFIG environment variable again.

Let’s get the Pods:

kubectl get pods
No resources found.

A pod is a group of containers that are deployed together on the same host. If you frequently deploy single containers, you can generally replace the word “pod” with “container” and accurately understand the concept.
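As an illustration, a minimal pod manifest looks like this (a sketch only; the nginx image and names are placeholders, not part of the Elasticsearch setup below, and you would create it with kubectl create -f my-pod.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: web
      image: nginx:1.11
      ports:
        - containerPort: 80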

And now the services.

kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.3.0.1     <none>        443/TCP   14h

A service is a grouping of pods that are running on the cluster. Services are “cheap” and you can have many services within the cluster. Kubernetes services can efficiently power a microservice architecture.
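A minimal service manifest that groups pods by label would look roughly like this (again just a sketch with placeholder names):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: web          # matches pods carrying this label
  ports:
    - port: 80        # port exposed by the service
      targetPort: 80  # port the pods listen on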

Let's create our Elasticsearch cluster:

kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/service-account.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-discovery-svc.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-svc.yaml
kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-master-rc.yaml

Wait until es-master is provisioned; to check its status:

kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-master-d64th   1/1       Running   0          19s

Then run the es-client and wait till it is provisioned.

kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-client-rc.yaml
kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-client-glgt2   1/1       Running   0          2s
es-master-d64th   1/1       Running   0          2m

And lastly, the es-data.

kubectl create -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/production_cluster/es-data-rc.yaml

Then wait for all the containers to be in the Running state.

kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-client-glgt2   1/1       Running   0          2m
es-data-cx6pt     1/1       Running   0          1s
es-master-d64th   1/1       Running   0          5m

Let’s check the Elasticsearch master logs.

kubectl logs es-master-d64th
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
log4j:WARN No such property [maxBackupIndex] in org.apache.log4j.DailyRollingFileAppender.
[2017-04-07 17:58:25,554][INFO ][node                     ] [Ezekiel Sims] version[1.7.1], pid[5], build[b88f43f/2015-07-29T09:54:16Z]
[2017-04-07 17:58:25,555][INFO ][node                     ] [Ezekiel Sims] initializing …
[2017-04-07 17:58:25,686][INFO ][plugins                  ] [Ezekiel Sims] loaded [cloud-kubernetes], sites []
[2017-04-07 17:58:25,743][INFO ][env                      ] [Ezekiel Sims] using [1] data paths, mounts [[/data (/dev/sda9)]], net usable_space [11.9gb], net total_space [15.5gb], types [ext4]
[2017-04-07 17:58:28,777][INFO ][node                     ] [Ezekiel Sims] initialized
[2017-04-07 17:58:28,777][INFO ][node                     ] [Ezekiel Sims] starting …
[2017-04-07 17:58:28,982][INFO ][transport                ] [Ezekiel Sims] bound_address {inet[/0:0:0:0:0:0:0:0:9300]}, publish_address {inet[/10.2.19.7:9300]}
[2017-04-07 17:58:29,011][INFO ][discovery                ] [Ezekiel Sims] myesdb/1tvZJi2rTu6yeT9QrDR2CQ
[2017-04-07 17:58:34,237][INFO ][cluster.service          ] [Ezekiel Sims] new_master [Ezekiel Sims]

Let’s have a look at the Dashboard Pods.

Let’s Scale!

kubectl scale --replicas=2 rc es-master
replicationcontroller "es-master" scaled
kubectl scale --replicas=2 rc es-client
replicationcontroller "es-client" scaled
kubectl scale --replicas=2 rc es-data
replicationcontroller "es-data" scaled

Looking at the pods.

kubectl get pods
NAME              READY     STATUS    RESTARTS   AGE
es-client-fsksw   1/1       Running   0          2m
es-client-glgt2   1/1       Running   0          16m
es-data-4qlrg     1/1       Running   0          1m
es-data-cx6pt     1/1       Running   0          13m
es-master-4dn7l   1/1       Running   0          2m
es-master-d64th   1/1       Running   0          18m

Accessing the service

kubectl get service elasticsearch
NAME            CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
elasticsearch   10.3.0.112        9200:32536/TCP   33m
curl http://172.17.4.101:32536
{
  "status" : 200,
  "name" : "Silverclaw",
  "cluster_name" : "myesdb",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
curl http://172.17.4.101:32536/_cluster/health?pretty
{
  "cluster_name" : "myesdb",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 6,
  "number_of_data_nodes" : 2,
  "active_primary_shards" : 0,
  "active_shards" : 0,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0
}
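To convince yourself the cluster actually stores data, you can index a document and fetch it back (a quick sketch; the index and type names are arbitrary, and the NodePort 32536 will differ on your cluster):

# index a document
curl -XPUT 'http://172.17.4.101:32536/test-index/message/1' -d '{"text": "hello from kubernetes"}'

# fetch it back
curl 'http://172.17.4.101:32536/test-index/message/1?pretty'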

That's it! I hope this short tutorial has shown you the cool stuff you can do with Kubernetes. I will write a new post about persistent storage in the coming weeks, if time permits 🙂

Docker CMD vs ENTRYPOINT

April 7th, 2017 · Posted in Blog, DevOps, Tips

CMD arguments can be overridden:

Dockerfile:

FROM ubuntu:16.04
CMD ["echo"]
docker build .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM ubuntu:16.04
---> bd3d4369aebc
Step 2 : CMD echo
---> Running in 7951a71e0c69
---> d8eedc1c5380
Removing intermediate container 7951a71e0c69
Successfully built d8eedc1c5380
docker run d8eedc1c5380 ls
bin
boot
dev
etc

The ENTRYPOINT is NOT overridden by docker run arguments (they are passed to it as parameters instead):

Dockerfile:

FROM ubuntu:16.04
ENTRYPOINT ["echo"]
docker build .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM ubuntu:16.04
---> bd3d4369aebc
Step 2 : ENTRYPOINT echo
---> Running in 96697aa9cfd9
---> e1397282b7cf
Removing intermediate container 96697aa9cfd9
Successfully built e1397282b7cf
docker run e1397282b7cf ls
ls
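The two instructions can also be combined: ENTRYPOINT fixes the executable while CMD supplies default arguments that docker run can replace (a small sketch, not part of the original examples):

FROM ubuntu:16.04
ENTRYPOINT ["echo"]
CMD ["hello world"]

Running the image with no arguments prints "hello world"; running it as docker run <image> goodbye prints "goodbye". The ENTRYPOINT stays in place, only the CMD default is replaced.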

AWS CloudFormation lists Availability Zones for a specified region

April 6th, 2017 · Posted in Blog, DevOps, Tips

Use the intrinsic function Fn::GetAZs to return an array that lists the Availability Zones for a specified region.

Because you have access to different Availability Zones, the intrinsic function Fn::GetAZs enables CloudFormation template authors to write templates that adapt to the calling user’s access.

That way you don’t have to hard-code a full list of Availability Zones for a specified region.
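For example, a subnet can pick the first Availability Zone returned for the stack's region instead of hard-coding it (a sketch in YAML; the VPC reference and CIDR block are placeholders):

Resources:
  PublicSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref MyVPC
      CidrBlock: 10.0.0.0/24
      # first AZ of the region the stack is created in
      AvailabilityZone: !Select [0, !GetAZs '']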

Important Note: For the EC2-VPC platform, the Fn::GetAZs function returns only Availability Zones that have a default subnet unless none of the Availability Zones has a default subnet; in that case, all Availability Zones are returned.

More information is available in the AWS User Guide.

ChatOps: Rocket.chat with Hubot chat bot using docker

April 5th, 2017 · Posted in Blog, DevOps, Tips

We are going to install Rocket.chat with the github Hubot chat bot.

Create a docker-compose.yml file with:

db:
  image: mongo:3.5.5
  volumes:
    - ./data/runtime/db:/data/db
    - ./data/dump:/dump
  command: mongod --smallfiles

rocketchat:
  image: rocketchat/rocket.chat:0.54.2
  environment:
    - MONGO_URL=mongodb://db:27017/rocketchat
    - ROOT_URL=https://192.168.99.100/
    - Accounts_UseDNSDomainCheck=True
    - ADMIN_USERNAME=admin
    - ADMIN_PASS=supersecret
    - ADMIN_EMAIL=admin@example.com
  links:
    - db:db
  ports:
    - 3000:3000

hubot:
  image: rocketchat/hubot-rocketchat:v1.0.6
  environment:
    - ROCKETCHAT_URL=192.168.99.100:3000
    - ROCKETCHAT_ROOM=GENERAL
    - ROCKETCHAT_USER=bot
    - ROCKETCHAT_PASSWORD=bot
    - LISTEN_ON_ALL_PUBLIC=true
    - BOT_NAME=hubot
    - EXTERNAL_SCRIPTS=hubot-help,hubot-seen,hubot-links,hubot-greetings,hubot-diagnostics,hubot-google,hubot-reddit,hubot-bofh,hubot-bookmark,hubot-shipit,hubot-maps
  links:
    - rocketchat:rocketchat
# this is used to expose the hubot port for notifications on the host on port 3001, e.g. for hubot-jenkins-notifier
  ports:
    - 3001:8080

Note: Replace ROCKETCHAT_URL and ROOT_URL with the IP address of your docker-machine Docker host, or use localhost.

Let’s run docker-compose:

docker-compose up
Creating hubotrocketchat_db_1
Creating hubotrocketchat_rocketchat_1
Creating hubotrocketchat_hubot_1
Attaching to hubotrocketchat_db_1, hubotrocketchat_rocketchat_1, hubotrocketchat_hubot_1
db_1          | about to fork child process, waiting until server is ready for connections.
db_1          | forked process: 15
rocketchat_1  | ➔ +------------------------------------------------+
rocketchat_1  | ➔ |                 SERVER RUNNING                 |
rocketchat_1  | ➔ +------------------------------------------------+
rocketchat_1  | ➔ |                                                |
rocketchat_1  | ➔ |  Rocket.Chat Version: 0.54.2                   |
rocketchat_1  | ➔ |       NodeJS Version: 4.8.1 -- x64              |
rocketchat_1  | ➔ |             Platform: linux                    |
rocketchat_1  | ➔ |         Process Port: 3000                     |
rocketchat_1  | ➔ |             Site URL: https://192.168.99.100/  |
rocketchat_1  | ➔ |     ReplicaSet OpLog: Disabled                 |
rocketchat_1  | ➔ |          Commit Hash: 13571b070e               |
rocketchat_1  | ➔ |        Commit Branch: HEAD                     |
rocketchat_1  | ➔ |                                                |
rocketchat_1  | ➔ +------------------------------------------------+
hubot_1       | [Tue Apr 04 2017 18:50:05 GMT+0000 (UTC)] INFO Starting Rocketchat adapter version 1.0.6…
hubot_1       | [Tue Apr 04 2017 18:50:05 GMT+0000 (UTC)] INFO Once connected to rooms I will respond to the name: hubot
hubot_1       | [Tue Apr 04 2017 18:50:05 GMT+0000 (UTC)] INFO I will also respond to my Rocket.Chat username as an alias: bot
hubot_1       | [Tue Apr 04 2017 18:50:05 GMT+0000 (UTC)] INFO Connecting To: 192.168.99.100:3000
hubot_1       | [Tue Apr 04 2017 18:50:05 GMT+0000 (UTC)] INFO Successfully connected!
hubot_1       | [Tue Apr 04 2017 18:50:05 GMT+0000 (UTC)] INFO
hubot_1       | [Tue Apr 04 2017 18:50:05 GMT+0000 (UTC)] INFO Logging In

Let's log in to Rocket.Chat with the admin credentials we defined in docker-compose.yml (admin/supersecret); for this, point your browser to http://192.168.99.100:3000/

Once logged in, click Admin, select Administration/Users, and add the bot user with the following:

  • Name: hubot
  • Username: bot
  • Email: bot@yourdomainname.com
  • Check Verified
  • Password: bot
  • Uncheck Require Password Change
  • Role: bot
  • Check: Join the main channel
  • Uncheck send the welcome message

Click Save and Logout.

Time to register your own user: on the Rocket.Chat login page, select Register a new account, fill in the information, and log in with your new user.

In order for Hubot to communicate with Rocket.Chat, it must be able to log in as the bot user. Stop docker-compose with CTRL-C and run docker-compose up again.

Gracefully stopping… (press Ctrl+C again to force)
Stopping hubotrocketchat_hubot_1 … done
Stopping hubotrocketchat_rocketchat_1 … done
Stopping hubotrocketchat_db_1 … done
docker-compose up
Starting hubotrocketchat_db_1
Starting hubotrocketchat_rocketchat_1
Starting hubotrocketchat_hubot_1
Attaching to hubotrocketchat_db_1, hubotrocketchat_rocketchat_1, hubotrocketchat_hubot_1
hubot_1       | [Tue Apr 04 2017 19:20:22 GMT+0000 (UTC)] INFO Starting Rocketchat adapter version 1.0.6…
hubot_1       | [Tue Apr 04 2017 19:20:22 GMT+0000 (UTC)] INFO Once connected to rooms I will respond to the name: hubot
hubot_1       | [Tue Apr 04 2017 19:20:22 GMT+0000 (UTC)] INFO I will also respond to my Rocket.Chat username as an alias: bot
hubot_1       | [Tue Apr 04 2017 19:20:22 GMT+0000 (UTC)] INFO Connecting To: 192.168.99.100:3000
hubot_1       | [Tue Apr 04 2017 19:20:22 GMT+0000 (UTC)] INFO Successfully connected!
hubot_1       | [Tue Apr 04 2017 19:20:22 GMT+0000 (UTC)] INFO
hubot_1       | [Tue Apr 04 2017 19:20:22 GMT+0000 (UTC)] INFO Logging In
hubot_1       | [Tue Apr 04 2017 19:30:51 GMT+0000 (UTC)] INFO rid:  []
hubot_1       | [Tue Apr 04 2017 19:30:51 GMT+0000 (UTC)] INFO All rooms joined.
hubot_1       | [Tue Apr 04 2017 19:30:51 GMT+0000 (UTC)] INFO Preparing Meteor Subscriptions..
hubot_1       | [Tue Apr 04 2017 19:30:51 GMT+0000 (UTC)] INFO Subscribing to Room: __my_messages__
hubot_1       | [Tue Apr 04 2017 19:30:51 GMT+0000 (UTC)] INFO Successfully subscribed to messages
hubot_1       | [Tue Apr 04 2017 19:30:51 GMT+0000 (UTC)] INFO Setting up reactive message list…

You can now communicate with the Hubot chat bot, e.g. @hubot help

You can find new scripts at GitHub Hubot Scripts.

Just add them in your docker-compose.yml:

    - EXTERNAL_SCRIPTS=hubot-help,hubot-seen,hubot-links,hubot-greetings,hubot-diagnostics,hubot-google,hubot-reddit,hubot-bofh,hubot-bookmark,hubot-shipit,hubot-maps,hubot-thesimpsons
