Containerised Developer Testing Environment on Google Cloud Platform
Developer testing is one of the most important phases in the software development life cycle, and it requires an appropriate test environment. Setting up such an environment is challenging with server-based deployment methods, but containerisation has made it much more convenient.
In some scenarios developers prefer to use a shared server for development, as it is faster than setting up their own environment. Although software compilation and packaging are manageable on a shared server, developer testing on it is far less convenient.
In this article, I will walk you through a method for creating a containerised developer testing environment on Google Cloud Platform (GCP) while using a shared server environment for code compilation.
To interact with Google Cloud, gcloud was used. It is one of the tools provided in Cloud SDK, a set of command line tools for managing resources on Google Cloud. Instead of a local server installation, the Cloud SDK docker image was used for a quick setup in an isolated, correctly configured container, avoiding the compatibility and dependency issues of a local install (REF: Install Cloud SDK using Docker).
Docker utilities, including the docker client, docker-compose and docker-machine, are already installed on the local server, and their installed location will be mounted into the container environment when the gcloud-sdk container is initialised.
As a sample application we will use an Erlang application from GitHub. It contains a Dockerfile that packages the application into a Docker image, a docker-compose.yaml that starts two replicas (containers) of the application, and a k8s manifest that starts a k8s statefulset with two replicas (pods) of the application. Although the application's functionality is not relevant to this use case, the README in the repository can be referred to.
git clone https://github.com/myErlangProjects/erlang_k8s_cluster.git
The $HOME location of the local server will also be mounted into the container environment at cloud-sdk container initialisation. Therefore the above application will also be available in the container environment.
Deployment on Docker
Authentication on Google Cloud
The docker-machine utility is part of Docker Toolbox and can be used to create docker hosts on your computer, on cloud providers, and inside your own data centre. As our immediate goal is to provision a docker-installed virtual machine on Google Cloud using docker-machine, the utility should be able to find authentication credentials automatically in the location for Application Default Credentials. To set up that authentication, run gcloud auth application-default login.
sudo docker run -it --name gcloud-config-demouser gcr.io/google.com/cloudsdktool/cloud-sdk gcloud auth application-default login
After successful authentication (via web flow), the credentials are preserved in the volume of the gcloud-config-demouser container.
Docker installed VM on Google Cloud
To create a docker-installed VM on GCP, docker-machine commands need to be executed with the credentials that were preserved in the previous step. Therefore, run the container with --volumes-from, which embeds the preserved credentials JSON, named application_default_credentials.json, under the ~/.config/gcloud/ directory of the initiated container.
sudo docker run -it --net host --pid host --rm \
--volumes-from gcloud-config-demouser \
-v /usr/local/bin/:/usr/local/sbin \
-v /home/demouser/:/home/demouser/ \
-v /opt/gcloud-config-demouser/root/.docker:/root/.docker \
gcr.io/google.com/cloudsdktool/cloud-sdk bash
Although the same cloud-sdk docker image is used to initiate the container, any other image with a shell that allows executing standard Linux commands could be used instead, as the login credentials have already been created.
Availability of the gcloud utility can be verified by running gcloud -v.
The location of the docker utilities has been mounted into the container with -v /usr/local/bin/:/usr/local/sbin. Therefore those utilities are accessible from the bash shell in the container environment.
As mentioned earlier, running the container with --volumes-from embeds some directories, i.e. ~/.config and ~/.kube, from the anonymous volume of the previous container. New content added to those locations will also be preserved. As the location ~/.docker is not among them, a host location is mounted into the container with -v /opt/gcloud-config-demouser/root/.docker:/root/.docker, which preserves the docker/docker-machine related configurations.
Therefore, in that container environment, the docker-machine utility can be used to create a docker-installed VM on GCP by running docker-machine create.
docker-machine create --driver google --google-project <project> --google-zone <zone> --google-machine-type <machine-type> --google-disk-size <disk-size> <virtual-machine-name>
Note that not all the parameters are mandatory; there are sensible defaults for most of them. Running docker-machine create --driver google --help lists the available google-specific parameters.
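As an illustration, a concrete invocation might look like the following. The project, zone, machine type, disk size and VM name are all placeholder values, and the script only prints the assembled command so it can be reviewed before being run against a real project:

```shell
# Placeholder values; replace with your own project, zone and names.
PROJECT="my-demo-project"
ZONE="europe-west1-b"
MACHINE_TYPE="n1-standard-1"
DISK_SIZE="20"
VM_NAME="dev-docker-vm"

# Assemble the docker-machine create command with the values above.
CREATE_CMD="docker-machine create --driver google \
  --google-project $PROJECT \
  --google-zone $ZONE \
  --google-machine-type $MACHINE_TYPE \
  --google-disk-size $DISK_SIZE \
  $VM_NAME"

# Print the command for review; execute it with: eval "$CREATE_CMD"
echo "$CREATE_CMD"
```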
After successful creation of the docker-installed VM, the relevant configurations are stored in the ~/.docker/machine/machines/<virtual-machine-name> directory.
Information about the created machine can be displayed by running docker-machine ls.
Accessibility of the created VM can be verified by running a simple shell command on it: docker-machine ssh <virtual-machine-name> <command>.
Google Cloud console shows the created VM as below.
Configure connectivity to Docker daemon on Cloud VM
In order to deploy our docker containers on the VM created on Cloud, the docker-client utility needs to be configured to use the docker daemon on the Cloud VM. Running docker-machine env <virtual-machine-name> --shell bash will display the environment variables to be set for the docker-client configuration.
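For reference, the output of docker-machine env typically consists of export statements like the following; the IP address, certificate path and machine name here are illustrative and will differ in your environment:

```shell
# Illustrative output of `docker-machine env <virtual-machine-name> --shell bash`.
# These variables point the docker client at the remote daemon over TLS.
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://203.0.113.10:2376"
export DOCKER_CERT_PATH="/root/.docker/machine/machines/dev-docker-vm"
export DOCKER_MACHINE_NAME="dev-docker-vm"
```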
Those environment variables can be set by running eval $(docker-machine env <virtual-machine-name> --shell bash).
The docker-client utility is then configured to interact with the remote docker daemon on the Cloud VM. That can be verified by running docker system info, which displays the remote docker daemon's information.
Docker Deployment on Cloud VM
Now the environment is ready for the deployment. The $HOME directory has been mounted into the container with -v /home/demouser/:/home/demouser/. Therefore the sample application is available in the container environment.
The Docker image can be built by running docker-compose build (configurations such as image name and version can be changed if required).
The created image can be displayed by running docker image list.
The Docker containers can be started by running docker-compose up -d.
The resulting containers can be listed by running docker container list.
So, two containers have been started by docker-compose. The developer can now interact with the remote docker environment for developer testing exactly as if it were a local docker environment.
At the end, the container setup can be torn down by running docker-compose down.
Finally, the remote docker environment can be shut down as below:
- Unset all remote docker-daemon configuration environment variables by running eval $(docker-machine env -u --shell bash).
- Stop the VM on Google Cloud by running docker-machine stop <virtual-machine-name>, which will shut down the VM. That can be verified by running docker-machine ls.
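For reference, docker-machine env -u simply emits unset statements for the variables that were exported earlier, which eval then applies:

```shell
# Typical output of `docker-machine env -u --shell bash`:
# it clears the variables that pointed the docker client at the remote daemon,
# so subsequent docker commands fall back to the local daemon.
unset DOCKER_TLS_VERIFY
unset DOCKER_HOST
unset DOCKER_CERT_PATH
unset DOCKER_MACHINE_NAME
```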
Google Cloud console shows the stopped VM as below.
The docker machine can be started again by running docker-machine start <virtual-machine-name>, which will start the VM on Google Cloud. The public IP of the VM is expected to change, so the authentication certificates will need to be regenerated and installed again on the VM. That can be achieved by running docker-machine regenerate-certs <virtual-machine-name>. After setting the environment variables required to configure the docker-client to the remote docker daemon, by running eval $(docker-machine env <virtual-machine-name> --shell bash), the remote docker environment will be available for deployment and testing again.
Alternatively, the Cloud VM can be decommissioned by running docker-machine rm <virtual-machine-name>, which will remove the Cloud VM. A new VM can then be provisioned with a different configuration (e.g. a different machine-type) according to the requirements of the next deployment.
As a security precaution, the credentials stored in ~/.config/gcloud/application_default_credentials.json can be revoked by running gcloud auth application-default revoke. Later, gcloud auth application-default login can be invoked again and authenticated via web flow.
Kubernetes Deployment
Kubernetes cluster on Google Cloud
As a prerequisite, a Kubernetes cluster should be provisioned on Google Cloud, and you should have permission to connect to and use it.
Private Kubernetes cluster creation on Google Cloud is well documented in the GCP documentation. Either the Google Cloud console or Cloud SDK can be used for that.
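As a sketch of the Cloud SDK route, a cluster creation command might look like the following. The cluster name, zone and node count are placeholder values, a private cluster needs additional flags (see the GCP documentation), and the command is only assembled and printed here for review:

```shell
# Placeholder values; adjust to your project and requirements.
CLUSTER_NAME="dev-test-cluster"
ZONE="europe-west1-b"
NUM_NODES="2"

# Assemble the cluster creation command.
CLUSTER_CMD="gcloud container clusters create $CLUSTER_NAME \
  --zone $ZONE --num-nodes $NUM_NODES"

# Print for review; execute with: eval "$CLUSTER_CMD"
echo "$CLUSTER_CMD"
```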
Google Cloud console will display the created Kubernetes cluster as below.
Authentication on Google Cloud and connecting to the Kubernetes cluster
gcloud in Cloud SDK is used to connect to the K8s cluster. Running the container with --volumes-from will make ~/.config/gcloud available in the current container.
sudo docker run -it --net host --pid host --rm \
--volumes-from gcloud-config-demouser \
-v /usr/local/bin/:/usr/local/sbin \
-v /home/demouser/:/home/demouser/ \
gcr.io/google.com/cloudsdktool/cloud-sdk bash
Then run the gcloud auth login command to authenticate with a user identity (via web flow), which authorises gcloud and the other SDK tools to access Google Cloud Platform.
gcloud auth login
After successful authentication (via web flow), the credentials are preserved in the volume of the gcloud-config-demouser container.
Running the container at any subsequent time with --volumes-from will embed the preserved credentials within the container environment under the ~/.config/gcloud/ directory.
The kubectl tool, which is used to interact with the k8s cluster, is available as soon as you start a bash shell in the cloud-sdk container.
As the user credentials are available in the default location, gcloud commands run in the container environment will find your credentials automatically. The kubectl tool can be configured to connect to the GKE cluster in GCP by running:
gcloud container clusters get-credentials <gke-cluster-name> --zone <zone> --project <project-name>
This creates the k8s cluster configuration in ~/.kube/config.
Connectivity to the k8s cluster can be verified by running kubectl get nodes, which displays the cluster's worker node list.
Kubernetes deployment on Google Cloud
The k8s manifests in the same sample application that was used for the aforementioned docker deployment can be used.
cd erlang_k8s_cluster/release/k8s/manifests
The application can be deployed on the k8s cluster by running kubectl apply -f erlang_k8s_cluster.yaml.
The manifest will deploy a statefulset with 2 replicas (pods) and a headless service in the default namespace. The deployed resources can be verified by running kubectl get all, which displays the resources currently deployed in the default namespace.
The Google Cloud console also displays the deployed workloads as depicted below.
Finally, the Kubernetes deployment can be cleared out by running kubectl delete -f erlang_k8s_cluster.yaml. This can be verified by running kubectl get all.
The Kubernetes cluster's worker node count can be reduced to zero at the end of testing and resized back to the original configuration for the next deployment.
Alternatively, the Kubernetes cluster can be completely deleted at the end of testing, and a new k8s cluster created for the next round of developer testing.
For resizing or decommissioning, Cloud SDK or the Google Cloud console can be used.
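With Cloud SDK, the resize-to-zero and decommission options might look like the following; the cluster name and zone are placeholder values, and the commands are only assembled and printed here for review:

```shell
# Placeholder values; adjust to your cluster.
CLUSTER_NAME="dev-test-cluster"
ZONE="europe-west1-b"

# Scale the default node pool down to zero at the end of testing ...
RESIZE_CMD="gcloud container clusters resize $CLUSTER_NAME --zone $ZONE --num-nodes 0"
# ... or remove the cluster entirely.
DELETE_CMD="gcloud container clusters delete $CLUSTER_NAME --zone $ZONE"

# Print for review; execute with: eval "$RESIZE_CMD" (or "$DELETE_CMD")
echo "$RESIZE_CMD"
echo "$DELETE_CMD"
```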
As a security precaution, the stored credentials can be revoked by running gcloud auth revoke --all. Later, gcloud auth login can be invoked again and authenticated via web flow.
Summary
Creating a docker-installed VM on GCP is simple and much faster with docker-machine. After testing is completed, the VM can be stopped or fully decommissioned, then started or created again quickly when needed.
Creating a Kubernetes cluster on GCP is also simple and fast, and the cluster can be resized to zero worker nodes or fully decommissioned at the end of the testing phase.
Therefore each developer can have a dedicated, low-cost containerised testing environment.
Finally, make sure all your credentials are revoked at the end. Otherwise, an intruder could use your credentials to impersonate you and do unwanted things.