At the project I’m currently working on, we decided to move away from SeleniumGridExtras around September 2018. Since May 2018 the SeleniumGridExtras project is no longer maintained, and its owner has been asking for someone to take over the work. We considered taking ownership of SeleniumGridExtras, but saw that there were better solutions on the market. So, to keep a long story short, after some comparison we decided to use Zalenium. This blog is about setting up Zalenium on a bare metal Kubernetes setup consisting of one master and two workers. All installation instructions are based on Ubuntu Linux. This blog is not about securing and perfecting the setup, just about making it work for you.
Benefits of Zalenium
There are a number of reasons to prefer Zalenium over the standard Selenium grid and even over SeleniumGridExtras. Because Zalenium is fully dockerized, you can run it with a simple docker run command. It will automatically spin up two other Docker containers (if you do not deviate from the standard configuration), one with Firefox and one with Chrome as a browser. When demand for one of these browsers increases, more containers are started to fulfill it. The biggest disadvantage of this practice is that running many of these containers puts a lot of load on a single machine. Luckily Zalenium supports Kubernetes, so we can distribute the load of the containers over multiple machines. When the load on the workers gets too heavy, we can always add an extra worker node to the Kubernetes cluster.
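To give an idea of how simple the standalone setup is, here is a minimal sketch of running Zalenium with plain Docker, based on its documented quick start (adjust the video path to your environment):

# Pull the browser image Zalenium uses, then Zalenium itself
$ docker pull elgalu/selenium
$ docker pull dosel/zalenium

# Zalenium needs the Docker socket so it can spin up browser containers
$ docker run --rm -ti --name zalenium -p 4444:4444 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /tmp/videos:/home/seluser/videos \
    --privileged dosel/zalenium start

On Kubernetes, as described below, Zalenium starts the browser containers as pods instead of plain Docker containers.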
Another benefit of Zalenium is that you can view everything that is happening in the containers directly from the grid via a so-called live view. With SeleniumGridExtras you could also view what was happening on the VM, but not directly from the Selenium Grid console view. You had to watch the machine itself by entering the IP of the VM in your browser, with the specific port number you configured when setting up SeleniumGridExtras. So viewing the test execution is much easier with Zalenium.
Setup
Because Zalenium is actually an extension of Selenium Grid, we are still able to add other nodes to the grid. In this case we can still add a number of VMs that contain IE and Edge. Next to that, we still add our Mac minis to the grid so we can test Safari.
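As a sketch of how such a node could join, a Windows VM with IE can register itself to Zalenium just like to a regular hub, assuming the grid registration endpoint is reachable (the jar version and capabilities here are illustrative):

$ java -jar selenium-server-standalone-3.141.59.jar -role node -hub http://12.123.123.1:30000/grid/register -browser "browserName=internet explorer,maxInstances=1"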
Installation of Kubernetes
Because having just one machine that should do it all is not optimal, we have installed a Kubernetes cluster. This way we can distribute the containers over multiple machines. For now we have one master and two so-called workers (slaves, minions, or whatever you like to call them). Setting up and adding an extra worker is pretty easy. A prerequisite is that Docker is installed. If you do not have it installed already, you can do so by executing the following:
$ sudo apt install docker.io
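To make sure Docker also starts after a reboot, enable its service:

$ sudo systemctl enable docker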
Installation on master and workers:
Start by adding the Kubernetes signing key:
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
After that, add the Kubernetes repository and install Kubernetes:
$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
$ sudo apt install kubeadm
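Installing kubeadm pulls in kubelet and kubectl as well. Since Kubernetes upgrades should be done deliberately via kubeadm, it can be useful to pin these packages afterwards:

$ sudo apt-mark hold kubelet kubeadm kubectl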
Next, disable swap. Kubernetes does not work with swap turned on.
$ sudo swapoff -a
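Note that swapoff -a only lasts until the next reboot. To make it permanent, comment out the swap entry in /etc/fstab, for example with:

$ sudo sed -i '/ swap / s/^/#/' /etc/fstab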
Installation on master
With Kubernetes installed on all machines, go to the machine that will serve as the Kubernetes master and execute the following command:
$ sudo kubeadm init
When this completes, you’ll be presented with the exact command you need to join the nodes to the master. This will look something like this:
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 12.123.123.1:6443 --token b9swxo.3f74bxiv7x9hbnds --discovery-token-ca-cert-hash sha256:2a439c6496894df46a46cf54bc449de7575c95e7cc042710ccaba6c3a438a8a0
As you can see, the output of kubeadm init tells you to execute some steps manually. Please do so:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
To install a communication network between the master and the nodes, we chose to deploy the Weave pod network. You can install it with the following command:
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
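You can check that the Weave pods (and the other system pods) come up with:

$ kubectl get pods -n kube-system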
The master should now be ready to accept worker nodes.
Installation on worker
Next we let each worker join the Kubernetes cluster via kubeadm join. You do not have to do this for the master. The following command is an example; correct it to contain your own IP address and join token.
$ sudo kubeadm join 12.123.123.1:6443 --token b9swxo.3f74bxiv7x9hbnds --discovery-token-ca-cert-hash sha256:2a439c6496894cf46a46cf54bc449de7575c95e7cc042710ccaba6c3a438a8a0
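If you no longer have the output of kubeadm init at hand, you can generate a fresh token together with the matching join command on the master:

$ sudo kubeadm token create --print-join-command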
So now we have our complete Kubernetes cluster up and running. The next thing we have to do is create a deployment to get Zalenium up and running.
To verify that all workers have joined the cluster, run the following command on the master.
$ kubectl get nodes
Output should be something like this:
NAME             STATUS   ROLES    AGE   VERSION
master-machine   Ready    master   62d   v1.13.0
worker-machine   Ready    <none>   45d   v1.13.1
worker-machine   Ready    <none>   62d   v1.13.1
Install Squid proxy server
Because Kubernetes creates its own network between containers, all containers get an IP address from a range that our firewall does not allow through to the test environment. To handle this we can specify a proxy server that tunnels our HTTP requests to the test environments. This proxy server is installed on the same machine as the Kubernetes master, so it has an IP address that is allowed access to the test environments. Later on you will see how to configure the proxy for the Docker containers with Chrome and Firefox. If you don’t have a firewall between your test environments and your Kubernetes cluster / browser VMs, you can skip the steps below and move on to ‘Creating a deployment and service for Zalenium’.
To install Squid proxy execute the following command:
$ sudo apt-get install squid3
Configuration:
The configuration of the Squid proxy server is handled in /etc/squid/squid.conf. I will show you how to configure a very basic proxy server. The first thing we need to do is uncomment the following line (by removing the # character):
#http_access allow localnet
To find that line, issue the command:
$ sudo grep -n http_access /etc/squid/squid.conf
The grep output shows the line numbers of the matches; in our installation the option is found on line 1186. Open up the squid.conf file for editing, with the command sudo vi /etc/squid/squid.conf, scroll down to that line, and remove the # character.
Next you want to look for the line:
#acl localnet src
There will be a number of them (for different network IP schemes). Remove the # character from the one that matches your network, or create a new entry (in our case 12.123.123.0/24) and alter it to your needs.
acl localnet src 12.123.123.0/24
Restart squid with the command:
$ sudo service squid restart
That’s it. You now have a basic proxy server up and running on port 3128, reachable at the IP address of the system you just installed Squid on.
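You can quickly verify the proxy from another machine in the allowed range by sending a request through it (replace the IP with that of your Squid machine):

$ curl -x http://12.123.123.1:3128 http://www.example.com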
Creating a deployment and service for Zalenium
There are two ways to create a deployment and a service with Kubernetes. You can do it directly on the command line with kubectl, but this has some disadvantages. The biggest one is that you have to delete and recreate a deployment every time you want to make a change. Another way to create a deployment and service is via YAML files. With these you can just modify the YAML file and apply the change instead of having to delete and recreate the deployment and service every time.
So for Zalenium we use three YAML files, which you can simply copy and paste into a vi editor on your master.
zalenium-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: zalenium
spec:
  template:
    metadata:
      labels:
        app: zalenium
        role: grid
    spec:
      containers:
        - name: zalenium
          image: dosel/zalenium
          args:
            - start
            - '--desiredContainers'
            - '2'
            - '--maxTestSessions'
            - '1'
            - '--screenWidth'
            - '1440'
            - '--screenHeight'
            - '810'
            - '--videoRecordingEnabled'
            - 'false'
          env:
            - name: zalenium_https_proxy
              value: 'http://10.123.123.1:3128'
            - name: zalenium_http_proxy
              value: 'http://10.123.123.1:3128'
            - name: zalenium_no_proxy
              value: >-
                localhost,127.0.0.1,0.0.0.0,svc.cluster.local,cluster.local,.svc.cluster.local
            - name: OVERRIDE_WAIT_TIME
              value: 5m
          ports:
            - containerPort: 4444
          livenessProbe:
            httpGet:
              path: /
              port: 4444
            initialDelaySeconds: 30
            timeoutSeconds: 1
In the deployment you can specify a number of arguments to tailor Zalenium to your needs. For the full list of arguments, see https://opensource.zalando.com/zalenium/
In the ‘env’ section you can set the properties of the proxy server. If you do not use a proxy, remove these three env entries entirely; leaving them in place can make public internet addresses unreachable from the browser containers.
zalenium-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: zalenium
  labels:
    app: zalenium
spec:
  type: NodePort # Exposes the service on a node port
  ports:
    - port: 4444
      name: zaleniumgridport
      protocol: TCP
      targetPort: 4444
      nodePort: 30000
  selector:
    app: zalenium
    role: grid
Next we need to bind the cluster-admin role to the default service account, so Zalenium is able to deploy its own containers for Chrome and Firefox.
zalenium-cluster-role-binding.yaml
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: zalenium
subjects:
  - kind: ServiceAccount
    # Reference to upper's `metadata.name`
    name: default
    # Reference to upper's `metadata.namespace`
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
You can now create the deployment and service by running the create statements in the following order:
$ kubectl create -f zalenium-service.yaml
$ kubectl create -f zalenium-cluster-role-binding.yaml
$ kubectl create -f zalenium-deployment.yaml
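You can verify that the Zalenium pod comes up, and follow its startup logs, with:

$ kubectl get pods
$ kubectl logs -f deployment/zalenium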
If you make any changes to the service or deployment YAML, you can simply apply them by executing:

$ kubectl apply -f zalenium-service.yaml
$ kubectl apply -f zalenium-deployment.yaml
Now you can use the Zalenium grid and start testing. If you already use a Selenium grid, you can change your endpoint to the IP address of the Kubernetes master and port 30000 (in the above example http://12.123.123.1:30000/wd/hub). Otherwise, point your test automation tool of choice at the grid.
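If you want to verify the grid before wiring up a test framework, you can create a raw WebDriver session with curl; this sketch assumes the NodePort 30000 from the service YAML above:

$ curl -X POST http://12.123.123.1:30000/wd/hub/session -d '{"desiredCapabilities": {"browserName": "chrome"}}'

While the session is open, the corresponding browser container should show up in the live view.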
And you’re done. Happy testing!