Open /etc/cassandra/cassandra.yaml and change the authenticator setting from AllowAllAuthenticator to PasswordAuthenticator, so that Cassandra creates the default superuser cassandra/cassandra.
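This edit can be scripted; a minimal sketch with sed, assuming the default, uncommented authenticator line:

```sh
# Switch Cassandra from anonymous access to password authentication.
sed -i 's/^authenticator: AllowAllAuthenticator/authenticator: PasswordAuthenticator/' /etc/cassandra/cassandra.yaml
```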
To create your own user: create the directory /docker-entrypoint-initdb.d/ and add a CQL file init-query.cql with the content CREATE USER IF NOT EXISTS admin WITH PASSWORD 'vmware' SUPERUSER; so that a user admin/vmware is created.
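A minimal sketch of these two steps from a shell inside the container:

```sh
# Create the init directory and the CQL script that adds the admin/vmware superuser.
mkdir -p /docker-entrypoint-initdb.d
cat > /docker-entrypoint-initdb.d/init-query.cql <<'EOF'
CREATE USER IF NOT EXISTS admin WITH PASSWORD 'vmware' SUPERUSER;
EOF
```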
To execute the init-query.cql file on DB startup, you need to modify the docker-entrypoint.sh file by adding the content below right before exec "$@":
```sh
for f in docker-entrypoint-initdb.d/*; do
    case "$f" in
        *.sh)
            echo "$0: running $f"; . "$f"
            ;;
        *.cql)
            # Retry until Cassandra is up, then run the script; backgrounded so startup can continue.
            echo "$0: running $f" && until cqlsh --ssl -u cassandra -p cassandra -f "$f"; do
                >&2 echo "Cassandra is unavailable - sleeping"
                sleep 2
            done &
            ;;
        *)
            echo "$0: ignoring $f"
            ;;
    esac
    echo
done
```
Here, cqlsh --ssl -u cassandra -p cassandra is used to run the *.cql files (if SSL is not enabled, remove the --ssl option).
Set start_rpc: true in the /etc/cassandra/cassandra.yaml file.
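As with the authenticator change, a sketch of this edit with sed (assuming the default, uncommented start_rpc line):

```sh
# Enable the RPC server.
sed -i 's/^start_rpc: false/start_rpc: true/' /etc/cassandra/cassandra.yaml
```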
To enable SSL: generate a self-signed certificate (run the generateDbCert.sh file inside the container) and modify the /etc/cassandra/cassandra.yaml file with the content below.
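The exact values depend on the keystore that generateDbCert.sh produces, so the following is only an illustrative sketch of the client_encryption_options section of cassandra.yaml; substitute the keystore path and password with the values from the generated certificate:

```yaml
# Sketch only: enable client-to-node encryption.
# The keystore path and password are placeholders; use the values produced by generateDbCert.sh.
client_encryption_options:
    enabled: true
    keystore: <path-to-generated-keystore>
    keystore_password: <keystore-password>
```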
To log in with the cqlsh client: create a cqlshrc file and copy it into the /root/.cassandra/ and /home/cassandra/.cassandra/ folders.
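A minimal sketch of that step, assuming the cqlshrc file has already been prepared in the current directory (its contents must point cqlsh at the certificate generated by generateDbCert.sh):

```sh
# Make the cqlshrc available to both the root and cassandra users.
mkdir -p /root/.cassandra /home/cassandra/.cassandra
cp cqlshrc /root/.cassandra/
cp cqlshrc /home/cassandra/.cassandra/
```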
Exit from the running container and restart the container.
Login: cqlsh --ssl -u cassandra -p cassandra
See logs: /var/log/cassandra
Attached are the required files that help enable authentication and SSL in the Cassandra base image.
Download the Cassandra client DevCenter from DevCenter.
To Create a Multi-Node Cassandra Cluster
Create the Seed Node:
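The command below is a hedged sketch using the public Cassandra image; the container name, published ports, and environment variables are assumptions to adapt to your environment:

```sh
# Start the seed node; CASSANDRA_BROADCAST_ADDRESS tells other nodes how to reach it.
docker run --name mangle-cassandra-seed -d \
  -p 7000:7000 -p 9042:9042 \
  -e CASSANDRA_BROADCAST_ADDRESS=<seed-host-ip> \
  cassandra
```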
Join the Other Nodes to the Seed Node:
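A matching sketch for a joining node; CASSANDRA_SEEDS points the new node at the seed node created above (again, names and ports are assumptions):

```sh
# Join an additional node to the cluster by pointing it at the seed node.
docker run --name mangle-cassandra-node2 -d \
  -p 7000:7000 -p 9042:9042 \
  -e CASSANDRA_BROADCAST_ADDRESS=<this-host-ip> \
  -e CASSANDRA_SEEDS=<seed-host-ip> \
  cassandra
```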
For a quick POC we recommend that you deploy a single node instance of Mangle using the OVA file that we have made available here.
Using the OVA is a fast and easy way to create a Mangle VM on VMware vSphere.
After you have downloaded the OVA, log in to your vSphere environment and perform the following steps:
Start the Import Process
From the Actions pull-down menu for a datacenter, choose Deploy OVF Template.
Provide the location of the downloaded OVA file.
Choose Next.
Specify the name and location of the virtual machine
Enter a name for the virtual machine and select a location for it.
Choose Next.
Specify the compute resource
Select a cluster, host or resource pool where the virtual machine needs to be deployed.
Choose Next.
Review details
Choose Next.
Accept License Agreement
Read through the Mangle License Agreement, and then choose I accept all license agreements.
Choose Next.
Select Storage
Mangle is provisioned with a maximum disk size. By default, Mangle uses only the portion of disk space that it needs, usually much less than the entire disk size (thin provisioning). If you want to pre-allocate the entire disk size (reserving it entirely for Mangle instead), select Thick instead.
Choose Next.
Select Network
Choose an appropriate destination network, and then provide either a static or a DHCP IP address for Mangle.
Choose Next.
Verify Deployment Settings and click Finish to start creating the virtual machine. Depending on bandwidth, this operation might take a while. When finished, vSphere powers up the Mangle VM based on your selections.
After the VM boots up successfully, open the command window. Press Alt+F2 to log in to the VM.
The default account credentials are:
Username: root
Password: vmware
Because of limitations within OVA support on vSphere, it was necessary to specify a default password for the OVA option. However, for security reasons, we would recommend that you modify the password after importing the appliance.
It takes a couple of minutes for the containers to run. Once the Mangle application and DB containers are running, the Mangle application should be available at the following URL:
You will be prompted to change the admin password to continue.
Default Mangle Username: admin@mangle.local
Password: admin
Export the VM as a Template (Optional)
Consider converting this imported VM into a template (from the Actions menu, choose Export) so that you have a master Mangle instance that can be combined with vSphere Guest Customization to enable rapid provisioning of Mangle instances.
Now you can move on to the Mangle Users Guide.
Before creating the Mangle container, a Cassandra DB container should be made available on a Docker host. You can choose to deploy the DB and the application container on the same Docker host or on different Docker hosts; however, we recommend that you use a separate Docker host for each. You can set up a Docker host by following the instructions here.
To deploy Cassandra, you can either use the authentication enabled image tested and verified with Mangle available on the Mangle Docker repo or use the default public Cassandra image hosted on Dockerhub.
If you chose to use the Cassandra image from Mangle Docker Repo:
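The command below is only a sketch; the image name, tag, and volume mapping are assumptions, so check the Mangle Docker repo for the exact values:

```sh
# Run the authentication/SSL-enabled Cassandra image from the Mangle Docker repo (image name assumed).
docker run --name mangle-cassandradb -d \
  -p 9042:9042 \
  -v /cassandra/storage/:/var/lib/cassandra \
  mangleuser/mangle_cassandradb:<version>
```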
If you chose to use the Cassandra image from Dockerhub:
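A corresponding sketch for the public image (the tag and volume mapping are assumptions):

```sh
# Run the default public Cassandra image from Dockerhub.
docker run --name mangle-cassandradb -d \
  -p 9042:9042 \
  -v /cassandra/storage/:/var/lib/cassandra \
  cassandra:3.11
```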
To deploy the Mangle container with a Cassandra DB that was deployed using the image from the Mangle Docker repo, or any DB with authentication and SSL enabled, run the docker command below on the docker host after substituting the values in angle braces <> with actual values.
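The following is a hedged sketch of such a command; the image name and the -D property names inside DB_OPTIONS are assumptions, so cross-check them against the Supported DB_OPTIONS list below:

```sh
# Run Mangle against an auth/SSL-enabled Cassandra DB (property names assumed).
docker run --name mangle -d \
  -p 8080:8080 -p 8443:8443 \
  -e DB_OPTIONS="-DcassandraContactPoints=<DB-host-IP> -DcassandraSslEnabled=true -DcassandraUsername=<DB-user> -DcassandraPassword=<DB-password>" \
  mangleuser/mangle:<version>
```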
To deploy the Mangle container with a Cassandra DB that was deployed using the image from Dockerhub, or any DB with authentication and SSL disabled, run the docker command below on the docker host after substituting the values in angle braces <> with actual values.
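And a matching sketch for a DB with authentication and SSL disabled (same assumptions apply):

```sh
# Run Mangle against a plain Cassandra DB with auth and SSL disabled.
docker run --name mangle -d \
  -p 8080:8080 -p 8443:8443 \
  -e DB_OPTIONS="-DcassandraContactPoints=<DB-host-IP> -DcassandraSslEnabled=false" \
  mangleuser/mangle:<version>
```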
The Mangle docker container takes two environment variables:
"DB_OPTIONS", which can take a list of java arguments identifying the properties of the database cluster
"CLUSTER_OPTIONS", which can take a list of java arguments identifying the properties of the Mangle application cluster
Although the docker run commands above list only a few DB_OPTIONS and CLUSTER_OPTIONS parameters, Mangle supports many more for further customization.
Supported DB_OPTIONS
Supported CLUSTER_OPTIONS
Mangle vCenter Adapter is a fault injection adapter for injecting vCenter specific faults. All the vCenter operations from the Mangle application will be carried out through this adapter.
To deploy the vCenter adapter container run the docker command below on the docker host.
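A hedged sketch of that command; the image name, tag, and published ports are assumptions, so check the Mangle Docker repo for the exact values:

```sh
# Run the Mangle vCenter Adapter container (image name assumed).
docker run --name mangle-vc-adapter -d \
  -p 8080:8080 -p 8443:8443 \
  mangleuser/mangle_vcenter_adapter:<version>
```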
The API documentation for the vCenter Adapter can be found at:
https://<IP or hostname>:<port>/mangle-vc-adapter/swagger-ui.html
The vCenter adapter requires authentication for all API calls. It supports only one user, admin, with password admin. All the POST APIs that are supported by the adapter take the vCenter information as a request body.
All the relevant YAML files are available in the Mangle git repository under 'mangle/mangle-support/kubernetes'.
Create a namespace called 'mangle' in the K8s cluster.

```sh
kubectl --kubeconfig=<kubeconfig-file> create namespace mangle
```
Create Cassandra pod and service.
```sh
kubectl --kubeconfig=<kubeconfig-file> -n mangle apply -f <path-to-yaml-files>/cassandra.yaml
kubectl --kubeconfig=<kubeconfig-file> -n mangle apply -f <path-to-yaml-files>/cassandra-service.yaml
```
Create Mangle pod and service
```sh
kubectl --kubeconfig=<kubeconfig-file> -n mangle apply -f <path-to-yaml-files>/mangle.yaml
kubectl --kubeconfig=<kubeconfig-file> -n mangle apply -f <path-to-yaml-files>/mangle-service.yaml
```
A multi-node setup for Mangle ensures availability in case of unexpected failures. We recommend that you use a 3 node Mangle setup.
You need at least 4 docker hosts for setting up a multi-node Mangle instance: 1 for the Cassandra DB and 3 for the Mangle application containers. You can set up a docker host by following the instructions here.
A multi-node setup of Mangle is implemented using Hazelcast. The Mangle nodes use TCP connections to communicate with each other. The configuration of the setup is handled by providing the right arguments to the docker container run command, which identifies the cluster.
The docker container takes an environment variable "CLUSTER_OPTIONS", which can take a list of java arguments identifying the properties of the cluster. The following arguments should be part of "CLUSTER_OPTIONS":
clusterName - A unique string that identifies the cluster that the current Mangle app will join. If not provided, the Mangle app will by default use the string "mangle" as the clusterName; if this doesn't match the one already configured for the cluster the node is trying to join, the container fails to start.
clusterValidationToken - A unique string that acts like a password for a member to get validated against the existing cluster. If the validation token doesn't match the one being used by the cluster, the container will fail to start.
publicAddress - IP of the docker host on which the Mangle application will be deployed. This is the IP that Mangle will use to establish a connection with the other members that are already part of the cluster, so it is necessary to provide the host IP and to make sure the docker host is discoverable from the other nodes.
clusterMembers - An optional property that takes a comma-separated list of IP addresses that are part of the cluster. If not provided, Mangle will query the DB, find the members of the cluster that is using the DB, and try connecting to them automatically. It is enough for Mangle to connect to at least one member to become part of the cluster.
deploymentMode - Takes either the CLUSTER or STANDALONE value. The deployment mode parameter is mandatory for a multi-node deployment, with the value set to CLUSTER on every node that will be part of the cluster, whereas for a single-node deployment this field can be ignored and defaults to STANDALONE. If the DB was used by one type of deployment setup, this parameter must be updated to support the other type, either through Mangle or by changing it directly in the DB.
NOTE:
All the nodes (docker hosts) participating in the cluster should be synchronized with a single time server.
If a different Mangle app uses the same clusterValidationToken, clusterName, and DB as an existing cluster, the node will automatically join that existing cluster.
All the Mangle apps participating in the cluster should use the same Cassandra DB.
The properties clusterValidationToken and publicAddress are mandatory for any Mangle container spin-up; if they are not provided, the container will fail to start.
Deploy a Cassandra DB container by referring to the section here.
Deploy the Mangle cluster by bringing up the Mangle container on each docker host.
For the first node in the cluster:
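A sketch built from the CLUSTER_OPTIONS arguments described above; the image name, ports, and DB_OPTIONS properties are assumptions:

```sh
# First cluster node; no clusterMembers needed since the cluster doesn't exist yet.
docker run --name mangle -d \
  -p 8080:8080 -p 8443:8443 \
  -e DB_OPTIONS="-DcassandraContactPoints=<DB-host-IP>" \
  -e CLUSTER_OPTIONS="-DclusterName=mangle -DclusterValidationToken=<token> -DpublicAddress=<this-host-IP> -DdeploymentMode=CLUSTER" \
  mangleuser/mangle:<version>
```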
For the subsequent nodes in the cluster:
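For each subsequent node, a matching sketch with that node's publicAddress; clusterMembers is optional, since Mangle can discover existing members through the shared DB:

```sh
# Subsequent cluster node; optionally point it at an existing member.
docker run --name mangle -d \
  -p 8080:8080 -p 8443:8443 \
  -e DB_OPTIONS="-DcassandraContactPoints=<DB-host-IP>" \
  -e CLUSTER_OPTIONS="-DclusterName=mangle -DclusterValidationToken=<token> -DpublicAddress=<this-host-IP> -DdeploymentMode=CLUSTER -DclusterMembers=<first-node-IP>" \
  mangleuser/mangle:<version>
```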
Mangle implements a quorum strategy to support HA and to avoid any split-brain scenarios when deployed as a cluster.
Mangle can be deployed in two different deployment modes:
STANDALONE: the quorum value is 1 and cannot be updated to any other value
CLUSTER: the minimum quorum value is 2 and cannot be set to a value less than ceil((n + 1)/2), n being the number of nodes currently in the cluster
If a node is deployed initially in STANDALONE mode and the user changes the deployment mode to CLUSTER, the quorum will automatically be updated to 2. As the user keeps adding new nodes to the cluster, the quorum value updates depending on the number of nodes in the cluster, as determined by the formula ceil((n + 1)/2). For example, when the user adds a new node to a 3-node cluster (quorum=2), making it a 4-node cluster, the quorum value is updated to 3.
If a node is deployed initially in CLUSTER mode and the user changes the deployment mode to STANDALONE, all nodes except the Hazelcast master will shut down. They will lose the connection with the existing node, will not accept any POST calls, and their entries will be removed from the cluster table.
NOTE:
The list of active members of the active quorum is maintained in the DB under the table cluster. Similarly, the master instance entry is maintained in the DB under the same table.
All the schedules will be paused
No POST calls can be made against Mangle
Tasks that were triggered on any node will continue to execute until they reach the Failed/Completed state
Removes its entry from the cluster table's members list
If the node was the last known master, it will remove its entry as master from the table
All the schedules that are in the scheduled state and all the schedules that were paused because of the quorum failure will be triggered
All the tasks that are in progress will be queued for execution 5 minutes after the quorum-created event (this is to avoid any simultaneous execution)
The master (the oldest member in the cluster) will take care of adding the list of active members to the cluster table and updating itself as master
Some of the schedules will be assigned to the new node due to migration
Some of the tasks will be assigned to the new node, and they will be queued for triggering after 5 minutes
Existing tasks that are in progress will continue to execute on the old node
The master (the oldest member in the cluster) will take care of adding the new member's entry to the list of active members in the cluster table
Let us consider 3 instances a, b, c in the CLUSTER
a leaves the cluster due to network partition
a is not able to communicate with b, c
two clusters will be created, one having only a, and the other having b and c
since the cluster having only a does not meet the quorum (2), it loses the quorum and removes itself as an active member from the cluster config
the cluster having nodes b and c continues to execute since it has enough nodes to satisfy the quorum
Let us consider 2 instances a, b in the CLUSTER
a leaves the cluster due to network partition, a cannot communicate with b and vice versa
two clusters are created with each having one node
both clusters lose the quorum (2)