
Legacy Java Applications

With the advancement of application dockerization, Kubernetes has become a market standard, but we must remember that there are still thousands of legacy applications that depend on certain features provided by application servers. So if you need session replication, you will surely need your WildFly instances to be clustered. To solve this smoothly you can use the “Kubernetes discovery protocol for JGroups”, aka “KUBE_PING”. KUBE_PING is a discovery protocol for JGroups cluster nodes managed by Kubernetes: jgroups-kubernetes
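Under the hood, KUBE_PING does not rely on multicast: each node discovers its peers by asking the Kubernetes API server for the pods that match a given namespace and label selector. As a rough sketch of what that lookup amounts to (run from inside a pod, using the service account token Kubernetes mounts automatically; the namespace and label match the ones we will configure below):

TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
  -H "Authorization: Bearer $TOKEN" \
  "https://kubernetes.default.svc:443/api/v1/namespaces/labs/pods?labelSelector=app%3Dwildfly"

This is also why the pods will need a service account that is allowed to list pods, which we set up further down.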

Walkthrough

I assume you already have a Kubernetes cluster, so let’s focus only on the WildFly settings.

The first step is to create a directory structure that will hold all our files:

[root@workstation ~]# mkdir wildfly20-kubeping
[root@workstation ~]# mkdir -p wildfly20-kubeping/application
[root@workstation ~]# mkdir -p wildfly20-kubeping/configuration
[root@workstation ~]# mkdir -p wildfly20-kubeping/kubernetes
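
When we are done, the layout should look like this (each file is created in the steps below):

wildfly20-kubeping/
├── Dockerfile
├── application/
│   └── cluster.war
├── configuration/
│   └── config-server.cli
└── kubernetes/
    ├── 01-wildfly-sa-role.yaml
    ├── 02-wildfly-deployment.yaml
    ├── 03-wildfly-service.yaml
    └── 04-wildfly-ingress.yaml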

Then create the Dockerfile with the steps needed to customize the image:

[root@workstation ~]# vi wildfly20-kubeping/Dockerfile
FROM jboss/wildfly:20.0.0.Final

LABEL maintainer="Mauricio Magnani <msmagnanijr@gmail.com>"

RUN /opt/jboss/wildfly/bin/add-user.sh admin redhat --silent

ADD configuration/config-server.cli /opt/jboss/

RUN /opt/jboss/wildfly/bin/jboss-cli.sh --file=config-server.cli

RUN rm -Rf /opt/jboss/wildfly/standalone/configuration/standalone_xml_history/*

ADD application/cluster.war /opt/jboss/wildfly/standalone/deployments/

EXPOSE 8080 9990 7600 8888

In the same directory, create the file “config-server.cli”, which contains the steps to configure KUBE_PING correctly:

[root@workstation ~]# vi wildfly20-kubeping/configuration/config-server.cli
###start the server in admin-only mode, using/modifying standalone-full-ha.xml
embed-server --server-config=standalone-full-ha.xml --std-out=echo

###apply all configuration to the server
batch
#/subsystem=logging/logger=org.openshift.ping:add()
#/subsystem=logging/logger=org.openshift.ping:write-attribute(name=level, value=DEBUG)
#/subsystem=logging/logger=org.openshift.ping:add-handler(name=CONSOLE)
/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:write-attribute(name=reconnect-attempts,value=10)  
/interface=kubernetes:add(nic=eth0)  
/socket-binding-group=standard-sockets/socket-binding=jgroups-tcp/:write-attribute(name=interface,value=kubernetes)  
/socket-binding-group=standard-sockets/socket-binding=jgroups-tcp-fd/:write-attribute(name=interface,value=kubernetes)  
/subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcp)
/subsystem=jgroups/channel=ee:write-attribute(name=cluster,value=kubernetes)
/subsystem=jgroups/stack=tcp/protocol=MPING:remove()
/subsystem=jgroups/stack=tcp/protocol=kubernetes.KUBE_PING:add(add-index=0,properties={namespace=${env.KUBERNETES_NAMESPACE},labels=${env.KUBERNETES_LABELS},port_range=0,masterHost=kubernetes.default.svc,masterPort=443})
run-batch

###stop embedded server
stop-embedded-server

Now copy your application package into the “application” directory:

[root@workstation ~]# cp cluster.war wildfly20-kubeping/application

The next step is to create the manifests for all the Kubernetes objects, so we can run the tests later. I’m using the namespace “labs”:

[root@workstation ~]# vi wildfly20-kubeping/kubernetes/01-wildfly-sa-role.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jgroups-kubeping-service-account
  namespace: labs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: jgroups-kubeping-pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jgroups-kubeping-api-access
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jgroups-kubeping-pod-reader
subjects:
- kind: ServiceAccount
  name: jgroups-kubeping-service-account
  namespace: labs
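
These objects give KUBE_PING a service account whose token is allowed to get and list pods, which is exactly the API access the discovery protocol needs. Once everything has been created (the kubectl create step comes later), you can confirm the binding works with a standard kubectl check; it should answer “yes”:

[root@k8s-master kubernetes]# kubectl auth can-i list pods -n labs --as=system:serviceaccount:labs:jgroups-kubeping-service-account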

[root@workstation ~]# vi wildfly20-kubeping/kubernetes/02-wildfly-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wildfly
  namespace: labs
  labels:
    app: wildfly
    tier: devops
spec:
  selector:
    matchLabels:
      app: wildfly
      tier: devops
  replicas: 2
  template: 
    metadata:
      labels:
        app: wildfly
        tier: devops
    spec:
      serviceAccountName: jgroups-kubeping-service-account
      containers:
        - name: kube-ping
          image: mmagnani/wildfly20-kubeping:latest
          command: ["/opt/jboss/wildfly/bin/standalone.sh"]
          args: ["--server-config=standalone-full-ha.xml", "-b", "$(POD_IP)", "-bmanagement", "$(POD_IP)", "-bprivate", "$(POD_IP)"]
          resources:
            requests:
              memory: 256Mi
            limits:
              memory: 512Mi
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
            - containerPort: 9990
            - containerPort: 7600
            - containerPort: 8888
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: status.podIP
            - name: KUBERNETES_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: KUBERNETES_LABELS 
              value: app=wildfly
            - name: JAVA_OPTS
              value: -Djdk.tls.client.protocols=TLSv1.2
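
One detail worth noting: the $(POD_IP) references in “args” are expanded by Kubernetes itself from the POD_IP variable declared under “env”, so each replica binds its public, management, and private interfaces to its own pod IP. If discovery misbehaves, first confirm the variables actually landed in the container (the pod name below is illustrative):

[root@k8s-master kubernetes]# kubectl exec wildfly-cc8b9546f-df9sw -n labs -- env | grep -E 'POD_IP|KUBERNETES'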

[root@workstation ~]# vi wildfly20-kubeping/kubernetes/03-wildfly-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: wildfly
  namespace: labs
  labels:
    app: wildfly
    tier: devops
spec:
  type: ClusterIP
  ports:
    - targetPort: 8080
      port: 8080
  selector:
    app: wildfly
    tier: devops
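
This Service is a plain ClusterIP, so it is only reachable from inside the cluster; external traffic will come in through the Ingress defined next. For a quick in-cluster smoke test you can use a throwaway curl pod (the image and the /cluster/ path are assumptions; adjust the path to whatever your cluster.war serves):

[root@k8s-master kubernetes]# kubectl run curl-test -n labs --rm -it --restart=Never --image=curlimages/curl --command -- curl -s http://wildfly:8080/cluster/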

[root@workstation ~]# vi wildfly20-kubeping/kubernetes/04-wildfly-ingress.yaml

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wildfly-ingress
  namespace: labs
spec:
  rules:
  - host: wildfly.mmagnani.lab
    http:
      paths:
      - path: /
        backend:
          serviceName: wildfly
          servicePort: 8080
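
For the Ingress to work from your workstation, the name wildfly.mmagnani.lab must resolve to your ingress controller. For a quick test, a hosts entry is enough (the IP below is a placeholder for your controller’s address, and /cluster/ is again an assumption about the application):

[root@workstation ~]# echo "192.168.0.100 wildfly.mmagnani.lab" >> /etc/hosts
[root@workstation ~]# curl -s -o /dev/null -w "%{http_code}\n" http://wildfly.mmagnani.lab/cluster/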

Now build the image and push it to Docker Hub or any other registry (run the build from inside the “wildfly20-kubeping” directory, where the Dockerfile lives):

[root@workstation ~]# docker build -t mmagnani/wildfly20-kubeping:latest . 
[root@workstation ~]# docker push mmagnani/wildfly20-kubeping:latest
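
Before deploying, it is worth confirming that the CLI script really baked KUBE_PING into the configuration stored in the image; a simple grep against the shipped standalone-full-ha.xml does the trick:

[root@workstation ~]# docker run --rm --entrypoint grep mmagnani/wildfly20-kubeping:latest -n "kubernetes.KUBE_PING" /opt/jboss/wildfly/standalone/configuration/standalone-full-ha.xml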

In the context of your Kubernetes cluster, create those objects (the “labs” namespace must exist; create it first with “kubectl create namespace labs” if needed):

[root@k8s-master kubernetes]# kubelet --version
Kubernetes v1.18.3
[root@k8s-master kubernetes]# pwd
/root/wildfly20-kubeping/kubernetes
[root@k8s-master kubernetes]# kubectl create -f . -n labs
[root@k8s-master kubernetes]# kubectl get pods -n labs
NAME                      READY   STATUS    RESTARTS   AGE
wildfly-cc8b9546f-df9sw   1/1     Running   0          48s
wildfly-cc8b9546f-scfdp   1/1     Running   0          48s

Check the logs; you should find two members, since we defined only 2 replicas:

[root@k8s-master kubernetes]# kubectl logs -f wildfly-cc8b9546f-df9sw -n labs

14:23:33,253 INFO  [org.infinispan.CLUSTER] (MSC service thread 1-2) ISPN000078: Starting JGroups channel kubernetes
14:23:33,253 INFO  [org.infinispan.CLUSTER] (MSC service thread 1-2) ISPN000094: Received new cluster view for channel kubernetes: [wildfly-cc8b9546f-scfdp|1] (2) [wildfly-cc8b9546f-scfdp, wildfly-cc8b9546f-df9sw]
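
You can also ask WildFly itself for the current view. Since the server is bound to the pod IP rather than localhost, point the CLI at $POD_IP; as far as I know “view” is exposed as a runtime attribute of the JGroups channel resource (if the CLI prompts for credentials, use the admin user created in the Dockerfile):

[root@k8s-master kubernetes]# kubectl exec wildfly-cc8b9546f-df9sw -n labs -- sh -c '/opt/jboss/wildfly/bin/jboss-cli.sh --connect --controller=$POD_IP:9990 --command="/subsystem=jgroups/channel=ee:read-attribute(name=view)"'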

As I am using Traefik, my Ingress is available:

In the application, it is also possible to add objects to the session:
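
If you want to see the replication itself in action, a hedged recipe (the endpoint and the session semantics depend entirely on what your cluster.war exposes): call the application while keeping the session cookie, delete the pod that served the request, and call it again; the session state should survive on the remaining member.

[root@workstation ~]# curl -b /tmp/cookies.txt -c /tmp/cookies.txt http://wildfly.mmagnani.lab/cluster/
[root@k8s-master kubernetes]# kubectl delete pod wildfly-cc8b9546f-df9sw -n labs
[root@workstation ~]# curl -b /tmp/cookies.txt -c /tmp/cookies.txt http://wildfly.mmagnani.lab/cluster/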