Below are chapters I and II:

WildFly with Mod Cluster

If you followed the previous articles, you now have the Apache Web Server running in HA in your environment, as well as WildFly running in Domain Mode with four instances.

The first step is to add two new properties to our WildFly instances. The properties must be added within each instance (server) definition, so let’s start by editing the host-slave.xml file on “Subordinate 0”.

[root@server-subordinate-0 ~]# pwd
/usr/local/wildfly/wildfly-20.0.0.Final
[root@server-subordinate-0 ~]# vim subordinate0/configuration/host-slave.xml 
##around line 89
    <servers>
        <server name="server-marketing-0" group="marketing">
            <system-properties>
                <property name="jboss.node.name" value="node-marketing-0" boot-time="true"/>
                <property name="wildfly.balancer.name" value="marketing-lb" boot-time="true"/>
            </system-properties>
        </server>
        <server name="server-accounting-0" group="accounting">
            <system-properties>
                <property name="jboss.node.name" value="node-accounting-0" boot-time="true"/>
                <property name="wildfly.balancer.name" value="accounting-lb" boot-time="true"/>
            </system-properties>
            <socket-bindings port-offset="100"/>
        </server>
    </servers>
##suppressed
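
If you prefer to script this instead of editing the XML by hand, the same properties can be set from the Master with jboss-cli.sh. This is just a sketch: it assumes the Master’s management interface is on 10.0.0.68:9990 (the address used later in domain.conf) and that the hosts registered with the Domain Controller are named “subordinate0” and “subordinate1”, matching the console steps further down. Repeat the commands for each server, adjusting the values.

[root@server-domain ~]# /usr/local/wildfly/wildfly-20.0.0.Final/bin/jboss-cli.sh --connect --controller=10.0.0.68:9990
[domain@10.0.0.68:9990 /] /host=subordinate0/server-config=server-marketing-0/system-property=jboss.node.name:add(value=node-marketing-0, boot-time=true)
[domain@10.0.0.68:9990 /] /host=subordinate0/server-config=server-marketing-0/system-property=wildfly.balancer.name:add(value=marketing-lb, boot-time=true)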

Now perform the same procedure for “Subordinate 1”.

[root@server-subordinate-1 ~]# pwd
/usr/local/wildfly/wildfly-20.0.0.Final
[root@server-subordinate-1 ~]# vim subordinate1/configuration/host-slave.xml 
##around line 89
    <servers>
        <server name="server-marketing-1" group="marketing">
            <system-properties>
                <property name="jboss.node.name" value="node-marketing-1" boot-time="true"/>
                <property name="wildfly.balancer.name" value="marketing-lb" boot-time="true"/>
            </system-properties>
        </server>
        <server name="server-accounting-1" group="accounting">
            <system-properties>
                <property name="jboss.node.name" value="node-accounting-1" boot-time="true"/>
                <property name="wildfly.balancer.name" value="accounting-lb" boot-time="true"/>
            </system-properties>
            <socket-bindings port-offset="100"/>
        </server>
    </servers>
##suppressed

As you may have noticed, in Domain Mode all of this configuration is done on the “Master”. Remember that we currently have two server groups, “marketing” and “accounting”, both using the “full-ha” profile and the “full-ha-sockets” socket binding group.

Then, on the “Master”, edit the domain.xml file and add a new outbound socket binding pointing to the VIP we defined in chapter I.

[root@server-domain ~]# pwd
/usr/local/wildfly/wildfly-20.0.0.Final
[root@server-domain ~]# vim master/configuration/domain.xml 
##around line 1883 // full-ha-sockets
        <outbound-socket-binding name="proxy">
            <remote-destination host="10.0.0.190" port="9090"/>
        </outbound-socket-binding>
##suppressed
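
The equivalent change can also be made from the jboss-cli.sh session opened earlier; a sketch:

[domain@10.0.0.68:9990 /] /socket-binding-group=full-ha-sockets/remote-destination-outbound-socket-binding=proxy:add(host=10.0.0.190, port=9090)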

Within the “full-ha” profile, edit the modcluster subsystem: point the proxies attribute at the “proxy” outbound socket binding created above and set the balancer attribute.

##around line 1661 // profile full-ha
        <proxy name="default" advertise-socket="modcluster" listener="ajp" proxies="proxy" balancer="${wildfly.balancer.name}">
##suppressed
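
Again, the same edit is possible via the CLI; a sketch (the ${...} expression is stored as-is and resolved by each server at boot):

[domain@10.0.0.68:9990 /] /profile=full-ha/subsystem=modcluster/proxy=default:list-add(name=proxies, value=proxy)
[domain@10.0.0.68:9990 /] /profile=full-ha/subsystem=modcluster/proxy=default:write-attribute(name=balancer, value="${wildfly.balancer.name}")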

Restart the WildFly service on the Master and then on the subordinates.

[root@server-domain ~]# systemctl restart wildfly
[root@server-subordinate-0 ~]# systemctl restart wildfly
[root@server-subordinate-1 ~]# systemctl restart wildfly

Access the VIP on the Mod Cluster port and context: http://10.0.0.190:9090/mod_cluster_manager. You should see that our instances are now connected and ready to use:
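
You can also check this from the command line; a quick sketch (the status page is plain HTML, so grep is enough):

[root@server-domain ~]# curl -s http://10.0.0.190:9090/mod_cluster_manager | grep -o 'node-[a-z]*-[01]' | sort -u

You should see the four node names defined earlier: node-marketing-0/1 and node-accounting-0/1.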

Now deploy the cluster.war application. Open the management console, go to “deployments” –> “content-repository”, and upload the application. After that, “deploy” –> “Deploy Content” –> choose the “marketing” server group.

If the application is deployed successfully, you will see its context in “mod_cluster_manager”: http://10.0.0.190:9090/mod_cluster_manager

If you try to access the application context (http://10.0.0.190/cluster) through the VIP, you will get a 404, because the virtual host for that application has not yet been configured.

So now let’s create a new virtual host to make the application deployed in the “marketing” group available through the VIP.

[root@apache-httpd-01 ~]# vim /etc/httpd/conf.d/virtual_host.conf
<VirtualHost *:80>
    ServerName marketing.mmagnani.lab
    ProxyPass        / balancer://marketing-lb/ stickysession=JSESSIONID|jsessionid nofailover=On
    ProxyPassReverse / balancer://marketing-lb/
</VirtualHost>
[root@apache-httpd-02 ~]# vi /etc/httpd/conf.d/virtual_host.conf
<VirtualHost *:80>
    ServerName marketing.mmagnani.lab
    ProxyPass        / balancer://marketing-lb/ stickysession=JSESSIONID|jsessionid nofailover=On
    ProxyPassReverse / balancer://marketing-lb/
</VirtualHost>
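
Before relying on the new virtual host, validate the configuration and reload Apache on both nodes; a sketch, assuming httpd runs as a systemd service as set up in chapter I:

[root@apache-httpd-01 ~]# apachectl configtest
[root@apache-httpd-01 ~]# systemctl reload httpd
[root@apache-httpd-02 ~]# apachectl configtest
[root@apache-httpd-02 ~]# systemctl reload httpd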

Add an entry in your DNS for the name “marketing.mmagnani.lab” pointing to the VIP, which in this case is 10.0.0.190.
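
If you want to test before DNS is in place, curl’s --resolve flag pins the name to the VIP for a single request; a sketch:

[root@server-domain ~]# curl -s --resolve marketing.mmagnani.lab:80:10.0.0.190 http://marketing.mmagnani.lab/cluster/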

On the “Master”, edit the domain.xml file and update the default virtual host in the undertow subsystem.

[root@server-domain ~]# pwd
/usr/local/wildfly/wildfly-20.0.0.Final
[root@server-domain ~]# vim master/configuration/domain.xml 
##around line 1739
            <host name="default-host" alias="marketing.mmagnani.lab" default-web-module="cluster.war" />
##suppressed
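
The same undertow change via the CLI would look roughly like this; a sketch (“default-server” is the stock undertow server name in the full-ha profile, and “alias” is a list attribute, so we append to it):

[domain@10.0.0.68:9990 /] /profile=full-ha/subsystem=undertow/server=default-server/host=default-host:list-add(name=alias, value=marketing.mmagnani.lab)
[domain@10.0.0.68:9990 /] /profile=full-ha/subsystem=undertow/server=default-server/host=default-host:write-attribute(name=default-web-module, value=cluster.war)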

Restart the WildFly service on the Master and then on the subordinates.

[root@server-domain ~]# systemctl restart wildfly
[root@server-subordinate-0 ~]# systemctl restart wildfly
[root@server-subordinate-1 ~]# systemctl restart wildfly

Finally open the browser and access the application URL: http://marketing.mmagnani.lab

Our applications now respond via the virtual host/VIP, so let’s test the application’s high availability.

The request has been redirected to server “server-marketing-0”. Now go to “Runtime” –> “Hosts” –> “subordinate0” and stop “server-marketing-0”.

Open the app URL again: http://marketing.mmagnani.lab

This time the request was redirected to “node-marketing-1”. High availability is working correctly.
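
You can reproduce this test from the command line as well, using a cookie jar so the JSESSIONID is sent back on every request; a sketch (the response body depends on the demo application):

[root@server-domain ~]# for i in 1 2 3; do curl -s -c /tmp/cookies.txt -b /tmp/cookies.txt http://marketing.mmagnani.lab/cluster/ > /dev/null; done
[root@server-domain ~]# grep JSESSIONID /tmp/cookies.txt

The cookie jar keeps the session across requests, so you can watch what happens to it when the serving node is stopped.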

Unfortunately, the session was not maintained; this is because we have not configured our cluster yet. In the next step we will configure it with JGroups/TCPPING.

WildFly and TCPPING

The first step is to set up a new stack in the profile we are using, in this case “full-ha”. To do this, edit the domain.xml file on the “Master” host and add the new configuration:

[root@server-domain ~]# pwd
/usr/local/wildfly/wildfly-20.0.0.Final
[root@server-domain ~]# vim master/configuration/domain.xml 
##around line 1574
                <channels default="ee">
                    <channel name="ee" stack="tcpping"/>
                </channels>
                <stacks>
                    <stack name="tcpping">
                        <transport type="TCP" socket-binding="jgroups-tcp"/>
                        <protocol type="org.jgroups.protocols.TCPPING">
                            <property name="initial_hosts">
                                ${wildfly.cluster.tcp.initial_hosts}
                            </property>
                            <property name="port_range">
                                ${wildfly.cluster.tcp.port_range}
                            </property>
                        </protocol>
                        <protocol type="MERGE3"/>
                        <protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
                        <protocol type="FD"/>
                        <protocol type="VERIFY_SUSPECT"/>
                        <protocol type="pbcast.NAKACK2"/>
                        <protocol type="UNICAST3"/>
                        <protocol type="pbcast.STABLE"/>
                        <protocol type="pbcast.GMS"/>
                        <protocol type="MFC"/>
                        <protocol type="FRAG2"/>
                    </stack>
##suppressed
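
Creating the entire stack through the CLI is verbose, but switching the default “ee” channel over to it is a single operation; a sketch, assuming the stack above is already in place:

[domain@10.0.0.68:9990 /] /profile=full-ha/subsystem=jgroups/channel=ee:write-attribute(name=stack, value=tcpping)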

Now, in the “marketing” server group, add new properties with the WildFly instances’ IPs and JGroups ports.

##around line 1924
##10.0.0.67 subordinate0
##10.0.0.66 subordinate1
    <server-groups>
        <server-group name="marketing" profile="full-ha">
            <jvm name="default">
                <heap size="64m" max-size="512m"/>
            </jvm>
            <socket-binding-group ref="full-ha-sockets"/>
            <deployments>
                <deployment name="cluster.war" runtime-name="cluster.war"/>
            </deployments>
            <system-properties>
                <property name="wildfly.cluster.tcp.initial_hosts" value="10.0.0.67[7600],10.0.0.66[7600]"/>
                <property name="wildfly.cluster.tcp.port_range" value="0"/>
            </system-properties>
        </server-group>
##suppressed

Add the same properties to the “accounting” server group as well: it uses the same “full-ha” profile, so without them its instances would fail to resolve the expressions at startup. Note that the accounting instances use port 7700 because they run with a port offset of 100 (7600 + 100).

##around line 1934
##10.0.0.67 subordinate0
##10.0.0.66 subordinate1
        <server-group name="accounting" profile="full-ha">
            <jvm name="default">
                <heap size="64m" max-size="512m"/>
            </jvm>
            <socket-binding-group ref="full-ha-sockets"/>
            <system-properties>
                <property name="wildfly.cluster.tcp.initial_hosts" value="10.0.0.67[7700],10.0.0.66[7700]"/>
                <property name="wildfly.cluster.tcp.port_range" value="0"/>
            </system-properties>
        </server-group>
##suppressed
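
These properties can also be set per server group from the CLI; a sketch (the bracketed ports need to be quoted):

[domain@10.0.0.68:9990 /] /server-group=marketing/system-property=wildfly.cluster.tcp.initial_hosts:add(value="10.0.0.67[7600],10.0.0.66[7600]")
[domain@10.0.0.68:9990 /] /server-group=marketing/system-property=wildfly.cluster.tcp.port_range:add(value="0")
[domain@10.0.0.68:9990 /] /server-group=accounting/system-property=wildfly.cluster.tcp.initial_hosts:add(value="10.0.0.67[7700],10.0.0.66[7700]")
[domain@10.0.0.68:9990 /] /server-group=accounting/system-property=wildfly.cluster.tcp.port_range:add(value="0")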

For TCPPING to work correctly you will need to add a private interface on the subordinates, since the JGroups socket bindings in “full-ha-sockets” bind to the “private” interface. Let’s start by editing the host-slave.xml file on “Subordinate 0”.

[root@server-subordinate-0 ~]# pwd
/usr/local/wildfly/wildfly-20.0.0.Final
[root@server-subordinate-0 ~]# vim subordinate0/configuration/host-slave.xml 
##around line 78
       <interface name="private">
            <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
       </interface>
##suppressed

Edit the domain.conf file and add the bind address for this interface to the JAVA_OPTS variable:

[root@server-subordinate-0 ~]# vim /usr/local/wildfly/wildfly-20.0.0.Final/bin/domain.conf
##around line 50

if [ "x$JAVA_OPTS" = "x" ]; then
   JAVA_OPTS="-Xms64m -Xmx512m -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true"
   JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true"
   
   #Edit this line
   JAVA_OPTS="$JAVA_OPTS -Djboss.domain.base.dir=/usr/local/wildfly/wildfly-18.0.1.Final/slave0 -Djboss.host.default.config=host-slave.xml -Djboss.domain.master.address=10.0.0.68 -Djboss.bind.address=10.0.0.67 -Djboss.bind.address.private=10.0.0.67"
else
##suppressed

Now perform the same procedure on “Subordinate 1”.

[root@server-subordinate-1 ~]# pwd
/usr/local/wildfly/wildfly-20.0.0.Final
[root@server-subordinate-1 ~]# vim subordinate1/configuration/host-slave.xml 
##around line 78
       <interface name="private">
            <inet-address value="${jboss.bind.address.private:127.0.0.1}"/>
       </interface>
##suppressed

Edit the domain.conf file and add the bind address for this interface to the JAVA_OPTS variable:

[root@server-subordinate-1 ~]# vim /usr/local/wildfly/wildfly-20.0.0.Final/bin/domain.conf
##around line 50

if [ "x$JAVA_OPTS" = "x" ]; then
   JAVA_OPTS="-Xms64m -Xmx512m -XX:MaxMetaspaceSize=256m -Djava.net.preferIPv4Stack=true"
   JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=$JBOSS_MODULES_SYSTEM_PKGS -Djava.awt.headless=true"
   
   #Edit this line
   JAVA_OPTS="$JAVA_OPTS -Djboss.domain.base.dir=/usr/local/wildfly/wildfly-18.0.1.Final/slave0 -Djboss.host.default.config=host-slave.xml -Djboss.domain.master.address=10.0.0.68 -Djboss.bind.address=10.0.0.66 -Djboss.bind.address.private=10.0.0.66"
else
##suppressed

Restart the WildFly service on the Master and then on the subordinates.

[root@server-domain ~]# systemctl restart wildfly
[root@server-subordinate-0 ~]# systemctl restart wildfly
[root@server-subordinate-1 ~]# systemctl restart wildfly

Now, in the server logs, you should see both instances forming a new cluster:

[root@server-subordinate-1 ~]# pwd
/usr/local/wildfly/wildfly-20.0.0.Final
[root@server-subordinate-1 ~]# tail -f subordinate1/servers/server-marketing-1/log/server.log
2019-12-22 18:31:52,673 INFO  [org.infinispan.CLUSTER] (MSC service thread 1-1) ISPN000094: Received new cluster view for channel ee: [node-marketing-0|1] (2) [node-marketing-0, node-marketing-1]
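
The same ISPN000094 view message should also appear on the first subordinate; a quick check:

[root@server-subordinate-0 ~]# grep ISPN000094 subordinate0/servers/server-marketing-0/log/server.log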

Open the browser and access the application URL: http://marketing.mmagnani.lab

The request has been redirected to “node-marketing-1”; note also that the session’s request counter is 8. Now go to “Runtime” –> “Hosts” –> “subordinate1” and stop “server-marketing-1”.

Refresh the page and a new request will be made: http://marketing.mmagnani.lab

The request has now been redirected to “node-marketing-0” and the session’s request counter is 9. Congratulations! Session replication is working correctly.

In the next chapter we will learn how to monitor this environment using Grafana, Prometheus and AlertManager.

Best,