CrossDataCenter.md
# Test Cross-Data-Center scenario (test with external JDG server)
These are temporary notes. This document should be removed once cross-DC support is finished and properly documented.

These steps are already automated for embedded Undertow; see the Cross-DC tests section in the HOW-TO-RUN.md document. For WildFly they are not yet automated. The following instructions apply to the WildFly server.
What is working right now:

- Propagation of invalidation messages for the `realms`, `users` and `authorization` caches
- Sessions, offline sessions and login failures are propagated between datacenters
## Basic setup
This is an example setup simulating 2 datacenters, `site1` and `site2`. Each datacenter consists of 1 Infinispan server and 2 Keycloak servers, so there are 2 Infinispan servers and 4 Keycloak servers in total in the testing setup.

- `site1` consists of Infinispan server `jdg1` and 2 Keycloak servers, `node11` and `node12`.
- `site2` consists of Infinispan server `jdg2` and 2 Keycloak servers, `node21` and `node22`.
- Infinispan servers `jdg1` and `jdg2` form a cluster with each other. The communication between them is the only communication between the 2 datacenters.
- Keycloak servers `node11` and `node12` form a cluster with each other, but they don't communicate with any server in `site2`. They communicate with Infinispan server `jdg1` through the HotRod protocol (remote cache).
- The same applies for `node21` and `node22`. They form a cluster with each other and communicate only with the `jdg2` server through the HotRod protocol.
TODO: Picture on blog
- For example, when some object (realm, client, role, user, ...) is updated on `node11`, then `node11` will send an invalidation message. It does this by saving a special cache entry to the remote cache `work` on `jdg1`. `jdg1` notifies client listeners in the same DC (hence on `node12`) and propagates the message to them. But `jdg1` shares a replicated cache with `jdg2`, so the entry is saved on `jdg2` too, and `jdg2` notifies client listeners on nodes `node21` and `node22`. All the nodes then know that they should invalidate the updated object from their caches. The caches with the actual data (`realms`, `users` and `authorization`) are Infinispan local caches.
TODO: Picture and better explanation?
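Until a picture is available, the invalidation flow above can be sketched as a toy model. This is plain Python, not Keycloak code; all class and method names here are invented purely for illustration:

```python
# Toy model of the invalidation flow: node11 writes an entry to the "work"
# cache on jdg1, jdg1 notifies its local client listeners and replicates the
# entry to jdg2, which notifies the listeners on site2.

class KeycloakNode:
    def __init__(self, name):
        self.name = name
        self.local_cache = {}            # local realms/users/authorization caches

    def on_remote_event(self, key):
        # HotRod client listener: evict the invalidated object locally
        self.local_cache.pop(key, None)

class InfinispanServer:
    def __init__(self, name):
        self.name = name
        self.work = {}                   # the replicated "work" cache
        self.listeners = []              # Keycloak nodes in the same DC
        self.peers = []                  # replication link to the other site

    def put_work_entry(self, key, value, replicate=True):
        self.work[key] = value
        for node in self.listeners:      # notify client listeners in this DC
            node.on_remote_event(key)
        if replicate:                    # replicated cache: entry lands on the peer too
            for peer in self.peers:
                peer.put_work_entry(key, value, replicate=False)

# wire up the topology from this document
jdg1, jdg2 = InfinispanServer("jdg1"), InfinispanServer("jdg2")
jdg1.peers, jdg2.peers = [jdg2], [jdg1]
nodes = {name: KeycloakNode(name) for name in ("node11", "node12", "node21", "node22")}
jdg1.listeners = [nodes["node11"], nodes["node12"]]
jdg2.listeners = [nodes["node21"], nodes["node22"]]

# every node holds a stale copy of a realm; node11 updates the realm and
# publishes an invalidation entry to the "work" cache on jdg1
for n in nodes.values():
    n.local_cache["realm:demo"] = "stale copy"
jdg1.put_work_entry("realm:demo", "invalidation message")

# all 4 nodes, in both sites, have evicted the stale entry
assert all("realm:demo" not in n.local_cache for n in nodes.values())
```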
- For example, when some userSession is created/updated/removed on `node11`, it is saved in the cluster on the current DC, so `node12` can see it. But it is also saved to the remote cache on the `jdg1` server. The userSession is then automatically seen on the `jdg2` server because there is a replicated cache `sessions` between `jdg1` and `jdg2`. Server `jdg2` then notifies nodes `node21` and `node22` through the client listeners (a feature of the remote cache and the HotRod protocol; see the Infinispan docs for details). The node that owns the userSession (either `node21` or `node22`) will update the userSession in the cluster on `site2`. Hence any user requests coming to Keycloak nodes on `site2` will see the latest updates.
TODO: Picture and better explanation?
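The session flow can be sketched the same way (again invented names, not Keycloak code), with the detail that only the owning node applies the remote event on the other site:

```python
# Toy model of cross-DC session propagation: a session written on site1 is
# pushed to the replicated remote "sessions" cache, and on site2 only the
# node that owns the session writes it into site2's cluster cache.

class Site:
    def __init__(self, name, node_names):
        self.name = name
        self.nodes = node_names
        self.sessions = {}               # the site's clustered "sessions" cache

    def owner_of(self, session_id):
        # stand-in for Infinispan's consistent hashing: one owner per session
        return self.nodes[hash(session_id) % len(self.nodes)]

    def receive_remote_event(self, session_id, data):
        # all nodes get the HotRod client event, but only the owner
        # updates the session in the local cluster
        for node in self.nodes:
            if node == self.owner_of(session_id):
                self.sessions[session_id] = data

site1 = Site("site1", ["node11", "node12"])
site2 = Site("site2", ["node21", "node22"])
remote_sessions = {}                     # "sessions" cache replicated jdg1 <-> jdg2

def create_session(local_site, other_site, session_id, data):
    local_site.sessions[session_id] = data   # saved in the local cluster
    remote_sessions[session_id] = data       # saved to the remote cache (jdg1)
    other_site.receive_remote_event(session_id, data)  # jdg2 notifies site2 nodes

create_session(site1, site2, "sess-123", {"user": "alice"})
assert site2.sessions["sess-123"] == {"user": "alice"}
```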
The example setup assumes all 6 servers are bootstrapped on localhost, but each on different ports.
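The servers are distinguished purely by `jboss.socket.binding.port-offset`, which shifts every socket binding by a fixed amount. Assuming the WildFly default HTTP port 8080 and HotRod port 11222, the offsets used in this guide give the following effective ports (this matches the `-Dremote.cache.port` values and the test URLs later in this document):

```python
# Effective ports for each server = base port + socket-binding port offset.
HTTP_BASE, HOTROD_BASE = 8080, 11222

keycloak_offsets = {"node11": 3000, "node12": 4000, "node21": 5000, "node22": 6000}
jdg_offsets = {"jdg1": 1010, "jdg2": 2010}

for node, off in keycloak_offsets.items():
    # node11 -> 11080, node12 -> 12080, node21 -> 13080, node22 -> 14080
    print(f"{node}: http://localhost:{HTTP_BASE + off}/auth/")
for jdg, off in jdg_offsets.items():
    # jdg1 -> HotRod on 12232, jdg2 -> HotRod on 13232
    print(f"{jdg}: HotRod on localhost:{HOTROD_BASE + off}")
```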
## Infinispan Server setup
- Download the Infinispan 8.2.6 server and unzip it to some folder.
- Add this into `JDG1_HOME/standalone/configuration/clustered.xml` under the cache-container named `clustered`:
```xml
<cache-container name="clustered" default-cache="default" statistics="true">
        ...
        <replicated-cache-configuration name="sessions-cfg" mode="ASYNC" start="EAGER" batching="false">
            <transaction mode="NON_XA" locking="PESSIMISTIC"/>
        </replicated-cache-configuration>

        <replicated-cache name="work" configuration="sessions-cfg" />
        <replicated-cache name="sessions" configuration="sessions-cfg" />
        <replicated-cache name="offlineSessions" configuration="sessions-cfg" />
        <replicated-cache name="actionTokens" configuration="sessions-cfg" />
        <replicated-cache name="loginFailures" configuration="sessions-cfg" />
</cache-container>
```
- Copy the server into a second location, referred to later as `JDG2_HOME`.
- Start server `jdg1`:
```bash
cd JDG1_HOME/bin
./standalone.sh -c clustered.xml -Djava.net.preferIPv4Stack=true \
  -Djboss.socket.binding.port-offset=1010 -Djboss.default.multicast.address=234.56.78.99 \
  -Djboss.node.name=jdg1
```
- Start server `jdg2`:
```bash
cd JDG2_HOME/bin
./standalone.sh -c clustered.xml -Djava.net.preferIPv4Stack=true \
  -Djboss.socket.binding.port-offset=2010 -Djboss.default.multicast.address=234.56.78.99 \
  -Djboss.node.name=jdg2
```
- There should be a message in the log that the nodes are in a cluster with each other:

```
Received new cluster view for channel clustered: [jdg1|1] (2) [jdg1, jdg2]
```
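The bracketed part of that line is a JGroups view of the form `[creator|viewId] (memberCount) [members]`. A small illustrative parser (not part of any setup step) shows how to read it:

```python
import re

def parse_cluster_view(line):
    """Parse a JGroups cluster view log line like
    'Received new cluster view for channel clustered: [jdg1|1] (2) [jdg1, jdg2]'
    into (view_creator, view_id, members)."""
    m = re.search(r"\[(\w+)\|(\d+)\] \((\d+)\) \[([^\]]+)\]", line)
    creator, view_id, count, members = m.groups()
    members = [s.strip() for s in members.split(",")]
    assert len(members) == int(count)    # member count matches the list
    return creator, int(view_id), members

log = "Received new cluster view for channel clustered: [jdg1|1] (2) [jdg1, jdg2]"
print(parse_cluster_view(log))   # ('jdg1', 1, ['jdg1', 'jdg2'])
```

If the second bracket lists only one member, the servers did not cluster (check multicast addresses and port offsets).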
## Keycloak servers setup
- Download Keycloak 3.3.0.CR1 and unzip it to some location, referred to later as `NODE11`.
- Configure a shared database for the `KeycloakDS` datasource. It is recommended to use MySQL, MariaDB or PostgreSQL. See the Keycloak docs for more details.
- Edit `NODE11/standalone/configuration/standalone-ha.xml`:
3.1) Add the `site` attribute to the JGroups UDP transport:

```xml
<stack name="udp">
    <transport type="UDP" socket-binding="jgroups-udp" site="${jboss.site.name}"/>
```
3.2) Add this `module` attribute under the cache-container element named `keycloak`:

```xml
<cache-container name="keycloak" jndi-name="infinispan/Keycloak" module="org.keycloak.keycloak-model-infinispan">
```
3.3) Add the `remote-store` under the `work` cache:

```xml
<replicated-cache name="work" mode="SYNC">
    <remote-store cache="work" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
        <property name="rawValues">true</property>
        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
    </remote-store>
</replicated-cache>
```
3.5) Add the `remote-store` like this under the `sessions` cache:

```xml
<distributed-cache name="sessions" mode="SYNC" owners="1">
    <remote-store cache="sessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
        <property name="rawValues">true</property>
        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
    </remote-store>
</distributed-cache>
```
3.6) Do the same for the `offlineSessions`, `loginFailures` and `actionTokens` caches. For `offlineSessions` and `loginFailures` the only difference from the `sessions` cache is the cache name; `actionTokens` additionally uses `owners="2"`, `preload="true"` and explicit eviction/expiration settings:

```xml
<distributed-cache name="offlineSessions" mode="SYNC" owners="1">
    <remote-store cache="offlineSessions" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
        <property name="rawValues">true</property>
        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
    </remote-store>
</distributed-cache>

<distributed-cache name="loginFailures" mode="SYNC" owners="1">
    <remote-store cache="loginFailures" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="false" shared="true">
        <property name="rawValues">true</property>
        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
    </remote-store>
</distributed-cache>

<distributed-cache name="actionTokens" mode="SYNC" owners="2">
    <eviction max-entries="-1" strategy="NONE"/>
    <expiration max-idle="-1" interval="300000"/>
    <remote-store cache="actionTokens" remote-servers="remote-cache" passivation="false" fetch-state="false" purge="false" preload="true" shared="true">
        <property name="rawValues">true</property>
        <property name="marshaller">org.keycloak.cluster.infinispan.KeycloakHotRodMarshallerFactory</property>
    </remote-store>
</distributed-cache>
```
3.7) Add an outbound socket binding for the remote store into the `socket-binding-group` configuration:

```xml
<outbound-socket-binding name="remote-cache">
    <remote-destination host="${remote.cache.host:localhost}" port="${remote.cache.port:11222}"/>
</outbound-socket-binding>
```
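The `${property:default}` expressions here are WildFly's system-property substitution: the value of the system property if it is set (e.g. via `-Dremote.cache.port=12232` in the startup commands below), otherwise the default after the colon. A rough Python sketch of just that resolution rule (illustrative only; the real resolver supports more forms):

```python
import re

def resolve(expression, system_props):
    """Mimic WildFly's ${property:default} substitution for simple cases."""
    def sub(m):
        name, _, default = m.group(1).partition(":")
        return system_props.get(name, default)
    return re.sub(r"\$\{([^}]+)\}", sub, expression)

# with no -D flags, the defaults apply
print(resolve("${remote.cache.host:localhost}:${remote.cache.port:11222}", {}))
# passing -Dremote.cache.port=12232 (as in the node11 startup command) overrides it
print(resolve("${remote.cache.host:localhost}:${remote.cache.port:11222}",
              {"remote.cache.port": "12232"}))
```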
3.8) The configuration of the distributed cache `authenticationSessions` and the other caches is left unchanged.
3.9) Optionally enable DEBUG logging under the logging subsystem:

```xml
<logger category="org.keycloak.cluster.infinispan">
    <level name="DEBUG"/>
</logger>
<logger category="org.keycloak.connections.infinispan">
    <level name="DEBUG"/>
</logger>
<logger category="org.keycloak.models.cache.infinispan">
    <level name="DEBUG"/>
</logger>
<logger category="org.keycloak.models.sessions.infinispan">
    <level name="DEBUG"/>
</logger>
```
- Copy `NODE11` to 3 other directories, referred to later as `NODE12`, `NODE21` and `NODE22`.
- Start `NODE11`:
```bash
cd NODE11/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node11 -Djboss.site.name=site1 \
  -Djboss.default.multicast.address=234.56.78.100 -Dremote.cache.port=12232 -Djava.net.preferIPv4Stack=true \
  -Djboss.socket.binding.port-offset=3000
```
- Start `NODE12`:
```bash
cd NODE12/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node12 -Djboss.site.name=site1 \
  -Djboss.default.multicast.address=234.56.78.100 -Dremote.cache.port=12232 -Djava.net.preferIPv4Stack=true \
  -Djboss.socket.binding.port-offset=4000
```

The cluster nodes should be connected. This should be in the log of both `NODE11` and `NODE12`:

```
Received new cluster view for channel keycloak: [node11|1] (2) [node11, node12]
```
- Start `NODE21`:
```bash
cd NODE21/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node21 -Djboss.site.name=site2 \
  -Djboss.default.multicast.address=234.56.78.101 -Dremote.cache.port=13232 -Djava.net.preferIPv4Stack=true \
  -Djboss.socket.binding.port-offset=5000
```

It shouldn't be connected to the cluster with `NODE11` and `NODE12`, but to a separate one:

```
Received new cluster view for channel keycloak: [node21|0] (1) [node21]
```
- Start `NODE22`:
```bash
cd NODE22/bin
./standalone.sh -c standalone-ha.xml -Djboss.node.name=node22 -Djboss.site.name=site2 \
  -Djboss.default.multicast.address=234.56.78.101 -Dremote.cache.port=13232 -Djava.net.preferIPv4Stack=true \
  -Djboss.socket.binding.port-offset=6000
```

It should be in a cluster with `NODE21`:

```
Received new cluster view for channel keycloak: [node21|1] (2) [node21, node22]
```
- Test:

9.1) Go to http://localhost:11080/auth/ and create the initial admin user.

9.2) Go to http://localhost:11080/auth/admin and log in as admin to the admin console.

9.3) Open a second browser and go to any of the other nodes: http://localhost:12080/auth/admin, http://localhost:13080/auth/admin or http://localhost:14080/auth/admin. After login, you should be able to see the same sessions in the Sessions tab of a particular user, client or realm on all 4 servers.

9.4) After doing any change (e.g. updating some user), the update should be immediately visible on any of the 4 nodes, as caches should be properly invalidated everywhere.

9.5) Check the server logs if needed. After login or logout, a message like this should be in `NODEXY/standalone/log/server.log` on all the nodes:

```
2017-08-25 17:35:17,737 DEBUG [org.keycloak.models.sessions.infinispan.remotestore.RemoteCacheSessionListener] (Client-Listener-sessions-30012a77422542f5) Received event from remote store.
Event 'CLIENT_CACHE_ENTRY_REMOVED', key '193489e7-e2bc-4069-afe8-f1dfa73084ea', skip 'false'
```
This is just a starting point and the instructions are subject to change. We plan various improvements, especially around performance. If you have any feedback regarding the cross-DC scenario, please let us know on the keycloak-user mailing list linked from the Keycloak home page.