
Table of Contents

WildFly Administration Guide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  1


Foreword . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  1
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  1
The Author of the Book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  2
The reviewers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  3
What this book covers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  3
Who this book is for . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  4
How to Contact Us . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  4
Piracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  4
Book Version . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  5
Conventions used in this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  5
1. Chapter 1: Getting started with WildFly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  7
1.1. What is new in WildFly? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  7
1.1.1. Changes introduced in WildFly 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  7
1.1.2. Changes introduced in WildFly 9 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  8
1.1.3. Changes introduced in WildFly 10 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  9
1.1.4. Changes introduced in WildFly 11 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  10
1.1.5. Changes introduced in WildFly 12 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  10
1.1.6. Changes introduced in WildFly 13 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  11
1.1.7. Changes introduced in WildFly 14 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  11
1.1.8. Changes introduced in WildFly 15 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  11
1.1.9. Changes introduced in WildFly 16 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  12
1.1.10. Changes introduced in WildFly 17 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  12
1.1.11. Changes introduced in WildFly 18 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  13
1.1.12. Changes introduced in WildFly 19 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  13
1.1.13. Changes introduced in WildFly 20 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  14
1.2. Installing WildFly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  14
1.3. An in-depth look into the application server file system . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  15
1.4. Starting WildFly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  16
1.4.1. Setting the JBOSS_HOME . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  17
1.5. Your first task: Create an Administrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  18
1.5.1. Creating a user in non-interactive mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  19
1.6. Stopping WildFly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  20
1.6.1. Stopping WildFly running on a remote host . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  21
1.7. Handling start-up issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  21
1.8. Installing WildFly as a Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  23
1.8.1. Installing WildFly as a Service on Linux . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  23
1.8.1.1. Installing WildFly as a Service using init.d . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  24
1.8.1.2. Installing WildFly as a Service using systemd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  25
1.8.2. Installing WildFly as a Service on Windows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  26
2. Chapter 2: Core Server configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  27
2.1. The two available server modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  27
2.2. Understanding the server configuration file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  28
2.2.1. Extensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  28
2.2.2. Paths . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  29
2.2.3. Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  30
2.2.4. Socket binding groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  30
2.2.5. System-Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  31
2.2.6. Profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  32
2.3. Configuring WildFly in Standalone mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  32
2.3.1. Configuring JVM settings in Standalone Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  33
2.3.2. Configuring Network Interfaces in Standalone Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  33
2.3.3. Configuring Socket Bindings in Standalone Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  34
2.3.4. Configure Path references in Standalone Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  35
2.3.5. Configuring System Properties in Standalone Mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  35
2.4. Configuring WildFly in Domain mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  36
2.4.1. Configuring the Domain Controller – Part 1: domain.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . .  37
2.4.2. Configuring the Domain Controller – Part 2: host.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  38
2.4.3. Configuring the Host Controllers (host.xml) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  39
2.4.4. Domain breakdown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  42
2.5. Managing the WildFly Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  43
2.5.1. Managing the Domain Profiles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  44
2.5.2. Managing the Domain Hosts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  44
2.5.2.1. Managing the Host controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  46
2.5.3. Managing the Server Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  47
2.6. Domain Controller Failover . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  47
2.6.1. Using Multiple Protocols to reconnect to the Domain Controller . . . . . . . . . . . . . . . . . . . . . .  49
2.6.2. Using Multiple Hosts in the Discovery Options . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  50
2.7. Standalone mode vs Domain mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  50
3. Chapter 3: Server Management with HAL Management console . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  51
3.1. Connecting to the HAL console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  51
3.2. Varying the Server Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  52
3.3. Gathering Runtime statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  54
3.4. Adding custom Response Headers to the HTTP management interface. . . . . . . . . . . . . . . . . . . .  55
3.5. Managing the Domain with HAL Management Console. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  56
3.5.1. Varying your Domain setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  58
3.5.2. Configuring Domain JVM Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  59
3.5.2.1. Configuring Host JVM Settings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  60
3.5.2.2. Configuring Server Groups JVM Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  61
3.5.2.3. Configuring Server JVM Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  61
3.6. Using the HAL Management console’s Management Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  62
3.7. Configuring Macros for frequent management operations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  63
4. Chapter 4: Server Management with the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  65
4.1. Starting the Command Line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  65
4.1.1. Recovering your server configuration using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  66
4.2. Using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  66
4.3. Build up the CLI commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  68
4.3.1. Determine the resource address . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  68
4.3.2. Reading attributes of resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  68
4.3.3. Writing attributes of resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  69
4.3.4. Adding new resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  69
4.3.5. Reading children resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  70
4.3.6. Extra operations available on resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  71
4.4. Enabling properties resolution in the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  71
4.5. Detecting active operations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  72
4.6. Tracing CLI commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  73
4.6.1. In-memory configuration changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  73
4.7. Running the CLI in graphical mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  74
4.7.1. Adding resources in graphical mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  75
5. Chapter 5: Advanced CLI features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  77
5.1. Using CLI batch mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  77
5.1.1. More about batch commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  78
5.2. Using batch deployments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  79
5.3. Applying patches to your configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  80
5.4. Taking snapshots of your configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  82
5.5. Running the CLI in offline mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  84
5.6. Suspending and resuming the Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  86
5.7. Graceful shutdown of the Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  87
5.8. Conditional execution with the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  88
5.9. Migration of legacy systems with the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  88
6. Chapter 6: Deploying applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  91
6.1. File system deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  91
6.1.1. Mode 1: Auto-deploy mode: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  91
6.1.2. Mode 2: Manual deploy mode: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  91
6.1.3. Configuring the Deployment scanner attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  92
6.2. Deploying using the Web interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  93
6.2.1. Standalone Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  93
6.2.2. Domain Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  95
6.2.2.1. Managing your application status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  98
6.3. Deploying the application using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  99
6.3.1. Manipulating exploded deployments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  100
6.3.2. Listing the module dependencies of a deployed application . . . . . . . . . . . . . . . . . . . . . . . .  100
6.3.3. CLI Domain deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  102
6.4. Deploying applications using Maven . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  102
6.4.1. Domain Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  103
7. Chapter 7: Configuring Database connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  104
7.1. Creating a Datasource using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  105
7.1.1. Creating a Datasource in Domain mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  106
7.1.2. Creating an XA Datasource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  106
7.1.2.1. Enabling XA transactions on the DB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  107
7.2. Configuring a Datasource using the Admin Console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  108
7.3. Deploying a Datasource as a resource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  112
7.3.1. Packaging Datasources in your applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  113
7.4. Configuring Datasources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  114
7.4.1. Configuring the Datasource pool attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  114
7.4.2. Configuring flush strategies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  115
7.4.3. Protecting Datasource credentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  116
7.4.3.1. Step 1: Generate the encrypted password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  117
7.4.3.2. Step 2: Create the Security Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  117
7.4.3.3. Step 3: Let your datasource use the Security Domain: . . . . . . . . . . . . . . . . . . . . . . . . . .  118
7.4.4. Masking your Datasource credentials . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  119
7.4.5. Using System Properties in your deployable data sources . . . . . . . . . . . . . . . . . . . . . . . . . .  119
7.4.6. Configuring Multi Datasources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  120
7.5. Policies for creating/destroying connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  120
7.5.1. Configuring the incrementer capacity policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  120
7.5.2. Configuring the decrementer capacity policy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  122
7.6. Gathering Datasource runtime statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  123
7.6.1. Detecting leaked connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  125
7.7. Configuring Agroal Datasource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  126
7.7.1. Creating an Agroal XA Datasource . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  127
8. Chapter 8: Configuring Undertow Webserver . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  129
8.1. Entering Undertow Web server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  129
8.2. Configuring Undertow Filters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  130
8.2.1. Writing a Response Header with a filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  130
8.2.2. Adding a connection limit filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  131
8.2.3. Adding a gzip filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  131
8.2.4. Adding an error filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  132
8.2.5. Adding a custom filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  132
8.3. Configuring Undertow Handlers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  133
8.3.1. Configuring a File based Handler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  133
8.3.2. Creating a Reverse Proxy Handler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  134
8.4. Configuring Undertow Listeners . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  135
8.4.1. Configuring the Web server Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  137
8.4.2. Configuring a custom Worker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  138
8.4.3. Other listeners attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  139
8.5. Configuring Undertow Buffer Pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  140
8.6. Configuring Virtual Hosts in Undertow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  141
8.7. Configuring the Servlet Container and JSP Settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  143
8.8. Configuring Undertow’s access logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  146
8.8.1. Writing access logs in JSON format . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  147
8.9. Gathering statistics about Web applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  148
8.10. Configuring HTTP/2 Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  148
8.11. Configuring EJB calls over Undertow’s HTTP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  148
9. Chapter 9: Configuring the Enterprise subsystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  150
9.1. Configuring the ejb subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  150
9.1.1. Configuring the EJB Pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  150
9.1.2. Configuring the MDB delivery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  152
9.1.2.1. Configuring MDB Group delivery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  153
9.1.2.1.1. Attaching an MDB to multiple Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  154
9.1.3. Configuring the Stateful Session Bean cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  154
9.1.3.1. Enabling Passivation for Stateful Session Beans. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  155
9.1.3.2. Disabling Passivation for a single deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  156
9.1.4. Timeout policies for EJB . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  157
9.1.4.1. Configuring the Stateful Session Bean timeout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  157
9.1.4.2. Configuring the Access timeout for SFSBs and Singleton beans . . . . . . . . . . . . . . . . . .  157
9.1.5. EJB3 Thread pool configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  158
9.1.5.1. Configuring the EJB thread pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  159
9.1.5.2. EJB Thread pool optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  160
9.1.5.3. Gathering runtime statistics of the thread pool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  161
9.1.6. Configuring Interceptors at EJB Container level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  161
9.1.7. Configuring Remote EJB Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  163
9.1.8. Enabling EJB statistics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  164
9.1.9. Consuming messages from an external Messaging Provider . . . . . . . . . . . . . . . . . . . . . . . .  165
9.1.9.1. Consuming messages from a Broker which uses a different Protocol . . . . . . . . . . . .  165
9.1.9.2. Consuming messages from an external ArtemisMQ Broker . . . . . . . . . . . . . . . . . . . . .  168
9.1.9.2.1. Coding JMS Consumers and JMS Producers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  170
9.2. Configuring the ee subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  172
9.2.1. Managing Jakarta EE Application Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  173
9.2.2. Managing EE Concurrency Utilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  174
9.2.3. Managing Default bindings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  178
9.3. Configuring the jaxrs subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  179
9.4. Configuring the singleton subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  180
9.4.1. Defining a Quorum for Singleton . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  181
9.5. Configuring the naming subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  181
9.5.1. Naming Alias . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  181
9.6. Configuring the batch-jberet subsystem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  182
9.7. Configuring the mail subsystem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  183
10. Chapter 10: Configure Logging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  186
10.1. WildFly default logging configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  186
10.2. Configuring Log Handlers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  187
10.2.1. Configuring the Periodic Rotating Handler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  187
10.2.1.1. Changing the path where the log is written . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  188
10.2.1.2. Formatting the log output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  189
10.2.1.3. Filtering the logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  189
10.2.2. Adding a new Handler: the Size Rotating Handler . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  190
10.2.2.1. Adding the handler from the Web console . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  190
10.2.2.2. Adding the handler from the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  191
10.2.3. Creating a Custom Handler that writes logs to the Database . . . . . . . . . . . . . . . . . . . . . . .  191
10.2.4. Creating a Custom Handler that writes logs via Socket . . . . . . . . . . . . . . . . . . . . . . . . . . . .  192
10.2.5. Configuring Handlers to be asynchronous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  193
10.3. Configuring the Root Logger . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  194
10.4. Configuring Logging Categories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  195
10.5. Other Logging configuration files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  196
10.5.1. Using Log4j to trace your application logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  196
10.5.2. Disabling the core logging API . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  197
10.6. Other ways to read the log files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  197
10.6.1. Reading Logs with the Command Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  197
10.6.2. Reading logs using the HTTP channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  198
11. Chapter 11: Configuring JMS Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  200
11.1. ActiveMQ Artemis overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  200
11.1.1. ActiveMQ Artemis architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  200
11.1.2. Socket Management in ActiveMQ Artemis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  201
11.1.3. Starting WildFly with JMS Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  202
11.2. Configuring JMS Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  203
11.2.1. Additional properties you can set on Connectors and Acceptors . . . . . . . . . . . . . . . . . . .  204
11.2.2. Switching to Netty sockets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  205
11.3. Creating JMS Destinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  206
11.3.1. Built-in Queues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  209
11.3.2. Creating Queues and Topics using the Command Line Interface . . . . . . . . . . . . . . . . . . .  209
11.3.2.1. Creating deployable JMS destinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  209
11.3.3. Customizing JMS destinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  210
11.4. Configuring Message Persistence. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  212
11.4.1. Configuring File system journal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  212
11.4.1.1. Changing the location where the Journal is persisted. . . . . . . . . . . . . . . . . . . . . . . . . .  213
11.4.1.2. Configuring Journal’s Max Disk Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  214
11.4.1.3. Configuring Message Paging . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  214
11.4.1.4. Configuring the paging folder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  215
11.4.2. Configuring JDBC Storage for messages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  216
11.4.2.1. Varying the default Journal Table Names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  217
11.5. Routing Messages to other destinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  217
11.5.1. Diverting messages to other destinations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  217
11.5.2. Creating a Bridge between two ActiveMQ Artemis servers . . . . . . . . . . . . . . . . . . . . . . . . .  219
11.5.2.1. ActiveMQ Artemis target configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  219
11.5.2.2. ActiveMQ Artemis source configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  220
11.5.2.3. HA of Bridges . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  222
11.6. JMS Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  223
11.6.1. JMS Cluster configuration using Data replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  225
11.6.1.1. Verifying that the backup server is synchronized with the live server . . . . . . . . . .  228
11.6.2. JMS Cluster configuration using Shared Store . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  229
11.6.3. Server Discovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  232
11.6.3.1. Broadcast Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  232
11.6.3.2. Discovery Groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  233
11.6.4. Configuring a static discovery of cluster nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  234
11.6.5. JMS Cluster behind a load balancer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  235
12. Chapter 12: Classloading and modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  236
12.1. What are modules? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  236
12.2. Configuring static modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  237
12.2.1. How to install a new module. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  237
12.2.1.1. Example: How to install Jython library as a module . . . . . . . . . . . . . . . . . . . . . . . . . . .  238
12.2.1.2. How to use an installed module in your application . . . . . . . . . . . . . . . . . . . . . . . . . . .  239
12.2.1.3. How to turn your module into a global module . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  239
12.2.1.4. How to use global directories for your modules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  240
12.2.1.5. How to deploy extension-type dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  241
12.3. Configuring dynamic modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  241
12.3.1. How to use dynamic modules in your applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  242
12.4. Configuring module Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  242
12.4.1. Implicit dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  243
12.4.2. Explicit dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  244
12.5. Advanced Classloading policies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  244
12.5.1. How to prevent your modules from being loaded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  245
12.5.2. How to prevent a subsystem from being loaded . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  246
12.5.3. Configuring classloading isolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  247
12.5.4. Sticking to Java EE compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  248
12.6. Provisioning WildFly using Galleon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  248
12.6.1. Getting started with Galleon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  249
12.6.2. Exploring Galleon command line . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  251
12.6.3. Installing different versions of a feature-pack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  252
12.6.4. Choosing the layers to include in the installations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  253
13. Chapter 13: Clustering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  255
13.1. WildFly clustering building blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  255
13.2. Clustering standalone nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  256
13.2.1. Clustering standalone servers on different machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  256
13.2.2. Clustering standalone servers on the same machine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  256
13.3. Configuring a cluster of domain nodes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  257
13.3.1. Enabling clustering services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  258
13.3.2. Configuring HTTP Session in a cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  259
13.3.2.1. Configuring HTTP Session Granularity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  260
13.3.2.2. Configuring HTTP Session Affinity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  260
13.3.2.3. Using a custom session management profile . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  261
13.3.2.4. Defining Session Management Profile at application level . . . . . . . . . . . . . . . . . . . . .  261
13.3.2.5. Using jboss-web.xml to manage max-active-sessions . . . . . . . . . . . . . . . . . . . . . . . . . .  262
13.3.2.6. Storing HTTP Session Data in a remote Infinispan cluster . . . . . . . . . . . . . . . . . . . . .  262
13.4. Configuring the Cluster transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  264
13.4.1. Changing the Protocol Stack used by JGroups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  266
13.4.2. Configuring a full TCP stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  266
13.4.2.1. Legacy tcpping configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  267
13.4.3. Other JGroups stacks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  268
13.4.4. Configuring the Transport Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  270
13.4.5. Configuring the Protocol Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  271
13.5. Configuring Clustering Caches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  272
13.5.1. Configuring the Cache Container top level attributes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  273
13.5.2. Configuring the Cache Container Transport . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  275
13.6. Configuring ejb and web Cache containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  276
13.6.1. Configuring a Replicated cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  276
13.6.1.1. Creating a replicated cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  277
13.6.2. Configuring a Distributed cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  277
13.6.2.1. Providing hints to the Distributed cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  278
13.6.2.2. Adding L1 cache to a distributed cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  278
13.6.3. Configuring ejb and web Cache containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  279
13.6.3.1. Configuring cache eviction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  279
13.6.3.2. Configuring cache expiration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  280
13.6.3.3. Configuring locking for entries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  280
13.6.3.4. Configuring EJB and Web application cache Storage . . . . . . . . . . . . . . . . . . . . . . . . . .  282
13.6.3.5. Using a JDBC Cache store . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  283
13.6.3.6. Example: Defining a JDBC Cache Store . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  283
13.6.4. Controlling Passivation of HTTP Sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  284
13.7. Configuring hibernate Cache Container . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  284
13.7.1. Configuring Hibernate cache for Entities and Collections . . . . . . . . . . . . . . . . . . . . . . . . . .  285
13.7.1.1. Configuring eviction for hibernate cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  286
13.7.1.2. Configuring expiration for hibernate cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  286
13.7.1.3. Configuring locking for hibernate cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  287
13.7.1.4. Configuring Hibernate cache for queries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  288
13.7.1.5. Configuring the Timestamp cache . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  288
14. Chapter 14: Load balancing applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  290
14.1. Configuring Apache mod_jk . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  290
14.1.1. Configuring Apache Web server side . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  290
14.1.2. Configuring WildFly to receive AJP requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  292
14.2. Configuring mod_cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  293
14.2.1. Undertow as mod_cluster Front end . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  293
14.2.1.1. Configuring the Back end . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  294
14.2.1.2. Configuring the Front end . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  294
14.2.1.3. Testing Undertow’s mod_cluster load balancer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  295
14.2.2. Manually configuring the Undertow filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  296
14.2.3. Advanced mod_cluster configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  297
14.2.4. mod_cluster Multiplicity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  297
14.2.4.1. How to configure mod_cluster to exclude a Web context . . . . . . . . . . . . . . . . . . . . . .  300
14.2.4.2. Configuring Sticky Sessions with mod_cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  301
14.2.4.3. Configuring Ranked Loadbalancing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  301
14.2.4.4. Configuring Metrics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  302
14.2.4.5. Configuring Initial Load . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  304
14.2.5. Configuring mod_cluster on Apache httpd . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  304
14.2.5.1. Using a static list of httpd proxies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  306
14.2.6. Troubleshooting mod_cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  307
14.2.6.1. Check multicast communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  308
14.2.6.2. Switch additional display . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  308
14.3. Load balancing EJB clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  309
15. Chapter 15: Securing WildFly with Elytron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  311
15.1. Elytron building blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  311
15.1.1. Default Security Domain and Security Realms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  312
15.2. How to enable Elytron for Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  313
15.3. Elytron Realms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  314
15.3.1. Configuring a File System Security Realm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  315
15.3.1.1. Testing Elytron Security Realm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  316
15.3.1.2. Using other options for storing the password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  317
15.3.1.3. Using Elytron in parallel with the Legacy security subsystem . . . . . . . . . . . . . . . . . .  318
15.3.2. Converting legacy property files into Elytron FileSystemRealm . . . . . . . . . . . . . . . . . . . .  318
15.3.2.1. Using the Elytron tool against a Descriptor File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  319
15.3.3. Configuring a JDBC Realm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  319
15.3.3.1. Alternative Password Mappers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  321
15.3.4. Configuring an LDAP Realm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  324
15.3.5. Configuring a SASL Based Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  327
15.3.5.1. Configuring the EJB Server side . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  328
15.3.5.2. Configuring the EJB Client side . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  329
15.3.5.2.1. Masking the user’s password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  329
15.3.5.2.2. Verifying the client identity with a keystore . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  331
15.3.5.3. Securing SOAP Web services with Elytron . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  332
15.3.6. Using Client attributes to determine a Role . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  334
15.3.7. Troubleshooting Authentication issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  335
15.4. Securing Management interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  335
15.5. Configuring SSL/TLS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  335
15.5.1. Creating your own certificates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  336
15.5.2. Configuring One-Way SSL / HTTPS for WildFly applications . . . . . . . . . . . . . . . . . . . . . . .  338
15.5.2.1. Using the CLI security command to configure One-Way SSL / HTTPS . . . . . . . . . . .  340
15.5.2.2. Enabling TLS 1.3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  342
15.5.3. Configuring Mutual SSL Authentication for WildFly applications . . . . . . . . . . . . . . . . . .  342
15.5.3.1. Importing Client certificates on your browser . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  344
15.6. Configuring SSL for Management interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  344
15.7. Using certificates from Let’s Encrypt in WildFly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  347
15.7.1. Using WildFly CLI as agent to request a certificate from Let’s Encrypt . . . . . . . . . . . . . .  348
15.8. Configuring OpenSSL as SSL provider . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  351
15.9. Configuring Server Name Indication (SNI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  351
15.10. Configuring Java Authentication Service Provider Interface (JASPI) . . . . . . . . . . . . . . . . . . .  353
15.11. Using Credential Stores to store sensitive data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  353
15.11.1. Example: securing your Datasource password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  353
15.11.2. A shortcut to add entries in your Credential Store . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  355
15.11.3. Configuring the Credential Store Offline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  355
15.12. An overview of Jakarta EE Security API. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  356
15.12.1. Secure web authentication with Jakarta EE Security API . . . . . . . . . . . . . . . . . . . . . . . . .  356
15.12.2. Managing Identity Stores with Jakarta EE 8 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  358
16. Chapter 16: WildFly’s legacy security model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  361
16.1. Security building blocks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  361
16.1.1. Configuring Security Realms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  361
16.1.1.1. The Management Realm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  362
16.1.1.2. The Application Realm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  362
16.2. WildFly Security Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  363
16.2.1. Security under the hood . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  365
16.2.2. Using the RealmDirect login module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  365
16.2.2.1. Adding new Application users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  366
16.2.2.2. Defining the roles into your applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  367
16.2.3. Database Login module . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  370
16.2.3.1. Using encrypted database passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  371
16.2.4. LDAP Login module configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  372
16.2.5. Login not working? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  373
16.2.6. Auditing Security Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  374
16.3. Management Security with Login Modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  375
16.4. Management Security with LDAP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  375
16.5. Enabling the Secure Socket Layer on WildFly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  376
16.5.1. Securing Web applications with SSL . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  377
16.5.1.1. How to secure the application server with a CA signed certificate . . . . . . . . . . . . . .  378
16.6. Encrypting the Management Interfaces channel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  379
16.7. WildFly support for HTTP/2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  380
16.7.1. Setting up HTTP/2. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  381
17. Chapter 17: RBAC and other Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  382
17.1. Configuring Role Based Access Control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  382
17.1.1. Enabling RBAC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  383
17.1.2. Using groups . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  387
17.1.3. Defining Scoped Roles for Domain mode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  389
17.1.3.1. Server Group-scoped roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  390
17.1.3.2. Host-scoped roles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  391
17.1.4. Configuring Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  392
17.1.4.1. Configuring Sensitivity Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  392
17.1.4.2. Configuring Application Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  394
17.2. Configuring Security Manager on WildFly . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  396
17.2.1. Running WildFly with a Security Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  396
17.2.2. Coding Permissions in the configuration file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  398
17.2.3. Restricting permissions at module level . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  398
18. Chapter 18: Taking WildFly in the cloud . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  400
18.1. Getting Started with Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  400
18.1.1. Installing Docker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  401
18.2. Running WildFly images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  402
18.3. Extending WildFly’s image . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  404
18.3.1. Deploying applications on top of the WildFly image . . . . . . . . . . . . . . . . . . . . . . . . . . .  405
18.4. Getting started with Red Hat OpenShift . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  406
18.4.1. Installing Red Hat CodeReady Containers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  407
18.4.1.1. Starting OpenShift cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  408
18.4.1.1.1. Troubleshooting CRC installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  410
18.4.2. OpenShift quick reference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  411
18.5. Deploying WildFly on CRC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  412
19. Chapter 19: Configuring MicroProfile capabilities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  418
19.1. Managing the MicroProfile Config. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  418
19.1.1. ConfigSources in microprofile-config-smallrye subsystem . . . . . . . . . . . . . . . . . . . . . . . . .  419
19.1.2. ConfigSource from Class. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  422
19.1.3. ConfigSources in microprofile-config.properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  424
19.2. Managing MicroProfile Health Checks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  425
19.2.1. Health Checks from the Command Line Interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  425
20. Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  426
20.1. jboss-deployment-structure.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  426
20.2. jboss-ejb3.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  427
20.3. jboss-web.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  430
20.4. jboss-app.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  431
20.5. jboss-permissions.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  431
20.6. ironjacamar.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  432
20.7. jboss-client.xml. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  432
20.8. jboss-webservices.xml . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  432
20.9. JMS Deployment descriptors (*-jms.xml) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  433
20.10. Datasource Deployment descriptors (*-ds.xml) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  433
WildFly Administration Guide
Author : Francesco Marchioni

© ItBuzzPress 2020

Foreword
WildFly is the latest version of the popular open source JBoss application server. It is exceptionally
lightweight, featuring unparalleled speed and supporting the latest standards, including Java EE 8.
I’m Brian Stansberry, the project lead for the WildFly Application Server. Before that, I was technical
lead for Operations, Administration and Management (OA&M) functionality from the start of the JBoss
AS 7 project. So, as you can imagine, this book’s topic is near and dear to my heart. One of the biggest
priorities of AS 7, WildFly and JBoss Enterprise Application Platform 6 has been to improve the
application server’s manageability, and after a lot of dedication, sleepless nights and coffee I feel
we’ve come a long way. I hope after reading this book you’ll agree. The biggest improvement in
WildFly 8 over AS 7 besides the new Java EE 8 API compatibility is in the OA&M area with the
addition of fine-grained role based administrative access control, a feature that is a focus of the
Security chapter of this book.

I first heard about the author when he authored "JBoss AS 5 Development" in 2010 and was a JBoss
Community Recognition Award Winner for his application server documentation. For many years
now he has been an active and important part of the JBoss Application Server and WildFly
community, consistently producing high quality documentation covering the application server and
middleware in general.

I was very pleased to hear that Francesco was planning to write a book on WildFly 8. High quality
books like this one are critical to the success of open source software, and Francesco has the
expertise to cover the topic well and a great reputation for doing an excellent job.

I hope you’ll find this WildFly Administration Guide as thorough and well written as I did. WildFly’s
web console and its command line interface (CLI) administration tool are well covered, as are all of
the key areas of application server administration. This book definitely belongs on the bookshelf of
anyone administering WildFly or developing applications for it.

Brian Stansberry

Preface
WildFly is the continuation of the release cycle of the application server community edition, which
was previously known as JBoss AS 7. The last official release of JBoss AS 7 was 7.1.1.Final,
although a more recent 7.2 version is available as source code to be built from the GitHub
repository (https://github.com/jbossas/jboss-as/archive/7.2.0.Final.tar.gz).

You might wonder why the application server changed its popular name. Actually, there’s more
than one reason for this change, the first one being to avoid confusion between the commonly
referenced community version (JBoss AS) and the Enterprise version supported by Red Hat. Besides
this, in recent years many new projects grew up on the JBoss.org site which included the "JBoss"
brand in their name (e.g. JBoss ESB). For this reason, the term "JBoss" was often misused, sometimes
to mean the application server and sometimes to mean a whole brand of products.

The rename applies, however, only to the JBoss Application Server community edition. The
licensed version is still named JBoss Enterprise Application Platform (JBoss EAP). So from now on,
when someone refers to WildFly, we clearly know they are talking about the Community project,
and specifically about the application server project.

Besides the new brand name, the WildFly application server follows the same path traced by JBoss
AS 7: this means a truly modular and lightweight kernel with advanced management capabilities.
In addition to this, the new application server version supports the latest changes in Java EE
technology, offering richer management capabilities, more advanced security controls and some
important updates in the Web server tier as well.

Red Hat, Red Hat Enterprise Linux, JBoss, are trademarks of Red Hat, Inc., registered in the
United States and other countries.

Linux ® is the registered trademark of Linus Torvalds in the United States and other
countries.

Java ® is a registered trademark of Oracle and/or its affiliates.

Red Hat OpenShift is a trademark of Red Hat, Inc., registered in the United States and other
countries.

The Author of the Book


Francesco Marchioni has been working for Red Hat since 2014. He joined the JBoss
community in early 2000, when the application server was a mere EJB container running release
2.x.

In 2008 he started an IT portal focused on JBoss products (http://www.mastertheboss.com),
which serves an average of 8,000 daily visits.

He has authored the following titles:

JBossAS 5 Development, Packt Publishing (December 2009)

JBoss AS 5 Performance Tuning, Packt Publishing (December 2010)

JBoss AS 7 Configuration, Deployment, and Administration, Packt Publishing (December 2011)

Infinispan Data Grid Platform, Packt Publishing (June 2012) co-authored with Manik Surtani
(Infinispan Project lead)

JBoss AS 7 Development, Packt Publishing (June 2013)

Enterprise Application Server CookBook, ItBuzzPress (September 2013)

WildFly Performance Tuning (December 2018)

Practical Java EE Development on WildFly (June 2014 – Updated on April 2018)

Hands-On Cloud-Native Applications with Java and Quarkus (December 2019)

In March 2018 Francesco published his first sci-fi novel, named Chronicles from a Simulated World,
which discusses the Simulation Hypothesis in an earthy and rational style.

The reviewers
Jaikiran Pai works at Red Hat and is part of the JBoss AS and EJB3 development team. In his role as
a software developer, Jaikiran has been mainly involved in Java language and Java EE technologies.
In 2004 he started working at a software company in Pune, India, where he developed an
interest in JBoss Application Server, and he has been active in the JBoss community ever since.
Subsequently, he joined Red Hat to be part of the JBoss EJB3 team.

What this book covers


Chapter 1, Installing WildFly covers the installation of the server platform and introduces the
reader to the most significant changes from the earliest release of the application server up to the
latest one.

Chapter 2, Basic server configuration discusses the core configuration of the application server
both in standalone mode and in domain mode, including detailed steps to set up a suggested domain
configuration topology.

Chapter 3, Server Management with the Web console covers the Web based administration
interface that can be used to manage the application server.

Chapter 4, Server Management with the CLI introduces the reader to the Command Line
Interface which is the recommended management tool.

Chapter 5, Advanced CLI features covers some advanced tasks that can be performed with the CLI,
such as batch scripts, suspending server execution, executing commands in offline mode and more.

Chapter 6, Deploying applications encompasses in detail all available options to deploy Java
Enterprise applications on the application server.

Chapter 7, Database connectivity, is about configuring connections to relational databases by
installing JDBC Drivers and Datasources.

Chapter 8, Configuring Undertow discusses the new fast and furious Web server implementation
named Undertow.

Chapter 9, Configuring the Enterprise subsystems covers the core subsystems which are the
backbone of Enterprise applications (ejb, ee, jaxrs, singleton, naming, batch-jberet, mail).

Chapter 10, Configuring Logging covers the configuration of the log subsystem, including all available
log handlers, and best practices to tailor logging to your own requirements.

Chapter 11, JMS Configuration is about the nuts and bolts of WildFly’s JMS provider, which is now
the ActiveMQ Artemis messaging system.

Chapter 12, Application Server classloading is a deep dive into the application server’s modular
kernel and how to configure it to load libraries needed by your applications. The chapter also
covers the Galleon tool and how it can be used to provision custom versions of the application
server.

Chapter 13, Clustering covers the application server clustering capabilities that serve as an
essential component to provide scalability and high availability to your applications.

Chapter 14, Load Balancing Web Applications discusses the other key concern of clustering, that
is the ability to make several servers participate in the same service and do the same work.

Chapter 15, Securing WildFly with Elytron covers the new Elytron Security subsystem

Chapter 16, Legacy Security covers the foundation of the application server Security framework
using the Legacy Security Framework

Chapter 17, RBAC and other Constraints covers aspects of the application server security, such as
Role Based Access Control, which are not specific to the security framework adopted
(Elytron/legacy).

Chapter 18, Taking WildFly in the cloud shows how to deploy the application server in the cloud,
including some basic container tasks and advanced tactics.

Chapter 19, Configuring MicroProfile capabilities introduces the new MicroProfile extensions,
which are an essential feature if you are developing portable services to be executed in container
environments.

Who this book is for

This book is especially suited for Java system administrators who are going to manage the new
release of the application server. Developers and application testers will be more productive as
well after reading this book. Prior knowledge of the earlier versions of the application server
is not required, although it could make it easier to understand some core concepts contained in
this book.

How to Contact Us

Please address comments and questions concerning this book to the publisher:
info@itbuzzpress.com. We have created a web page for this book, where we list errata, examples,
and any other information. You can access this page at:
http://www.itbuzzpress.com/news/wildflyadmin-errata.html

For more information about our books, and future projects see our website at:
http://www.itbuzzpress.com

Piracy

The uploading/downloading of copyrighted material without the express written consent of the
content copyright holder is strictly forbidden. Piracy is an illegal act that we urge you to
reconsider. Besides this, piracy is not a victimless crime! It is financially damaging and personally
hurtful to company employees and their families. Legitimate users suffer as well. We appreciate
your help in protecting the valuable content of this book.

Book Version

This is the version 1.8 of the book – Updated 15 June 2020

Conventions used in this book

This book contains lots of script files and commands to be executed on your machine. Much effort
has been put into making the code as readable as possible.

The following script snippet (in a blizzard blue) identifies a command to be executed on your
operating system’s shell:

$ ./jboss-cli.sh

As you can see from the prompt, we have assumed that you are executing on a Linux/Unix
machine. At the beginning of the book, we have also provided the equivalent Windows syntax of
some core commands:

C:\Users\jboss\wildfly-20.0.0.Final\bin jboss-cli.bat

To avoid being repetitive, we have however used the Linux shell syntax for the rest of the book.

Within the book, you will find also some gray-filled block of code like the following one:

[disconnected /] patch apply /tmp/wildfly-9.0.1.Final.patch
{
    "outcome" : "success",
    "result" : {}
}

This piece of code identifies a command to be executed within the application server’s Command
Line Interface (Using the CLI). Therefore, executing this command in the operating system’s shell
will obviously return an error.

This book is lovingly dedicated to all the people who helped me to find my verse in the
powerful play that is life.

1. Chapter 1: Getting started with WildFly
WildFly is a Java middleware product, also known as an application server. The term "application
server" was coined in relation to Java Enterprise applications; you can think of it as a piece
of Java software where your application can be provisioned, using the services provided by the
application server. Within this book you will learn how to configure these services, how to govern
them with authorization/authentication policies and how to extend the capabilities of the
application server.

Our journey through the application server will begin with the initial setup of your environment
and some basic administrative tasks. More in detail this chapter will cover the following topics:

• A brief introduction to the changes and enhancements introduced in WildFly

• How to install and verify the installation of the application server

• How to create a management user which will be in charge of server administration

• Installing the application server as a service in a Windows or Linux environment

1.1. What is new in WildFly?


This chapter contains all the changes which have been incorporated in WildFly up to the latest
release. It lists the enhancements provided in each server release so that you can easily
'diff' the changes from one release to another.

1.1.1. Changes introduced in WildFly 8

The first release of the WildFly application server introduced several important changes from the
former JBoss AS 7 platform. Changes were equally split between the development area and the
administration of the server. Here is a breakdown of the most significant ones:

• Java EE 7 API support: The Java Enterprise API v. 7 has been fully integrated into the
application server. Some of the major enhancements include:

• Java API for JSON Processing 1.0 (JSON-P): This API elevates the capabilities of JSON-based
applications by defining a new API to parse, generate, transform and query JSON documents.
Therefore, you will be able to build a JSON object model (just like you did with DOM for XML-
based applications) and consume documents in a streaming fashion (as you did with XML using
StAX).

• Batch Application API 1.0: this API has been designed to standardize batch processing for Java
applications. You can think of it as a replacement for your older, bulky, long-running procedures
that were managed by shell scripting or dated languages such as COBOL. The new Batch API
provides a rich programming model oriented to batch scripting, which allows defining, partitioning
and forking the execution of jobs.

• Concurrency Utilities for Java EE 1.0: this API is an extension to the Java SE Concurrency
Utilities (JSR-166) which aims to provide a simple and standard API for using concurrency from
Java Enterprise components while preserving the container integrity. This API can be used along with
asynchronous processing APIs in Servlets or for creating custom executors in advanced use cases.

• Other API enhancements: besides the additions mentioned so far, there are further
enhancements in existing areas such as JAX-RS 2.0, which now includes a Client API for async
processing, a matching server-side asynchronous HTTP response, and the addition of Filters and
Interceptors for proxying REST communications. Another area of improvement is the JMS 2.0
API, which now delivers a JMSContext resource as a wrapper for the JMS Connection, Session and
Message Producer objects, plus several enhancements such as simplified ConnectionFactory
injection (which finally has a platform default) or the inclusion of delayed delivery and async
send. Other minor improvements are spread across the entire API (e.g. EJB 3.2, Servlet 3.1, EL
3.0, CDI 1.2, etc.). If you want to learn more details, please consult the official Java EE 7
tutorial at: http://docs.oracle.com/javaee/7/tutorial/doc/

• Role Based Access Control: before WildFly 8, administrative users were not associated with a
particular role; in other words, once you created a Management user, you were entitled to
perform any change to the server configuration, like a classic super user. Now you can associate
each Management user with one role and even configure constraints, which allow you to tweak
the behavior of roles.

• New Web Server: WildFly has switched to a different Web server implementation named
Undertow, an embedded Web server providing both blocking and non-blocking APIs
based on NIO. Besides the API enhancements, the Undertow Web server can provide better
flexibility thanks to its composition-based architecture, which allows you to build a Web server by
combining small single-purpose handlers.

• Richer Management Interfaces: WildFly includes a richer set of management commands
which have been added to the Command Line Interface, such as the ability to patch the module
baseline, thus avoiding costly server reinstallations in order to solve some issues. Also, the Web
Administration Console has been greatly improved, allowing full management of the application
server subsystems along with a comprehensive set of performance indicators.

• Simplified socket management: WildFly 8 reduced the number of ports by multiplexing
invocations over the HTTP channel; therefore, administrators and your security staff will spend
less time setting up firewall policies.
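As a taste of the role based access control described above, the following CLI commands switch the management access control provider to rbac and map a user to a role; the user name is purely illustrative, and the feature is covered in depth in the Security chapter:

```
/core-service=management/access=authorization:write-attribute(name=provider,value=rbac)
/core-service=management/access=authorization/role-mapping=Monitor/include=user-bob:add(name=bob,type=USER)
reload
```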

1.1.2. Changes introduced in WildFly 9

The release 9 of WildFly application server introduced several bug fixes and also some interesting
management enhancements in the platform. Here is the list of the most interesting ones:

• WebSocket 1.0: Before the advent of HTML5, the traditional request-response model used in
HTTP meant that the client requested resources and the server provided responses. Therefore,
unless you continuously poll the server, there is no way to provide dynamic
changes to your Web pages. The WebSocket protocol addresses these limitations by providing a
full-duplex communication channel between the client and the server without any latency
problem. Combined with other client technologies, such as JavaScript and HTML5, WebSocket
enables web applications to deliver a richer user experience.

• Front-end load balancer support: using an external Web server (like Apache) as load balancer
is now optional: you can configure WildFly’s web server (Undertow) to balance requests to
a cluster of WildFly servers through the mod_cluster protocol.

• Improved Datasource configuration: the datasource subsystem reflects the changes in the
pool policy introduced by IronJacamar Project 1.2.4 which contains a reworked set of policies
and a connection tracer to detect leaks in the pool.

• Improved Web console: The Web administration console includes a new improved UI layout
and several additional capabilities such as Datasource templates, enhanced subsystem
configuration or improved log viewer.

• CLI Suspend mode: It is now possible to put the application server in suspend mode, to allow
the termination of current sessions before shutting down the server. The suspend mode is
reversible, so that it can also return the server to running mode.

• Offline management: The Command Line Interface allows the management of resources
without a running server.

• HTTP/2 Support: Undertow includes support for the new HTTP/2 standard which reduces
latency by compressing headers and multiplexing many streams over the same TCP connection.
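To give a flavor of the new management features, the following CLI snippets sketch the suspend/resume commands, offline management through an embedded server, and enabling HTTP/2 on an HTTPS listener; the timeout value and listener names are illustrative:

```
[standalone@localhost:9990 /] :suspend(timeout=60)
[standalone@localhost:9990 /] :resume
[disconnected /] embed-server --server-config=standalone.xml
[standalone@embedded /] /subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=enable-http2,value=true)
```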

1.1.3. Changes introduced in WildFly 10

In this application server release, a major restructuring began that will continue in the 1x releases.
These changes are both related to some single subsystems (such as the messaging subsystem) but
also to the whole server infrastructure. Expect some further changes in the 11 release of the
application server which will include a re-shaped security subsystem.

Here are the most significant changes in the release 10 of WildFly:

• New Messaging subsystem: the new messaging provider embedded in WildFly 10 is Apache
ActiveMQ Artemis, which is derived from the HornetQ project, recently donated to the Apache
foundation. The new messaging provider retains compatibility with the former HornetQ while
providing several new features.

• Capabilities: Beginning with WildFly 10, the application server’s management layer includes a
mechanism for allowing different parts of the system to integrate with each other in a loosely
coupled manner. This happens thanks to a new component called "Capability". Typically a
Capability works by registering a service with WildFly’s ServiceContainer, and dependent
capabilities then depend on that service. The WildFly Core management layer orchestrates the
registration of those services and service dependencies by providing a means to discover service
names. Discussing Capabilities in depth is beyond the scope of this book.

• Improved ejb subsystem: the EJB pooling configuration has been revamped so that it now
includes multiple self-tuning policies applicable to Stateless EJBs and Message Driven Beans.
An advanced, group-based delivery policy can now be used by Message Driven Beans.

• Migration from legacy subsystems: an automatic CLI-based migration procedure has been
added to help users migrate the former legacy systems (jbossweb, messaging, jacorb) into
WildFly 10.

• Updated Hibernate API: the most relevant change for developers is the introduction of
Hibernate 5 API that includes several additional improvements spanning from performance
optimization (mainly due to bytecode enhancement), the use of generics in Hibernate Native,
and an improved SPI for second-level cache providers. This topic, being focused on the
development of applications, is not in the scope of this book.
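The migration from legacy subsystems is driven by a migrate operation exposed on each legacy subsystem. A minimal sketch, assuming the server was started in admin-only mode and the legacy extensions are still present in the configuration:

```
$ ./standalone.sh --admin-only
[standalone@localhost:9990 /] /subsystem=messaging:migrate
[standalone@localhost:9990 /] /subsystem=web:migrate
```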

In terms of JDK, WildFly has discontinued support for Java 7. Hence, you need a
Java 8 or newer environment on your machine. If you are porting former
startup scripts, you have to replace the deprecated JVM parameter
-XX:MaxPermSize with the new -XX:MaxMetaspaceSize.
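For example, a legacy JVM settings line in standalone.conf would change along these lines (heap and metaspace sizes are illustrative):

```shell
# Before (Java 7 era)
JAVA_OPTS="-Xms64m -Xmx512m -XX:MaxPermSize=256m"
# After (Java 8 or newer)
JAVA_OPTS="-Xms64m -Xmx512m -XX:MaxMetaspaceSize=256m"
```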

1.1.4. Changes introduced in WildFly 11

The release 11 of WildFly application server introduced several important changes spanning from
the new security infrastructure to simplified client naming lookup. This version includes also
several management enhancements to ease server administration. More in detail, this is a break
down of the latest significant news:

• New Security Infrastructure: a long-awaited change is the new Security Provider named
Elytron, which will be able to unify the whole security infrastructure in a single subsystem.
Elytron will also bring advanced capabilities such as privilege propagation across multiple
service invocations, pre-request TLS verification, identity switching, and rich security policies.
Finally, it also improves the overall extensibility of the system, with tight integration with
other SSO / IDP frameworks such as Keycloak.

• SSL enhancements: You can switch from the JVM’s internal implementation of SSL to your own
OpenSSL library available on your system. This library can, in turn, be used (for versions
greater than 1.0.2) to support HTTP/2.

• EJB made easier: several enhancements have been included to simplify the discovery of EJB
resources, thanks to a new naming library. Also, the EJB naming proxies now allow more
advanced strategies such as dynamic discovery or point-to-point communication from proxies
to EJBs.

• RMI over HTTP: In order to allow load balancing of EJB requests through a standard HTTP
request (which can be leveraged by any load balancer), you can now opt for pure HTTP
communication for EJBs.

• New Load balancing Profile: If you are planning to use WildFly as a front-end load balancer for
a set of WildFly backends, you can use an out of the box configuration named
standalone-load-balancer.xml

• Graceful shutdown of the Server: the application server is now able to start in suspend mode;
plus, a set of improvements is available to handle distributed transactions when a graceful
shutdown has been issued.

• Management enhancements: a consistent number of enhancements has been added to the
Web console in many areas; plus, the CLI tab-completion shell is now able to complete
attributes when a capability for them is available.

• Remote managed deployments: you are now able to update remote managed deployments by
including content items such as HTML or JSP files without a full application redeployment
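For instance, to start WildFly with the out of the box load balancing profile mentioned above, you would use:

```
$ ./standalone.sh -c standalone-load-balancer.xml
```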

1.1.5. Changes introduced in WildFly 12

The 12th release of WildFly includes the following enhancements:

• Java EE 8 Profile: the application server now includes a Java EE 8 configuration which can be
activated at start-up

• New thread pool strategy: A new thread pooling strategy is available. This allows reducing the
number of threads active at any given time, which helps conserve system resources.

• Other Minor enhancements: MicroProfile REST Client 1.0 is now supported, Java 9
compatibility has been improved, and CLI scripts can now take advantage of loops with
variables
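At the time of WildFly 12, the Java EE 8 configuration was not the default; a sketch of the start-up activation, assuming the ee8.preview.mode property used by that release:

```
$ ./standalone.sh -Dee8.preview.mode=true
```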

1.1.6. Changes introduced in WildFly 13

The 13th release of WildFly includes provisioning enhancements for the application server and UI
upgrades along with some core libraries upgrades:

• Galleon project: WildFly can now be internally provisioned using the Galleon project, which
allows you to provision the desired installation at a specified location, install additional and/or
uninstall existing feature-packs, and export the currently provisioned specification to a file in
order to reproduce it on a different machine.

• New Web Console: a new version of the Web management console (HAL) is available, which
uses PatternFly as its technical stack instead of GWT. The new version of the Web console enhances
the existing features and adds support for many new subsystems and attributes.

• Other enhancements: Infinispan has been updated to version 9.2 and Hibernate to version 5.3.
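A minimal Galleon provisioning session might look like the following; the target directory is illustrative and the exact feature-pack location can vary between Galleon releases:

```
$ ./galleon.sh install wildfly:current --dir=my-wildfly-server
```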

1.1.7. Changes introduced in WildFly 14

The 14th release of WildFly includes several enhancements. The most important one is full Java EE
8 compatibility, so now all default configurations include the EE 8 APIs. Additionally, the
following enhancements have been included:

• Agroal Datasource: the application server can now be configured to use a JCA-less connection
pool with increased performance and low memory footprint.

• MicroProfile Capabilities: This version of the application server includes support for some
Eclipse MicroProfile capabilities, such as MicroProfile Config, which enhances the
application’s configuration capabilities, a MicroProfile for server health checking, and an API
for accessing an OpenTracing-compliant Tracer object within your JAX-RS application.

• Mod-Cluster Multiplicity: Mod-cluster has now been enhanced to support multiple web server
configurations by declaring and referencing multiple mod-cluster configurations within its
subsystem.
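For example, once the server is running, the new health check capability can be probed through the HTTP management interface; host and port assume a default local installation:

```
$ curl http://localhost:9990/health
```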

1.1.8. Changes introduced in WildFly 15

The 15th release of the application server contains some additions for monitoring the application
server and some global HTTPS and SSL enhancements. More in detail:

• New metrics subsystem: The application server’s development is happening with a view to
containers. In order to observe the application server (or a fraction of it) in a container
environment, it is crucial to gather metrics. The new 'metrics' subsystem allows you to collect
WildFly and application metrics and make them available to monitoring systems like Prometheus.

• SNI Support for HTTPS Listeners: Java 8 introduced server-side SNI support, a feature
that extends the SSL/TLS protocols to indicate what server name the client is attempting
to connect to during handshaking. Now we can configure Undertow with more than one virtual
server, and users are able to use a different server certificate for each virtual server.

• Support for registering a single JVM / server-wide default SSLContext: by registering a
global SSLContext, libraries used within the application server can make use of a managed
SSLContext instead of relying on one automatically created through the standard system
properties.

• JASPI Integration with Elytron: The Elytron subsystem now contains the configuration
required to support JASPI, along with general JASPI integration.
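The metrics collected by the new subsystem are exposed in Prometheus format through the HTTP management interface; host and port assume a default local installation:

```
$ curl http://localhost:9990/metrics
```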

1.1.9. Changes introduced in WildFly 16

The 16th release of the application server continues its path towards a more agile application
server, with further improvements in the Galleon tool to provide customized versions of the
application server with a minimal memory footprint. Some other notable items:

• Messaging subsystem improvements: You can configure MDBs to join multiple delivery
groups, enabling delivery when all groups are active. You can now reference remote Artemis
servers through Java EE 8 resource definitions. Finally, you can impose a limit on the amount of
disk space used by the Artemis journal.

• Clustering Improvements: The load balancer can now be configured to use a ramp up period
before allowing the maximum traffic. Other minor improvements in the HA Singleton
deployments have been added as well.

• Other Improvements: The command line has been enriched with a command to list the modules
linked by a deployed application. It is also possible to query for long-running management
operations and stop potentially locked operations. Some minor improvements have also been
added to Elytron, such as a 'silent mode' for the HTTP Basic authentication mechanism and a
utility script to migrate legacy properties files to Elytron.
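The module-listing command mentioned above is exposed as an operation on deployment resources; a sketch, with an illustrative deployment name:

```
[standalone@localhost:9990 /] /deployment=myapp.war:list-modules
```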

1.1.10. Changes introduced in WildFly 17

The 17th release of the application server contains several improvements, especially with regards
to clustering. Some notable items:

• HTTP Clustering improvements: The distributable-web subsystem has been added to the ha
configurations to manage a set of session management profiles that encapsulate the
configuration of a distributable session manager. Also, when enabling session sharing for WARs
within an EAR, it is possible to indicate whether a distributable or non-distributable session
manager should be used.

• HA singleton service notification: Applications can register listeners to receive notifications
when the HA singleton service starts and stops.

• Messaging enhancements: It is possible for JMS clients to address an HTTP load balancer, as an
alternative to communicating directly with the servers behind the load balancer. Also, you can
now configure a timeout for the embedded messaging broker when opening journal files.
Finally, the configuration of connections to remote AMQ brokers has been enhanced.

• Other Improvements: Web access logs in JSON format have been enhanced to use a formatted
structure. The encoding of hashes, passwords and salts has been improved for Elytron JDBC
security realms. A new option has been added to enable property resolution in the CLI. Finally,
the title of the HAL management console can now be customized by the user.

Besides this, it is worth mentioning that WildFly 17.0.1 was certified as a Jakarta EE 8 compatible
implementation.

1.1.11. Changes introduced in WildFly 18

The 18th release of the application server continues its path towards alignment with the Jakarta EE
projects and MicroProfile standards. A large number of updates have been added to the Elytron
subsystem and the core Enterprise subsystems, as well as to the Clustering area. Some highlights
include:

• Elytron improvements: Several improvements have been added: support for SSL certificate
revocation using OCSP, audit logging enhancements, and improved mapping of X509 certificates
to an underlying identity. Also, new CLI commands have been added to support obtaining
certificates from the Let’s Encrypt certificate authority. Elytron now supports masking
passwords in the Elytron client’s XML configuration. Finally, the certificate authority used by a
certificate-authority-account resource is now configurable.

• EJB improvements: It is now possible to configure client-side and server-side interceptors at
the subsystem level. Also, the configuration and tuning of the thread pools used in the EJB3
subsystem have been improved.

• RESTEasy improvements: An HTTP proxy can now be set up just by using properties on the
Client Builder. Also, RESTEasy now gives users the ability to use optional typed parameters,
eliminating all null checks.

• Messaging improvements: Additional attributes are available to check whether any backup
server is synchronized with the live server and to indicate the Artemis journal type being used.
New statistics and metrics are also available for JMS bridges and Resource Adapter thread
pools.

• Clustering improvements: Ranked routing has been added to clustered web applications,
allowing the JSESSIONID to be annotated with multiple routes, ranked in order of preference.

1.1.12. Changes introduced in WildFly 19

The 19th release of the application server is mostly focused on the new release of the MicroProfile
specification (3.2), which includes support for new APIs. Some interesting enhancements have also
been included in the Elytron subsystem, in the deployer, and in Managed Executors / Thread pool
statistics. In more detail:

• MicroProfile 3.2 improvements: WildFly now includes support for the Fault Tolerance API
(2.0), JWT Authentication (1.1) and OpenAPI (1.1). In addition, the Health Check API has been
updated to version 2.1, and the Metrics API to version 2.2.

• Deployment enhancement: You now have the ability to apply certain JBoss module libraries to
all deployments running in a server.

• Executors improved statistics: The Managed Executors / Thread pools in the EE subsystem are
now capable of emitting runtime statistics.

• Elytron and Web services: It is now possible to integrate the Elytron security layer with
RESTEasy and SOAP Web services.

• RESTEasy CLI configuration: This enhancement allows us to change RESTEasy settings via CLI.

1.1.13. Changes introduced in WildFly 20

The 20th release of the application server updates to the latest MicroProfile specification (3.3).
Some interesting enhancements have been included in the Elytron subsystem, in the EJB
subsystem and in the configuration of MicroProfile applications.

• Elytron improvements: Adds the ability to automatically add a credential to a previously
defined credential store by specifying both the store and clear-text attributes for a credential-
reference. In addition, the elytron subsystem now allows regex-based mapping of security
roles. Finally, new attributes (such as the remote client IP address) can be used to make
authorization decisions.

• EJB enhancements: A global Stateful Bean timeout has been added to simplify the cache
management of multiple EJBs in a cluster. Also, it is now possible to refresh EJB timers
programmatically when using a database-data-store for persistence. Finally, a large number of
statistics about EJB deployments are now exposed through the /deployment path.

• MicroProfile tooling enhancement: Besides the update to the new version (3.3), a CLI script
has been added to the server’s docs/examples directory to allow users to migrate to the
standalone-microprofile.xml configuration.

1.2. Installing WildFly


The prerequisite for the application server installation is that you have installed a JDK on your
machine.

 The minimal JDK support required by the current version of WildFly is JDK 8.

The JDK can be downloaded from the Oracle site at
http://www.oracle.com/technetwork/java/javase/downloads/index.html, or you can use the open
source implementation, OpenJDK: http://openjdk.java.net/

Once the JDK is installed, you have to set the JAVA_HOME environment variable accordingly.

Windows users: Right-click the My Computer icon on your desktop and select Properties. On the
Advanced tab, click the Environment Variables button. Under System Variables, click New. Enter
JAVA_HOME as the variable name and the Java install path as the value. Click OK and then Apply
Changes.

Linux users: Add the following setting to your .bashrc (or equivalent) script, substituting the
actual JDK installation path:

export JAVA_HOME=/usr/java/jdk-9

Done with the JDK installation, let’s move on to the application server. WildFly can be downloaded
from http://www.wildfly.org by following the Downloads link on the home page. Once downloaded,
extract the archive to a folder and you are done with the installation.

$ unzip wildfly-20.0.0.Final.zip

 Throughout this book we will refer to JBOSS_HOME as the location where you
have installed WildFly. As you will see later in this chapter, it is not, however,
mandatory to set this variable on your operating system to run WildFly.
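Putting the pieces together, a minimal environment setup could look like the following sketch. The paths are assumptions for illustration; adjust them to your actual JDK and WildFly locations:

```shell
#!/bin/sh
# Environment setup sketch: fall back to example paths only when the
# variables are not already set (the paths below are placeholders).
JAVA_HOME="${JAVA_HOME:-/usr/java/jdk-11}"
JBOSS_HOME="${JBOSS_HOME:-/opt/wildfly-20.0.0.Final}"
export JAVA_HOME JBOSS_HOME
echo "JAVA_HOME=$JAVA_HOME"
echo "JBOSS_HOME=$JBOSS_HOME"
```

You would typically place the two export lines in your .bashrc (or equivalent) so they survive new shell sessions.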

1.3. An in-depth look into the application server file system

After unzipping the application server, the following file structure will be available on your file
system:

As you can see, the WildFly file system is divided into two main parts: the first one, which is
pertinent to a standalone server mode and the other that is dedicated to domain server mode.
Common to both server modes is the modules directory, which is the heart of the application server.

Following here are some details about the application server folders:

• appclient: contains configuration files, deployment content, and writable areas used by the
application client container run from this installation.

• bin: contains start-up scripts, start-up configuration files and various command line utilities
such as vault.sh and add-user.sh. Inside the client subfolder, you can find a client jar for use by
non-Maven based clients. The other folders (service, init.d) are used to install WildFly as a
service on Windows and Linux machines respectively.

• docs/contrib/scripts: In this folder you can find user contributed scripts & services for running
WildFly as a service on various operating systems.

• docs/examples: contains the enable-elytron.cli script and, in the config folder, some sample
standalone configurations (such as standalone-minimalistic.xml).

• docs/licenses: contains the licenses for the libraries bundled in the application server.

• docs/licenses-keycloak: contains the licenses for keycloak’s adapter that can be plugged into
the application server.

• docs/schema: contains the XML schema definition files

• modules: contains all the modules installed on the application server.

• standalone: contains configuration files, deployment content, and writable areas (such as logs)
used by the single standalone server run from this installation.

• domain: contains configuration files, deployment content, and writable areas (such as logs)
used by the servers which are part of a domain.

• welcome-content: contains content related to the default (ROOT) web application.
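To experiment with one of the sample configurations shipped in docs/examples, you can copy it next to the default ones and boot with it via the -c switch. The helper function below is a sketch (the function name is hypothetical); it assumes JBOSS_HOME points to a local WildFly installation:

```shell
# Sketch: copy a sample configuration into standalone/configuration and
# boot the server with it. Assumes JBOSS_HOME is set to a local install.
start_with_config() {
  cfg="$1"
  cp "$JBOSS_HOME/docs/examples/configs/$cfg" \
     "$JBOSS_HOME/standalone/configuration/"
  "$JBOSS_HOME/bin/standalone.sh" -c "$cfg"
}
# Example: start_with_config standalone-minimalistic.xml
```

The -c switch tells the startup script which configuration file (relative to standalone/configuration) to use; we will return to it in the next chapter.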

1.4. Starting WildFly


The application server ships with two server modes: standalone and domain mode. The difference
between the two modes is not about the capabilities available, but is related to the management of
the application server: in particular, the domain mode is used when you run several instances of
WildFly and you want a single point from which you can manage the servers and their
configuration.

In order to start WildFly using the default configuration in "standalone" mode, change the directory
to $JBOSS_HOME/bin and issue:

$ ./standalone.sh

To start the application server using the default configuration in "domain" mode, change directory
to $JBOSS_HOME/bin and execute:

$ ./domain.sh

When starting in standalone mode, you should find in your console something like this, at the end
of start up process:

19:14:51,148 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http
management interface listening on http://127.0.0.1:9990/management
19:14:51,148 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console
listening on http://127.0.0.1:9990
19:14:51,149 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full
20.0.0.Final (WildFly Core 12.0.1.Final) started in 5615ms - Started 314 of 535
services (321 services are lazy, passive or on-demand)

You can verify that the server is reachable from the network by simply pointing your browser to
the application server’s welcome page, which is reachable by default at the following address:
http://localhost:8080
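If you prefer checking from a script rather than a browser, a small helper like the one below works too. This is a sketch (the function name is my own) and assumes curl is installed:

```shell
# Return the HTTP status code for a URL; prints 000 when the server
# is unreachable. Requires curl.
http_code() {
  curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$1"
}
# Example: http_code http://localhost:8080   (200 when WildFly is up)
```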

1.4.1. Setting the JBOSS_HOME

Although the application server has been renamed to WildFly, its home directory can still be
set through the JBOSS_HOME environment variable. Setting the JBOSS_HOME variable is not a
mandatory step. By defining it in your bootstrap file (or in your user’s profile), you specify the
folder where the WildFly distribution is located. The impact on your administration is that you can
use the standalone/domain startup scripts from a different location than the server distribution.
The reverse side of the coin is that this can confuse your server administrator, especially if you
have this variable buried in one of the many Linux configuration files.

Whether or not you decide to set JBOSS_HOME, here is how Linux users could set the variable to
point to a WildFly installation:

$ export JBOSS_HOME=/opt/wildfly-20.0.0.Final

On the other hand, Windows users can set the JBOSS_HOME as in this example:

set "JBOSS_HOME=C:\jboss\wildfly-20.0.0.Final"

 If you want to set the variable permanently on Windows, you have to go through
your System Settings (in the Control Panel), click the Advanced System Settings
link and add the Environment Variable from there.

1.5. Your first task: Create an Administrator


If you want to manage the application server configuration using its management instruments, you
need to create a management user.

In order to create a new user, just execute add-user.sh (or add-user.bat), which is located in the bin
folder of the application server’s home. Here’s a transcript of the creation of a management user:

$ ./add-user.sh

What type of user do you wish to add?


 a) Management User (mgmt-users.properties)
 b) Application User (application-users.properties)
(a): a

Enter the details of the new user to add.


Using realm 'ManagementRealm' as discovered from the existing property files.
Username : wildflyadmin
Password requirements are listed below. To modify these restrictions edit the add-
user.properties configuration file.
. . . .
Password :
Re-enter Password :
What groups do you want this user to belong to? (Please enter a comma separated list,
or leave blank for none)[ ]:

About to add user 'wildflyadmin' for realm 'ManagementRealm'


Is this correct yes/no? yes
Added user 'wildflyadmin' to file
'/home/jboss/wildfly-20.0.0.Final/standalone/configuration/mgmt-users.properties'
. . . .
Is this new user going to be used for one AS process to connect to another AS process?
e.g. for a slave host controller connecting to the master or for a Remoting connection
for server to server EJB calls.
yes/no? yes
To represent the user add the following to the server-identities definition <secret
value="RXJpY3Nzb24xIQ==" />

In the above example, we have created a management user named "wildflyadmin", which belongs
to the ManagementRealm and is not part of any group of users. Also, be sure to answer the last
question with "yes" or "y" to indicate that the user will be used to connect to the domain controller
from a host controller. The generated secret value is the Base64-encoded password of the newly
created user; we will use it when setting up a Domain of application servers.
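Since the secret value is nothing more than the plain password encoded in Base64, you can verify it from any Linux shell:

```shell
# Decode the <secret> value printed by add-user.sh back to the password
decoded=$(echo 'RXJpY3Nzb24xIQ==' | base64 -d)
echo "$decoded"   # prints: Ericsson1!
```

This also means the secret is obfuscation, not encryption: anyone who can read the configuration file can recover the password.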

In WildFly there is a strict control over your passwords. If you want to loosen or strengthen the
password checks, you can edit the add-user.properties file, which is contained in the bin folder of
your server distribution.

1.5.1. Creating a user in non-interactive mode

You can also create users in non-interactive mode. In the following example, we are adding a
management user (-m flag) by issuing:

$ ./add-user.sh -m -u administrator1 -p password1!

If you need to add an application user, you need to include the -a flag as well, as in the following
example, where we also set a group to which the user belongs:

$ ./add-user.sh -a -u applicationuser1 -p password1! -g guest

Bear in mind that creating users in this way exposes your user credentials in the shell history and
possibly in the process table, if you are using a Linux/Unix machine.
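One way to mitigate the shell-history exposure is to prompt for the password instead of typing it on the command line. The helper below is a sketch (the function name is hypothetical; the add-user.sh flags are the same ones documented above):

```shell
# Sketch: add a management user without the password in shell history.
# The password still reaches the process table briefly, but is never typed
# on the command line. Assumes JBOSS_HOME points to a WildFly install.
add_mgmt_user() {
  printf 'Password: '
  stty -echo; read -r WFLY_PASS; stty echo; echo
  "$JBOSS_HOME/bin/add-user.sh" -m -u "$1" -p "$WFLY_PASS"
  unset WFLY_PASS
}
# Example: add_mgmt_user administrator1
```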

1.6. Stopping WildFly


The simplest way to stop the application server is by sending an interrupt signal with Ctrl+C to the
server console. Linux/Unix users might as well have a look at the process table with the "ps"
command and issue a "kill" to stop the application server.

On the other hand, the recommended approach is to use the Command Line Interface (CLI) to issue
an immediate shutdown command. The CLI can be started from the $JBOSS_HOME/bin folder of
your installation:

$ ./jboss-cli.sh

Windows users will start the CLI using the equivalent batch file:

jboss-cli.bat

Once there, issue the connect command:

[disconnected /] connect

Connected to localhost:9990

Now issue the shutdown command that will stop the application server:

[localhost:9990 /] shutdown

You can optionally include the --restart=true parameter to trigger a server restart:

[standalone@localhost:9990/] shutdown --restart=true

Additionally, here is a first CLI trick: executing a command in non-interactive mode. This is how to
shut down the application server with a single command line:

$ ./jboss-cli.sh -c --command=shutdown

1.6.1. Stopping WildFly running on a remote host

If you are connecting to a remote WildFly instance, then a password will be requested when you
issue the CLI command:

[disconnected /] connect 192.168.10.1

Username: wildflyadmin
Password:

Connected to 192.168.10.1:9990

Once connected, we will issue the shutdown command just like we did from the local host:

[192.168.10.1:9990 /] shutdown

1.7. Handling start-up issues


A quite common cause of start-up issues is an existing service that is binding the server ports. This
should be clearly evident from the server logs when you see a message like this:

15:15:14,738 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-4) MSC000001:
Failed to start service org.wildfly.undertow.listener.default:
org.jboss.msc.service.StartException in service org.wildfly.undertow.listener.default:
Address already in use /127.0.0.1:8080
  at org.wildfly.extension.undertow.ListenerService.start(ListenerService.java:179)
  at org.jboss.msc.service.ServiceControllerImpl$StartTask.startService(ServiceControllerImpl.java:1714)
  at org.jboss.msc.service.ServiceControllerImpl$StartTask.execute(ServiceControllerImpl.java:1693)
  at org.jboss.msc.service.ServiceControllerImpl$ControllerTask.run(ServiceControllerImpl.java:1540)
  at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35)
  at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:1985)
  at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1487)
  at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1378)
  at java.lang.Thread.run(Thread.java:745)

In this case, you should be able to find the service that is locking the ports used by the application
server, which by default are 8080 and 9990. Most Unix systems have the built-in fuser command,
which returns the process that is using a port:

$ fuser -v -n tcp 8080

                     USER       PID ACCESS COMMAND
8080/tcp:            francesco 7148 F....  java

On Windows, you can use the netstat command to get network information about the running
processes.

C:\>netstat -ao
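On Linux/Unix systems where fuser is not available, lsof (where installed) provides the same information. The helper below is a sketch (the function name is my own) that tries fuser first and falls back to lsof:

```shell
# Sketch: show the process listening on a given TCP port.
# Tries fuser first, then falls back to lsof (one of the two must be installed).
port_owner() {
  fuser -v -n tcp "$1" 2>/dev/null || lsof -iTCP:"$1" -sTCP:LISTEN
}
# Example: port_owner 8080
```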

Another common start-up issue may happen if you have manually changed the configuration file
and corrupted its XML schema. Manually changing the XML configuration file is discouraged;
however, you can find the cause (and the line number) by searching through the startup logs:

OPVDX001: Validation error in standalone.xml -------------------------------
| 34: <management>
| 35: <security-realms>xx
| 36: <security-realm name="ManagementRealm">
| ^^^^ Received non-all-whitespace CHARACTERS or CDATA event in nextTag()
|
| 37: <authentication>
| 38: <local default-user="$local" skip-group-loading="true"/>
| 39: <properties path="mgmt-users.properties" relative-
to="jboss.server.config.dir"/>
|
| The primary underlying error message was:
| > Received non-all-whitespace CHARACTERS or CDATA event in nextTag().
| > at [row,col {unknown-source}]: [36,12]

1.8. Installing WildFly as a Service

WildFly can also be installed as a service and started at boot time. In order to do that, you need to
use some script files, which are contained within the server distribution. The next two sections
(one for Linux users and one for Windows users) discuss this:

1.8.1. Installing WildFly as a Service on Linux

In order to start an application as a service on Linux you can use two main approaches:

• Use the init.d daemon

• Use the systemd daemon

Before seeing each approach in detail, we need to complete some preliminary steps.

First off, we will unzip the WildFly distribution into the default path used by the service scripts:

$ sudo unzip wildfly-20.0.0.Final.zip -d /opt

Next, let’s add the wildfly user and group, which will own the installation folder of the application
server:

$ sudo groupadd -r wildfly


$ sudo useradd -r -g wildfly -d /opt/wildfly -s /sbin/nologin wildfly
$ sudo chown -R wildfly:wildfly /opt/wildfly-20.0.0.Final

We will then create a symbolic link pointing to the WildFly installation, so that we can easily
switch to newer releases by simply updating this link:

$ sudo ln -s /opt/wildfly-20.0.0.Final /opt/wildfly

Done with the initial steps. Now let’s complete the actual service installation.

1.8.1.1. Installing WildFly as a Service using init.d

The init daemon is the older approach which is however available in all Linux Distributions. The
scripts which are needed to install WildFly as a service are located under the
$JBOSS_HOME/docs/contrib/scripts/init.d folder. If you look into this folder, you will find the
following files:

• wildfly-init-redhat.sh: this file needs to be used for Red Hat Enterprise-like Linux distributions
(e.g. RHEL, Centos)

• wildfly-init-debian.sh: this file needs to be used for Debian-like Linux distributions (e.g.
Debian, Ubuntu)

• wildfly.conf: this file overrides the configuration used by the above init files

As a first step, copy the shell script required by your Linux distribution into the /etc/init.d folder.
For example, to install WildFly as a service on RHEL:

$ sudo cp wildfly-init-redhat.sh /etc/init.d/wildfly

Next, we will copy the wildfly.conf configuration file as well, to the location where the startup
script expects it:

$ sudo cp wildfly.conf /etc/default/wildfly

Within the wildfly.conf file, adjust the settings to fit your installation:

# Location of Java
JAVA_HOME=/usr/java/jdk9

# Location of WildFly
JBOSS_HOME=/opt/wildfly

# The username who should own the process
JBOSS_USER=wildfly

# The mode WildFly should start: standalone or domain
JBOSS_MODE=standalone

# Configuration for standalone mode
JBOSS_CONFIG=standalone.xml

You can now start WildFly service as follows:

$ sudo service wildfly start

And here’s how to stop WildFly as a service:

$ sudo service wildfly stop

Once you have verified that your service starts correctly, use the chkconfig command to manage
WildFly start-up at boot. The first command adds the wildfly shell script to the chkconfig list:

$ sudo chkconfig --add wildfly

The second one sets the run levels at which the service will be started:

$ sudo chkconfig --level 2345 wildfly on

 Depending on the version of your Linux installation, you might need to perform
some troubleshooting in case of errors. If you hit any failure at start, we
recommend using the journalctl utility to debug the issue:

$ journalctl -xe

 For example, to verify whether SELinux needs to be fine-tuned to allow the
service to start, it is worth temporarily disabling it to see if that solves the
issue:

$ sudo setenforce 0

1.8.1.2. Installing WildFly as a Service using systemd

Similar to init, systemd is the parent of all other processes, directly or indirectly, and is the first
process that starts at boot. Systemd allows a simpler boot process with minimal configuration
required to start the application server at boot. Assuming that you have completed the preliminary
steps discussed in Installing WildFly as a Service on Linux, let’s complete the specific steps to
configure systemd.

Create a folder named /etc/wildfly:

$ sudo mkdir /etc/wildfly

Now copy into /etc/wildfly the scripts available in the folder
$JBOSS_HOME/docs/contrib/scripts/systemd:

$ sudo cp wildfly.conf /etc/wildfly/
$ sudo cp wildfly.service /etc/systemd/system/
$ sudo cp launch.sh /opt/wildfly/bin/
$ sudo chmod +x /opt/wildfly/bin/launch.sh

We are done. In order to start WildFly as a service issue:

$ sudo systemctl start wildfly.service

Conversely, to stop WildFly you will need:

$ sudo systemctl stop wildfly.service

In order to enable the service at every boot, you need to issue:

$ sudo systemctl enable wildfly.service
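After enabling the service, you can verify its state and inspect its log output with the standard systemd tooling. The wrapper below is only a convenience sketch (the function name is my own):

```shell
# Sketch: report the wildfly service state and show its recent journal
# entries (both commands are standard systemd tooling).
wildfly_status() {
  sudo systemctl status wildfly.service
  sudo journalctl -u wildfly.service --no-pager -n 50
}
# Example: wildfly_status
```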

1.8.2. Installing WildFly as a Service on Windows

Installing WildFly as a service on Windows is pretty simple, as it is not necessary to install any
third-party native library: WildFly already ships with everything you need. Move to the
$JBOSS_HOME/docs/contrib/scripts/service folder.

If you want to install WildFly as a service in standalone mode, simply issue:

service install

Now you can use the Windows Services panel to manage the service start/stop.

As an alternative, you can use the service command to perform basic service management
(start/stop/restart). Example:

service restart

Installing WildFly in domain mode requires specifying some additional settings, such as the
domain controller (default 127.0.0.1 on port 9990) and the host name to start (default "master"):

service install /controller localhost:9990 /host master

2. Chapter 2: Core Server configuration
This chapter encompasses the core topic for an administrator: the application server configuration.
The server configuration is centralized in a single XML file with a variable set of services
configured in it depending on your server mode. Therefore, in order to grasp the basics of server
configuration we will start learning the following topics:

• At first, we will introduce the two available server modes: standalone mode and domain mode.

• Next, we will have an overview of the server configuration file and its main components.

• Then, our focus will move to the standalone server configuration file.

• Finally, we will enter into the details of domain configuration.

2.1. The two available server modes


As we have already mentioned, the application server can run in two different server modes; if
you are arriving from an AS7 background, you will find the two concepts unchanged, whilst for
older JBoss AS users it’s a brand new thing to learn. The difference between the two server
modes can be summarized in the following bullets:

• In "standalone" mode each application server instance is an independent process (similar to
earlier JBoss AS versions; e.g., 4, 5, or 6). The standalone configuration files can be located under
the $JBOSS_HOME/standalone/configuration folder of the application server.

• In "domain" mode, you can run multiple application servers and manage them from a central
point. A domain can span multiple physical (or virtual) machines. Each machine can host
several instances of the application server, which are under the control of a Host Controller
process. The configuration files, in domain mode, can be located under the
$JBOSS_HOME/domain/configuration folder.

In the following sections, we will learn how the application server configuration file is structured
and how it can be customized. In order to do that, we will be using the following management
instruments:

• The Administration Console: this is an intuitive Web application, which is part of the WildFly
distribution and allows managing the core components of your server configuration, deploying
new applications and querying for runtime statistics as well. This management instrument is
suited for beginner to intermediate users who want to get quickly into the heart of the
application server. If you are arriving from an older JBoss server distribution, this is the core
and only Web administrative channel, since the older jmx-console is no longer part of the
application server distribution.

• The Command Line Interface: this is a terminal-based instrument that allows a more
advanced management of the application server, giving access to a wider range of options and
properties, as well as inspecting all the available runtime statistics. In this chapter, we will
provide a first taste of its power, while we will go into more depth in Chapter 4: Server
Management with the CLI, which is fully dedicated to this management interface.

 Actually, one more option exists for changing the configuration: manually
editing the XML configuration file. This is however discouraged, as it can lead to
a failure at server boot if you insert inconsistent data. We will resort to this
option only in a few exceptional circumstances.

2.2. Understanding the server configuration file


All the above-mentioned management interfaces operate on the application server configuration
file. Although the application server ships with several built-in configurations (both for the
standalone mode and for the domain mode), only one is used at server start-up. Configuration files
are based on a tree-like structure that contains, at the root element, the server definition and a set
of elements, which are displayed in the following picture:

In the following sections, we will have an initial look at the individual elements that are contained
within the server definition that, taken as a whole, make up the server configuration. Next, we will
also learn how to configure them using the Administration Console.

2.2.1. Extensions

Most of the application server capabilities are provided by means of extensions. As a matter of
fact, most of the modules in the WildFly codebase are extension implementations, with each
extension providing support for some aspects of the Java EE specifications or for core server
capabilities.

Extensions need to implement an interface (org.jboss.as.controller.Extension) that allows
integration with WildFly’s core management layer. Via that mechanism, extensions are able to be
part of the application server core configuration, to register resources, install services into
WildFly’s service container and register deployment units.

If you plan to build custom WildFly additions, we recommend checking the following wiki: Ship
Your WildFly Additions via Galleon Feature Packs.

In most cases, it is enough to know that if you want a particular extension to be available, you have
to include an <extension/> element (and specify the module name) in the domain.xml or
standalone.xml file.

<extensions>
  <extension module="org.jboss.as.clustering.infinispan"/>
  <extension module="org.jboss.as.connector"/>
  <extension module="org.jboss.as.deployment-scanner"/>
  <extension module="org.jboss.as.ee"/>
  <extension module="org.jboss.as.ejb3"/>
  . . . . . . .
</extensions>

2.2.2. Paths

A path is a logical name for a file system path, which can be included as a section in the server
configuration file. Other sections of the configuration can then reference those paths by their
logical name, rather than having to include the full details of the path (which may vary on different
machines). For example, you can declare the following path in your configuration, which points to
the folder /home/wildfly/logs:

<path name="log.dir" path="/home/wildfly/logs" />

You can then reference your path in other parts of the configuration file, such as in the logging
subsystem:

<file relative-to="log.dir" path="server.log"/>

In the above example, the file logger will write its output to /home/wildfly/logs/server.log.

A path can also be relative to an existing path definition, such as in the following example that
references the server’s data directory:

<path name="logdata.dir" path="example" relative-to="jboss.server.data.dir"/>

When a relative-to parameter is provided, the final path is made up of the relative-to folder
combined with the path element. (For example, if the application server is installed in the /opt
folder, the above path would translate to /opt/wildfly-20.0.0.Final/standalone/data/example for a
standalone configuration.)
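Paths can also be managed at runtime through the CLI instead of editing the XML by hand (a sketch; CLI usage is covered in Chapter 4):

```
[standalone@localhost:9990 /] /path=log.dir:add(path=/home/wildfly/logs)
```

The same resource can later be read back with /path=log.dir:read-resource or removed with /path=log.dir:remove.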

2.2.3. Interfaces

The interfaces section contains the network interfaces/IP addresses or host names where the
application server can be bound. By default, the application server defines two available network
interfaces: the management and the public interface. The management interface is used to
provide management connectivity to the application server (for example via the CLI shell). The
public interface is used to provide access to the application server services.

<interfaces>
  <interface name="management">
    <inet-address value="${jboss.bind.address.management:127.0.0.1}"/>
  </interface>
  <interface name="public">
    <inet-address value="${jboss.bind.address:127.0.0.1}"/>
  </interface>
</interfaces>

 Other server configurations might include additional interfaces. For example,
"ha" server profiles include the "private" interface, which is used by JGroups to
manage the cluster communication.

In the above snippet, the management and public interfaces are bound to the application server
system properties jboss.bind.address.management and jboss.bind.address respectively. These
properties can be overridden on the startup script of the application server as in the following
example:

$ ./standalone.sh -Djboss.bind.address=192.168.0.1
-Djboss.bind.address.management=192.168.0.1

 Please note that the jboss.bind.address property can be substituted with the -b
alias; much the same way, jboss.bind.address.management can be replaced with
the -bmanagement option.

2.2.4. Socket binding groups

A socket binding is a named configuration of a socket. Within this section, you can configure the
network ports which will be open and listening for incoming connections. As we have just seen,
every socket binding group references a network interface through the default-interface
attribute:

<socket-binding-group name="standard-sockets" default-interface="public"
    port-offset="${jboss.socket.binding.port-offset:0}">
  <socket-binding name="management-http" interface="management"
    port="${jboss.management.http.port:9990}"/>
  <socket-binding name="management-https" interface="management"
    port="${jboss.management.https.port:9993}"/>
  <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
  <socket-binding name="http" port="8080"/>
  <socket-binding name="https" port="${jboss.https.port:8443}"/>
  . . . .
</socket-binding-group>

The jboss.socket.binding.port-offset attribute can be used to shift all port definitions by a fixed number, in case you want to run multiple application servers on the same machine; if you don’t provide a value for it, it defaults to 0.
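As a quick sketch, starting a second server instance with an offset of 100 shifts every port in the group by that amount; for example, the http port 8080 becomes 8180 and the management-http port 9990 becomes 10090:

$ ./standalone.sh -Djboss.socket.binding.port-offset=100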

Please notice that WildFly uses port 9990 for all management interfaces (Web interface and CLI). The EAP 6 / AS 7 native port 9999 has been deprecated, so update your scripts accordingly.

2.2.5. System-Properties

System properties can be either set as part of the application server startup script or included in the
application server configuration file, like in the following example, which sets the property
"myproperty" to "false":

<system-properties>
  <property name="myproperty" value="false"/>
</system-properties>

System properties can be also set by means of the management instruments (See Configuring
System Properties in Standalone Mode) or by passing arguments to your server startup scripts.

For example, in order to set the property "key" at server startup, you can use the -D option as
follows:

$ ./standalone.sh -Dkey=value

If your list of properties is quite large, then you can also use the -P flag, which points to a file-based list of system properties:

$ ./standalone.sh -P /tmp/file.properties
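Such a properties file is a plain list of key=value pairs, one per line. As an illustration, the following hypothetical content reuses property names shown earlier in this chapter:

jboss.bind.address=192.168.0.1
jboss.bind.address.management=192.168.0.1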

2.2.6. Profile

As you can see from the tree of elements contained in the server configuration, the profile element
holds a collection of subsystems: each subsystem in turn contains a subset of capabilities used by
the application server.

<profile>
  <subsystem xmlns="urn:jboss:domain:logging:6.0">
  . . . .
  </subsystem>
  . . . .
</profile>

For example, the undertow subsystem contains the definition of a set of connectors used by the
Web server, the ejb subsystem defines the EJB Container configuration and modules used by it, and
so on.

The content of an individual profile configuration looks largely the same in domain.xml and
standalone.xml. The only difference is that a standalone configuration is only allowed to have a
single profile element (the profile the server will run), while a domain can have multiple profiles.

The content of an individual subsystem configuration is the same between standalone and domain configuration files.
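If you want to check which subsystems a running server’s profile actually contains, one way (shown here for a standalone server; the CLI prompt is illustrative) is to query the children of the root resource from the Command Line Interface:

[standalone@localhost:9990 /] /:read-children-names(child-type=subsystem)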

2.3. Configuring WildFly in Standalone mode


The standalone configuration is contained in the JBOSS_HOME/standalone/configuration folder.

The server configuration in standalone mode is based on a single "profile" which includes the
detailed configuration of the various subsystems and definition of the network interfaces and
sockets that those subsystems may open. Out of the box, the following built-in server configurations
are available:

• standalone.xml : This is the default standalone configuration file used by the application server. It does not include the messaging subsystem and is not able to run in a cluster.

• standalone-full.xml : This configuration adds to the default configuration the messaging provider and iiop-openjdk libraries.

• standalone-ha.xml : This configuration enhances the default configuration with clustering support (JGroups / mod_cluster).

• standalone-full-ha.xml : This configuration adds both clustering capabilities and messaging / iiop-openjdk libraries.

• standalone-microprofile.xml : Provides full support for the MicroProfile API combined with JAX-RS and related technologies. It does not include the ejb container. A minimal messaging configuration is included.

• standalone-microprofile-ha.xml : Similar to standalone-microprofile.xml but with support for high availability web sessions and distributed Hibernate second-level caching.

• standalone-load-balancer.xml : This configuration allows using WildFly as an HTTP front-end only for clustered applications.

If you want to start the application server with a non-default configuration, you can use the -c parameter. Here is, for example, how to start WildFly with the High Availability profile:

$ ./standalone.sh -c standalone-ha.xml

WildFly 20 contains a sample CLI script that lets you evolve your standalone configuration to use MicroProfile configurations. To launch the script, use:

bin/jboss-cli.sh --file=docs/examples/enable-microprofile.cli

When run, the script updates a given standalone configuration. By default it
applies to standalone.xml. Use -Dconfig=[yourconfig] to select another standalone
configuration.

2.3.1. Configuring JVM settings in Standalone Mode

If you have a quick look in the bin folder of your server installation, you will discover a large number of shell scripts, which serve different purposes. In particular, a file named standalone.conf (standalone.conf.bat for Windows users) is executed on server boot in standalone mode. This script file can be used for a variety of purposes, such as setting JVM options; here is, for example, how to change the JVM settings on a Linux machine to use the Java 8 MaxMetaspaceSize parameter:

JAVA_OPTS="-Xms64m -Xmx1024m -XX:MaxMetaspaceSize=256M"

2.3.2. Configuring Network Interfaces in Standalone Mode

Although we recommend using the -D options to set the management and public bindings, you can make these changes permanent in your configuration by providing a default for the network interfaces. Here is how to use the Command Line Interface to set a default of 192.168.10.1 for the management interface:

/interface=management/:write-attribute(name=inet-address,value=${jboss.bind.address.management:192.168.10.1})

Much the same way, you can bind the public address of the server to the same address:

/interface=public/:write-attribute(name=inet-address,value=${jboss.bind.address:192.168.10.1})

If you want to expose a network interface toward all available IP addresses, then you can use the
0.0.0.0 address as follows:

/interface=public/:write-attribute(name=inet-address,value=${jboss.bind.address:0.0.0.0})

The application server will then require a reload of the configuration to propagate the changes:

reload

2.3.3. Configuring Socket Bindings in Standalone Mode

Socket bindings allow you to set the ports that will be used by your interfaces.

There are two types of socket bindings: inbound and outbound. Inbound socket bindings control the list of socket connections that are accepted by the application server; almost all bindings fall into this group. Outbound socket bindings, on the other hand, control outgoing connections such as mail connections.

Socket bindings can be changed through the socket-binding-group element, which in standalone mode contains just one group: standard-sockets. As an example, here is how to change the default http port:

/socket-binding-group=standard-sockets/socket-binding=http/:write-attribute(name=port,value=${jboss.http.port:8180})

As usual, the above configuration can be overridden if you include the variable at startup as follows:

$ ./standalone.sh -Djboss.http.port=8280

On the other hand, here is how you can change the host address of the mail outbound connection:

/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=mail-smtp/:write-attribute(name=host,value=localhost)

(See the Appendix of this book for more details about the mail subsystem).

In case you need to remove a socket binding that is not needed in your configuration, you can use the remove operation on it. Here is how to remove the ajp socket binding from your configuration:

/socket-binding-group=standard-sockets/socket-binding=ajp/:remove

Watch out! Socket bindings are typically referenced by other parts of your configuration. For example, the "ajp" socket binding is used when running ha profiles.

2.3.4. Configure Path references in Standalone Mode

A path can be defined to point to an external resource that can be referenced through your
configuration. In its simplest form, a path references an absolute path that is available on your
hard drive:

/path=logpath/:add(path=/home/francesco/logs)

Paths are, however, quite flexible; you can also define a path relative to another path. In the following example we have defined the path "mydir", which is relative to the application server’s base directory:

/path=mydir/:add(relative-to=jboss.server.base.dir)
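Once defined, a path can be referenced from other parts of the configuration. As a sketch, a logging file handler could write into the "logpath" path defined above; the handler name "myhandler" and the file name are illustrative:

/subsystem=logging/file-handler=myhandler:add(file={relative-to=logpath, path=server.log})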

2.3.5. Configuring System Properties in Standalone Mode

Properties can be configured through the system-property path which contains the required
operations for setting/reading/removing System properties. Here is how to add the System Property
named "mykey" with value "value":

/system-property=mykey/:add(value=value)

The value of the property can be checked by means of the read-resource operation:

/system-property=mykey/:read-resource(recursive=false)

A property can also be removed from the application server memory using "remove":

/system-property=mykey/:remove

Finally, it’s possible to dump all system properties which have been set on the application server by
digging into the platform-mbean resource:

/core-service=platform-mbean/type=runtime:read-attribute(name=system-properties)
{
  "outcome" => "success",
  "result" => {
  "[Standalone]" => "",
  "awt.toolkit" => "sun.awt.X11.XToolkit",
  "file.encoding" => "UTF-8",
  "file.encoding.pkg" => "sun.io",
  "file.separator" => "/",
  "java.awt.graphicsenv" => "sun.awt.X11GraphicsEnvironment",
  "java.awt.headless" => "true",
  "java.awt.printerjob" => "sun.print.PSPrinterJob",
  "java.class.path" => "/home/jboss/wildfly-20.0.0.Final/jboss-modules.jar",
  "java.class.version" => "52.0",
  . . . . .
  "jboss.server.base.dir" => "/home/jboss/wildfly-20.0.0.Final/standalone",
  "jboss.server.config.dir" => "/home/jboss/wildfly-20.0.0.Final/standalone/configuration",
  "jboss.server.data.dir" => "/home/jboss/wildfly-20.0.0.Final/standalone/data",
  "jboss.server.deploy.dir" => "/home/jboss/wildfly-20.0.0.Final/standalone/data/content",
  "jboss.server.log.dir" => "/home/jboss/wildfly-20.0.0.Final/standalone/log",
  "jboss.server.name" => "fedora",
  "jboss.server.persist.config" => "true",
  "jboss.server.temp.dir" => "/home/jboss/wildfly-20.0.0.Final/standalone/tmp",
  . . . . .
  }
}

This can be particularly useful if you want to determine at runtime information about the JDK
being used, or the application server base/config/deploy/data directories.

2.4. Configuring WildFly in Domain mode


In order to understand the domain configuration, we first need to understand the key components of a domain. A domain is a collection of server groups; a server group is in turn a collection of servers.

A server group can be seen as a set of servers managed as a single unit by the domain. You can actually use server groups for fine-grained configuration of nodes; for example, each server group is able to define its own settings such as customized JVM settings, socket binding groups, or deployed applications.

From the process point of view, a domain is made up of the following elements:

• Domain Controller: The domain controller is the management control point of your domain. An AS instance running in domain mode will have at most one process instance acting as a Domain Controller. The Domain Controller holds a centralized configuration, which is shared by the node instances belonging to the domain.

• Host controller: It’s a process that is responsible for coordinating with a Domain Controller the
life-cycle of server processes and the distribution of deployments, from the Domain Controller
to the server instances.

• Application server nodes: These are regular Java processes that map to instances of the
application server. Each server node, in turn, belongs to a Server group.

Additionally, when starting a Domain, you will see another JVM process running on your machine: this is the Process Controller. It’s a very lightweight process whose primary function is to spawn server processes and host controller processes, and manage their input/output streams. Since it’s not configurable, we will not discuss it further.

The following picture summarizes the concepts exposed so far:

In the above picture, we have designed a domain made up of a Domain Controller, running on a dedicated server instance, and two Host Controllers. The Domain defines two Server Groups (main-server-group and other-server-group, which are the default WildFly server group names); each Server Group in turn contains two Servers, making up a total of 4 WildFly servers.

Pay attention to the Server Group distribution, which spans the two different Hosts. Say main-server-group contains one block of applications and other-server-group another block; with the above configuration you will be able to run both applications without a single point of failure.

In the following sections we will show how to create this Domain configuration in practice, by first configuring the Domain Controller and its domain.xml configuration file. Next, we will configure the Host Controllers where your applications can be provisioned.

2.4.1. Configuring the Domain Controller – Part 1: domain.xml

The server configuration of the domain is centralized in the domain.xml file of the Domain Controller. The domain.xml file is located in the domain/configuration folder and contains the main configuration that will be used for all server instances. This file is only required for the Domain Controller. In the domain.xml file we will define the server group configuration (which can anyway be changed at runtime, as we will see in a minute).

<server-groups>
  <server-group name="main-server-group" profile="full">
    <jvm name="default">
      <heap size="64m" max-size="512m"/>
    </jvm>
    <socket-binding-group ref="full-sockets"/>
  </server-group>
  <server-group name="other-server-group" profile="full">
    <jvm name="default">
      <heap size="64m" max-size="512m"/>
    </jvm>
    <socket-binding-group ref="full-sockets"/>
  </server-group>
</server-groups>

As you can see, we have defined two server groups: main-server-group and other-server-group.
Each server group is in turn associated with a server profile and a socket-binding-group. The
default configuration includes the following pre-configured profiles:

• default: Supports the Java EE Web-Profile plus some extensions like REST Web Services or support for EJB3 remote invocations. You should associate this profile with the "standard-sockets" socket-binding-group.

• full: Supports the Java EE Full-Profile and all server capabilities without clustering. You should associate this profile with the "full-sockets" socket-binding-group.

• ha: The default profile with clustering capabilities. You should associate this profile with the "ha-sockets" socket-binding-group.

• full-ha: the full profile with clustering capabilities. You should associate this profile with the
"full-ha-sockets" socket-binding-group.

• load-balancer: This profile can be used to allow one or more servers to act as a load balancer for your cluster.

Quick Recap! When running in Domain mode you can choose the server groups
configuration among the built-in profiles. When running in standalone mode
 you can choose the server configuration by selecting (-c) among the available
configuration files.

2.4.2. Configuring the Domain Controller – Part 2: host.xml

The other key domain configuration file is host.xml which defines:

• The application servers which are part of a domain server distribution and the server group to
which they belong.

• The network interfaces and security settings for these application servers

• The location of the Domain Controller

In our example Domain configuration, there are no application servers running on this host; this means that we have a host that is dedicated to the Domain Controller. This is stated by the following empty servers element:

<servers />

Next, we need to specify the location of the Domain Controller. Since the Domain Controller will
be running on the same Host, we will include a "local" element into the domain-controller stanza:

<domain-controller>
  <local/> ①
</domain-controller>

① This is going to be a Master Controller.

Now we can start the Domain Controller so that it is bound to the IP address 192.168.0.1:

$ domain.sh -Djboss.bind.address.management=192.168.0.1

You can choose to start the domain using a non-standard configuration file by passing the --domain-config parameter. Example:

$ ./domain.sh --domain-config=domain-alternate.xml

2.4.3. Configuring the Host Controllers (host.xml)

After the Domain Controller is configured and started, the next step is to set up the two Host Controllers. The Host Controller will download the Domain configuration from the Domain Controller and use its own host.xml file to define the servers running on it.

As an alternative, you can name the host file as you like and start the domain with the --host-config parameter. Example:

./domain.sh --host-config=host-slave.xml

The first thing is to choose a unique name for each host in our domain to avoid name conflicts. We will name the first host "host1":

<host name="host1" xmlns="urn:jboss:domain:11.0">
  ...
</host>

And for the second host the name "host2":

<host name="host2" xmlns="urn:jboss:domain:11.0">
  ...
</host>

Next, we need to specify that the Host Controller will connect to a remote Domain Controller. We will not specify the actual IP and port of the Domain Controller, but leave them as the properties jboss.domain.master.address and jboss.domain.master.port.

Additionally, we need to specify the username that will be used to connect to the Domain Controller. So let’s reference the user wildflyadmin, which we have formerly created:

<domain-controller>
  <remote host="${jboss.domain.master.address}" ①
  port="${jboss.domain.master.port:9999}"
  username="wildflyadmin"
  security-realm="ManagementRealm"/>
</domain-controller>

① No default for this property. We will define it at start-up

Finally, we need to specify the Base64 password for the server identity we have included in the
remote element:

<management>
  <security-realms>
    <security-realm name="ManagementRealm">
      <server-identities>
        <secret value="RXJpY3Nzb24xIQ==" /> ①
      </server-identities>
      . . . . . .
    </security-realm>
  </security-realms>
  . . . . . .
</management>

① This secret is generated using the add-user.sh script on the Master Controller.

Authentication is not required if the remote Domain Controller is located on the same machine (e.g. localhost).

The last step is to configure the server nodes inside the host.xml file on both hosts. Here is the first
Host Controller (host1):

<servers>
  <server name="server-one" group="main-server-group"/>
  <server name="server-two" group="other-server-group" auto-start="false">
    <socket-bindings port-offset="150"/>
  </server>
</servers>

And here is the second Host Controller (host2):

<servers>
  <server name="server-three" group="main-server-group"/>
  <server name="server-four" group="other-server-group" auto-start="false">
    <socket-bindings port-offset="150"/>
  </server>
</servers>

Please notice that the auto-start flag indicates that the server instance will not be started automatically when the host controller starts. If auto-start is omitted, the server will start by default.

For server-two and server-four, a port offset of 150 is used to avoid port conflicts. With the port offset, we can reuse the socket binding group of the domain configuration for multiple server instances on one host. Now that our configuration is done, we can start host1 with:

$ ./domain.sh -b 192.168.0.2 -Djboss.domain.master.address=192.168.0.1

Similarly, we can start host2 with:

$ ./domain.sh -b 192.168.0.3 -Djboss.domain.master.address=192.168.0.1

If you look at the Domain Controller console, you should notice the following output, which shows
that the Domain Controller has started and the other slave hosts have successfully connected:

[Host Controller] 18:46:26,867 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full 20.0.0.Final (12.0.1.Final) (Host Controller) started in 4448ms - Started 77 of 79 services (23 services are lazy, passive or on-demand)
[Host Controller] 18:46:48,790 INFO [org.jboss.as.domain.controller] (Host Controller Service Threads - 29) WFLYHC0019: Registered remote slave host "host1", JBoss WildFly Full 20.0.0.Final (12.0.1.Final)
[Host Controller] 18:47:12,799 INFO [org.jboss.as.domain.controller] (Host Controller Service Threads - 29) WFLYHC0019: Registered remote slave host "host2", JBoss WildFly Full 20.0.0.Final (12.0.1.Final)

By default, a host controller requires a connection to the domain controller in order to be started. It is however possible to start the Host Controller using its locally cached configuration by passing the parameter --cached-dc. Example:

$ domain.sh --host-config=host-slave.xml --cached-dc

2.4.4. Domain breakdown

The above configuration has produced a domain configuration made up of a dedicated Domain
Controller and a set of four server nodes split into two Server Groups and two different Hosts as
shown by the following picture:

With the above architecture, the hosts where applications are deployed are completely independent from administrative tasks. On the other hand, the Domain Controller is solely responsible for the management of the domain. Since, by definition, there can be at most one Domain Controller in a Domain, you should plan for restarting the Domain Controller in case of failure.

Although this might appear to be a limitation, it is not as critical as it seems: first of all, the Domain Controller is not at all necessary to keep your applications running on the server nodes. Let’s repeat it again: the Domain Controller is solely responsible for managing your Domain (e.g. server start/stop, application deployment, etc.).

Next, you can get notified of a Domain Controller failure with very simple network tools, such as a port monitoring script, or, if you are looking for more advanced options, have a look at the Domain Controller Failover section a few sections ahead.

2.5. Managing the WildFly Domain


So far we have built up a sample Domain using the XML configuration files. The recommended way
to control your Domain resources and structure is by means of the Web console and the Command
Line Interface.

We have dedicated one section named Managing the Domain with HAL Management Console to the
Admin Console. We will cover now Domain management using the Command Line Interface.

The first step will be connecting to the Domain Controller, which listens on the jboss.bind.address.management address (defaults to 127.0.0.1 if not set) and on the jboss.management.http.port port (defaults to 9990 if not set).

$ ./jboss-cli.sh --connect controller=192.168.0.1:9990

[domain@192.168.0.1:9990/]

The CLI management interface relies on the local authentication mechanism, which means that any user connecting from the local host will be granted guest access. See the section The Management Realm for more information.

On the other hand, if the Domain Controller is located on a remote host, a username/password challenge will be displayed. Once connected, you will see from the CLI tab completion that new options are available to control your Domain:

[domain@192.168.0.1:9990/] /

core-service          extension    management-client-content    server-group
deployment            host         path                         socket-binding-group
deployment-overlay    interface    profile                      system-property

In more detail, let’s focus on these elements:

• profile: The profile path is required to modify the configuration of the profiles contained in the Domain

• host: The host path can be used for host-wide operation (reload/restart) and for accessing the
single servers of your Domain

• server-group: the server-group path can be used to perform Server-Group wide operations
(start/stop/restart/reload)

With this simple schema in mind, we will go through the most common Domain management tasks.

2.5.1. Managing the Domain Profiles

The most obvious task for a system administrator will be changing the configuration of a profile. Each time you need to do that, just prepend the profile name before digging into the subsystem:

[domain@host:9990 /] /profile=[profile]/subsystem=[subsystem]:[operation]

For example, if you want to change an ejb3 setting (e.g. the timeout of a SLSB pool) in the full profile, you could execute the following command:

[domain@192.168.0.1:9990 /] /profile=full/subsystem=ejb3/strict-max-bean-instance-
pool=slsb-strict-max-pool:write-attribute(name=timeout,value=100)
{
  "outcome" => "success",
  "result" => undefined,
  "server-groups" => undefined
}

2.5.2. Managing the Domain Hosts

Some management operations are to be performed at the server level. For example, you can decide to start, stop, suspend, resume or restart a server node. For each operation, you will find the corresponding command under:

[domain@host:9990 /] /host=[host]/server-config=[server]:[operation]

For example, let’s see how to restart the server-one which is available on the "master" host:

[domain@192.168.0.1:9990 /] /host=master/server-config=server-one:restart
{
  "outcome" => "success",
  "result" => "STARTING"
}

When using the host’s server-config path, it is also possible to create or remove servers. The only requirement is to fill in the mandatory attributes. Here is how to add a server:

[domain@192.168.0.1:9990 /] /host=master/server-config=server-five:add(auto-
start=false, socket-binding-port-offset=400, group=main-server-group)
{
  "outcome" => "success",
  "result" => undefined,
  "server-groups" => undefined
}

Now check that the Host includes the new server with the following command:

[domain@192.168.0.1:9990 /] /host=master:read-children-names(child-type=server-config)
{
  "outcome" => "success",
  "result" => [
  "server-one",
  "server-two",
  "server-five" ①
  ]
}

① The server we have added.

The new server is already operative so, for example, you can execute start/stop commands or provision deployments on it:

[domain@192.168.0.1:9990 /] /host=master/server-config=server-five:start
{
  "outcome" => "success",
  "result" => "STARTING"
}

Conversely, it is also possible to remove a Server from the configuration. The only requirement is
that the Server must be stopped. Here is how to remove the server-five:

[domain@192.168.0.1:9990 /] /host=master/server-config=server-five:remove
{
  "outcome" => "success",
  "result" => undefined,
  "server-groups" => undefined
}

Finally, if you want to inspect information about the servers, you must proceed through the following path to reach the subsystem you are interested in monitoring:

[domain@host:9990 /] /host=[host]/server=[server]/[subsystem]

So, here is, for example, how to gather statistics on the ExampleDS datasource running on server-one:

[domain@192.168.0.1:9990/] /host=master/server=server-one/subsystem=datasources/data-
source=ExampleDS/statistics=pool:read-resource(include-runtime=true)

2.5.2.1. Managing the Host controller

When using the /host path, it is possible to control the Host Controller as well, and not just the servers running within it. The specific operations that can be issued on the Host Controller include the host reload and shutdown. For example, here is how to reload the Host named "slave":

[domain@192.168.0.1:9990 /] /host=slave:reload
{
  "outcome" => "success",
  "result" => undefined
}

And here is how to shut down a Host, which will stop all processes running on the Host Controller:

[domain@192.168.0.1:9990 /] /host=slave:shutdown
{
  "outcome" => "success",
  "result" => undefined
}

Be aware that the Host Controller cannot be restarted from the CLI once stopped! You have to use
the domain.sh script from the local host in order to do that.

2.5.3. Managing the Server Groups

Some management operations can also be performed at the Server Group level. You can execute the same control operations you have seen at the server level (start, stop, suspend, resume or restart), but in this case they are executed on multiple nodes. Here is how to restart the main-server-group:

[domain@192.168.0.1:9990 /] /server-group=main-server-group:restart-servers
{
  "outcome" => "success",
  "result" => undefined,
  "server-groups" => undefined
}

On the other hand, if you just need to reload their configuration, you can use the reload-servers
command:

[domain@192.168.0.1:9990 /] /server-group=main-server-group:reload-servers
{
  "outcome" => "success",
  "result" => undefined,
  "server-groups" => undefined
}

Sometimes you will be prompted to restart one or maybe all the servers in a Domain to propagate changes. Do not shut down the Host Controller for this purpose, as you would then need to run domain.sh on all Hosts to restart your Domain! Simply issue a restart on the Server Groups and you will save lots of time!
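The same pattern applies to the other life-cycle operations; for example, here is how you could stop, and later start again, all the servers of the other-server-group:

/server-group=other-server-group:stop-servers

/server-group=other-server-group:start-servers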

2.6. Domain Controller Failover


At the beginning of this chapter we stated that a Domain can contain at most one Domain Controller. Although this rule remains unchanged in the latest release of the application server, an option called Domain Discovery lets you elect a new Domain Controller in case of failure.

Let’s take one step back first. One of the domain.sh startup options is --backup, which keeps a backup copy of the Domain Controller’s configuration (domain.xml) on the Host Controller that uses it. This backup copy can then be used if the Host Controller is elected as the new Domain Controller.

The following picture depicts our Domain Controller failover scenario:

• Host1 is the Domain Controller of your Domain. Nothing new to add to its configuration.

• Host2 is a Host Controller which connects to the Domain Controller on Host1. This Host is able to elect another Host as Domain Controller using the discovery-options held in its host.xml configuration file.

• Host3 is also a Host Controller which, at startup, connects to the Domain Controller on Host1. This Host Controller, however, starts with the --backup option, so we can use it as a backup for the Domain Controller, should it fail.

What remains to be described are the discovery-options, which need to be included in Host2’s host.xml file in order to reconnect to a backup Domain Controller. The discovery-options include one or more additional Domain Controllers, which will be contacted in case there is a failure in communication with the default Domain Controller.

The following XML excerpt shows an example of it:

<domain-controller>
  <remote host="${jboss.domain.master.address:192.168.0.1}"
  port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm">
    <discovery-options>
      <static-discovery name="discovery-one"
        protocol="${jboss.domain.master.protocol:remote}"
        host="${jboss.domain.master.address:192.168.0.10}" ①
        port="${jboss.domain.master.port:9999}"/>
    </discovery-options>
  </remote>
</domain-controller>

① This is the Host Controller that will gain control of the Domain

As you can see, within the discovery-options section we can include a static-discovery section with the list of backup Domain Controllers. In our case, we will try to reconnect to Host3, which is bound to the IP 192.168.0.10 and port 9999. The next picture depicts the failover scenario:

In order to re-sync the Host3 configuration with the configuration held by the defunct Domain Controller, you have to connect to it with the Command Line Interface and issue the write-local-domain-controller operation to trigger the process. Here is a transcript of the command:

[domain@192.168.0.10:9999 /] /host=host3:write-local-domain-controller
{
  "outcome" => "success",
  "result" => undefined,
  "server-groups" => undefined,
  "response-headers" => {"process-state" => "reload-required"}
}

Finally, issue a reload in order to propagate the changes through the Domain.
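As a sketch, that final step could look like this from the CLI (the host name is the one from our example; check the exact syntax against your WildFly version):

```
[domain@192.168.0.10:9999 /] reload --host=host3
```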

2.6.1. Using Multiple Protocols to reconnect to the Domain Controller

By default you will be using the remote protocol to reconnect to the new Domain Controller. You
can however define multiple mechanisms to reconnect to a new Domain Controller, for example
through the http or https port:

<discovery-options>
  <static-discovery name="master-https" protocol="https-remoting"
                    host="192.168.0.10" port="9993" security-realm="ManagementRealm"/>
  <static-discovery name="master-http" protocol="http-remoting"
                    host="192.168.0.10" port="9990"/>
</discovery-options>

2.6.2. Using Multiple Hosts in the Discovery Options

Besides using different protocols, you can also list multiple hosts, which are tried sequentially in
case of failure:

<domain-controller>
  <remote host="${jboss.domain.master.address:192.168.0.1}"
          port="${jboss.domain.master.port:9999}" security-realm="ManagementRealm">
    <discovery-options>
      <static-discovery name="discovery-one"
                        protocol="${jboss.domain.master.protocol:remote}"
                        host="${jboss.domain.master.address:192.168.0.10}"
                        port="${jboss.domain.master.port:9999}"/>
      <static-discovery name="discovery-two"
                        protocol="${jboss.domain.master.protocol:remote}"
                        host="${jboss.domain.master.address:192.168.0.20}"
                        port="${jboss.domain.master.port:9999}"/>
    </discovery-options>
  </remote>
</domain-controller>

2.7. Standalone mode vs Domain mode


The choice of domain mode versus standalone mode comes down to whether the user wants to use
the centralized management capability domain mode provides. Some enterprises have developed
their own sophisticated multi-server management capabilities and are comfortable coordinating
changes across a number of independent WildFly instances. If this is your case, a multi-server
architecture comprised of individual standalone mode AS instances is a good option.

Standalone mode is better suited for most development scenarios. You should definitely use it if
you are running a single server installation; you should also consider using it when domain mode
is not a feasible choice, such as when you are running a WildFly instance in an Arquillian-based test
suite. Generally speaking, any individual server configuration that can be achieved in domain
mode can also be achieved in standalone mode, so even if the application being developed will
eventually run in production on a domain mode installation, much (probably most) development
can be done using standalone mode.

Domain mode can be helpful in some advanced development scenarios; i.e. those involving
interaction between multiple AS instances. Developers may find that setting up various servers as
members of a domain is an efficient way to launch a multi-server cluster.

3. Chapter 3: Server Management with HAL Management console
The HAL Management console is a Web application which can be used to control the application
server from any modern browser. Although pretty simple and intuitive, this tool will not be the
main management tool of this book. The main reason is that the UI of the Web console is
constantly evolving between server releases, thus making the maintenance of this book too
complex. Besides that, the degree of control and automation that you can achieve with the Web
console is not comparable with the Command Line Interface.

That being said, in some cases it’s still useful to control the application server using a Web console,
which can sometimes let you achieve things faster. This chapter will tell you when it’s appropriate to
turn to it.

3.1. Connecting to the HAL console


Once the application server has started, you can connect to the HAL Management console which is
available, by default, on port 9990. So for example, if you have bound the management interfaces
to localhost, the Administration Console can be reached at the following address:
http://localhost:9990
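The same port also exposes the HTTP management API, so you can quickly check that the management interface is up before opening a browser. A minimal sketch, assuming a management user admin with password password1! exists (digest authentication is used):

```shell
# Query the server state through the HTTP management API
curl --digest -u admin:password1! \
  "http://localhost:9990/management?operation=attribute&name=server-state"
# should report the current server state, e.g. "running"
```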

Connecting to the HAL Management console requires a management user. Enter the credentials
of the management user we have formerly created. Once logged in, an introduction screen will
guide you through a quick tour of the most interesting settings:

Above is the welcome page, which gives you quick access to a mix of handy features like application
deployment and datasource creation, as well as links to docs and forums. When running in standalone
mode, you will find a set of tabs in the upper section of the Console.

• Deployments: This tab allows deploying/undeploying applications to a Server

• Configuration: This tab can be used for configuring the services available in the application
server, thus making this the most important reference for system administrators.

• Runtime: This tab can be used to gather Runtime metrics about the individual services running
and the application server’s JVM metrics.

• Access Control: This tab can be used to define Role Based Access Control over the application
server services and resources.

• Patching: The last tab will let you apply a patch to a server release or roll back a patch already
applied.

3.2. Varying the Server Configuration


By clicking on the upper Tab named "Configuration" you will be able to manage and vary the
server configuration. In the latest release of the application server, the Admin Console has been
revamped and now it displays the subsystems in a card-like schema:

By selecting the configuration item on the left, you will navigate to the next layer in the middle
panel. For example, Subsystems | JCA. From the JCA label, click on the View button to configure
your resource. For example, here is the configuration window for the JCA subsystem:

Once there, if your user has the permissions to change the configuration (See section Configuring
Role Based Access Control), you will see the "Edit" link on the configuration elements.

Click on the Close link in the top-right corner to return to the previous screen.

3.3. Gathering Runtime statistics


If you are after server metrics, then you need to select the Runtime upper tab. From
there you can collect information on the following resources:

• JVM Statistics

• System Properties

• Examine log files

• Gather statistics about the single subsystems

Select the resource you want to acquire statistics from and click on the "View" button. In the
following example, we have selected the JVM statistics of the standalone server:

The selected resource will be displayed in the mid panel:

Click on Close to return to the main Admin Console window.

3.4. Adding custom Response Headers to the HTTP management interface

The HTTP management interface of WildFly already returns a pre-defined set of HTTP headers in
all of the responses that are sent to clients. It is however possible to return custom headers, to
satisfy the requirements of the environment the application server is installed in.
You can add a custom Response Header by setting the constant-headers attribute as a set of headers
using key / value pairs against a specific path prefix. Here is an example:

[standalone@localhost:9990 /] /core-service=management/management-interface=http-interface:write-attribute(name=constant-headers, value=[{path=/management, headers=[{name=X-Cluster, value=Cluster01}]}])

Here is a view of the Response Header after setting the above constant-headers attribute:

Some headers are integral to the correct operation of the server and so cannot be
overridden. These headers are:

• Connection

• Content-Length

• Content-Type

• Date

• Transfer-Encoding

If you attempt to set these headers, an error will be reported.
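Should you later need to remove the custom headers, the attribute can simply be undefined with the generic undefine-attribute operation:

```
/core-service=management/management-interface=http-interface:undefine-attribute(name=constant-headers)
```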

3.5. Managing the Domain with HAL Management Console

The Web console in Domain mode contains a richer set of options to handle the different profiles
and hosts which make up the Domain. Let’s look at it in detail.

You can reach the console by pointing at the jboss.bind.address.management (if not set, defaults
to 127.0.0.1) and the port jboss.management.http.port (if not set, defaults to 9990) of the Domain
Controller. In the former example described in Managing the WildFly Domain, it would be:
http://192.168.0.1:9990

The Web console in Domain mode contains the same tab list as standalone mode:

As seen from the above picture, you can use the Runtime tab to browse your domain by Hosts and
Server Groups. You can browse through the Hosts in order to manage the individual server nodes.
For example, here is how you handle typical management tasks for the server-one :

From this UI, you have multiple options available:

• Copy allows you to create a clone of the Server in the Domain in any available Host or Server
Group.

• Edit URL allows you to set a custom URL for this server, to be used by subsystems such as JAX-RS
or Web.

• Configuration Changes lets you track the In-memory configuration changes if they are
enabled.

• Reload reloads the server configuration

• Restart restarts the server

• Suspend suspends the server execution gracefully

• Stop stops the server gracefully

• Kill stops the server execution with a 'kill' signal. If not available on the platform, it will destroy
the process.

• Destroy stops the server by destroying its operating system process.

Additionally, by clicking on the (+) top button, you will be able to add new Servers to the Domain.
This is discussed in the next section.

3.5.1. Varying your Domain setup

From the Runtime tab of your Domain, you will be able to change your Domain topology. If you are
browsing through the Hosts (see previous picture), you will be able to add a new Server to your
Domain by clicking on the (+) top button. A new UI will be displayed where you can specify the
Server name, the Server Group to which it belongs and the auto-start policy.

The following picture depicts how to add a new Server named "server-four" to the Server Group
named "main-server-group":

On the other hand, if you want to manage Server Groups, then browse through your Server
Groups and select your server group, as indicated by this picture:

Within the Server Group UI, you can perform the same actions we have discussed
for single Servers. In this case, however, the actions will be performed on all
Servers in the Server Group.

By clicking on the top (+) button, you will be able to Add a new Server Group as depicted by the
following picture:

Click on Save to persist the changes.

3.5.2. Configuring Domain JVM Settings

The JVM settings of a domain of servers can be configured at three different levels:

• Host level: the configuration will apply to all servers that are defined in host.xml

• Server group level: the configuration applies to all servers that are part of the group.

• Server level: the configuration is used just for the single server.

The general rule (which applies to all elements configurable at multiple levels, including also
System properties, Paths and Interfaces) is that the most specific configuration overrides the most
general one. So for example, the JVM Server Group configuration will override the JVM Host
configuration, while the Server configuration will override all other available configurations.
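As an illustrative sketch (the heap sizes here are made up), the same jvm element can appear at each of the three levels:

```xml
<!-- Host level: host.xml, inside <jvms> -->
<jvm name="default">
    <heap size="64m" max-size="256m"/>
</jvm>

<!-- Server Group level: domain.xml, inside a <server-group> -->
<jvm name="default">
    <heap size="512m" max-size="1g"/>  <!-- overrides the Host default -->
</jvm>

<!-- Server level: host.xml, inside a <server> -->
<jvm name="default">
    <heap size="1g" max-size="2g"/>    <!-- overrides Host and Server Group -->
</jvm>
```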

3.5.2.1. Configuring Host JVM Settings

The JVM Settings of a host can be configured by selecting the Runtime upper tab and then
clicking on the View button near each Host. A list of left-tabbed options will be available:

As you can see from the above picture, in the JVMs Tab you can configure the "default" JVM Host
configuration or Add/Remove a new JVM default that will be valid for that Host. In order to edit the
JVM configuration, click on the Edit link and Save when done. Those settings will be eventually
overridden if a Server/Server Group JVM configuration exists.

3.5.2.2. Configuring Server Groups JVM Settings

The Server Group JVM Settings are valid across all members of a Server Group, unless a more
specific configuration is defined at Host level or at Server level. In order to configure the Server
Group JVM Settings, select the Runtime upper tab menu and then click on the View button, next to
each Server Group. A list of left-tabbed options will be available:

Just like for the Host JVM configuration, you can configure or Add/Remove a default Server Group
configuration. In order to edit the JVM configuration, click on the Edit link and Save when done.

3.5.2.3. Configuring Server JVM Settings

Finally, you can configure JVM settings also at Server Level. This can be done by clicking on the
View button near each Server:

A Server can be reached either by browsing across the available Hosts or through
the Server Groups.

From the JVMs tab, you can edit the specific JVM Server settings, which prevail over all other JVM
configurations.

3.6. Using the HAL Management console’s Management Model

If you want a detailed description of the WildFly management model, then you can change the
perspective of your Web console. You can enter the Management Model by clicking on the Tools |
Management Model option contained in the bottom right corner of the Web console:

This option has been enriched so that now you can actually use it to modify your server
configuration in a visual, CLI-like style. The first image shows how to use it to go through the
Management Model Description:

By clicking on the Data tab, you can actually change the Model parameters. Here is for example
how to set the public interfaces address:

3.7. Configuring Macros for frequent management operations

Within the lower panel of the HAL Console, you will find two options to edit and record Macros for
management operations. When you make any configuration change, the Web console can record
the related operations as a macro. To start a macro recording, select the Start Macro Recording
option from the tools menu. In the next dialog you have to choose a unique name for the macro
and optionally provide a description. Then, you can choose whether to omit read operations during
recording and whether the macro should be opened in the macro editor after recording has been
stopped.

Once you’ve started macro recording, a pulsing icon in the footer will give you feedback that a
macro is being recorded and how many operations have been recorded so far. The item in the tools
menu will change from Start Macro Recording to Stop Macro Recording. Every operation that
you perform during this time will be recorded as part of the macro.

When you’re done with recording, choose Stop Macro Recording from the tools menu. Use the macro
editor to see the recorded operations, to replay macros or to copy & paste the operations. For
instance, in case you have tested the Connection pool for the default Datasource:

4. Chapter 4: Server Management with the CLI
The Command Line Interface is a management tool, which can be used to govern every aspect of
your server configuration. Within this chapter, we will have a closer look at its syntax whilst in the
next chapter we will see some advanced recipes for server administrators. Here is our checklist:

• At first we will review the Command Line start up options available for standalone and domain
server modes

• Next, we will cover how to construct the Command Line commands

• Then, we will learn how to trace CLI commands

• Finally, we will learn how to use the CLI in graphical mode

4.1. Starting the Command Line


The CLI startup script is located in the $JBOSS_HOME/bin folder and is named jboss-cli.sh
(Windows users will use the jboss-cli.bat equivalent).

By launching the shell script, you will start with a disconnected session. You can connect at any
time using the connect [standalone/domain controller] command, which by default connects to a
server controller located at localhost on port 9990.

$ ./jboss-cli.sh

You are disconnected at the moment. Type 'connect' to connect to the
server or 'help' for the list of supported commands.
[disconnected /] connect
Connected to standalone controller at localhost:9990

If you invoke the CLI using the --help flag you can see a brief summary of its options which includes
the following ones:

jboss-cli.sh/jboss-cli.bat [--help] [--version] [--controller=host:port]
  [--connect] [--file=file_path]
  [--commands=command_or_operation1,command_or_operation2...]
  [--command=command_or_operation]
  [--user=username --password=password]
  [--no-local-auth]

The first option we will discuss is the --connect flag (or -c) which will let you connect
automatically; it can be combined with --user and --password if you are connecting to a
remote server host:

$ ./jboss-cli.sh --connect --controller=192.168.10.1 --user=admin1234 --password=password1234!
Connected to standalone controller at 192.168.10.1:9990

Another interesting option that is worth exploring is the --file option which allows executing script
files written using the CLI syntax or even other scripting languages:

$ ./jboss-cli.sh --file=myscript.cli

Commands can also be injected in a non-interactive way using the --command and --commands flags,
the latter accepting a comma-separated list of commands:

$ ./jboss-cli.sh --commands="connect,deploy Utility.jar"

4.1.1. Recovering your server configuration using the CLI

The CLI can be your lifesaver tool in case you have an inconsistent server configuration (for
example, two references to the same application, one in the deployments folder and another in the
deployment repository). In such a scenario, you can start the application server adding the
--admin-only flag as follows:

./standalone.sh --admin-only

This will cause the server to open administrative interfaces and accept management requests, but
not start other runtime services or accept end user requests.

Once you have completed the changes in the configuration, you can resume the normal
application behavior by issuing, from the Command Line Interface, the following command which
sets the admin-only mode to false and reloads the configuration:

[standalone@localhost:9990/] reload --admin-only=false

4.2. Using the CLI


One of the most interesting features of the CLI is its embedded intelligence, which helps us to find
the correct spelling of resources and commands, by simply pressing the Tab key. You can even use it
to find out the parameters needed for a particular command, without the need to go through the
reference documentation.

For example, by entering the /subsystem= command and pressing Tab, the CLI will show you all
the subsystems which are available in the application server:

After you are done with the node path, adding ':' at the end of the node path and pressing the Tab
key again will print all the available operation names for the selected node:

Once you have chosen the operation (in our example, write-attribute, which will vary one
attribute of one particular resource), add '(' after the operation name and press the Tab key.

Choose the parameter name and specify its value after '='. Finally, when all the parameters have
been specified, close the command parenthesis and press enter to issue the command.
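For instance, a fully assembled command, using the ExampleDS datasource and an illustrative attribute, might look like this:

```
/subsystem=datasources/data-source=ExampleDS:write-attribute(name=min-pool-size,value=5)
```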

In the above command, we managed to set one key attribute of the default Datasource of the
application server.

Commands which can be used against the CLI can be divided into two broad categories:

• Operations: These include the resource path (address) on which they are executed (e.g.
/subsystem=naming:jndi-view, which displays the JNDI tree of the application server)

• Commands: These don’t include the resource path and can, thus, execute an action
independently from the path of the current resource (e.g. read-config-as-xml, which displays the
XML configuration file).
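To make the distinction concrete, here is one of each (the datasource name is just an example):

```
# Operation: addressed to a specific resource
/subsystem=datasources/data-source=ExampleDS:read-resource

# Command: no address, can be run from anywhere
read-config-as-xml
```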

In the next section, we will provide some more details on the syntax for creating CLI commands.

4.3. Build up the CLI commands


So far we have built and executed some CLI commands; however, in order to reach every
resource of the application server, we need to learn the exact syntax expected by the CLI
interpreter. All CLI operation requests allow for low-level interaction with the server management
model. They provide a controlled way to edit server configurations. An operation request consists
of three parts:

• an address, prefixed with a slash (/).

• an operation name, prefixed with a colon (:).

• an optional set of parameters, contained within parentheses (()).
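Putting the three parts together (the attribute and value here are illustrative):

```
#  address:    /subsystem=datasources/data-source=ExampleDS
#  operation:  :write-attribute
#  parameters: (name=jndi-name,value=java:/MyDS)
/subsystem=datasources/data-source=ExampleDS:write-attribute(name=jndi-name,value=java:/MyDS)
```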

4.3.1. Determining the resource address

The server configuration is presented as a hierarchical tree of addressable resources. Each resource
node offers a different set of operations. The address specifies which resource node to perform the
operation on. An address uses the following syntax:

/node-type=node-name

node-type is the resource node type. This maps to an element type in the server configuration.
node-name is the resource node name. This maps to the name attribute of the element in the server
configuration. Separate each level of the resource tree with a slash (/). So for example, the following
CLI expression identifies the http default listener, which is part of the undertow subsystem:

/subsystem=undertow/server=default-server/http-listener=default

Once you have identified a resource, you can perform operations on the resource. An operation
uses the following syntax: :operation-name

4.3.2. Reading attributes of resources

So for example, you can query the available resources of a node by adding the read-resource
operation at the end of its path:

/subsystem=undertow/server=default-server/http-listener=default/:read-resource()
{
  "outcome" => "success",
  "result" => {
  "buffer-pool" => "default",
  "enabled" => true,
  "max-post-size" => 10485760L,
  "socket-binding" => "http",
  "worker" => "default"
  }
}

If you want to query for a specific attribute of your node, you can use the read-attribute operation
instead; for example here’s how to read the "enabled" attribute from the http listener:

/subsystem=undertow/server=default-server/http-listener=default/:read-attribute(name=enabled)
{
  "outcome" => "success",
  "result" => true
}

4.3.3. Writing attributes of resources

The CLI is not however just about querying attributes of the application server; you can also set
attributes or create resources. For example, if you were to vary the http port of the http connector,
then you have to use the corresponding write-attribute on the http’s socket binding interface as
shown here:

/socket-binding-group=standard-sockets/socket-binding=http/:write-
attribute(name=port,value=8280)
{
  "outcome" => "success",
  "response-headers" => {
  "operation-requires-reload" => true,
  "process-state" => "reload-required" ①
  }
}

① The update will not be effective until you reload the Server

4.3.4. Adding new resources

Adding new resources can be done through the add operation, which requires the list of attributes
of the resource you are going to create, as in the following example:

/subsystem=naming/binding=java\:global\/myname:add(binding-type=simple, type=int,
value=100)

Adding a new resource requires that you enter all of its mandatory attributes in the correct
format. For this reason it might be simpler to use the CLI graphical mode to add new resources (see
section Adding resources in graphical mode later in this chapter).

4.3.5. Reading children resources

The resources of the application server are arranged in a tree-like order. This means that you can
retrieve the full tree of resources available or just a subset of it. See the following example:

/subsystem=logging:read-resource
{
  "outcome" => "success",
  "result" => {
  "add-logging-api-dependencies" => true,
  "use-deployment-logging-config" => true,
  "async-handler" => undefined,
  "console-handler" => {"CONSOLE" => undefined},
  "custom-formatter" => undefined,
  "custom-handler" => undefined,
  "file-handler" => undefined,
  "log-file" => {
  "server.log" => undefined,
  "server.log.2015-09-30" => undefined,
  },
  "logger" => {
  "com.arjuna" => undefined,
  "org.apache.tomcat.util.modeler" => undefined,
  "org.jboss.as.config" => undefined,
  "sun.rmi" => undefined
  },
  "logging-profile" => undefined,
  "pattern-formatter" => {
  "PATTERN" => undefined,
  "COLOR-PATTERN" => undefined
  },
  "periodic-rotating-file-handler" => {"FILE" => undefined},
  "periodic-size-rotating-file-handler" => undefined,
  "root-logger" => {"ROOT" => undefined},
  "size-rotating-file-handler" => undefined,
  "syslog-handler" => undefined
  }
}

This is the output of the logging subsystem’s resources. However you might be interested only in a
subset of these resources; this is particularly true if you need to parse the output of the CLI from an
external tool/programming language. Let’s say you want to collect just the list of log files created by
the application server. You can then use the read-children-resources operation for this purpose:

/subsystem=logging:read-children-resources(child-type=log-file)
{
  "outcome" => "success",
  "result" => {
  "server.log" => {},
  "server.log.2015-09-30" => {},
  }
}
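If you only need the resource names rather than their attributes, the read-children-names operation returns a plain list (the output below assumes the same two log files as above):

```
/subsystem=logging:read-children-names(child-type=log-file)
{
  "outcome" => "success",
  "result" => [
  "server.log",
  "server.log.2015-09-30"
  ]
}
```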

4.3.6. Extra operations available on resources

Besides the operations that we have seen so far (which are available on every resource of your
subsystems), there can be special operations which can be performed exclusively on one resource.
For example, within the naming subsystem, you are able to issue a jndi-view operation, which will
display the list of JNDI bindings:

/subsystem=naming/:jndi-view
{
  "outcome" => "success",
  "result" => {"java: contexts" => {
  "java:" => {
  "TransactionManager" => {
  "class-name" =>
"com.arjuna.ats.jbossatx.jta.TransactionManagerDelegate",
  "value" =>
"com.arjuna.ats.jbossatx.jta.TransactionManagerDelegate@afd978"
  },
  . . . .
}

4.4. Enabling properties resolution in the CLI


It is possible to allow the CLI to resolve parameters which are passed at start-up, using the
--resolve-parameter-values parameter as demonstrated in the following example:

./jboss-cli.sh -c -Dmyval=foo --resolve-parameter-values

Now, whenever you are referencing a parameter value, you will be able to use the System
Property in its place. To verify it:

/system-property=foo:add(value=${myval})
{"outcome" => "success"}

/system-property=foo:read-attribute(name=value)
{
  "outcome" => "success",
  "result" => "foo"
}

4.5. Detecting active operations


It is possible to query for the list of running management commands and detect the status of each
operation. This is especially useful if you want to find out if any management command is blocked
by another operation or if it’s actively running.

All currently running operations can be viewed through the read-children-resources of the
management-operations:

/core-service=management/service=management-operations:read-children-resources(child-type=active-operation)
{
  "outcome" => "success",
  "result" => {"-1246693202" => {
  "access-mechanism" => "undefined",
  "address" => [
  ("deployment" => "example")
  ],
  "caller-thread" => "management-handler-thread - 24",
  "cancelled" => false,
  "exclusive-running-time" => 345318272121L,
  "execution-status" => "awaiting-stability",
  "operation" => "deploy",
  "running-time" => 345318272121L
  }}
}

The key attribute is "execution-status" which can have the following values:

• executing: The caller thread is actively executing

• awaiting-other-operation: The caller thread is blocked waiting for another operation to
release the exclusive execution lock

• awaiting-stability: The caller thread has made changes to the service container and is waiting
for the service container to stabilize

• completing: The operation is committed and is completing execution

• rolling-back: The operation is rolling back

You can use cancel-non-progressing-operation to cancel any operation which has been holding
the exclusive lock for over 15 seconds:

/core-service=management/service=management-operations:cancel-non-progressing-
operation

On the other hand, you can examine the active-operation, and then directly cancel it by invoking
the cancel operation on it:

/core-service=management/service=management-operations/active-operation=-
1155777943:cancel
{
  "outcome" => "success",
  "result" => undefined
}

4.6. Tracing CLI commands


Tracing commands which are sent across the native management interface might be required if
you want to keep an adequate level of security in your system. By default, CLI commands are not
audited; however it just takes a minute to enable them. Log into the console and point to the
management’s core service to reach the logger’s audit log as follows:

/core-service=management/access=audit/logger=audit-log:write-
attribute(name=enabled,value=true)

Logging is done in JSON format and by default is directed into the data/audit-log.log file of your
application server base directory.

You can specify a custom format for your CLI auditing commands or, as an alternative, direct your
auditing commands to your operating system logger. Consult the JBoss EAP documentation if you
need additional information about custom formats of auditing:
https://access.redhat.com/site/documentation/en-US/JBoss_Enterprise_Application_Platform/6.2/html/Administration_and_Configuration_Guide/About_a_Management_Interface_Audit_Logging_Formatter.html

4.6.1. In-memory configuration changes

Another alternative for tracing the management commands can be found in the core management
subsystem, which allows you to configure an in-memory history of the last configuration changes.
For example, to track the last 10 configuration changes, you need to activate configuration changes
with:

/subsystem=core-management/service=configuration-changes:add(max-history=10)

Now we can list the last configuration changes:

/subsystem=core-management/service=configuration-changes:list-changes()

4.7. Running the CLI in graphical mode


Up to now we have used the Command Line in terminal mode which, thanks to the auto completion
functionality, does not require a steep learning curve. There’s however an even more intuitive way
to run the Command Line and generate scripts, that is the graphical mode. The CLI graphical mode
can be activated by passing the --gui switch to the jboss-cli script, as shown:

$ ./jboss-cli.sh --gui

A java graphical application will display, containing on the left side the list of server resources and
on the upper side the command being built:

The Command Line Interface in graphical mode does not require using the connect command
since, by default, it connects to the server address and port specified in the file jboss-cli.xml. It will
however prompt for username/password if you are trying to connect to a remote host.

By using the CLI in graphical mode, you can just navigate through the application server model
and then right click on the node you want to operate on. Here’s for example how to get a dump of
the JNDI tree:

Once you have selected the command (when necessary, the GUI will prompt for additional
parameters to be set), the command will be included in the upper text box and can be executed by
clicking on the Submit button:

4.7.1. Adding resources in graphical mode

If you paid attention to the list of resources contained in the graphical CLI, you should have
discovered, for each resource, one element containing the value "=*" as shown by the following
picture:

These paths which are marked with an asterisk can be used to create new resources. So in the
above example, if you were to create a new Datasource, you could right click on the "data-source=*"
element and select "Add" from the list of options:

5. Chapter 5: Advanced CLI features
In this chapter we will be pedaling harder with the CLI, covering some more advanced features.
The following sections will teach you how to:

• Use the batch mode to execute multiple CLI commands

• Use batch deployments to execute multiple resource deployments

• Take snapshots of your configuration using the CLI

• Apply patches to your server installation

5.1. Using CLI batch mode


We have already learned how to execute multiple CLI commands by including them in a file. The
batch mode can be used either in interactive mode or in files, and allows the execution of multiple
CLI commands as an atomic unit. It is quite like a transaction: if any of the commands or
operations fails, the changes are rolled back. On the other hand, if the execution ends without any
error, changes are committed into the configuration.

You cannot include navigation commands as part of a batch; therefore commands

 like cd, pwd, or help are excluded, because they do not result in any change to
the server configuration.

In order to start batch mode you need to demarcate your session using the batch command. If you
are running the CLI in interactive mode, you will notice that the prompt is now marked by the
character "#".

Then, in order to terminate your batch session, you have to use the run-batch command. Here’s an
example session:

[standalone@localhost:9990/] batch

[standalone@localhost:9990/#] deploy MyApplication.jar

[standalone@localhost:9990/#] /system-property=myprop:add(value=myvalue)

[standalone@localhost:9990/#] run-batch

Another handy command is list-batch, which can be executed during a batch session to get the list
of pending batch commands:

[standalone@localhost:9990/] list-batch

#1 deploy MyApplication.jar

#2 /system-property=myprop:add(value=myvalue)

5.1.1. More about batch commands

If you are executing your batch scripts in interactive mode, you might need to edit or interrupt
your batch session and continue it later. For this purpose, when running in batch mode you are
allowed to use some extra commands such as holdback-batch, which creates a savepoint in your
batch execution:

[standalone@localhost:9990/ #] undeploy myproject.war

#1 undeploy myproject.war

[standalone@localhost:9990/ #] holdback-batch

In order to continue your batch of commands, you can issue the batch command again.

It is also possible to create multiple savepoints by adding a unique name to your holdback-batch
command as follows:

[standalone@localhost:9990/# ] holdback-batch step1

Later on, you can continue the execution by specifying the holdback name:

[standalone@localhost:9990/] batch step1

The list of available batch commands does not end here. For the sake of completeness we will list
them all here, in case you need some extra power for your batches:

• batch: Starts a batch of commands. When the batch is paused, reactivates the batch.

• list-batch: Lists the commands that have been added to the batch.

• run-batch: Executes the currently active batch of commands and exits batch mode.

• holdback-batch: Saves the currently active batch and exits batch mode, without executing
the batch. The held-back batch can later be re-activated by executing "batch".

• clear-batch: Removes all the existing command lines from the currently active batch. The CLI
stays in the batch mode after the command is executed.

• discard-batch: Discards the currently active batch. All the commands added to the batch will be
removed, the batch will be discarded and the CLI will exit the batch mode

• edit-batch-line: Replaces an existing line (with the specified line number) in the currently
active batch with a new one.

• remove-batch-line: Removes from the batch the line specified as the number argument.

• move-batch-line: Moves an existing line from the specified position to the new position,
shifting the lines between the specified positions.
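
As a sketch of how these editing commands fit together, here is a hypothetical interactive session (the deployment name and property values are just placeholders):

[standalone@localhost:9990/ #] list-batch
#1 deploy MyApplication.jar
#2 /system-property=myprop:add(value=myvalue)
[standalone@localhost:9990/ #] edit-batch-line 2 /system-property=myprop:add(value=newvalue)
[standalone@localhost:9990/ #] move-batch-line 2 1
[standalone@localhost:9990/ #] run-batch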

Finally, one more interesting option offered by batch mode is the ability to execute external CLI
scripts from within the CLI. Here is for example how to execute the script named myscript.cli by
passing it as argument to the run-batch command:

[standalone@localhost:9990/] run-batch --file=myscript.cli --verbose
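
For reference, such a script file is just a plain-text list of CLI commands, one per line. A minimal sketch of what the hypothetical myscript.cli might contain (lines starting with # are comments):

# myscript.cli - executed as a single atomic batch by run-batch
/system-property=app.env:add(value=production)
deploy /home/user1/myproject.war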

5.2. Using batch deployments


Batch deployments have been introduced specifically to make it easier to install/uninstall complex
applications; a batch deployment consists of a JAR file with a ".cli" extension containing a set of
applications to be deployed, plus one deploy and one undeploy configuration file. Things will look
easier with an example: suppose that we are going to manage three applications named app1.war,
app2.war and app3.war. Let’s bundle them all in a file named batchdeploy.cli using the jar tool:

$ jar cvf batchdeploy.cli app1.war app2.war app3.war

Now let’s create a file named deploy.scr which contains the deployment order:

deploy app1.war

deploy app2.war

deploy app3.war

As a final step, create a file named undeploy.scr, which contains the undeploy order as well:

undeploy app1.war

undeploy app2.war

undeploy app3.war

Now update your cli archive including the above two scripts:

$ jar uvf batchdeploy.cli deploy.scr undeploy.scr

As a result, you should expect the following CLI archive breakdown:

$ jar -tf batchdeploy.cli

app1.war

app2.war

app3.war

deploy.scr

undeploy.scr

Now you can deploy this archive by issuing:

[standalone@localhost:9990/] deploy batchdeploy.cli

#1 deploy app1.war

#2 deploy app2.war

#3 deploy app3.war

Undeploying is easy as well, and requires adding the --path flag; otherwise the deployer will look for
a deployed application using the CLI archive file name:

[standalone@localhost:9990/] undeploy --path=batchdeploy.cli

#1 undeploy app1.war

#2 undeploy app2.war

#3 undeploy app3.war
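
To recap, the whole batch-deployment lifecycle of this section boils down to the following sequence, using the file names introduced above:

$ jar cvf batchdeploy.cli app1.war app2.war app3.war
$ jar uvf batchdeploy.cli deploy.scr undeploy.scr
[standalone@localhost:9990/] deploy batchdeploy.cli
[standalone@localhost:9990/] undeploy --path=batchdeploy.cli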

5.3. Applying patches to your configuration


The patch command is a simple and effective management option to apply application server
patches. This mechanism has been used so far to deliver updates for versions 9 and 10 of the
application server. The recommended strategy to provision updates of the application server is by
means of the Galleon tool, which is discussed in the section Provisioning WildFly using Galleon.

We provide for your reference an example of how the patch command works, using as example
the 10.1.0 patch to be applied on an existing 10.0.0 installation of WildFly.

Start by downloading the patch used to upgrade from release 10.0.0 of the application server
to 10.1.0. This is available from the same location as the application server itself, that
is http://www.wildfly.org

The following picture shows the download link and description for the patch:

First, download the patch zip file labeled as "Updated Existing 10.X.X. Final Install". In order to
apply the 10.1 patch, just unzip the patch bundle in a folder of your choice:

$ unzip wildfly-10.1.0.Final-update.zip

Now we will show how to apply the patch in offline mode. Launch the CLI while WildFly is shut
down:

$ ./jboss-cli.sh

You are disconnected at the moment. Type 'connect' to connect to the server or 'help'
for the list of supported commands.

[disconnected /]

Now in "disconnected mode" execute the following command (adjust the path to the location where
you have unzipped the patch):

[disconnected /] patch apply /tmp/wildfly-10.1.0.Final.patch


{
  "outcome" : "success",
  "result" : {}
}

Please notice that the patch installation might find some conflicts, which prevent the installation. In
this case, first review the conflicts. If you find them not critical (it might be, for example, the
README.txt file in the deployments folder), just choose the --override-all option in order to solve any
conflicts:

[disconnected /] patch apply /tmp/wildfly-10.1.0.Final.patch --override-all

You can follow the same guidelines for installing the patch in online mode; the only difference is
that you will be warned that the server needs a restart:

{
  "outcome" : "success",
  "response-headers" : {
    "operation-requires-restart" : true,
    "process-state" : "restart-required"
  }
}
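
At any time you can inspect the patching state of your installation with the patch info and patch history commands, which are part of the same patch tool (the exact output depends on your installation and the patches applied):

[standalone@localhost:9990/] patch info
[standalone@localhost:9990/] patch history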

When the server is restarted, check from the server logs that the new 10.1 version has been
correctly installed:

10:45:41,940 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full
10.1.0.Final (WildFly Core 2.2.0.Final) started in 26217ms - Started 728 of 971
services (415 services are lazy, passive or on-demand)

Patch rollback can be executed in either online or offline mode, as specified above. The steps are
the same as applying a patch, except that the rollback command is used instead of the apply
command:

[standalone@localhost:9990 /] patch rollback --reset-configuration=true


{
  "outcome" : "success",
  "response-headers" : {
    "operation-requires-restart" : true,
    "process-state" : "restart-required"
  }
}

Next, issue a shutdown/restart command for the patch to take effect:

[standalone@localhost:9990/] shutdown --restart=true

5.4. Taking snapshots of your configuration


The configuration of the application server is pretty much like a database; actually, every change
that is applied to the configuration is persisted in the standalone_xml_history folder (the same
applies to the domain, whose folder is named domain_xml_history).

Within these folders you will normally find a set of files which are part of the application server
history:

• standalone.initial.xml: This file contains the original configuration that was used the first time
you successfully booted the application server. It is the only file which does not get overwritten.

• standalone.boot.xml: This file contains the configuration that was used for the last successful
boot of the server. This gets overwritten every time we boot the server successfully.

• standalone.last.xml: This file gets overwritten each time a change is committed to the
configuration. If you happen to corrupt your server configuration and you want to restore to
the latest save point, this is the file to pickup.

Besides the above three files, standalone_xml_history contains a directory called current which at
boot is empty. As you apply configuration changes, this folder will contain the latest 100
configuration versions in the format standalone.vX.xml (where X is the change version).

As you restart the application server, the current folder is emptied and its content is moved to a
folder timestamped using the format YYYYMMDD-HHMMSSMS. These timestamped folders are kept for 30 days.

The last element within the standalone_xml_history is the snapshot folder, where you can find the
server configuration snapshots that you created using the Command Line Interface.

In order to take a snapshot of the configuration, just issue the take-snapshot command and the CLI
will back up your configuration:

:take-snapshot
{
  "outcome" => "success",
  "result" => "/opt/wildfly-
20.0.0.Final/standalone/configuration/standalone_xml_history/snapshot/20131108-
171642235standalone.xml"
}

You can check the list of available snapshots by using the list-snapshots command:

:list-snapshots
{
  "outcome" => "success",
  "result" => {
  "directory" => "/opt/wildfly-
20.0.0.Final/standalone/configuration/standalone_xml_history/snapshot",
  "names" => [
  "20131108-171642235standalone.xml",
  "20131108-171803638standalone.xml"
  ]
  }
}

You can delete a particular snapshot using the delete-snapshot command, which requires the
snapshot name as a parameter. Suppose we need to delete the snapshot we’ve just created:

:delete-snapshot(name="20131108-171642235standalone.xml")
{"outcome" => "success"}
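
There is no dedicated restore command for snapshots: to roll back to a snapshot, you boot the server using it as the configuration file. The --server-config path is resolved relative to the configuration directory, so assuming the snapshot listed above:

$ ./standalone.sh --server-config=standalone_xml_history/snapshot/20131108-171803638standalone.xml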

5.5. Running the CLI in offline mode
WildFly has a new management mode for the CLI which allows changing your configuration
without actually being connected to a WildFly server. This can be done using the embed-server
command. Here is the synopsis of the command:

embed-server [--admin-only=true|false]
  [-c=config_file || --server-config=config_file]
  [--empty-config --remove-existing-config]
  [--jboss-home=rootdir]
  [--std-out=discard|echo]

Let’s say you want to vary your default configuration. Start the CLI as usual:

$ ./jboss-cli.sh
You are disconnected at the moment. Type 'connect' to connect to the server or 'help'
for the list of supported commands.

Next, execute the following command in disconnected mode:

[disconnected /] embed-server --std-out=echo

As you can see from the log, an embedded WildFly server will start, without binding any network
interface, so you will not conflict with any running server.

19:44:09,759 INFO [org.jboss.as] (MSC service thread 1-2) WFLYSRV0049: WildFly Full
20.0.0.Final (WildFly Core 12.0.1.Final) starting
. . . . .
19:44:12,020 WARN [org.jboss.as.domain.management.security] (MSC service thread 1-2)
WFLYDM0111: Keystore /home/francesco/jboss/wildfly-
20.0.0.Final/standalone/configuration/application.keystore not found, it will be auto
generated on first use with a self signed certificate for host localhost
19:44:12,108 INFO [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0212:
Resuming server
19:44:12,112 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full
20.0.0.Final (WildFly Core 12.0.1.Final) started in 2665ms - Started 67 of 81 services
(25 services are lazy, passive or on-demand)
[standalone@embedded /]

You can check the status of the server by executing a simple "ls" on the application server root:

[standalone@embedded /] ls

core-service
deployment
deployment-overlay
extension
interface
path
socket-binding-group
subsystem
system-property
launch-type=EMBEDDED
name=fedora
namespaces=[]
organization=undefined
process-type=Server
product-name=WildFly Full
product-version=20.0.0.Final
profile-name=undefined
release-codename=
release-version=12.0.1.Final
running-mode=ADMIN_ONLY
runtime-configuration-state=ok
schema-locations=[]
server-state=running
suspend-state=RUNNING
uuid=8dfd2264-b3c8-499b-8234-03a270dc87f4

Now try updating your configuration, while running in embedded mode:

[standalone@embedded /] /socket-binding-group=standard-sockets/socket-
binding=http/:write-attribute(name=port,value=8280)

If you check your server configuration XML file, you will see that it has been updated accordingly.

Please note that the offline mode enables you to start with an empty configuration file, which will
therefore be created from scratch:

[disconnected /] embed-server --server-config=empty-config.xml --empty-config

Finally, in order to stop the embedded server you can simply type:

[standalone@embedded /] stop-embedded-server

10:50:46,480 INFO [org.jboss.as] (MSC service thread 1-5) WFLYSRV0050: 20.0.0.Final
(WildFly Core 12.0.1.Final) stopped in 36ms

[disconnected /]
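
Putting the pieces together, a complete offline configuration change can be scripted in a single CLI file and executed non-interactively. A sketch, assuming a file named offline-config.cli (run it with ./jboss-cli.sh --file=offline-config.cli):

# offline-config.cli - edit standalone.xml without a running server
embed-server --server-config=standalone.xml --std-out=discard
/socket-binding-group=standard-sockets/socket-binding=http:write-attribute(name=port,value=8280)
stop-embedded-server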

5.6. Suspending and resuming the Server


WildFly is now capable of suspending/resuming the execution of application requests without shutting
down the server. This feature allows your running requests to complete while preventing new
requests from being accepted. In order to suspend the server, enter the Command Line Interface and
execute:

 :suspend

Check from the logs that the server status has changed to suspended:

10:54:43,595 INFO [org.jboss.as.server] (management-handler-thread - 1) WFLYSRV0236:
Suspending server with no timeout.
10:54:43,604 INFO [org.jboss.as.ejb3] (management-handler-thread - 1) WFLYEJB0493:
EJB subsystem suspension complete

Please note that you can also start the server directly in suspend mode, by passing the value
"suspend" to the --start-mode option:

$ standalone.sh --start-mode=suspend

When you are in suspend mode you are still able to:

• Deploy/undeploy applications

• Change your configuration

You can check at any time the status of your server by issuing a read-attribute command at the
root of your configuration:

[standalone@localhost:9990 /] :read-attribute(name=suspend-state)
{
  "outcome" => "success",
  "result" => "SUSPENDED"
}

It is possible to specify a timeout for the suspend mode, to allow running operations to complete. For
example, in order to specify a 30-second timeout:

:suspend(timeout=30)

The same operation executed against a managed domain:

:suspend-servers(timeout=30)

When running in Domain mode, it’s possible to suspend just a single Server as follows:

/host=master/server-config=server-one:suspend(timeout=30)

Why is the timeout important? While HTTP requests are always allowed to
complete, whatever the timeout, in-flight transactions (such as EJB) will run
 only until the timeout period expires. So potentially they could fail if they don’t
complete by that time.

If you want all in-flight transactions to complete whatever the timeout, you have to force it with
this parameter:

/subsystem=ejb3:write-attribute(name=enable-graceful-txn-shutdown,value=true)

Once the running requests have completed, you can either shut down the server or resume execution
with the resume command:

:resume

5.7. Graceful shutdown of the Server


In order to enable a graceful shutdown of the server you need to specify an appropriate timeout
value when stopping the server. If this parameter is specified, the server will be suspended and will
wait up to the specified timeout for all requests to finish before shutting down.

Example: graceful shutdown waiting up to 60 seconds:

:shutdown(timeout=60)

Example: graceful shutdown domain-wide:

:stop-servers(timeout=60)

Example: graceful shutdown just for server-one of the domain:

/host=master/server-config=server-one:stop(timeout=60)

5.8. Conditional execution with the CLI


Although the Command Line Interface wasn’t specifically designed to control the execution flow of
management commands, you can still use some conditional constructs to execute commands
depending on whether a condition is valid or not.

For example, here is how you can execute a conditional deployment, e.g. to check if the application
myproject.war was not already deployed:

if (outcome != success) of /deployment=myproject.war:read-resource
  deploy myproject.war
end-if

Another example, slightly more complex, will check if the org.postgres module has already been
installed. The conditional execution will either install it (if not installed already) or just print a
message saying that it’s already installed:

if (outcome != success) of /core-service=module-loading:list-resource-loader-paths(module=org.postgres)
  module add --name=org.postgres --resources=postgresql-42.2.5.jar
--dependencies=javax.api,javax.transaction.api
else
  echo "module org.postgres already installed"
end-if
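
The same pattern can guard any configuration change. For example, a hypothetical script that adds a system property only if it does not exist yet:

if (outcome != success) of /system-property=myprop:read-resource
  /system-property=myprop:add(value=myvalue)
end-if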

5.9. Migration of legacy systems with the CLI


In order to help users migrating from legacy subsystems such as messaging (AS 7/EAP 6/WildFly 8),
jbossweb (AS 7.1/EAP 6) or jacorb (AS 7/EAP 6/WildFly 8), a set of management operations has
been included in the CLI. These operations are only available in the legacy subsystems, which need
to be included in the WildFly configuration if you want to perform the migration. Here are the steps
required to migrate your legacy configurations (as an example we will migrate the "web"
configuration from AS 7 to the "undertow" configuration in WildFly):

• Include the legacy subsystem and its configuration in your WildFly configuration:

<server xmlns="urn:jboss:domain:6.0">
  <extensions>
. . . .
  <extension module="org.jboss.as.web"/>
. . . . .
  </extensions>
  <subsystem xmlns="urn:jboss:domain:web:2.2" default-virtual-server="default-host"
native="false"> ①
  <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http
"/>
  <virtual-server name="default-host" enable-welcome-root="true">
  <alias name="localhost"/>
  <alias name="example.com"/>
  </virtual-server>
  </subsystem>
. . . . .

① The legacy "web" subsystem from the older configuration

• Comment out the new subsystem configuration (undertow) in your WildFly configuration:

<!--
<subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server"
default-virtual-host="default-host" default-servlet-container="default" default-
security-domain="other" statistics-enabled="${wildfly.undertow.statistics-
enabled:${wildfly.statistics-enabled:false}}">
  <buffer-cache name="default"/>
  <server name="default-server">
  <http-listener name="default" socket-binding="http" redirect-socket="https"
enable-http2="true"/>
  <https-listener name="https" socket-binding="https" security-
realm="ApplicationRealm" enable-http2="true"/>
  <host name="default-host" alias="localhost">
  <location name="/" handler="welcome-content"/>
  <http-invoker security-realm="ApplicationRealm"/>
  </host>
  </server>
  <servlet-container name="default">
  <jsp-config/>
  <websockets/>
  </servlet-container>
  <handlers>
  <file name="welcome-content" path="${jboss.home.dir}/welcome-content"/>
  </handlers>
</subsystem>
-->

• Then, start WildFly in admin-only mode:

$ ./standalone.sh --admin-only

• Connect from the CLI:

$ ./jboss-cli.sh -c

• Finally, execute the migrate operation in the legacy subsystem. In our case the legacy
subsystem is "web" which will migrate into "undertow":

[standalone@localhost:9990 /] /subsystem=web:migrate
{
  "outcome" => "success",
  "result" => {"migration-warnings" => []}
}
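
If you only want to preview the changes, migratable subsystems also expose a describe-migration operation, which returns the list of management operations that migrate would perform, without actually executing them:

[standalone@localhost:9990 /] /subsystem=web:describe-migration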

Now check from the server configuration that the undertow configuration has been created, using
the settings you had in your legacy web subsystem.

6. Chapter 6: Deploying applications
This chapter discusses deploying applications on the WildFly application server. As we will see in
a minute, deploying applications with the new release of the application server is still an immediate
task, which can be accomplished using a number of different instruments, such as:

• File system copy of files (standalone mode only)

• Using the management interfaces (Admin Console or CLI)

• Using Maven to deploy WildFly applications

6.1. File system deployment


File system deployment is the old-school approach to deploying applications, well known to the majority of
developers. This kind of deployment is available in standalone mode only; therefore, if you are
about to deploy applications on a WildFly domain, you have to use the standard management
instruments (CLI or Admin Console).

File system deployment just requires that you copy an archived application into the deployments
folder, and it will be automatically deployed. Example:

$ cp example.war /opt/wildfly-20.0.0.Final/standalone/deployments

You should then expect to find in your server’s log some evidence of your deployment along with
the dependencies activated by your deployment. In our case, since we deployed a web application,
we will find the following info on the server’s Console:

12:35:13,724 INFO [org.wildfly.extension.undertow] (MSC service thread 1-7)
JBAS018210: Register web context: /example
12:35:13,872 INFO [org.jboss.as.server] (Controller Boot Thread) JBAS018559: Deployed
"example.war" (runtime-name : "example.war")

What just happened is that a process named the Deployment scanner picked up your application
and prepared it for deployment. The scanner can operate in one of two different modes:

6.1.1. Mode 1: Auto-deploy mode:

When running in auto-deploy mode, the scanner will directly monitor the deployment content,
automatically deploying new content and redeploying content whose timestamp has changed. This
is similar to the behavior of previous JBoss AS releases, except that the deployment scanner will not
monitor any more changes in deployment descriptors, since Java EE 6/7 applications do not require
deployment descriptors.

6.1.2. Mode 2: Manual deploy mode:

When running the manual deploy mode, the scanner will not attempt to deploy the application.

Instead, the scanner relies on a system of marker files, with the user’s addition or removal of a
marker file serving as a sort of command telling the scanner to deploy, undeploy or redeploy
content.

The default rule is that archived applications use the auto-deploy mode while
 exploded archives require manual deploy mode.

In order to use manual deploy mode, you have to add a marker file named
application.dodeploy to the deployments folder. For example, supposing you want to deploy the
Example.ear folder to the deployments folder, using a Linux machine:

$ cp -r Example.ear $JBOSS_HOME/standalone/deployments

$ touch $JBOSS_HOME/standalone/deployments/Example.ear.dodeploy

In case a deployment fails, the deployment scanner places a marker file
application.failed (e.g. Example.ear.failed) in the deployments directory to indicate
that the given content failed to deploy into the runtime. The content of the file
 will include some information about the cause of the failure. Note that with auto-
deploy mode, removing this file will make the deployment eligible for
deployment again.

6.1.3. Configuring the Deployment scanner attributes

The deployment scanner attributes are part of the deployment-scanner subsystem. Out of the box
there is a deployment scanner named "default" which contains the deployment settings that we
have described at the beginning of this chapter.

/subsystem=deployment-scanner/scanner=default:read-resource
{
  "outcome" => "success",
  "result" => {
  "auto-deploy-exploded" => false,
  "auto-deploy-xml" => true,
  "auto-deploy-zipped" => true,
  "deployment-timeout" => 600,
  "path" => "deployments",
  "relative-to" => "jboss.server.base.dir",
  "runtime-failure-causes-rollback" => expression "${jboss.deployment.scan
ner.rollback.on.failure:false}",
  "scan-enabled" => true,
  "scan-interval" => 5000
  }
}

Here is a short description of the scanner attributes:

• scan-enabled: when set to true (default) the deployment scanner will be enabled.

• path: this is the folder inspected by the deployment scanner. If relative-to is configured,
this is the path relative to that variable; otherwise it is intended as an absolute path.

• relative-to: if configured, this is the file system path to which the path attribute will be
appended (default jboss.server.base.dir).

• scan-interval: this is the amount of time, in milliseconds, between each directory scan.

• auto-deploy-exploded: when set to true, automatically deploys exploded archives (default
false).

• auto-deploy-zipped: when set to true, automatically deploys zipped archives (default true).

• auto-deploy-xml: when set to true, resources contained in XML files (e.g. datasources, resource
adapters) are automatically deployed.

• deployment-timeout: sets the time limit to complete an application deployment (default 600
seconds).

• runtime-failure-causes-rollback: this flag indicates whether a runtime failure of a
deployment causes a rollback of the deployment, as well as all other (even unrelated)
deployments, as part of the scan operation.

By setting the attributes of the default deployment scanner, you can customize its behavior. For
example, if you want to allow automatic deployment of exploded archives then you can issue from
the CLI the following command:

/subsystem=deployment-scanner/scanner=default:write-attribute(name=auto-deploy-
exploded,value=true)
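
You are not limited to the default scanner: additional scanners can be registered against other folders. A sketch, where the scanner name and path are arbitrary examples:

/subsystem=deployment-scanner/scanner=myapps:add(path=/opt/applications, scan-interval=2000)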

6.2. Deploying using the Web interface


Deploying an application using the management instruments (such as the Web console and the CLI)
is the recommended choice for production environments. This is also the only choice available if
you are running the AS in domain mode or if you don’t have remote access to the deployments
folder of a standalone distribution.

The appearance of the Web Console deployer might be slightly different
 depending on the version of WildFly you are using. The screenshots in this chapter
reflect the WildFly 14 Management Console.

Let’s see at first how to perform a standalone deployment:

6.2.1. Standalone Deployment

Deployments can be managed by selecting the Deployments upper tab of the Administration
console. By clicking on the (+) button, a list of options will display:

You can choose to Upload Deployment to upload a compressed archive. The option Add
Unmanaged Deployment lets you upload a deployment from a folder of your file system. Finally
the option Create Empty Deployment allows you to add an empty artifact to your deployments, in
case you want to add content later.

Let’s choose the option Upload Deployment. In the next screen, you will be able to Drag and Drop
your deployment in the window (if your browser supports Drag&Drop). Otherwise you will need to
point to the filesystem location of your deployment:

 Deployments added by drag and drop will be enabled by default.

Click on Next to continue. In the following screen confirm the Name and Runtime Name of your
deployment and whether the application will be Enabled, then click on Finish:

If your deployment completed successfully, it will be added in your Content Repository. Now, by
clicking again on the (+) button, a set of options will be available to control your deployment as you
can see from the following picture:

• Enable allows you to enable your application if it was disabled.

• Disable swaps the status to disabled if it was enabled.

• Replace replaces the deployment with a new one.

• Explode transforms the compressed deployment into an exploded deployment.

• Remove removes the application from the repository.

6.2.2. Domain Deployment

Domain deployment requires a few more steps: as a matter of fact, every deployment in Domain
mode transitions through a Content Repository, and is then assigned to a Server Group.

Start by selecting the Deployments upper tab as shown by the following picture:

Now click on the (+) button and select Upload Content from the Combo. From there you can either
Drag&Drop or select the archive from your file system:

Confirm the Name and Runtime Name of your deployment and click on Finish:

Now the application will be enlisted in the Content Repository. In order to assign your application
to a Server Group, select it and from the Deployment Tab choose to "Deploy existing content", as
depicted by this screen:

As you can see from the above picture, it is also possible to directly upload the
 application file and assign it to a Server Group by choosing Upload new
deployment. This will speed up the deployment process.

In the next screen, you will choose which available application has to be deployed on the selected
Server Group:

Click Deploy to complete deployment. Now the application has been deployed to the selected Server
Group:

6.2.2.1. Managing your application status

The status of your application can be managed at two different levels. At Repository level, you can
choose to redeploy, replace, download or undeploy your application from the Repository. This will
have effect on the Server Group where the application has been deployed:

On the other hand, if you want to Disable or Remove the application from a Server Group, then
you have to reach the Deployment and click on its Combo menu placed next to it:

By clicking on the Remove option, the application will be evicted from the
 Content Repository and will need to be uploaded again if you need it later.

6.3. Deploying the application using the CLI


Last but not least, we will mention another valuable option for deploying your applications:
the Command Line Interface. Although you might think that using a terminal to deploy an
application is more tedious, I can promise you that by the end of this chapter it will take less time
than a 100 meters Olympic final!

Let’s provide some proof of concept. The first thing we will learn is how to use the deploy
command to deploy an application to a standalone server:

deploy /home/user1/myproject.war

Quite simple, isn’t it? The great thing is that the file system paths are expandable (using the Tab key),
therefore you can deploy an application just as if you were using your friendly bash shell!

The corresponding command to undeploy the application is obviously undeploy:

undeploy myproject.war

Again, you don’t even need to remember the name of the application you are going to undeploy. Just hit
Tab after typing "undeploy" as shown here:

undeploy
--headers= --help --path= myproject.war

Re-deploying an application requires an additional flag (-f) in order to force application
redeployment:

deploy -f myproject.war

Finally, it’s worth mentioning that you can deploy an application also from a remote URL. Here is
how to deploy the application named helloworld.war from the GitHub repository of this book:

deploy --url=https://github.com/fmarchioni/wildfly-admin
-guide/blob/master/chapter6/helloworld.war?raw=true --name=helloworld.war
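
When you are connected to a domain controller, the deploy command additionally needs to know the target Server Groups, since in domain mode applications are assigned to groups rather than to single servers. For example (main-server-group is the server group name used in the default domain.xml):

[domain@localhost:9990 /] deploy /home/user1/myproject.war --server-groups=main-server-group
[domain@localhost:9990 /] deploy /home/user1/myproject.war --all-server-groups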

6.3.1. Manipulating exploded deployments

In the examples provided so far, we have deployed packaged archives. It is, however, also possible to
deploy exploded archives by using the --unmanaged flag. Here is, for example, how to deploy
an application contained in the folder myexplodedapp.war:

deploy /home/user1/myexplodedapp.war --unmanaged

It is also possible to transform a packaged deployment into an exploded one, by using the explode()
command on the active deployment:

/deployment=kitchensink.ear:explode()

Please note that this operation is not recursive, so you need to explode each sub-deployment as well
if you want to be able to manipulate its content.
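As a sketch (the sub-deployment name below is just an example), a sub-deployment can be exploded in a second step by passing its relative path to the same operation:

/deployment=kitchensink.ear:explode(path=kitchensink-web.war)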

Once you are working on an exploded deployment, you can also add content to it dynamically by
using the add-content operation like in the following example:

/deployment=kitchensink.war:add-content(content=[{target-path=WEB-INF/classes/com/sample/MyServlet.class, input-stream-index=/tmp/com/sample/MyServlet.class}])
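The content of an exploded deployment can also be inspected and pruned from the CLI. The following operations, shown against the same deployment, are a sketch of this workflow:

/deployment=kitchensink.war:browse-content(path=WEB-INF/)

/deployment=kitchensink.war:remove-content(paths=[WEB-INF/classes/com/sample/MyServlet.class])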

6.3.2. Listing the module dependencies of a deployed application

Since WildFly 16 it is possible to list the module dependencies added by WildFly to your deployed
application. You can list the module dependencies of a deployment through the CLI using the
list-modules operation as follows:

/deployment=demo.war:list-modules

In case of ear-subdeployments, the list-modules operation is also available under the
subdeployment resource:

/deployment=demo.ear/subdeployment=demo.war:list-modules

By default, the list-modules operation displays the list of dependencies in a compact style, showing
only the module name. You can change this setting using the attribute verbose=[false*|true] to
enable/disable a detailed response as follows:

[standalone@localhost:9990 /] /deployment=demo.ear:list-modules(verbose=true)
{
    "outcome" => "success",
    "result" => {
        "system-dependencies" => [
            {
                "name" => "com.fasterxml.jackson.datatype.jackson-datatype-jdk8",
                "optional" => true,
                "export" => false,
                "import-services" => true
            }
            ...
        ],
        "local-dependencies" => [
            {
                "name" => "deployment.demo.ear.test-application-ejb.jar",
                "optional" => false,
                "export" => false,
                "import-services" => true
            },
            ...
        ],
        "user-dependencies" => [
            {
                "name" => "com.fasterxml.jackson.datatype.jackson-datatype-jdk8",
                "optional" => false,
                "export" => false,
                "import-services" => false
            },
            {
                "name" => "org.hibernate:4.1",
                "optional" => false,
                "export" => false,
                "import-services" => false
            },
            ...
        ]
    }
}

The list-modules operation shows information in three different categories:

• system-dependencies: These are the dependencies added implicitly by the server container.

• local-dependencies: These are dependencies on other parts of the deployment.

• user-dependencies: These are the dependencies defined by the user via a manifest file or
deployment-structure.xml.

6.3.3. CLI Domain deployment

As we already learned, when running in domain mode deployments are bound to one or more
server groups. In order to deploy an application to all server groups in a domain you have to issue
the following command:

deploy application.war --all-server-groups

On the other hand, if you want to deploy your application to one or more (comma-separated) server
groups, use the --server-groups flag instead:

deploy application.war --server-groups=main-server-group

Un-deploying an application can be done as well on all server groups as follows:

undeploy application.war --all-relevant-server-groups

Undeploying an application from a single server group (or a set of them) is a bit
more troublesome. If the application is not deployed on any other server group, you can simply
issue:

undeploy application.war --server-groups=main-server-group

However, if the application to be undeployed is also deployed on other server groups, you need
to tell the deployer to perform a safe undeploy (i.e. without deleting the content) by issuing the
following command:

undeploy application.war --server-groups=main-server-group --keep-content
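Since --keep-content leaves the application in the content repository, you can later attach that content to another server group by referencing it by name; the group name below is just an example:

deploy --name=application.war --server-groups=other-server-group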

6.4. Deploying applications using Maven


Today, Maven is one of the most popular tools for assisting developers in structuring a project,
compiling it, packaging it and deploying it as an application. Maven is essentially based on a set of
plugins which extend its capabilities, and a plugin does exist to manage and deploy
applications on WildFly.

The complete documentation for the WildFly Maven plugin is available at the following address:
https://docs.jboss.org/wildfly/plugins/maven/latest/

In order to use the plugin, you have to include the following wildfly-maven-plugin into your
Maven pom.xml:

<plugin>
  <groupId>org.wildfly.plugins</groupId>
  <artifactId>wildfly-maven-plugin</artifactId>
  <version>2.0.2.Final</version>
</plugin>

With the WildFly plugin configured, you should first check that the application server is up and
running; then you can issue the command to deploy your Maven project:

mvn wildfly:deploy

Once you are done with your application, you can undeploy it using the corresponding goal:

mvn wildfly:undeploy

In order to redeploy your application, issue the following command:

mvn wildfly:redeploy
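By default the plugin targets a WildFly server on localhost and management port 9990. If your server runs elsewhere or requires authentication, you can point the plugin at it through its configuration; the host and credentials below are placeholders:

<plugin>
  <groupId>org.wildfly.plugins</groupId>
  <artifactId>wildfly-maven-plugin</artifactId>
  <version>2.0.2.Final</version>
  <configuration>
    <hostname>localhost</hostname>
    <port>9990</port>
    <username>admin</username>
    <password>secret</password>
  </configuration>
</plugin>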

6.4.1. Domain Deployment

The above settings can be used for deploying your applications to standalone servers. If you are
planning to deploy your application to a domain of servers, then you can specify the domain
settings through the configuration stanza of your plugin. In this case, we are deploying our
application to the main-server-group:

<plugin>
  <configuration>
  <domain>
  <server-groups>
  <server-group>main-server-group</server-group>
  </server-groups>
  </domain>
  </configuration>
</plugin>

7. Chapter 7: Configuring Database
connectivity
In this chapter, we will learn how to configure connections to Databases using WildFly. Database
connectivity is part of every application; therefore, we have included it at the top of the list.
Database connectivity is achieved, by default, through the datasources subsystem, which builds on
the JCA layer and thus enables you to reach a broader range of Enterprise Information Systems (EIS).

Since WildFly 14 it is however possible to use a lighter connection pool implementation called
Agroal which lets you reach your relational database without the JCA abstraction layer. In the last
section of this chapter we will learn how to configure an Agroal Connection pool. In essence, these
are the topics we are going to cover in this chapter:

• Setting up a datasource using the management tools (CLI, Web console)

• Setting up a datasource as a deployable resource

• Configuring and securing a datasource

• Configuring an Agroal Datasource

The following prerequisites are needed to execute the examples contained in this chapter:

1) A relational database to use as target. In this chapter we will connect to PostgreSQL
v.11.

For a quick start, you can simply start a PostgreSQL server using Docker as follows:

$ sudo docker run -e POSTGRES_PASSWORD=postgres -e POSTGRES_DB=postgres -e POSTGRES_USER=postgres -d -p 5432:5432 postgres:11

If you check the docker process for postgres (with the format parameter to display just the image
name), you will see that it is now available:

$ docker ps --format '{{.Image}}'
postgres:11

Therefore in the following configuration block, you can simply reach the Database at localhost on
port 5432.

2) The JDBC driver of the database we are connecting to. As we will be using PostgreSQL,
you will need its JDBC Driver, which can be downloaded from: https://jdbc.postgresql.org/ or from
the Maven repository with:

$ wget https://repo1.maven.org/maven2/org/postgresql/postgresql/42.2.5/postgresql-42.2.5.jar

7.1. Creating a Datasource using the CLI
Installing the datasource through the Command Line Interface is a quick way to create the
module structure containing the JDBC Driver. It is the recommended option if you plan to create a
CLI script, so that you can replicate it across your installations. Launch the jboss-cli.sh script and
connect as usual.

The following command will install the org.postgres module, creating the module directory
structure for you:

module add --name=org.postgres --resources=postgresql-42.2.5.jar --dependencies=javax.api,javax.transaction.api

Next, we need to install the JDBC driver using the above defined module:

/subsystem=datasources/jdbc-driver=postgres:add(driver-name="postgres",driver-module-
name="org.postgres",driver-class-name=org.postgresql.Driver)

Finally, install the data source by using the data-source shortcut command, which requires as
input the Pool name, the JNDI bindings, the JDBC Connection parameters and finally the security
settings:

data-source add --jndi-name=java:/PostGreDS --name=PostgrePool --connection-url=jdbc:postgresql://localhost:5432/postgres --driver-name=postgres --user-name=postgres --password=postgres

You can find the full script to create a datasource at: https://bit.ly/3cnOFEj

The outcome of the CLI session is the following structure in the $JBOSS_HOME/modules folder:

/home/jboss/wildfly-20.0.0.Final/modules
└───org
  └───postgres
  └───main
  module.xml
  postgresql-42.2.5.jar

The module.xml file has been created with all the required resources and dependencies needed by
the JDBC Driver:

<?xml version="1.0" encoding="UTF-8"?>
<module xmlns="urn:jboss:module:1.0" name="org.postgres">
  <resources>
  <resource-root path="postgresql-42.2.5.jar"/>
  </resources>
  <dependencies>
  <module name="javax.api"/>
  <module name="javax.transaction.api"/>
  </dependencies>
</module>

You can check that the datasource has been installed correctly by issuing the following command:

/subsystem=datasources/data-source=PostgrePool:test-connection-in-pool

7.1.1. Creating a Datasource in Domain mode

When using Domain mode, the datasource needs to be assigned to a server Profile; hence the CLI
commands have to be adapted. The module installation is no different from standalone mode; just
be aware that it has to be executed on every Host Controller of your domain:

module add --name=org.postgres --resources=postgresql-42.2.5.jar --dependencies=javax.api,javax.transaction.api

Next, we need to install the JDBC driver on a server Profile:

/profile=full-ha/subsystem=datasources/jdbc-driver=postgres:add(driver-
name="postgres",driver-module-name="org.postgres",driver-class-
name=org.postgresql.Driver)

Finally, install the data source by using the data-source shortcut command, which also requires the
additional --profile option:

data-source add --jndi-name=java:/PostGreDS --name=PostgrePool --connection-url=jdbc:postgresql://localhost:5432/postgres --driver-name=postgres --user-name=postgres --password=postgres --profile=full-ha

7.1.2. Creating an XA Datasource

If you are going to use an XA Datasource in your applications, there are some changes that you need
to apply to your CLI scripts. Start as usual by creating the module first:

module add --name=org.postgres --resources=postgresql-42.2.5.jar
--dependencies=javax.api,javax.transaction.api

Next, install the JDBC driver using the above module:

/subsystem=datasources/jdbc-driver=postgres:add(driver-name="postgres",driver-module-
name="org.postgres",driver-class-name=org.postgresql.Driver)

The twist now is to use the xa-data-source shortcut command to create the XA Datasource.
This command requires that you specify the Datasource name, its JNDI binding, the XA Datasource
class, the security settings and, finally, at least one XA datasource property (in our case we have
specified the server host name):

xa-data-source add --name=PostGresXA --jndi-name=java:/PostGresXA --driver-name=postgres --xa-datasource-class=org.postgresql.xa.PGXADataSource --user-name=postgres --password=postgres --xa-datasource-properties=[{ServerName=localhost}]

Next, you can add additional properties needed for your database connections, such as the
database name:

/subsystem=datasources/xa-data-source=PostGresXA/xa-datasource-
properties=DatabaseName:add(value="postgres")
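As with a non-XA pool, you can verify the configuration from the CLI before using it in your applications:

/subsystem=datasources/xa-data-source=PostGresXA:test-connection-in-pool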

You can find the full script to create a XA datasource at: https://bit.ly/3coiGny

7.1.2.1. Enabling XA transactions on the DB

Please note that some databases (notably Oracle) require that you issue some GRANT statements on
specific system tables in order to allow XA recovery of transactions. Here is the list of
statements you need to issue on the Oracle database:

GRANT SELECT ON sys.dba_pending_transactions TO <USER>;
GRANT SELECT ON sys.pending_trans$ TO <USER>;
GRANT SELECT ON sys.dba_2pc_pending TO <USER>;
GRANT EXECUTE ON sys.dbms_xa TO <USER>;
GRANT EXECUTE ON sys.dbms_system TO <USER>;

7.2. Configuring a Datasource using the Admin Console


Configuring the Datasource from the Admin Console is the quickest option, although it cannot be
automated as a CLI script can. As usual, you need to download the JDBC Driver first. Then open the
Admin console and choose to deploy the JDBC Driver as you would do with an ordinary application:

Check that the JDBC Driver has been correctly deployed:

Next, select the upper Configuration tab and enter the DataSources subsystem by clicking on
the View button:

You will find the built-in ExampleDS datasource listed there. Let’s add a PostgreSQL datasource by
clicking on the (+) button and choosing "Add Datasource".

The datasource wizard will start. It is a guided procedure which will provide some defaults for your
configuration. The first option will be the database type to be used:

Next, in a three-step sequence, you will at first need to specify the Datasource name and the JNDI
binding where it can be looked up:

Click on Next, and move to the JDBC Driver UI, which follows here:

In this screen, specify the module name in case you are loading the Driver from a module installed
on WildFly. Otherwise, leave the module name blank and select from the combo box the Driver name
and Driver class name, which should be detected from your deployed jar file.

Click on the Next button. The last screen is about Connection Settings:

Enter the Connection URL, the Username and Password.

The Security Domain is an alternative to entering the Username and Password (see
 Chapter 16: WildFly’s legacy security model).

In the final screen you can test the database connection, which is a useful check before
committing the Datasource to the configuration. When you are done, you should see the Datasource
listed as follows:

7.3. Deploying a Datasource as a resource
An alternative approach to installing the Datasource consists in copying the JDBC driver into the
deployments folder (which effectively promotes it to a module as well) and then referencing the jar
file as the Driver. Let’s see each step in detail:

• Copy the JDBC Driver into the deployments folder

$ cp postgresql-42.2.5.jar /opt/wildfly-20.0.0.Final/standalone/deployments

Once you have copied the file, you should see on your server logs that the driver has been
deployed successfully:

14:57:29,684 INFO [org.jboss.as.server] (DeploymentScanner-threads - 1) WFLYSRV0010: Deployed "postgresql-42.2.5.jar" (runtime-name : "postgresql-42.2.5.jar")

• Deploy the Datasource file:

In order to install the Datasource, you will need to reference the Driver jar name, as in the
following example:

data-source add --name=PostgrePoolDeploy --jndi-name=java:/PostgreDSDeploy --driver-name=postgresql-42.2.5.jar --connection-url=jdbc:postgresql://localhost/postgres --user-name=postgres --password=postgres

As an alternative, you can simply drop a -ds.xml file into the deployments folder with the JDBC
connection settings. For example, this is a postgres-ds.xml, which is suitable for the PostgreSQL
database:

<datasources xmlns="http://www.jboss.org/ironjacamar/schema">
  <datasource jndi-name="java:/PostgreDSDeploy" pool-name="PostgrePoolDeploy">
  <connection-url>jdbc:postgresql://localhost:5432/postgres</connection-url>
  <driver>postgresql-42.2.5.jar</driver>
  <security>
  <user-name>postgres</user-name>
  <password>postgres</password>
  </security>
  </datasource>
</datasources>

Once you have created the -ds.xml file, copy it into your deployments folder (or package it along with your
application). The application server should log the successful data source deployment:

09:37:45,949 INFO [org.jboss.as.server.deployment] (MSC service thread 1-2)
JBAS015876: Starting deployment of "postgres-ds.xml"
09:37:46,042 INFO [org.jboss.as.server] (DeploymentScanner-threads - 2) JBAS018559:
Deployed "postgres-ds.xml"

Drawbacks?

You might wonder if there is any pitfall in using the older -ds.xml approach.
Actually, if you "bypass" the management interface and deploy the
Datasource by copying it into the deployments folder, you will not be able to manage
it through the CLI or Web admin interface.

Hence, you should consider using deployable datasources for development/testing
purposes only, and never use them in a production environment.

7.3.1. Packaging Datasources in your applications

Data source definitions can also be packaged in your application, so that you don’t have to modify
the server configuration at all. The format of the deployable data source is the same described in the
earlier section, where the data source was dropped into the deployments folder of the application
server. When deploying a data source as part of your application, you have to place it in a specific
folder, which varies depending on the application package format:

Application                     Location
Web application (.war)          WEB-INF
EJB application (.jar)          META-INF
Enterprise application (.ear)   META-INF (of the top-level archive)

As an example, the following snapshot shows a Web application that ships with a data source
definition named example-ds.xml:

Warning! As for all deployable resources, you cannot manage these resources
through the application server management interfaces; therefore they should be used
just for development or testing purposes.

In the above example, we have just included the -ds.xml file in the application; therefore, we
assume that you have separately deployed postgresql-42.2.5.jar into the deployments folder. As
an alternative, you can package the JDBC driver along with your application (in our example, in the
WEB-INF/lib folder) so that you create a self-contained application.

7.4. Configuring Datasources


Now that we have learned all the possible strategies for installing a datasource, it is about time to
learn about its configuration, which will require some tweaks to the datasource pool size and the
flush strategy to be adopted, plus some hardening to improve the security of the configuration.
Let’s see these steps in detail.

7.4.1. Configuring the Datasource pool attributes

Once created, the data source uses some default settings that might be good for an initial shot; in
order to achieve optimal performance you should adjust these values based on your needs. Here is
the list of attributes that you can configure with a short description:

• min-pool-size: The minimum number of connections in the pool (default 0)

• initial-pool-size: The initial number of connections to acquire from the database

• max-pool-size: The maximum number of connections in the pool (default 20)

• pool-use-strict-min: Whether idle connections below the min-pool-size should be closed

• pool-prefill: Attempts to prefill the connection pool to the minimum number of connections.
This will check your connections as soon as the Datasource is installed.

• flush-strategy: Specifies how the pool should be flushed in case of an error. The default one
(FailingConnectionOnly) forces destroying only connections with error.

• idle-timeout-minutes: Specifies the maximum time, in minutes, a connection may be idle
before being closed. The actual maximum time also depends on the IdleRemover scan time,
which is half of the smallest idle-timeout-minutes value of any pool.

• track-statements: Whether to check for unclosed statements when a connection is returned to
the pool, a result set is closed, or a statement is closed or returned to the prepared statement cache.
Valid values are: "false" (do not track statements), "true" (track statements and result sets and
warn when they are not closed), "nowarn" (track statements but do not warn about them being
unclosed).

So, how can these attributes affect your applications? Let’s follow the boot process.

When the application server starts, if you have configured an initial-pool-size, the datasource will
eventually be filled up with that number of connections. Otherwise, you start with an empty pool.

From now on, every time a Connection is requested from the datasource, a check is made to see if any
idle Connection is available. If not, the application server will attempt to acquire a new database
Connection. So, unless max-pool-size has been reached, a new Connection will be created.

When a Connection completes its job, it is returned to the pool and becomes idle. A Connection can
stay idle up to the maximum number of minutes specified by idle-timeout-minutes; after that, it is
removed from the pool.

The pool-use-strict-min attribute allows for a variation of this rule. If set to true, idle connections
are not closed once the pool has shrunk to min-pool-size. They will just stay idle, ready to be used
by your applications.

Here is how the min-pool-size and max-pool-size settings are applied to the PostgreSQL datasource:

/subsystem=datasources/data-source=PostgrePool:write-attribute(name=min-pool-
size,value=10)

/subsystem=datasources/data-source=PostgrePool:write-attribute(name=max-pool-
size,value=50)
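To observe the effect of these settings at runtime, you can enable the pool statistics (they are disabled by default) and then read them back, as sketched here:

/subsystem=datasources/data-source=PostgrePool:write-attribute(name=statistics-enabled,value=true)

/subsystem=datasources/data-source=PostgrePool/statistics=pool:read-resource(include-runtime=true)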

7.4.2. Configuring flush strategies

The flush strategy option of the connection pool defines how the pool should be flushed in case
there is an error on a connection belonging to the pool. In any case, connections with errors are
destroyed, and the pool is scheduled for prefill if supported.

The list of available flush strategies can be collected from the CLI by expanding the flush-strategy
attribute of a datasource:

/subsystem=datasources/data-source=ExampleDS:write-attribute(name=flush-strategy,value=

AllConnections              FailingConnectionOnly
AllGracefully               Gracefully
AllIdleConnections          IdleConnections
AllInvalidIdleConnections   InvalidIdleConnections
EntirePool                  UNKNOWN

Here is a description of the available flush strategies you can set:

• FailingConnectionOnly: Only the connections with the error are destroyed. (This is the default
strategy).

• InvalidIdleConnections: All idle connections are checked for validity, using the
javax.resource.spi.ValidatingManagedConnectionFactory, which prunes the invalid ones.

• IdleConnections: All idle connections are destroyed.

• Gracefully: Like IdleConnections but also Active connections will be destroyed once they return
to the pool.

• EntirePool: This policy is more aggressive than Gracefully as all connections are destroyed
(active and idle).

• AllInvalidIdleConnections: Like InvalidIdleConnections, but across all credentials for the pool
if supported.

• AllIdleConnections: Like IdleConnections, but across all credentials for the pool if supported

• AllGracefully: Like Gracefully, but across all credentials for the pool if supported.

• AllConnections: Like EntirePool, but across all credentials for the pool if supported.
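Besides configuring the automatic strategy, you can also flush the pool on demand through the corresponding runtime operations of the datasource resource:

/subsystem=datasources/data-source=PostgrePool:flush-idle-connection-in-pool

/subsystem=datasources/data-source=PostgrePool:flush-invalid-connection-in-pool

/subsystem=datasources/data-source=PostgrePool:flush-all-connection-in-pool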

7.4.3. Protecting Datasource credentials

Until now, we have specified the username and password of our Datasource connection in clear
text; this can be a potential security hole in case a malicious user is able to read your datasource
configuration. Luckily, it is possible to protect against this by specifying a security-domain in the
security section instead of clear-text credentials.

Security domains are explained in detail in the section WildFly Security Domains.
 However, right now you can think of them as a database of user credentials which
are used to access a sensitive resource.

We will first define a new security-domain and then link it to our datasource. The
security-domain will contain the database password (which is "postgres") in an encrypted
format.

7.4.3.1. Step 1: Generate the encrypted password

For this purpose, we can use a class named SecureIdentityLoginModule, which is part of the
PicketBox libraries. Launch the class, passing the text to encrypt as a parameter, as shown in the
following example:

$ cd $JBOSS_HOME/modules/system/layers/base/org/picketbox/main

$ java -classpath picketbox-5.0.3.Final.jar org.picketbox.datasource.security.SecureIdentityLoginModule postgres
Encoded password: 1d5bcec446b79907df8592078de921bc

7.4.3.2. Step 2: Create the Security Domain

Now create a security-domain in your security subsystem and name it "ds-encrypted". This
security domain will be based on the SecureIdentityLoginModule which takes as input the
username, the encrypted password and some options such as the Database pool name (as part of
the managedConnectionFactoryName). The following CLI set of commands will create the ds-
encrypted security domain:

/subsystem=security/security-domain=ds-encrypted:add(cache-type="default")

/subsystem=security/security-domain=ds-encrypted/authentication="classic":add()

/subsystem=security/security-domain=ds-encrypted/authentication="classic"/login-
module="org.picketbox.datasource.security.SecureIdentityLoginModule":add(code="org.pic
ketbox.datasource.security.SecureIdentityLoginModule",flag="required",module-
options={"username" => "postgres","password" =>
"1d5bcec446b79907df8592078de921bc","managedConnectionFactoryName" =>
"jboss.jca:service=LocalTxCM,name=java:/PostGreDS"})

The above script is available on Github at: http://bit.ly/3ci0Rqh

The resulting XML (which can also be included directly in your server configuration, provided
that you perform a server shutdown first):

<security-domain name="ds-encrypted" cache-type="default">
  <authentication>
  <login-module code=
"org.picketbox.datasource.security.SecureIdentityLoginModule" flag="required">
  <module-option name="username" value="postgres"/>
  <module-option name="password" value="1d5bcec446b79907df8592078de921bc"/>
  <module-option name="managedConnectionFactoryName"
  value="jboss.jca:service=LocalTxCM,name=java:/PostGreDS"/>
  </login-module>
  </authentication>
</security-domain>

7.4.3.3. Step 3: Let your datasource use the Security Domain:

Now it’s time to update your datasource configuration to use the ds-encrypted security-domain. To
do that, you first need to undefine the username and password attributes, which are
incompatible with the security-domain setting:

batch

/subsystem=datasources/data-source=PostgrePool:undefine-attribute(name=user-name)

/subsystem=datasources/data-source=PostgrePool:undefine-attribute(name=password)

/subsystem=datasources/data-source=PostgrePool:write-attribute(name=security-
domain,value=ds-encrypted)

run-batch

Here is the resulting datasource configuration:

<datasource jndi-name="java:/PostGreDS2" pool-name="PostgrePool2">
  <connection-url>jdbc:postgresql://localhost:5432/postgres</connection-url>
  <driver>postgres</driver>
  <security>
    <security-domain>ds-encrypted</security-domain>
  </security>
</datasource>

You should reload your configuration in order to see the above changes reflected. Next, you can
verify from the Administration Console or the CLI if your connection pool is able to connect to the
database. Example:

/subsystem=datasources/data-source=PostgrePool:test-connection-in-pool
{
  "outcome" => "success",
  "result" => [true]
}

7.4.4. Masking your Datasource credentials

Another alternative to clear-text passwords in your configuration is to store your credentials
in a ciphered storage and reference them through an alias.

In WildFly 8, 9 and 10 the recommended way to mask your sensitive data is the Vault utility (located
in $JBOSS_HOME/bin).

Since WildFly 11, the Vault utility has been deprecated and you are encouraged to use Credential
Stores, which are part of the Elytron security framework. See Protecting Datasource credentials for
an example of how to mask your datasource password.
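As a sketch of the Credential Store approach (the store name, alias and store password below are illustrative), you create a store, add an alias holding the database password, and then point the datasource’s credential-reference at that alias:

/subsystem=elytron/credential-store=ds-store:add(path=ds-store.jceks,relative-to=jboss.server.data.dir,credential-reference={clear-text=StorePassword},create=true)

/subsystem=elytron/credential-store=ds-store:add-alias(alias=pg-pass,secret-value=postgres)

/subsystem=datasources/data-source=PostgrePool:write-attribute(name=credential-reference,value={store=ds-store,alias=pg-pass})

A server reload is then required for the datasource to pick up the new credential.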

7.4.5. Using System Properties in your deployable data sources

Sometimes it can be useful not to hardcode properties contained in your -ds.xml files. For example,
in the following data source file we are defining the connection-url as a System Property:

<datasources xmlns="http://www.jboss.org/ironjacamar/schema">
  <datasource jndi-name="java:/PostGreDS" pool-name="PostgresPool">
  <connection-url>${connection.url}</connection-url> ①
  <driver>postgresql-42.2.5.jar</driver>
  . . . .
  </datasource>
</datasources>

① The System Property will replace this expression when the server is started.

In order to activate the replacement of the reference with the actual property value, you have to
check that jboss-descriptor-property-replacement (part of the "ee" subsystem) is set to true:

/subsystem=ee/:read-attribute(name=jboss-descriptor-property-replacement)
{
  "outcome" => "success",
  "result" => true
}

Now you can either pass the System Property in the standard Java way (-D in the application
server startup script) or add a system-property element at the top of the configuration:

/system-property=connection.url/:add(value=jdbc:postgresql://localhost:5432/postgres)

7.4.6. Configuring Multi Datasources

When configuring the connection-url parameter of the data source, it is possible to specify a set of
JDBC URLs, which is somewhat similar to Oracle WebLogic’s multi data source feature. You need to
specify the list of connection URLs and a delimiter to separate them:

/subsystem=datasources/data-source=PostgrePool/:write-attribute(name=connection-url,value=jdbc:postgresql://172.17.0.2/postgres|jdbc:postgresql://172.17.0.3/postgres)

/subsystem=datasources/data-source=PostgrePool/:write-attribute(name=url-delimiter,value=|)

For a production environment, it’s however recommended to use a more robust approach such as
Oracle Real Application Cluster (RAC), which allows data high availability. In the following
example, we are using a connection-url, which features an Oracle RAC made up of two nodes (host1
and host2):

/subsystem=datasources/data-source=OracleDS/:write-attribute(name=connection-
url,value=jdbc:oracle:thin:@(description=(address_list=(load_balance=on)(failover=on)(
address=(protocol=tcp)(host=host1)(port=1521))(address=(protocol=tcp)(host=host2)(port
=1521)))(connect_data=(service_name=sid)(failover_mode=(type=select)(method=basic)))))

7.5. Policies for creating/destroying connections


The policy for creating and destroying physical connections to the database has been enhanced: it
can now be controlled by means of pool incrementer/decrementer classes.

By default, each time a new request is received, a new connection is created if none of the pooled
connections is available. Conversely, connections are destroyed when their idle timeout expires.

Capacity policies are divided into two categories: incrementers and decrementers.

• An incrementer capacity policy specifies the conditions for adding new physical connections to
the pool.

• A decrementer capacity policy specifies the conditions for removing connections from the
pool.

7.5.1. Configuring the incrementer capacity policy

The incrementer policy allows you to control how many connections are added to the pool when a
connection isn’t available for immediate checkout. The following options are available:

MaxPoolSize: The MaxPoolSize incrementer policy will fill the pool to its max size on each
request. This policy is useful when you want to grab the maximum number of connections right after
the first request. Here is how to apply it:

/subsystem=datasources/data-source=PostgrePool:write-attribute(name=capacity-
incrementer-
class,value=org.jboss.jca.core.connectionmanager.pool.capacity.MaxPoolSizeIncrementer)

The effect of this policy setting is that, starting from the first request, all connections (as specified in
the max-pool-size) will be created by the server:

/subsystem=datasources/data-source=PostgrePool/statistics=pool:read-resource(include-
runtime=true)
{
  "outcome" => "success",
  "result" => {
  "ActiveCount" => 1,
  "AvailableCount" => 20,
. . . .
  "CreatedCount" => 20,
  "DestroyedCount" => 0,
  "IdleCount" => 20,
  "InUseCount" => 0,

Size: This option is the default; it fills the pool by the specified number of connections for each
request (default 1). This policy is useful when you want to increment by an additional number of
connections per request, in anticipation that the next request will also need a connection.

Here is how to set the pool incrementer policy to use the Size with an increment of two units:

/subsystem=datasources/data-source=PostgrePool:write-attribute(name=capacity-
incrementer-
class,value=org.jboss.jca.core.connectionmanager.pool.capacity.SizeIncrementer)

/subsystem=datasources/data-source=PostgrePool:write-attribute(name=capacity-
incrementer-properties.size,value=2)

As you can see, after the first request, the pool will create (and mark as Active) two connections:

/subsystem=datasources/data-source=PostgrePool/statistics=pool:read-resource(include-
runtime=true)
{
  "outcome" => "success",
  "result" => {
  "ActiveCount" => 2,
  "AvailableCount" => 20,
. . . .
  "CreatedCount" => 2, ①
  "DestroyedCount" => 0,
  "IdleCount" => 20,
  "InUseCount" => 0,

① Connections filled after the first request

Watermark: This policy will fill the pool to the specified number of connections for each request.
This policy is useful when you want to keep a fixed number of connections in the pool at all times.
Here is how to apply this policy to allow a fixed number of 10 connections:

/subsystem=datasources/data-source=PostgrePool:write-attribute(name=capacity-
incrementer-
class,value=org.jboss.jca.core.connectionmanager.pool.capacity.WatermarkIncrementer)

/subsystem=datasources/data-source=PostgrePool:write-attribute(name=capacity-
incrementer-properties.size,
value=10)

7.5.2. Configuring the decrementer capacity policy

A decrementer capacity policy works in just the opposite way to the incrementer policy. By default,
connections are released to the database when idle-timeout-minutes is reached. Through the
decrementer policy, you can specify a different decrement behavior. Here are your options:

MinPoolSize: remove connections until min-pool-size is reached. This policy is useful when you
want to limit the number of connections after each idle timeout request.

You can apply it as follows:

/subsystem=datasources/data-source=PostgrePool/:write-attribute(name=capacity-
decrementer-
class,value=org.jboss.jca.core.connectionmanager.pool.capacity.MinPoolSizeDecrementer)

Size: removes a certain number of connections. This policy is useful when you want to decrement
an additional number of connections per idle timeout request in anticipation that the pool usage
will lower over time.

/subsystem=datasources/data-source=PostgrePool/:write-attribute(name=capacity-
decrementer-
class,value=org.jboss.jca.core.connectionmanager.pool.capacity.SizeDecrementer)

/subsystem=datasources/data-source=PostgrePool:write-attribute(name=capacity-
decrementer-properties.size,
value=2)

TimedOut: removes all connections that are registered with the timed out flag. This policy is the
default decrement policy.

/subsystem=datasources/data-source=PostgrePool/:write-attribute(name=capacity-
decrementer-
class,value=org.jboss.jca.core.connectionmanager.pool.capacity.TimedOutDecrementer)

Watermark: removes connections until a certain size is reached. This policy is useful when you
want to keep a specified number of connections in the pool all the time.

/subsystem=datasources/data-source=PostgrePool/:write-attribute(name=capacity-
decrementer-
class,value=org.jboss.jca.core.connectionmanager.pool.capacity.WatermarkDecrementer)

/subsystem=datasources/data-source=PostgrePool:write-attribute(name=capacity-
decrementer-properties.size,
value=2)

7.6. Gathering Datasource runtime statistics


By default, the runtime statistics of datasources are not collected: you have to enable the
statistics-enabled attribute on each datasource. Here is how to do it for the PostgreSQL datasource:

/subsystem=datasources/data-source=PostgrePool:write-attribute(name=statistics-
enabled,value=true)
{
  "outcome" => "success",
  "response-headers" => {
  "operation-requires-reload" => true,
  "process-state" => "reload-required"
  }
}

Now reload your server and check the statistics as follows:

/subsystem=datasources/data-source=PostgrePool/statistics=pool:read-resource(include-
runtime=true)
{
  "outcome" => "success",
  "result" => {
  "ActiveCount" => 1,
  "AvailableCount" => 19,
  "AverageBlockingTime" => 0L,
  "AverageCreationTime" => 331L,
  "AverageGetTime" => 166L,
  "AveragePoolTime" => 175000L,
  "AverageUsageTime" => 18L,
  "BlockingFailureCount" => 0,
  "CreatedCount" => 1,
  "DestroyedCount" => 0,
  "IdleCount" => 0,
  "InUseCount" => 1,
  "MaxCreationTime" => 331L,
  "MaxGetTime" => 331L,
  "MaxPoolTime" => 175000L,
  "MaxUsageTime" => 18L,
  "MaxUsedCount" => 1,
  "MaxWaitCount" => 0,
  "MaxWaitTime" => 0L,
  "TimedOut" => 0,
  "TotalBlockingTime" => 0L,
  "TotalCreationTime" => 331L,
  "TotalGetTime" => 332L,
  "TotalPoolTime" => 175000L,
  "TotalUsageTime" => 18L,
  "WaitCount" => 0L,
. . . .
  "statistics-enabled" => true
  }
}

As you can see, the list of statistics is quite large and would take a couple of pages to include in full.
The most relevant statistics for database connectivity are, however, the following:

• ActiveCount: The number of connections to the database which are active right now. In a
nutshell, these are the socket connections which are qualified as "ESTABLISHED" in your netstat
output. This includes both the connections which are actively working and the ones which are
idle.

• AvailableCount: This is the maximum number of connections which can be acquired from the
database, minus the ones which are in use (see InUseCount).

• InUseCount: This is the number of Connections which are executing SQL statements right now.
This also includes unclosed Connections/Statements.

• IdleCount: This is the number of Connections which are idle. (This metric is directly influenced
by the parameter idle-timeout-minutes )

• MaxUsedCount: This is the peak number of connections which have been requested from the
database. If this value matches the maximum pool size, check the MaxWaitTime (the maximum
time spent waiting for a connection from the pool) and the WaitCount (the number of
requests which had to wait for a connection from the pool) to see if your applications are
starving for database connections.
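To make the starvation check concrete, here is a minimal, hypothetical Java heuristic combining these counters (the class and method names are invented for illustration; this is not a WildFly API):

```java
// Hypothetical helper, not part of WildFly: combines the pool statistics
// described above into a simple "starvation" heuristic.
public class PoolStarvationCheck {

    // maxPoolSize is the datasource's configured max-pool-size
    static boolean isStarving(int maxUsedCount, int maxPoolSize, long waitCount) {
        // The pool hit its ceiling at least once AND some requests had to wait
        return maxUsedCount >= maxPoolSize && waitCount > 0;
    }

    public static void main(String[] args) {
        System.out.println(isStarving(20, 20, 5)); // pool exhausted and requests waited
        System.out.println(isStarving(1, 20, 0));  // plenty of headroom
    }
}
```

If the check returns true, consider raising max-pool-size or investigating long-running statements that hold connections.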

7.6.1. Detecting leaked connections

Using a database Connection requires that you properly close the Connection (and the
Statements/ResultSets opened during its usage) when you are done with it. Otherwise, the
Connection will be considered "InUse" by the application server, leading to a resource leak.
Luckily, the connection pool implementation is capable of detecting leaked connections, either
during shutdown of the pool or once the pool is flushed.
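The safest way to avoid such leaks in application code is the try-with-resources statement. The sketch below uses a stand-in AutoCloseable instead of a live java.sql.Connection so it can run anywhere, but the pattern is identical for Connection, Statement and ResultSet:

```java
// Sketch of the try-with-resources pattern that prevents connection leaks.
// FakeConnection is a stand-in resource (not a real JDBC class): the pattern
// guarantees close() is called even when an exception is thrown.
public class LeakFreeUsage {
    static int openCount = 0;

    static class FakeConnection implements AutoCloseable {
        FakeConnection() { openCount++; }
        public void close() { openCount--; } // a real Connection returns to the pool here
    }

    static void doWork() {
        // Both resources are closed automatically at the end of the block
        try (FakeConnection con = new FakeConnection();
             FakeConnection stmt = new FakeConnection()) {
            // ... execute SQL here ...
        }
    }

    public static void main(String[] args) {
        doWork();
        System.out.println(openCount); // 0: nothing leaked
    }
}
```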

The leak detector pool is not enabled out of the box: you have to set the ironjacamar.mcp
system property to
org.jboss.jca.core.connectionmanager.pool.mcp.LeakDumperManagedConnectionPool in
order to enable it. In addition, the ironjacamar.leaklog system property can be used to write
the leak trace to a separate file. Here is an example, applied to the standalone.conf startup file:

JAVA_OPTS="$JAVA_OPTS
-Dironjacamar.mcp=org.jboss.jca.core.connectionmanager.pool.mcp.LeakDumperManagedConne
ctionPool -Dironjacamar.leaklog=leaks.txt"

In order to trigger the creation of the dump file, we need to flush the connection pool, causing the
file leaks.txt to be written:

Leak detected in pool: PostgreDS


  ConnectionListener: 1c6c192
  Allocation timestamp: 1437048073071
  Allocation stacktrack:
java.lang.Throwable: ALLOCATION LEAK
  at
org.jboss.jca.core.connectionmanager.pool.mcp.LeakDumperManagedConnectionPool.getConne
ction(LeakDumperManagedConnectionPool.java:96)
. . . . .
org.jboss.as.connector.subsystems.datasources.WildFlyDataSource.getConnection(WildFlyD
ataSource.java:67)
  at com.sample.index_jsp._jspService(index_jsp.java:69) ①

① The class which causes the Leak

As you can see, we have found the exact location in our code (index.jsp) where the connection leak
was produced. There is also an allocation timestamp available. It is not immediately readable;
however, with a couple of lines of Java we can convert it to a readable date:

Timestamp stamp = new Timestamp(1437048073071L);

System.out.println(stamp); // prints the allocation time as a readable date

7.7. Configuring Agroal Datasource


The downside of the default JCA-based datasource is that it is a more heavyweight solution when
you only need connections towards a relational database. Let’s see in practice how to configure the
Agroal Datasource. In the current version of WildFly, the Agroal Datasource is not yet the default
datasource, therefore you first need to add the extension to your configuration. Launch the
jboss-cli.sh script, connect and execute:

/extension=org.wildfly.extension.datasources-agroal:add

Next, we will add the datasources-agroal subsystem itself to your model:

/subsystem=datasources-agroal:add

 In future releases of the application server, the datasources-agroal extension and its subsystem will be added by default, so check whether they are already present in your configuration!

Now we will configure an Agroal datasource. With the current release of the application server,
this is possible only by adding the JDBC driver as a module, so let’s add it:

module add --name=org.postgres --resources=postgresql-42.2.5.jar --dependencies=javax.api,javax.transaction.api

Reload your configuration:

reload

Now it’s time to add a JDBC Driver to your datasource-agroal configuration:

/subsystem=datasources-
agroal/driver=agroal_driver:add(class=org.postgresql.Driver,module=org.postgres)

The last step is adding a new Agroal datasource with the following CLI command:

/subsystem=datasources-agroal/datasource=AgroalDataSource:add(jndi-
name=java:jboss/datasources/AgroalDatasource,connection-
factory={driver=agroal_driver,username=postgres,password=postgres,url=jdbc:postgresql:
//localhost:5432/postgres},statistics-enabled=true,connection-pool={min-size=1,max-
size=20})

As you can see from the above command, the only requirement is to define an Agroal connection
factory with a reference to the JDBC driver, the username and password and, at least, a connection
pool size. This results in the following XML configuration:

<subsystem xmlns="urn:jboss:domain:datasources-agroal:1.0">
  <datasource name="AgroalDataSource" jndi-name=
"java:jboss/datasources/AgroalDatasource" statistics-enabled="true">
  <connection-factory driver="agroal_driver" url=
"jdbc:postgresql://localhost:5432/postgres" username="postgres" password="postgres"/>
  <connection-pool max-size="20" min-size="1"/>
  </datasource>
  <drivers>
  <driver name="agroal_driver" module="org.postgres" class=
"org.postgresql.Driver"/>
  </drivers>
</subsystem>

As you might imagine, the advantage of using a "direct" database connection pool is improved
performance in terms of SQL execution. The Agroal Datasource, however, is not intended only for
plain SQL statements: it can also be used as a replacement datasource for JPA-based applications,
by specifying an Agroal Datasource rather than a standard datasource:

<persistence-unit name="agroal-pu">
  <description>Agroal Datasource in persistence.xml</description>
  <jta-data-source>java:jboss/datasources/AgroalDatasource</jta-data-source>
</persistence-unit>

7.7.1. Creating an Agroal XA Datasource

If your transactions involve more than one resource manager, you can still use Agroal as a
replacement for the standard xa-datasource. The difference is that you will register an XA
provider as the driver:

/subsystem=datasources-
agroal/driver=xa_agroal_driver:add(class=org.postgresql.xa.PGXADataSource,module=org.p
ostgres)

Then you will add an xa-datasource, pretty much the same way you did for the non-xa datasource:

/subsystem=datasources-agroal/xa-datasource=XAAgroalDataSource:add(jndi-
name=java:jboss/datasources/XAAgroalDatasource,connection-
factory={driver=xa_agroal_driver,username=postgres,password=postgres,url=jdbc:postgres
ql://localhost:5432/postgres},statistics-enabled=true,connection-pool={min-size=1,max-
size=10})

And here is the resulting configuration for the XA Datasource:

 <subsystem xmlns="urn:jboss:domain:datasources-agroal:1.0">
  <xa-datasource name="XAAgroalDataSource" jndi-name=
"java:jboss/datasources/XAAgroalDatasource" statistics-enabled="true">
  <connection-factory driver="xa_agroal_driver" url=
"jdbc:postgresql://localhost:5432/postgres" username="postgres" password="postgres"/>
  <connection-pool max-size="10" min-size="1"/>
  </xa-datasource>
  <drivers>
  <driver name="agroal_driver" module="org.postgres" class=
"org.postgresql.Driver"/>
  <driver name="xa_agroal_driver" module="org.postgres" class=
"org.postgresql.xa.PGXADataSource"/>
  </drivers>
</subsystem>

8. Chapter 8: Configuring Undertow
Webserver
This chapter introduces you to the new Web container, named Undertow, which is used to
execute your Java EE 8 compliant Web applications. Since the Web server uses the Java NIO (New
Input Output) API to construct its responses, we will also learn how to configure the io subsystem
that is part of all server configurations. Then, we will cover other core aspects including Virtual
Host configuration, the Servlet container settings and how to audit logs from Web applications.
Summing up, here are the topics that we are going to discuss in this chapter:

• Undertow Web server architecture

• Configuring Undertow filters and handlers

• Configuring Virtual Hosts

• How to adjust Servlet and JSP settings

• How to audit logs from Undertow

8.1. Entering Undertow Web server


With the arrival of Java EE 8 and the requirement to handle advanced features such as the Web
Sockets API and HTTP upgrades (e.g. EJB over HTTP), an important decision has been made by the
WildFly development team. After a long commitment to JBoss Web Server (a fork of Apache
Tomcat), the new release of the application server is now based on a new Web server named
Undertow.

 Undertow makes large use of XNIO (http://www.jboss.org/xnio), a low-level I/O layer which can be used anywhere to simplify the usage of the NIO API. It solves some of the complexities of using Selectors and the lack of NIO support for multicast sockets and non-socket I/O such as serial ports, while still maintaining all the capabilities available in NIO.

In terms of architecture, Undertow is designed around a composition-based architecture that
allows you to build a fully functional Web server by combining small single components called
handlers. These handlers are chained together to form either a fully functional Java EE Servlet
container or a simpler HTTP process handler embedded in your code. Here are, in a nutshell, the
components used by Undertow:

The top component is the Server, which contains a set of listeners based on the communication
protocol (http, https, ajp). Each listener has a set of core elements, the most important of which are
the worker, the pool of threads used by the listener and defined through the "io" subsystem, and
the http-binding, which contains the ports bound by a specific listener.

Besides this, each Server contains a host definition with a set of Filters and Handlers, which are
used to intercept requests from a client before they access a resource at the back end, as well as to
manipulate responses from the server before they are sent back to the client. The next section
discusses them in more detail.

8.2. Configuring Undertow Filters


A filter enables some aspect of an HTTP request to be modified and can use predicates to control
when a filter executes. Some common use cases for filters include the following ones:

• Writing a Response Header

• Adding a Connection Limit

• Compressing the Response

• Adding an Error filter

• Adding a custom filter

Let’s see them in detail.

8.2.1. Writing a Response Header with a filter

A response header filter can be used to add a header to your HTTP response. The header is made
of a name and a value. Here is how to add a simple response header:

/subsystem=undertow/configuration=filter/response-header=server-header:add(header-
name=my-response-header, header-value="WildFly-Dev")

You can then attach the filter to your Undertow server as follows:

/subsystem=undertow/server=default-server/host=default-host/filter-ref=server-
header/:add()

Once you reload the configuration, you should be able to see the new response header in your
HTTP responses.

It is also possible to add Custom Response Headers to the Management interface of WildFly. Check
this section for more details: Adding custom Response Headers to the HTTP management interface

8.2.2. Adding a connection limit filter

You can control the number of concurrent requests by using a connection limit filter, which
includes the maximum number of concurrent requests and the number of requests allowed to be
queued up:

/subsystem=undertow/configuration=filter/connection-limit=mylimit/:add(max-concurrent-
requests=25,queue-size=100)

Then, apply the filter on your Server as follows:

/subsystem=undertow/server=default-server/host=default-host/filter-ref=mylimit/:add()

8.2.3. Adding a gzip filter

A gzip filter simply compresses the response to allow a faster throughput of data:

/subsystem=undertow/configuration=filter/gzip=zipfilter/:add()

Then, apply the filter on your Server as follows:

/subsystem=undertow/server=default-server/host=default-host/filter-
ref=zipfilter/:add()

8.2.4. Adding an error filter

An error filter is a handy option to provide an error page for a specific error code. For example,
let’s say you want to intercept all error codes ‘404’ (page not found) and display the error page
located in /var/docs/www/error.html. The following batch script (available at http://bit.ly/2DzQyeu)
can be applied to the default root context of your Undertow server:

batch

/subsystem=undertow/server=default-server/host=default-host/filter-ref=404-
handler:add(predicate=true)

/subsystem=undertow/configuration=filter/error-page=404-
handler/:add(code=404,path=/var/docs/www/error.html)

run-batch

8.2.5. Adding a custom filter

A custom filter is based on a class that implements io.undertow.server.HttpHandler, which needs
to be installed as a module. This custom handler will be able to manipulate the HTTP request and
must eventually either call another handler or end the exchange. Here is the structure of the
HttpHandler interface:

public interface HttpHandler {

  void handleRequest(HttpServerExchange exchange) throws Exception;
}

An example of a custom handler is io.undertow.server.handlers.HttpTraceHandler. This
handler provides details such as the HTTP headers and the query string. If you want to check the
source code of this handler, it is available at:

https://github.com/undertow-
io/undertow/blob/master/core/src/main/java/io/undertow/server/handlers/HttpTraceHandler.java

Here is how you can add the HttpTraceHandler filter, which is packaged in the io.undertow.core
module:

/subsystem=undertow/configuration=filter/custom-filter=custom-filter/:add(class-
name=io.undertow.server.handlers.HttpTraceHandler,module=io.undertow.core)

8.3. Configuring Undertow Handlers
Undertow Handlers are Java classes implementing the io.undertow.server.HttpHandler interface.
The purpose of each of these classes is to handle the current request and pick up the next Handler
to invoke, based on the current request.

As you can see from the following picture, a Handler chain is composed of several individual
Handlers which eventually produce either a Servlet response or an error, for example when the
requested path is not found:
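The chaining idea can be sketched in plain Java. The names below are invented for illustration and are not the real io.undertow API: each handler either produces a response or delegates to the next one in the chain:

```java
// A conceptual stand-in for Undertow's handler chaining (hypothetical names,
// not the io.undertow API): handlers process the request and delegate.
public class HandlerChainSketch {

    interface Handler { String handle(String path); }

    // Terminal handler: produce a response, or an error when the path is unknown
    static Handler terminal = path ->
            path.equals("/index") ? "200 OK" : "404 Not Found";

    // Wrapping handler: performs its own work (here, logging), then delegates
    static Handler logging(Handler next) {
        return path -> {
            System.out.println("request for " + path);
            return next.handle(path);
        };
    }

    public static void main(String[] args) {
        Handler chain = logging(terminal);
        System.out.println(chain.handle("/index"));   // 200 OK
        System.out.println(chain.handle("/missing")); // 404 Not Found
    }
}
```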

In the current version of Undertow Web server you can use two handlers to match your requests:

• File based handlers: These handlers associate the request with a specific path on your file
system. By default a file Handler already exists which is used to match the Root Web context
("/") with the welcome-content page of WildFly

• Reverse Proxy handler: This handler allows you to reverse-proxy requests from an application
running on one host to another application running on another host.

8.3.1. Configuring a File based Handler

In the following example, we are defining a handler named "help-page" that maps requests for the
Web context "help" to the folder located at $JBOSS_HOME/help.

Start by defining the Handler and its File destination folder:

/subsystem=undertow/configuration=handler/file=help-page/:add(cache-buffer-
size=1024,cache-buffers=1024,directory-listing=true,path=${jboss.home.dir}/help)

Next, we will bind this handler to the default-server, at the URI "help":

/subsystem=undertow/server=default-server/host=default-
host/location=help/:add(handler=help-page)

With the above configuration, any URL pointing to http://localhost:8080/help will be handled by the
document folder located in $JBOSS_HOME/help.

8.3.2. Creating a Reverse Proxy Handler

A reverse proxy can be defined to transparently proxy requests arriving at one host to another
host. A typical usage of a reverse proxy is to provide Internet users access to a server that is behind
a firewall.

 Reverse proxies can also be used to balance load among several back-end servers, or to provide caching for a slower back-end server. In addition, reverse proxies can be used simply to bring several servers into the same URL space.

Let’s see a practical example: we want to forward requests from the path localhost:8080/in to the
path localhost:8180/out. You can simply create this scenario by starting one WildFly installation on
the default ports and another WildFly server with a port offset of 100.

The following picture depicts our scenario:

So here is the configuration that has to be added on the front-end WildFly. We will use a CLI batch
file to configure the reverse proxy:

batch

/subsystem=undertow/configuration=handler/reverse-proxy=myproxy:add()

/subsystem=undertow/configuration=handler/reverse-
proxy=myproxy/host=localhost:add(instance-id="myRoute",outbound-socket-binding="http-
remote",path="/out",scheme="http")

/subsystem=undertow/server=default-server/host=default-
host/location="/in":add(handler="myproxy")

/socket-binding-group=standard-sockets/remote-destination-outbound-socket-
binding=http-remote/:add(host=localhost,port=8180)

run-batch

• On line 2 we have defined a new reverse-proxy named "myproxy" in the handler section of
your Undertow server.

• On line 3 we have defined the target URL to be "/out" and bound the http-remote outbound
connection

• On line 4 we have defined the incoming URL to be proxied as "/in" on localhost

• On line 5, we have defined the outbound connection binding to be "localhost" on port 8180.

The source code for the above script is available at http://bit.ly/2HE4PJK

In order to test the application, you can deploy an application with the runtime name out.war on
the backend server and test it on the front-end server using http://localhost:8080/in

8.4. Configuring Undertow Listeners


Listeners are the entry point of Web applications running on Undertow. All incoming requests
pass through a listener, which is in charge of translating the request into an
io.undertow.server.HttpServerExchange object and then turning the result into a response that
can be sent back to the client.

Out of the box the following listeners are available in the default configuration:

<server name="default-server">
  <http-listener name="default" socket-binding="http" redirect-socket="https"
enable-http2="true"/>
  <https-listener name="https" socket-binding="https" security-realm=
"ApplicationRealm" enable-http2="true"/>
  <host name="default-host" alias="localhost">
  <location name="/" handler="welcome-content"/>
  <http-invoker security-realm="ApplicationRealm"/>
  </host>
</server>

Each listener is bound to a socket binding which needs to be defined in the socket-binding section
of your server. For example, the http-listener is bound to the "http" socket binding:

/subsystem=undertow/server=default-server/http-listener=default/:read-
attribute(name=socket-binding)
{
  "outcome" => "success",
  "result" => "http"
}

Hence, if you want to include a new listener in your configuration, you have to add it and
reference its socket-binding. Here is, for example, how to add an ajp-listener to a non-ha
configuration:

/subsystem=undertow/server=default-server/ajp-listener=default-ajp:add(socket-
binding=ajp)

Once added, the socket listener will be started immediately (no reload or restart required):

$ netstat -an | grep 8009

TCP 127.0.0.1:8009 0.0.0.0:0 LISTENING

The most interesting configurable attribute is the worker attribute, which references the io worker
pool of threads configured for that listener:

/subsystem=undertow/server=default-server/http-listener=default/:read-
attribute(name=worker)
{
  "outcome" => "success",
  "result" => "default", ①
}

① The name of the Thread Pool from the "io" subsystem.

The next section discusses the configuration of the pool of connections used by the listener.

8.4.1. Configuring the Web server Pool

XNIO workers are the central point of coordination of Undertow’s network activity. There are two
types of XNIO threads used by Undertow:

• I/O threads, which perform non-blocking tasks and are used to handle callback events for
read/write operations.

• Worker threads, which come from a fully configurable standard Executor-based thread pool.

When performing blocking operations such as Servlet requests, the Worker threads come into
play.

 Workers are tagged with the worker name, making them easy to identify in thread dumps and log files.

The built-in configuration of the "io" subsystem includes a worker named "default", whose
attributes are left undefined:

/subsystem=io/worker=default:read-resource()
{
  "outcome" => "success",
  "result" => {
  "io-threads" => undefined,
  "stack-size" => undefined,
  "task-core-threads" => undefined,
  "task-keepalive" => undefined,
  "task-max-threads" => undefined,
  "outbound-bind-address" => undefined,
  "server" => {
  "/127.0.0.1:8080" => undefined,
  "/127.0.0.1:8443" => undefined
  }
  }
}

Having the worker attributes set to "undefined" simply means that the io worker defaults will be
automatically configured based on the number of CPUs. Here is, in more detail, how the defaults
are calculated:

• The io-threads corresponds to the number of IO threads to create. As we said, these threads are
shared between multiple connections therefore they mustn’t perform blocking operations as
while the operation is blocking, other connections will essentially hang. If not specified, a
default will be chosen, which is calculated by cpuCount * 2.

• The task-max-threads corresponds to the maximum number of worker threads allowed to run
blocking tasks such as Servlet requests. In general terms, its default value (CPUs * 16) is a
reasonable default for most cases. If you see that new requests are being queued up, you should
investigate the cause; if your application is working as expected, then you can increase the
task-max-threads parameter.

• The task-core-threads specifies the starting number of threads for the worker task thread pool.
The default is 2.

• The stack-size corresponds to the Web server Thread stack size. With a larger Thread stack size,
the Web server will consume more resources, and thus fewer users can be supported.

• The task-keepalive (default 60) controls the amount of time that idle non-core task threads are
kept alive in the pool before being removed.
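The documented formulas can be sketched as follows. This is an illustrative plain-Java rendering of the defaults described above, not WildFly's own source code:

```java
// A sketch of how the "io" subsystem calculates its worker defaults when the
// attributes are left undefined (illustrative code, not WildFly's own).
public class IoWorkerDefaults {

    static int ioThreads(int cpuCount)      { return cpuCount * 2; }  // default io-threads
    static int taskMaxThreads(int cpuCount) { return cpuCount * 16; } // default task-max-threads

    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("io-threads = " + ioThreads(cpus));
        System.out.println("task-max-threads = " + taskMaxThreads(cpus));
        System.out.println("task-core-threads = 2 (default)");
    }
}
```

On a 4-CPU machine, for instance, this yields 8 io-threads and 64 task-max-threads.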

8.4.2. Configuring a custom Worker

Here is how you can define a new worker named largeworker, with io-threads set to 10 and
task-max-threads set to 100:

/subsystem=io/worker=largeworker/:add(io-threads=10,stack-size=0,task-
keepalive=60,task-max-threads=100)

Now that we have refined our IO Worker configuration, we need to inject our configuration into
Undertow. As a general rule, each server defined in your Undertow configuration has a set of
listeners attached to it. You can configure the worker size at listener level as follows:

/subsystem=undertow/server=default-server/http-listener=default/:write-
attribute(name=worker,value=largeworker)

The above command needs a reload in your server configuration to take effect.

reload

Once you have completed your HTTP worker configuration, the Web server will use worker
threads named using the following pattern: [worker name]-[worker id]. The thread pool can
then be monitored using a tool like the JConsole utility (included in the JDK standard edition),
which can print a dump of the thread stack traces running in a JVM. Here is a dump of the
default worker pool:
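Because worker threads follow the [worker name]-[worker id] naming pattern, they can also be located programmatically. The following is a small stdlib sketch (hypothetical helper, not a WildFly API) that filters the threads of the current JVM by name prefix:

```java
// Hypothetical helper (plain JDK, not a WildFly API): lists the names of all
// live threads whose name starts with a given prefix, e.g. "largeworker-".
import java.util.List;
import java.util.stream.Collectors;

public class WorkerThreadFinder {

    static List<String> threadsWithPrefix(String prefix) {
        return Thread.getAllStackTraces().keySet().stream()
                .map(Thread::getName)
                .filter(n -> n.startsWith(prefix))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate a worker thread so there is something to find
        Thread t = new Thread(() -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
        }, "largeworker-1");
        t.start();
        System.out.println(threadsWithPrefix("largeworker")); // should include largeworker-1
        t.interrupt();
        t.join();
    }
}
```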

8.4.3. Other listeners attributes

Besides the worker attribute, each listener contains many other attributes. Most of them are
runtime attributes; therefore, in order to acquire information about them, you have to specify
"include-runtime=true" in your CLI query and enable the statistics on the undertow subsystem as
well (see Gathering statistics about Web applications). Here is how to collect runtime attributes
from the http listener:

/subsystem=undertow/server=default-server/http-listener=default/:read-
resource(include-runtime=true)
{
  "outcome" => "success",
  "result" => {
  "allow-encoded-slash" => false,
  "allow-equals-in-cookie-value" => false,
  "always-set-keep-alive" => true,
  "buffer-pipelined-data" => true,
  "buffer-pool" => "default",
  "bytes-received" => 12017L,
  "bytes-sent" => 22106L,
  "certificate-forwarding" => false,
  "decode-url" => true,
  "disallowed-methods" => ["TRACE"],
  "enable-http2" => false,
  "enabled" => true,
 . . . . .
  }
}

Most of these properties are intuitive to understand; you can, however, query for a short
description of any property by means of the read-resource-description command. Let’s see how to
use this handy command to gather information about the properties of an element:

/subsystem=undertow/server=default-server/http-listener=default:read-resource-
description
{
  "outcome" => "success",
  "result" => {
  "description" => "http listener",
  "attributes" => {
  "allow-encoded-slash" => {
  "type" => BOOLEAN,
  "description" => "If a request comes in with encoded / characters
(i.e. %2F), will these be decoded.",
  },
. . .
  }
}

8.5. Configuring Undertow Buffer Pool


As we said, Undertow is based on the Java NIO API and makes use of a pool of J2SE’s
java.nio.ByteBuffer whenever buffering is needed.

 A Buffer is an object which holds data that is to be written or that has just been read. The addition of the Buffer object in NIO marks one of the most significant differences between the new library and the original I/O. In stream-oriented I/O you used to write data directly to, and read data directly from, Stream objects. In the NIO library, all data is handled with Buffers: when data is read, it is read directly into a buffer; when data is written, it is written into a buffer.

Undertow’s IO buffer pool configuration is contained in the io subsystem, under the buffer-pool
element. Here is an excerpt from the resource description:

/subsystem=io/buffer-pool=default:read-resource-description
{
  "outcome" => "success",
  "result" => {
  "description" => "Defines buffer pool",
  "attributes" => {
  "buffer-size" => {
  "type" => INT,
  "description" => "How big is the buffer",
  },
  "buffers-per-slice" => {
  "type" => INT,
  "description" => "How many buffers per slice",
  },
  "direct-buffers" => {
  "type" => BOOLEAN,
  "description" => "Does the buffer pool use direct buffers",
  }
}

More in detail, the first parameter (buffer-size) lets you define the java.nio.ByteBuffer size.
Provided that direct buffers are being used, the default 16 KB buffers are optimal when maximum
performance is required, as this corresponds to the default socket buffer size on Linux. Default
value: 16384.

The second parameter, buffers-per-slice, defines how many buffers are assigned per slice. Slices are
used for manipulating sub-portions of large buffers, avoiding the overhead of processing the entire
buffer. Default value: 128.

The third parameter, direct-buffers, lets you choose whether to use direct buffers. A direct buffer is
a kind of buffer that is allocated outside the Java heap; hence, its memory address is fixed for the
lifetime of the buffer. This in turn means that the kernel can safely access it directly and, hence,
direct buffers can be used more efficiently in I/O operations. Default value: true
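The difference between heap and direct buffers can be seen with a few lines of plain Java (the BufferDemo class is just for illustration):

```java
import java.nio.ByteBuffer;

public class BufferDemo {

    // Heap buffer: backed by a byte[] that lives inside the Java heap
    static ByteBuffer heapBuffer(int size) {
        return ByteBuffer.allocate(size);
    }

    // Direct buffer: memory allocated outside the heap, at a fixed address,
    // which the kernel can access without an intermediate copy
    static ByteBuffer directBuffer(int size) {
        return ByteBuffer.allocateDirect(size);
    }

    public static void main(String[] args) {
        System.out.println(heapBuffer(16384).isDirect());   // false
        System.out.println(directBuffer(16384).isDirect()); // true
    }
}
```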

Here is for example how to increase the value of buffer-size to 32000 bytes:

/subsystem=io/buffer-pool=default:write-attribute(name=buffer-size,value=32000)

8.6. Configuring Virtual Hosts in Undertow


Virtual hosting is a mechanism whereby one web server process can serve multiple domain
names, giving each domain the appearance of having its own server. In this tutorial we will show
how to create and use a virtual host address for a JBoss web application.

Name-based virtual hosting is created on any web server by establishing an aliased IP address in
the Domain Name Service (DNS) data and telling the web server to map all requests destined for the
aliased address to a particular directory of web pages. For demonstration purposes, we will use a
static hosts file, making an IP alias for localhost. The first thing we need to do is to set up the
virtual host alias in the hosts file (c:\windows\system32\drivers\etc\hosts for Windows or
/etc/hosts for Linux):

127.0.0.1 my-wildfly

Now let’s configure the application server. Configuring a Virtual Host with WildFly is pretty simple:
it requires defining a new host in a batch script, specifying its alias and the default web module it uses:

batch
/subsystem=undertow/server=default-server/host=myvirtualhost:add(alias=["my-wildfly"])
/subsystem=undertow/server=default-server/host=myvirtualhost/setting=access-
log:add(prefix="myvirtualhost")
/subsystem=undertow/server=default-server/host=myvirtualhost:write-
attribute(name=default-web-module,value=welcome.war)
run-batch

This results in the following change in your configuration:

<subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server"
  default-virtual-host="default-host" default-servlet-container="default"
  default-security-domain="other"
  statistics-enabled="${wildfly.undertow.statistics-enabled:${wildfly.statistics-enabled:false}}">

  <server name="default-server">
  . . . . .
  <host name="default-host" alias="localhost">
  <location name="/" handler="welcome-content"/>
  <http-invoker security-realm="ApplicationRealm"/>
  </host>
  <host name="myvirtualhost" alias="my-wildfly" default-web-module="welcome.war
">
  <access-log prefix="myvirtualhost"/>
  </host>
  </server>

  . . . . .
</subsystem>

Reload your configuration. Now every request for http://localhost:8080/ will land on the default-host
host and requests for http://my-wildfly:8080/ will be handled by the secondary host we have added.

Finally, note that if you want an application to be accessible only through one specific virtual
host, you have to specify it in the jboss-web.xml deployment descriptor. In this example we bind
the Web application "test" to that Virtual Host:

<jboss-web>
  <context-root>/test</context-root>
  <virtual-host>myvirtualhost</virtual-host>
</jboss-web>
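To double-check the host definition, you can read it back from the CLI, using the host name from the example above:

```
/subsystem=undertow/server=default-server/host=myvirtualhost:read-resource(recursive=true)
```

The output should show the "my-wildfly" alias, the access-log setting and the default-web-module we configured.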

8.7. Configuring the Servlet Container and JSP Settings


The Java Servlet technology was created as a portable way to provide dynamic, user-oriented
content. A JavaServer Page, on the other hand, is a document containing a mixture of HTML
markup and Java code, which is translated behind the scenes into a Servlet.

You can configure the Servlet Container and JSP Settings by means of the servlet-container element
which is just beneath the undertow Web server.

These settings affect applications developed with this technology, as well as
applications built on top of Servlets, such as JavaServer Faces applications.

Here is a short description of the Servlet Container settings, gathered using the CLI:

/subsystem=undertow/servlet-container=default:read-resource-description
{
  "outcome" => "success",
  "result" => {
  "description" => "A servlet container",
  "attributes" => {
  "allow-non-standard-wrappers" => {
  "type" => BOOLEAN,
  "description" => "If true then request and response wrappers that do
not extend the standard wrapper classes can be used",

  },
  "default-buffer-cache" => {
  "type" => STRING,
  "description" => "The buffer cache to use for caching static
resources",
  },
  "default-encoding" => {
  "type" => STRING,
  "description" => "Default encoding to use for all deployed
applications",
  },
  "default-session-timeout" => {
  "type" => INT,
  "description" => "The default session timeout (in minutes) for all
applications deployed in the container.",
  },
  "disable-caching-for-secured-pages" => {
  "type" => BOOLEAN,

143
  "description" => "If Undertow should set headers to disable caching
for secured paged. Disabling this can cause security problems, as sensitive
pages may be cached by an intermediary.",
  },
  "eager-filter-initialization" => {
  "type" => BOOLEAN,
  "description" => "If true undertow calls filter init() on deployment
start rather than when first requested.",
  },
  "ignore-flush" => {
  "type" => BOOLEAN,
  "description" => "Ignore flushes on the servlet output stream. In most
cases these just hurt performance for no good reason.",
  },
  "stack-trace-on-error" => {
  "type" => STRING,
  "description" => "If an error page with the stack trace should be
generated on error. Values are all, none and local-only",
  },
  "use-listener-encoding" => {
  "type" => BOOLEAN,
  "description" => "Use encoding defined on listener",
  }
. . .
}

The most important setting is default-session-timeout, which governs the length (in
minutes) of the HTTP Session. By default, this attribute is set to 30 minutes.
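For example, to raise the default session timeout to 60 minutes for all applications deployed in the container (a reload is required for the change to take effect):

```
/subsystem=undertow/servlet-container=default:write-attribute(name=default-session-timeout,value=60)
reload
```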

JSP Settings, on the other hand, are available as a child element of the default Servlet container.
Here is the list of JSP container settings:

/subsystem=undertow/servlet-container=default/setting=jsp:read-resource-description
{
  "outcome" => "success",
  "result" => {
  "description" => "JSP container configuration.",
  "attributes" => {
  "check-interval" => {
  "type" => INT,
  "description" => "Check interval for JSP updates using a background
thread.",
  },
  "development" => {
  "type" => BOOLEAN,
  "description" => "Enable Development mode which enables reloading JSP
on-the-fly",
  },
  "disabled" => {
  "type" => BOOLEAN,

144
  "description" => "Enable the JSP container.",
  },
  "display-source-fragment" => {
  "type" => BOOLEAN,
  "description" => "When a runtime error occurs, attempts to display
corresponding JSP source fragment",
  },
  "dump-smap" => {
  "type" => BOOLEAN,
  "description" => "Write SMAP data to a file.",
  },
  "error-on-use-bean-invalid-class-attribute" => {
  "type" => BOOLEAN,
  "description" => "Enable errors when using a bad class in useBean.",
  },
  "generate-strings-as-char-arrays" => {
  "type" => BOOLEAN,
  "description" => "Generate String constants as char arrays.",
  },
  "java-encoding" => {
  "type" => STRING,
  "description" => "Specify the encoding used for Java sources.",
  },
  "keep-generated" => {
  "type" => BOOLEAN,
  "description" => "Keep the generated Servlets.",
  },
  "mapped-file" => {
  "type" => BOOLEAN,
  "description" => "Map to the JSP source.",
  },
  "modification-test-interval" => {
  "type" => INT,
  "description" => "Minimum amount of time between two tests for
updates, in seconds.",
  },
  "recompile-on-fail" => {
  "type" => BOOLEAN,
  "description" => "Retry failed JSP compilations on each request.",
  },
  "scratch-dir" => {
  "type" => STRING,
  "description" => "Specify a different work directory.",
  },
  "smap" => {
  "type" => BOOLEAN,
  "description" => "Enable SMAP.",
  },
  "source-vm" => {
  "type" => STRING,
  "description" => "Source VM level for compilation.",

145
  },
  "tag-pooling" => {
  "type" => BOOLEAN,
  "description" => "Enable tag pooling.",
  },
  "target-vm" => {
  "type" => STRING,
  "description" => "Target VM level for compilation.",
  },
  "trim-spaces" => {
  "type" => BOOLEAN,
  "description" => "Trim some spaces from the generated Servlet.",
  },
  "x-powered-by" => {
  "type" => BOOLEAN,
  "description" => "Enable advertising the JSP engine in x-powered-by.",
. . . .

The most important setting is the development parameter, which affects the way your changes are
reflected in your deployed applications: when set to true, it enables on-the-fly reload of JSP pages.
By default this attribute is set to false. When set to true, a check based on the check-interval
determines whether the JSP pages have changed.

/subsystem=undertow/servlet-container=default/setting=jsp:write-
attribute(name=development,value=true)

/subsystem=undertow/servlet-container=default/setting=jsp:write-attribute(name=check-
interval,value=10)

reload

8.8. Configuring Undertow’s access logs


Access logs allow you to trace the HTTP requests handled by Undertow. You can
enable access logs by adding the access-log element to your Undertow server. Example:

 <host name="default-host" alias="localhost">
   <location name="/" handler="welcome-content"/>
  <access-log pattern="%h %l %u %t &quot;%r&quot; %s %b
&quot;%{i,Referer}&quot; &quot;%{i,User-Agent}&quot; Cookie: &quot;%{i,COOKIE}&quot;
Set-Cookie: &quot;%{o,SET-COOKIE}&quot; SessionID: %S Thread: &quot;%I&quot;
TimeTaken: %T"/>
  <http-invoker security-realm="ApplicationRealm"/>
 </host>

You can enable the above access-log from the CLI as follows:

/subsystem=undertow/server=default-server/host=default-host/setting=access-
log:add(pattern="%h %l %u %t \"%r\" %s %b \"%{i,Referer}\" \"%{i,User-Agent}\" Cookie:
\"%{i,COOKIE}\" Set-Cookie: \"%{o,SET-COOKIE}\" SessionID: %S Thread: \"%I\"
TimeTaken: %T")

The pattern we are using combines information from the Cookie header of the
request and the Set-Cookie header of the response, the session id (%S), the
thread name (%I), and the time taken in seconds (%T). For more details on the
access log patterns, check this resource:
http://undertow.io/javadoc/2.0.x/io/undertow/server/handlers/accesslog/AccessLogHandler.html

Additionally, to enable the recording of the request’s time, the following attribute needs to be set:

/subsystem=undertow/server=default-server/http-listener=default:write-
attribute(name=record-request-start-time,value=true)

Here is an excerpt from the access_log file, once you have enabled it:

127.0.0.1 - - [31/Dec/2019:10:07:15 +0100] "GET / HTTP/1.1" 200 1504 "-"
"Mozilla/5.0 (X11; Fedora; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0"
Cookie: "-" Set-Cookie: "-" SessionID: - Thread: "default task-1" TimeTaken: 0.033

8.8.1. Writing access logs in JSON format

As of WildFly 17, it is also possible to emit access logs in JSON format. This can be enabled by
adding the console-access-log setting:

/subsystem=undertow/server=default-server/host=default-host/setting=console-access-
log:add

Here is a sample output:

{"eventSource":"web-access","hostName":"default-
host","bytesSent":0,"dateTime":"[31/Dec/2019:10:22:17
+0100]","remoteHost":"127.0.0.1","remoteUser":null,"requestLine":"GET /bkg.gif
HTTP/1.1","responseCode":304}

The above example creates a default JSON-based access log. If you want to customize the
access log attributes and metadata, you can do so through the metadata and attributes parameters. Example:

/subsystem=undertow/server=default-server/host=default-host/setting=console-access-
log:add(metadata={"@version"="1",
"qualifiedHostName"=${jboss.qualified.host.name:unknown}}, attributes={bytes-sent={},
date-time={key="@timestamp", date-format="yyyy-MM-dd'T'HH:mm:ssSSS"}, remote-host={},
request-line={}, response-header={key-prefix="responseHeader", names=["Content-
Type"]}, response-code={}, remote-user={}})

8.9. Gathering statistics about Web applications


If you want to collect statistics for your Web applications, you first need to activate them as
follows:

/subsystem=undertow/:write-attribute(name=statistics-enabled,value=true)

Now, reload your server and you can query your application statistics through the deployment
root resource, which lists all the applications available:

/deployment=web-cluster-demo.war/subsystem=undertow/:read-resource(include-
runtime=true)
{
  "outcome" => "success",
  "result" => {
  "active-sessions" => 5,
  "context-root" => "/ demo",
  "expired-sessions" => 1,
  "max-active-sessions" => 5,
  "rejected-sessions" => 0,
  "server" => "default-server",
  "session-avg-alive-time" => 25,
  "session-max-alive-time" => 50,
  "sessions-created" => 5,
  "virtual-host" => "default-host",
  "servlet" => undefined
  }

8.10. Configuring HTTP/2 Support


One of the features added in WildFly 9 is support for the new version of the HTTP protocol
(HTTP/2) in the embedded Undertow Web server. Since HTTP/2 requires the use of TLS in the
request/response handshake, we will discuss it in the section Setting up HTTP/2, which deals with
security configuration.

8.11. Configuring EJB calls over Undertow’s HTTP


Since WildFly 11 it is possible to configure the Web server to handle incoming HTTP requests,
unmarshal them, and pass the result to the internal EJB invocation code.

The following dependency needs to be included on the client side to invoke EJBs over HTTP:

<dependency>
  <groupId>org.wildfly.wildfly-http-client</groupId>
  <artifactId>wildfly-http-ejb-client</artifactId>
</dependency>

Also, your clients need to be configured to use a specific Context.PROVIDER_URL to access the EJBs
over HTTP:

jndiProperties.put(Context.PROVIDER_URL, "http://localhost:8080/wildfly-services");

You can change the default "wildfly-services" URL Path through the Command Line Interface as
follows:

/subsystem=undertow/server=default-server/host=default-host/setting=http-
invoker:write-attribute(name=path,value=ejb-over-http)

At any time, you can disable the EJB over HTTP feature with:

/subsystem=undertow/server=default-server/host=default-host/setting=http-
invoker:remove()

9. Chapter 9: Configuring the Enterprise
subsystems
This Chapter covers the core subsystems which are the backbone of Enterprise applications. We
will start with an in-depth overview of the ejb subsystem, which is responsible for the management
of the EJB Container; then we will move to other core subsystems such as the ee subsystem, the
jaxrs, the singleton, the naming, the batch-jberet and the mail subsystem.

At the end of this chapter, you will have a comprehensive view of the Jakarta EE stack from the
management point of view.

9.1. Configuring the ejb subsystem


In this section we will learn how to configure the EJB container through the following units:

• Stateless and Message Driven Bean pool configuration: this section discusses how to define
the number of EJB instances kept in a pool

• EJB thread pool configuration: since the EJB container uses a thread pool to serve the different
types of beans, we will learn how to configure its pool of threads.

• Stateful bean cache configuration: this part of the chapter will illustrate how to configure your
Stateful session beans (SFSBs) cache used to store conversational state.

9.1.1. Configuring the EJB Pools

Java EE containers typically allow storing Stateless Session Beans (SLSBs) and Message Driven
Beans (MDBs) in a pool. A bean in the pool represents the pooled state in the EJB lifecycle; a pooled
EJB does not have an identity. The advantage of having beans in the pool is that the time needed to
create a bean can be saved when a request arrives.

Please keep in mind that the number of beans allowed in the pool applies to a
single Stateless Session Bean type. Therefore, if you have n beans available in
your application, the number of EJB instances in memory can reach (n * max-pool-size).
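The strict max pool semantics described above (a hard cap on instances, plus an acquisition timeout) can be sketched in plain Java. This is an illustrative sketch, not WildFly's implementation; the StrictMaxPool class name and its methods are invented for the example:

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Sketch of "strict max pool" semantics: at most maxSize instances ever
// exist; an acquire waits up to a timeout for a permit, then fails.
public class StrictMaxPool<T> {

    private final Semaphore permits;              // plays the role of max-pool-size
    private final Queue<T> idle = new ArrayDeque<>();
    private final Supplier<T> factory;

    public StrictMaxPool(int maxSize, Supplier<T> factory) {
        this.permits = new Semaphore(maxSize);
        this.factory = factory;
    }

    public T acquire(long timeout, TimeUnit unit) throws InterruptedException {
        // Mirrors the pool's timeout / timeout-unit attributes
        if (!permits.tryAcquire(timeout, unit)) {
            throw new IllegalStateException("No instance available within timeout");
        }
        synchronized (idle) {
            T instance = idle.poll();
            // Instance creation is saved whenever an idle one can be reused
            return instance != null ? instance : factory.get();
        }
    }

    public void release(T instance) {
        synchronized (idle) {
            idle.add(instance);
        }
        permits.release();
    }
}
```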

As you can see from the following CLI query, two bean pools are available under the ejb3
subsystem:

• mdb-strict-max-pool: default pool used by Message driven beans

• slsb-strict-max-pool: default pool used by Stateless session beans

/subsystem=ejb3:read-children-names(child-type=strict-max-bean-instance-pool)
{
  "outcome" => "success",
  "result" => [
  "mdb-strict-max-pool",
  "slsb-strict-max-pool"
  ]
}

By default, WildFly Stateless Session Beans use a pool size derived from the size of the IO
worker pool, which in turn is computed from the available system resources by the io subsystem.
As evidence, here is the current setting for the slsb-strict-max-pool:

/subsystem=ejb3/strict-max-bean-instance-pool=slsb-strict-max-pool/:read-
resource(recursive=false)
{
  "outcome" => "success",
  "result" => {
  "derive-size" => "from-worker-pools", ①
  "max-pool-size" => 20,
  "timeout" => 5L,
  "timeout-unit" => "MINUTES"
  }
}

① By default this matches with the "default" Thread Pool settings in the "io" subsystem.

The other options for the derive-size attribute include "from-cpu-count" and "none".

A value of "from-cpu-count" indicates that the max pool size should be derived from the total
number of processors available on the system. Note that the computation isn’t a 1:1 mapping, the
values may or may not be augmented by other factors. Here is how to set the SLSB pool to use the
"from-cpu-count" policy:

/subsystem=ejb3/strict-max-bean-instance-pool=slsb-strict-max-pool/:write-
attribute(name=derive-size,value=from-cpu-count)

A value of "none" indicates that the maximum number of SLSBs allowed in the pool is
specified by the max-pool-size attribute of the pool. Here is how to set the SLSB pool to use this policy:

/subsystem=ejb3/strict-max-bean-instance-pool=slsb-strict-max-pool/:write-
attribute(name=derive-size,value=none)

What about Message Driven Beans? MDBs are held in the mdb-strict-max-pool, which can also be
tuned through the derive-size attribute. The only difference is that it defaults to the "from-cpu-count"
value:

/subsystem=ejb3/strict-max-bean-instance-pool=mdb-strict-max-pool:read-resource()
{
  "outcome" => "success",
  "result" => {
  "derive-size" => "from-cpu-count",
  "max-pool-size" => 20,
  "timeout" => 5L,
  "timeout-unit" => "MINUTES"
  }
}

The other key parameter is timeout, which specifies the maximum time the
container waits to acquire an instance from the pool before raising an exception. Finally, timeout-
unit specifies the time unit (from nanoseconds up to days) used by the timeout parameter.
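For example, using the pool shown above, here is how to make acquisition fail after 30 seconds instead of the default 5 minutes:

```
/subsystem=ejb3/strict-max-bean-instance-pool=slsb-strict-max-pool:write-attribute(name=timeout,value=30)
/subsystem=ejb3/strict-max-bean-instance-pool=slsb-strict-max-pool:write-attribute(name=timeout-unit,value=SECONDS)
```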

9.1.2. Configuring the MDB delivery

Message Driven Beans respond to JMS messages. By default, all messages are actively
delivered to Message Driven Beans. This behavior is, however, highly configurable.

The Command Line Interface includes two operations, start-delivery and stop-delivery, that can
prevent messages from being delivered. These operations have to be executed on the /deployment
resource, which is a dynamic part of the model containing the applications deployed on WildFly:

/deployment=mdbdemo.war/subsystem=ejb3/message-driven-bean=DemoMDB:stop-delivery

Then you can restore message delivery by means of the start-delivery operation:

/deployment=mdbdemo.war/subsystem=ejb3/message-driven-bean=DemoMDB:start-delivery

This configuration can also be included in the jboss-ejb3.xml file, by setting the active
element within it to false:

<jboss:ejb-jar>
  <assembly-descriptor>
  <d:delivery>
  <ejb-name>DemoMDB</ejb-name>
  <d:active>false</d:active> ①
  </d:delivery>
  </assembly-descriptor>
</jboss:ejb-jar>

① Messages won’t be delivered!

9.1.2.1. Configuring MDB Group delivery

The start-delivery and stop-delivery operations enable or disable delivery for all MDBs. A more fine-
grained approach consists of creating delivery groups, so that you can control delivery at group
level. First of all, let’s see how to define a delivery group from the command line:

/subsystem=ejb3/mdb-delivery-group=mygroup:add

Then, you can associate your MDB to a particular group by using the
@org.jboss.ejb3.annotation.DeliveryGroup annotation in your code:

@MessageDriven(name = "HelloWorldMDB", activationConfig = {
  @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
  @ActivationConfigProperty(propertyName = "destination", propertyValue = "java:/ExampleQueue"),
  @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue = "Auto-acknowledge") })
@DeliveryGroup("mygroup") ①
public class DemoMDB implements MessageListener {
  public void onMessage(Message rcvMessage) {
  . . . .
  }
}

① Messages will be received only if the Group "mygroup" is active.

If you don’t want to pollute your application’s code with @DeliveryGroup annotation, you can opt
as usual for the jboss-ejb3.xml configuration:

<jboss:ejb-jar>
  <assembly-descriptor>
  <d:delivery>
  <ejb-name>DemoMDB</ejb-name>
  <d:group>mygroup</d:group>
  </d:delivery>
  </assembly-descriptor>
</jboss:ejb-jar>

Having defined your MDB groups, you can activate or deactivate delivery for each group individually.
For example, to disable delivery for "mygroup":

/subsystem=ejb3/mdb-delivery-group=mygroup:write-attribute(name=active,value=false)

To restore the group delivery, write the attribute with a true value:

/subsystem=ejb3/mdb-delivery-group=mygroup:write-attribute(name=active,value=true)
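You can check the current state of a delivery group at any time by reading its active attribute:

```
/subsystem=ejb3/mdb-delivery-group=mygroup:read-attribute(name=active)
```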

9.1.2.1.1. Attaching an MDB to multiple Groups

Since WildFly 16, it is possible to attach multiple delivery groups to the same MDB, with delivery
enabled only when all the delivery groups are active. Here is, for example, how to add multiple
delivery groups to the DemoMDB example:

@DeliveryGroup("mygroup1")
@DeliveryGroup("mygroup2")
public class DemoMDB implements MessageListener {
  public void onMessage(Message rcvMessage) {
  . . . .
  }
}

So, delivery of messages to DemoMDB will only be active when both "mygroup1" and "mygroup2" are
active. The same effect can be achieved declaratively, by including multiple delivery groups in the
jboss-ejb3.xml configuration:

<d:delivery>
  <ejb-name>DemoMDB</ejb-name>
  <d:group>mygroup1</d:group>
  <d:group>mygroup2</d:group>
</d:delivery>

9.1.3. Configuring the Stateful Session Bean cache

Stateful session beans require a different implementation from the container than their stateless
counterparts. There are several reasons for this:

• First, because they maintain state, that state is associated exclusively with one session, so
there is just one instance per session.

• Second, since they are bound to one session, the container must prevent any concurrent
modification to that state.

• Third, because they maintain state, that state needs to be part of a clustering HA configuration.

• Last, if the instance is not accessed in a period of time, and the bean is not in use, the state may
be passivated to disk.

For these reasons, Stateful beans are held in a cache instead of a pool of anonymous instances.
There are two types of Stateful caches available in your configuration:

• simple: This is a cache implementation using in-memory storage and eager expiration. It’s the
default for non-clustered (non-ha) profiles.

• distributable: This is the cache used to provide high-availability of SFSB state.

Each cache in turn can use different strategies. For example, the "simple" cache does not include
passivation as an option.

The "simple" cache can be used if you have a very limited amount of SFSBs clients
in your applications, as elements won’t be removed after a certain timeout either.
 So, you should only use this cache if you can guarantee that you’ll be calling the
@Remove method of the SFSB once you’re done with it.

9.1.3.1. Enabling Passivation for Stateful Session Beans

The "distributable" cache will store @Stateful EJB state in a distributed Infinispan cache. It is the
default for "ha" profiles. You can, however, switch the default SFSB cache to use the distributable
cache through the CLI:

/subsystem=ejb3/:write-attribute(name=default-sfsb-cache,value=distributable)

Much the same way, you can configure the clustered SFSB cache to use the passivating cache as
follows:

/subsystem=ejb3/:write-attribute(name=default-clustered-sfsb-
cache,value=distributable)

The distributable ejb cache is governed by the infinispan subsystem. You can configure it to evict
beans from the cache when a maximum size is reached or when they have been idle for an amount of
time. Here is, for example, how to set the maximum number of entries to 10000, causing
passivation of beans exceeding this threshold:

/subsystem=infinispan/cache-container=web/distributed-
cache=dist/eviction=EVICTION/:write-attribute(name=max-entries,value=10000)

See the section Configuring ejb and web Cache containers for more information about the ejb cache
configuration.

Please note that since WildFly 16 it is possible to force eager passivation of
Stateful Session Beans via the system property jboss.ejb.stateful.idle-timeout,
which reintroduces the previously available (JBoss EAP 6) ability to drive EJB
passivation behaviour based on a timeout.

9.1.3.2. Disabling Passivation for a single deployment

Since EJB 3.2 there is a portable way for disabling passivation of Stateful Beans. This can be
achieved either via the default ejb-jar.xml configuration file or by annotation directly on the Bean
class. In the following example, we are disabling passivation for the Stateful Session Bean named
"ExampleSFSB" by setting to false the passivation-capable element contained in the ejb-jar.xml
configuration file:

<ejb-jar xmlns="http://xmlns.jcp.org/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
  http://xmlns.jcp.org/xml/ns/javaee/ejb-jar_3_2.xsd"
  version="3.2">
  <enterprise-beans>
  <session>
  <ejb-name>ExampleSFSB</ejb-name>
  <ejb-class>com.sample.ExampleSFSB</ejb-class>
  <session-type>Stateful</session-type>
  <passivation-capable>false</passivation-capable>
  </session>
  </enterprise-beans>
 </ejb-jar>

This is the corresponding annotation which can be applied at Class level:

@javax.ejb.Stateful(passivationCapable=false)

public class ExampleSFSB {

....

9.1.4. Timeout policies for EJB

There are two kinds of timeout policies you can define for your EJBs:

• StatefulTimeout: This is used to make Stateful beans eligible for removal when they have been
inactive for a certain amount of time.

• AccessTimeout: This is used to specify the time period after which a queued request for a
Stateful or Singleton Bean times out.

Let’s see in more detail how to configure these timeout policies and which beans can leverage
them.

9.1.4.1. Configuring the Stateful Session Bean timeout

It is possible to configure a timeout for Stateful Session Beans, which causes the beans to be
removed when they don’t receive calls from clients for a certain amount of time. The timeout can
be configured at EJB level, or using a default global stateful session bean timeout for all deployed
stateful beans.

At EJB level, this can be done using the @javax.ejb.StatefulTimeout annotation as in this example:

@StatefulTimeout(value = 1000, unit = TimeUnit.MILLISECONDS)
public class PassivatingBean {

Besides that, you can configure the Stateful Session Bean timeout using the stateful-timeout XML
element in the ejb-jar.xml deployment descriptor. For example, to set a timeout value of 20 seconds:

<stateful-timeout>
  <timeout>20</timeout>
  <unit>Seconds</unit>
</stateful-timeout>

You can also define a global stateful timeout (which will be overridden by specific EJB
StatefulTimeout settings) as follows:

/subsystem=ejb3:write-attribute(name=default-stateful-bean-session-timeout,
value=10000)

In the above example, the default timeout for SFSBs is set to 10 seconds (the value is expressed in milliseconds).

9.1.4.2. Configuring the Access timeout for SFSBs and Singleton beans

Stateful and Singleton Session Beans have an access timeout value specified for managing
concurrent access.

As the name implies, a Singleton bean is a Session Bean with a guarantee that there is at most one
instance per JVM in the application.

This value is the period of time that a request to a session bean method can be blocked before it
times out. Here’s how to set this value to 5000 ms:

/subsystem=ejb3/:write-attribute(name=default-stateful-bean-access-timeout,value=5000)

The timeout value and the time unit used can also be specified using the
@javax.ejb.AccessTimeout annotation on the SFSB/Singleton method. It can be specified on the
session bean (which applies to all the bean’s methods) and on specific methods to override the
configuration for the bean.

Example:

@Singleton
public class Singleton_With_Timeout{
  @AccessTimeout(value = 5000, unit = java.util.concurrent.TimeUnit.MILLISECONDS)
  @Lock(LockType.WRITE)
  public void doSomething(){

  }
}
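Conceptually, the access timeout behaves like a lock acquisition with a timeout: the caller waits up to the configured period for exclusive access to the bean, then gives up. The following plain-Java sketch mimics that behavior with a ReentrantLock; the AccessGuard class is invented for illustration and is not container code:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// Rough illustration of access-timeout semantics: a caller waits up to the
// timeout for exclusive access to the bean, then the invocation fails.
public class AccessGuard {

    private final ReentrantLock lock = new ReentrantLock();

    // Returns false when the lock cannot be obtained in time; the real
    // container would raise a concurrent access timeout error instead
    public boolean invoke(long timeoutMs, Runnable body) throws InterruptedException {
        if (!lock.tryLock(timeoutMs, TimeUnit.MILLISECONDS)) {
            return false;
        }
        try {
            body.run();
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```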

9.1.5. EJB3 Thread pool configuration

WildFly maintains a number of instances of Java thread objects in memory for use by Enterprise
Bean services, including remote invocation, the timer service, and asynchronous invocation.

The EJB thread pool is used as the first layer for clients requesting EJBs, as shown by the following
picture, which depicts the typical invocation chain from a remote EJB client going through the
HTTP layer.

Behind the scenes, the Undertow Web server does the necessary plumbing to route the request to
the EJB thread pool using the Remoting protocol. Once a thread instance is acquired, the call
is routed to the EJB Session Bean pool where the actual methods are invoked.

Please note that invocations arriving from a local EJB client (e.g. a
Servlet) happen on the thread of the originating client (in our example, the
Servlet thread, whose pool size is configured in the undertow subsystem).

9.1.5.1. Configuring the EJB thread pool

The application server includes, out of the box, a thread pool named "default" whose properties
can be inspected using the following CLI query:

/subsystem=ejb3/thread-pool=default/:read-resource
{
  "outcome" => "success",
  "result" => {
  "core-threads" => undefined,
  "keepalive-time" => {
  "time" => 60L,
  "unit" => "SECONDS"
  },
  "max-threads" => 10,
  "name" => "default",
  "thread-factory" => undefined
  }
}

The attribute max-threads specifies the maximum number of threads in the thread pool. It is a
required attribute and defaults to 10.

The attribute core-threads specifies the number of core threads in the thread pool. It is an optional
attribute and defaults to max-threads value.

The attribute keepalive-time specifies the amount of time that non-core threads can stay idle
before they become eligible for removal. It is an optional attribute and defaults to 60 seconds.

The attribute thread-factory specifies the name of a specific thread factory to use to create worker
threads. If not defined, an appropriate default thread factory will be used.
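These four attributes map closely onto the java.util.concurrent.ThreadPoolExecutor concepts of core pool size, maximum pool size and keep-alive time. As a rough analogy only (not the server's actual implementation, and the queueing behavior differs), a pool tuned with core-threads=3 could be modeled as:

```java
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class EjbPoolAnalogy {

    // Mirrors core-threads=3, max-threads=10, keepalive-time=60 SECONDS
    static ThreadPoolExecutor build() {
        return new ThreadPoolExecutor(
                3,                      // core-threads
                10,                     // max-threads
                60, TimeUnit.SECONDS,   // keepalive-time for idle non-core threads
                new SynchronousQueue<>());
    }
}
```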

Here is how to configure the number of core-threads for the ejb3 subsystem:

/subsystem=ejb3/thread-pool=default:write-attribute(name=core-threads, value=3)

Here is how to set the max-threads attribute to 30 threads:

/subsystem=ejb3/thread-pool=default/:write-attribute(name=max-threads,value=30)

On the other hand, you can define a new thread pool by specifying its name and the mandatory
attribute max-threads:

/subsystem=ejb3/thread-pool=largepool/:add(max-threads=50)

Then, you need to switch the default thread pool implementation to the new pool:

/subsystem=ejb3/service=remote/:write-attribute(name=thread-pool-name,value=largepool)
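You can then verify that the remoting service points to the new pool by reading the attribute back:

/subsystem=ejb3/service=remote:read-attribute(name=thread-pool-name)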

9.1.5.2. EJB Thread pool optimization

The behavior and configuration of thread pools used in the EJB3 subsystem has been improved in
WildFly 18. The following table summarizes the new ejb3 thread pool behavior, compared with
previous versions of the application server:

New ejb3 thread pool behavior vs. older thread-pool behavior:

• core-threads and max-threads can be configured independently. Previously, only max-threads
was configurable, while core-threads always equalled max-threads.

• Available idle threads are reused as much as possible, without unnecessary creation of new
threads. Previously, upon a new request, new threads were created up to the limit of
max-threads, even when idle threads were available.

• Idle non-core threads are timed out after keepalive-timeout. Previously, idle threads were never
timed out and keepalive-timeout was ignored.

• Non-core threads are created and used once the core threads are saturated. Previously,
incoming tasks were queued once the core threads were used up.

Therefore, in the new thread pool architecture, idle workers are a key component when choosing
how to manage incoming tasks. When idle workers are available, the application server will
attempt to use these idle threads, provided that the maximum number of workers has not been
reached.

This architecture greatly reduces the number of Threads to be created to serve incoming requests,
thus improving the scalability of applications.

9.1.5.3. Gathering runtime statistics of the thread pool

If you add the include-runtime attribute to your CLI query, you can collect information about the
number of running tasks and the queue size:

/subsystem=ejb3/thread-pool=default/:read-resource(include-runtime=true)
{
  "outcome" => "success",
  "result" => {
  "active-count" => 2,
  "completed-task-count" => 20L,
  "current-thread-count" => 10,
  "keepalive-time" => {
  "time" => 100L,
  "unit" => "MILLISECONDS"
  },
  "largest-thread-count" => 10,
  "max-threads" => 10,
  "name" => "default",
  "queue-size" => 1,
  "rejected-count" => 0,
  "task-count" => 0L
  }
}

From the above query, we learn the following information:

• active-count: the approximate number of threads that are actively executing tasks.

• completed-task-count: the total number of tasks that have completed execution.

• current-thread-count: the current number of threads in the pool.

• largest-thread-count: the largest number of threads that have ever been simultaneously in
the pool.

• queue-size: the current size of the thread pool queue.

• rejected-count: the number of tasks that have been rejected for execution.

• task-count: the approximate total number of tasks that have ever been scheduled for execution.

9.1.6. Configuring Interceptors at EJB Container level

WildFly allows users to implement their own EJB interceptors as a part of the deployment or (since
WildFly 17) at subsystem level.

The interceptor classes themselves are simple POJOs and use the
@javax.interceptor.AroundInvoke or @javax.interceptor.AroundTimeout annotation to mark the
around-invoke method which will get invoked during the invocation on the bean. Here’s an
example of the interceptor:

public class ExampleContainerInterceptor {

  @AroundInvoke
  private Object aroundInvoke(final InvocationContext invocationContext) throws
Exception {
  return this.getClass().getName() + " " + invocationContext.proceed();
  }
}

You can configure Container Interceptors at deployment level through the jboss-ejb3.xml file,
which then gets placed under the META-INF folder of the EJB deployment, just like the ejb-jar.xml.
Here’s an example:

<jboss xmlns="http://www.jboss.com/xml/ns/javaee"
  xmlns:jee="http://java.sun.com/xml/ns/javaee"
  xmlns:ci ="urn:container-interceptors:1.0">

  <jee:assembly-descriptor>
  <ci:container-interceptors>
  <jee:interceptor-binding>
  <ejb-name>FlowTrackingBean</ejb-name>
  <interceptor-class>
com.example.ExampleContainerInterceptor</interceptor-class>
  </jee:interceptor-binding>
  </ci:container-interceptors>
  </jee:assembly-descriptor>
</jboss>

Since WildFly 17, however, it is possible to configure Container Interceptors as part of the ejb3
subsystem through the following elements:

• Server EJB Interceptors: These interceptors are POJO classes whose methods are annotated with
@javax.interceptor.AroundInvoke or @javax.interceptor.AroundTimeout. They are
decoupled from a specific EJB deployment. You can install them as a module and include them in
your ejb3 subsystem as in the following example:

<server-interceptors>
  <interceptor module="com.sample.interceptors:1.1" class=
"com.sample.ExampleContainerInterceptor"/>
</server-interceptors>
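For reference, installing such an interceptor as a module requires a module.xml under
$JBOSS_HOME/modules. The following is a minimal sketch matching the module name above; the
jar name and dependency list are illustrative, and the module namespace version may vary with
your WildFly release:

<module xmlns="urn:jboss:module:1.9" name="com.sample.interceptors" slot="1.1">
  <resources>
  <resource-root path="interceptors.jar"/>
  </resources>
  <dependencies>
  <module name="javax.api"/>
  </dependencies>
</module>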

• Client EJB Interceptors: These interceptors are POJO classes implementing the
org.jboss.ejb.client.EJBClientInterceptor interface. Here is a sample implementation:

import org.jboss.ejb.client.EJBClientInterceptor;
import org.jboss.ejb.client.EJBClientInvocationContext;

public class ClientInterceptor implements EJBClientInterceptor {

  @Override
  public void handleInvocation(EJBClientInvocationContext context) throws Exception
{
  context.sendRequest();
  }

  @Override
  public Object handleInvocationResult(EJBClientInvocationContext context) throws
Exception {
  return context.getResult();
  }
}

You can install Client EJB interceptors as a module and include them in your ejb3 subsystem as in
the following example:

<client-interceptors>
  <interceptor module="com.sample.interceptors:1.1" class=
"com.sample.ClientInterceptor"/>
</client-interceptors>

9.1.7. Configuring Remote EJB Transport

WildFly uses the Remoting framework in order to provide remote access to EJBs. In earlier
releases of the application server (AS7), this framework used a Socket transport which listened on
port 4447 of the application server in order to connect the remote client and the EJB container.

This communication stack is no longer used: remote EJB client invocations now happen on the
HTTP port (8080) using an underlying mechanism called HTTP upgrade.

That’s surely good news for system administrators, who no longer need to configure a firewall
exception to allow remote EJB access, and even better news if you are planning to deploy your
application on a cloud.
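Since invocations travel over the HTTP port, a remote client only needs to point its JNDI provider
URL at port 8080. Here is a minimal sketch of the client-side JNDI properties, assuming the WildFly
naming client library is on the client classpath (host and credentials are example values):

java.naming.factory.initial=org.wildfly.naming.client.WildFlyInitialContextFactory
java.naming.provider.url=remote+http://localhost:8080
java.naming.security.principal=ejbuser
java.naming.security.credentials=ejbpassword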

If you want to tune the transport of EJB calls, then you have to focus on the http-connector used by
the remoting protocol. The http-remoting-connector uses the "default" connector-ref, which in turn
maps to the "default" http-listener contained in the undertow configuration:

/subsystem=remoting/http-connector=http-remoting-connector/:read-attribute(name=connector-ref)
{
  "outcome" => "success",
  "result" => "default"
}

If you have created (or renamed) another http-connector, you can change the connector-ref
attribute so that it is used for the EJB transport. Here is how to use, for example, a
new-http-connector:

/subsystem=remoting/http-connector=http-remoting-connector/:write-attribute(name=connector-ref,value=new-http-connector)

The current version of WildFly also contains a set of properties such as worker-task-core-threads,
worker-task-read-threads and worker-task-write-threads. These properties have been deprecated
and thus have no effect on the transport of EJB invocations.

9.1.8. Enabling EJB statistics

If you need to monitor your EJB statistics via the CLI, you first need to activate them using the
following command:

/subsystem=ejb3/:write-attribute(name=enable-statistics,value=true)

Then you can query your beans for statistics. Here is for example how to gather the live statistics of
a SFSB:

/deployment=javaee7-ejb-server-basic.jar/subsystem=ejb3/stateful-session-bean=AccountEJB/:read-resource(include-runtime=true)
{
  "outcome" => "success",
  "result" => {
  "cache-size" => 2,
  "component-class-name" => "AccountEJB",
  "declared-roles" => [],
  "execution-time" => 0L,
  "invocations" => 0L,
  "methods" => {},
  "passivated-count" => 0,
  "peak-concurrent-invocations" => 0L,
  "run-as-role" => undefined,
  "security-domain" => "other",
  "total-size" => 2,
  "wait-time" => 0L,
  "service" => undefined
  }
}

9.1.9. Consuming messages from an external Messaging Provider

Out of the box the application server relies on the Artemis ActiveMQ Resource adapter for
messaging and uses its own highly performant native protocol called "Core". Therefore, if you are
connecting to an external broker, there are two possible scenarios:

• You are connecting to a Messaging Broker which uses a different protocol (e.g. AMQP,
OpenWire, etc.) for communication. In this case you will need a Resource Adapter for the
communication.

• You are connecting to an external Artemis ActiveMQ server. In this case, you will need to
make sure you are connecting to the Artemis ActiveMQ Acceptor which exposes the Core
protocol.

Let’s see a concrete example for both cases.

9.1.9.1. Consuming messages from a Broker which uses a different Protocol

As an example, we will be connecting to a remote ActiveMQ broker. See the section ActiveMQ
Artemis overview to learn the differences between ActiveMQ and Artemis ActiveMQ.

• Start by downloading the Resource Adapter. Currently the ActiveMQ resource adapter is hosted
on the Maven repository (http://mvnrepository.com/artifact/org.apache.activemq/activemq-rar).

• Once you have downloaded the resource adapter, you can either install it as a module, or simply
deploy it as you would do for an application:

cp activemq-rar-5.15.0.rar /opt/wildfly-20.0.0.Final/standalone/deployments

Now, we will configure the Resource Adapter through the resource-adapters subsystem. (The
Resource adapter configuration can be partly completed through the Admin Console; however, we
will need to configure some advanced settings such as admin-objects and Connection Factories,
hence we suggest a direct configuration in the XML configuration file):

<subsystem xmlns="urn:jboss:domain:resource-adapters:3.0">
  <resource-adapters>
  <resource-adapter id="activemq">
  <archive>activemq-rar-5.15.0.rar</archive> ①
  <transaction-support>XATransaction</transaction-support>
  <config-property name="UseInboundSession">
  false
  </config-property>
  <config-property name="Password">
  defaultPassword
  </config-property>
  <config-property name="UserName">
  defaultUser
  </config-property>
  <config-property name="ServerUrl">
  tcp://localhost:61616
  </config-property>
  <connection-definitions>
  <connection-definition class-name=
"org.apache.activemq.ra.ActiveMQManagedConnectionFactory" jndi-name=
"java:/MQConnectionFactory" enabled="true" pool-name="ConnectionFactory">
  <xa-pool>
  <min-pool-size>1</min-pool-size>
  <max-pool-size>20</max-pool-size>
  <prefill>false</prefill>
  <is-same-rm-override>false</is-same-rm-override>
  </xa-pool>
  </connection-definition>
  </connection-definitions>
  <admin-objects>
  <admin-object class-name="org.apache.activemq.command.ActiveMQQueue"
jndi-name="java:jboss/activemq/queue/TestQueue" use-java-context="true" pool-name=
"TestQueue">
  <config-property name="PhysicalName">
  activemq/queue/TestQueue
  </config-property>
  </admin-object>
  <admin-object class-name="org.apache.activemq.command.ActiveMQTopic"
jndi-name="java:jboss/activemq/topic/TestTopic" use-java-context="true" pool-name=
"TestTopic">
  <config-property name="PhysicalName">
  activemq/topic/TestTopic
  </config-property>
  </admin-object>
  </admin-objects>
  </resource-adapter>
  </resource-adapters>
</subsystem>

① This is searched in the list of installed modules or applications

Please note the CLI script required to generate the above Resource Adapter is available on Github:
http://bit.ly/2GC8nNe

The most interesting part is the ActiveMQ configuration, which relies on the default Connection
settings: the Connection Factory configuration exposes the ActiveMQ ConnectionFactory
through the JNDI mapping "java:/MQConnectionFactory", plus two Administered objects: a JMS
Queue bound at "java:jboss/activemq/queue/TestQueue" and a JMS Topic bound at
"java:jboss/activemq/topic/TestTopic".

Check from the Server logs that the Resource Adapter has been correctly deployed and the JCA
Objects have been bound in the application server JNDI Tree:

14:21:19,227 INFO [org.jboss.as.connector.deployment] (MSC service thread 1-7)
WFLYJCA0002: Bound JCA ConnectionFactory [java:/MQConnectionFactory]
14:21:19,228 INFO [org.jboss.as.connector.deployment] (MSC service thread 1-1)
WFLYJCA0002: Bound JCA AdminObject [java:jboss/activemq/queue/TestQueue]
14:21:19,227 INFO [org.jboss.as.connector.deployment] (MSC service thread 1-4)
WFLYJCA0002: Bound JCA AdminObject [java:jboss/activemq/topic/TestTopic]

9.1.9.2. Consuming messages from an external ArtemisMQ Broker

• Start by downloading Artemis ActiveMQ, which is available at
http://activemq.apache.org/components/artemis/download/.

• Install the server according to the documentation available at
http://activemq.apache.org/components/artemis/documentation/

Next, you need to make sure you are using the CORE protocol and that the following properties are
set, so that the Resource Adapter on WildFly can find Artemis Queues/Topics:

• anycastPrefix=jms.queue.

• multicastPrefix=jms.topic.

For example, the following acceptor will work:

<acceptors>
  <acceptor name="hornetq">tcp://127.0.0.1:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=CORE,HORNETQ,STOMP;useEpoll=true</acceptor>
</acceptors>

Also, within our configuration, we have defined a Queue to be accessed by JMS Clients:

<address name="demoQueue">
  <anycast>
  <queue name="demoQueue"/>
  </anycast>
</address>

Now that we are done with Artemis ActiveMQ, let’s configure WildFly. First of all, we need a
remote-connector and a pooled-connection-factory. Within the pooled-connection-factory, specify
the JNDI name for the connection and the credentials to access the server.

<subsystem xmlns="urn:jboss:domain:messaging-activemq:8.0">
  <server name="default">

  <remote-connector name="remote-artemis" socket-binding="remote-artemis"/>

  <pooled-connection-factory name="remote-artemis"
  entries="java:/RemoteJmsXA java:jboss/RemoteJmsXA"
  connectors="remote-artemis" ha="false" user="admin" password="admin"
  min-pool-size="15" max-pool-size="30" statistics-enabled="true">
  <inbound-config rebalance-connections="true" setup-attempts="-1"
setup-interval="5000"/>
  </pooled-connection-factory>

  </server>
</subsystem>

Please notice that you can use the parameter use-jndi="false" if you want to let
 your Connection Factory skip the JNDI lookup of Queues.

The remote-connector points to a socket-binding which contains the address and port of the remote
AMQ server:

<outbound-socket-binding name="remote-artemis">
  <remote-destination host="127.0.0.1" port="5445"/>
</outbound-socket-binding>

Then, within the naming subsystem, specify the JNDI Settings so that you are able to look up the
remote Queues/Topics running on Artemis AMQ:

<subsystem xmlns="urn:jboss:domain:naming:2.0">
  <bindings>
  <external-context name="java:global/remoteContext" module=
"org.apache.activemq.artemis" class="javax.naming.InitialContext">
  <environment>
  <property name="java.naming.factory.initial" value=
"org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory"/>
  <property name="java.naming.provider.url" value=
"tcp://127.0.0.1:5445"/>
  <property name="connectionFactory.ConnectionFactory" value=
"tcp://127.0.0.1:5445"/>
  <property name="queue.demoQueue" value="demoQueue"/>
  </environment>
  </external-context>
  <lookup name="java:/demoQueue" lookup=
"java:global/remoteContext/demoQueue"/>
  </bindings>
  <remote-naming/>
</subsystem>

In our case, we will be looking up the JMS Queue named demoQueue.

9.1.9.2.1. Coding JMS Consumers and JMS Producers

Our JMS Consumers will need to reference the pooled-connection-factory name through the
@org.jboss.ejb3.annotation.ResourceAdapter annotation:

@MessageDriven(name = "DemoMDB", activationConfig = {
  @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue =
"java:global/remoteContext/demoQueue"),
  @ActivationConfigProperty(propertyName = "destinationType", propertyValue =
"javax.jms.Queue"),
  @ActivationConfigProperty(propertyName = "user", propertyValue = "amq"),
  @ActivationConfigProperty(propertyName = "password", propertyValue = "amq"),
  @ActivationConfigProperty(propertyName = "acknowledgeMode", propertyValue =
"Auto-acknowledge")})
@ResourceAdapter(value="remote-artemis")
public class HelloWorldQueueMDB implements MessageListener {

  private static final Logger LOGGER = Logger.getLogger(HelloWorldQueueMDB.class.toString());

  public void onMessage(Message rcvMessage) {


  TextMessage msg = null;
  try {
  if (rcvMessage instanceof TextMessage) {
  msg = (TextMessage) rcvMessage;
  LOGGER.info("Received Message from queue: " + msg.getText());
  } else {
  LOGGER.warning("Message of wrong type: " + rcvMessage.getClass()
.getName());
  }
  } catch (JMSException e) {
  throw new RuntimeException(e);
  }
  }
}

On the other hand, when using JMS Clients, we need to inject a JMSContext bound to the remote
connection factory via @JMSConnectionFactory, as follows:

@WebServlet("/HelloWorldMDBServletClient")
public class DemoServlet extends HttpServlet {

  private static final int MSG_COUNT = 5;

  @Inject
  @JMSConnectionFactory("java:/RemoteJmsXA")
  @JMSPasswordCredential(userName = "amq", password = "amq")
  private JMSContext context;

  @Resource(lookup = "java:global/remoteContext/demoQueue")
  private Queue queue;

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws
ServletException, IOException {
  resp.setContentType("text/html");
  PrintWriter out = resp.getWriter();
  try {
  final Destination destination = queue;

  out.write("<p>Sending messages to <em>" + destination + "</em></p>");


  out.write("<h2>The following messages will be sent to the destination:</h2>");
  for (int i = 0; i < MSG_COUNT; i++) {
  String text = "This is message " + (i + 1);
  context.createProducer().send(destination, text);
  out.write("Message (" + i + "): " + text + "</br>");
  }

  } finally {
  if (out != null) {
  out.close();
  }
  }
  }

  protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws ServletException, IOException {
  doGet(req, resp);
  }
}

9.2. Configuring the ee subsystem


The ee subsystem provides a set of functionalities which can be used to manage different aspects of
the application server, such as:

• Customise the deployment of Jakarta EE applications

• Create EE Concurrency Utilities objects

• Define the default bindings

Let’s see each functionality in detail:

9.2.1. Managing Jakarta EE Application Deployment

First off, the ee subsystem allows the customisation of the deployment behaviour for Jakarta EE
Applications by means of global modules and global directories.

A Global module is a set of JBoss Modules that will be added as dependencies to the JBoss Modules
module of every Jakarta EE deployment. Such dependencies allow Jakarta EE deployments to see
the classes exported by the global modules. In this section, you can learn more details about it: How
to turn your modules in a global module

A Global directory represents a directory tree scanned automatically to include .jar files and
resources as a single additional dependency. This module dependency is added as a system
dependency on each deployed application. Basically, with a global directory, you will be relying on
WildFly to automate the maintenance and configuration of a JBoss Modules module that represents
the jar files and resources of a specific directory. In this section you can learn more details about it:
How to use global directories for your modules
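As a quick sketch, a global directory can be registered with a single CLI command; the directory
name and path below are example values:

/subsystem=ee/global-directory=common-libs:add(path=/opt/shared-libs)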

To manage application deployment, the ee subsystem configuration includes flags to configure
whether system property replacement will be done on XML descriptors and Java Annotations
included in Jakarta EE deployments.

The spec-descriptor-property-replacement flag indicates whether system property replacement
will be performed on standard Jakarta EE XML descriptors. If not configured, this defaults to true;
however, it is set to false in the standard configuration files shipped with WildFly.

  <spec-descriptor-property-replacement>false</spec-descriptor-property-replacement>

When enabled, properties can be replaced in the following deployment descriptors:

• ejb-jar.xml

• persistence.xml

• application.xml

• web.xml

• permissions.xml
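Property replacement uses the usual ${property:default-value} expression syntax. As an
illustrative sketch (the parameter name and property are made up), a web.xml could carry:

<context-param>
  <param-name>appVersion</param-name>
  <param-value>${app.version:1.0.0}</param-value>
</context-param>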

The jboss-descriptor-property-replacement flag indicates whether system property replacement
will be performed on WildFly proprietary XML descriptors, such as jboss-app.xml. This defaults to
true:

  <jboss-descriptor-property-replacement>false</jboss-descriptor-property-replacement>

When enabled, properties can be replaced in the following deployment descriptors:

• jboss-ejb3.xml

• jboss-app.xml

• jboss-web.xml

• jboss-permissions.xml

• JMS Deployment descriptors (*-jms.xml)

• Datasource Deployment descriptors (*-ds.xml)

Finally, the annotation-property-replacement flag indicates whether system property replacement
will be performed on Java annotations. The default value is false.

  <annotation-property-replacement>false</annotation-property-replacement>

9.2.2. Managing EE Concurrency Utilities

EE Concurrency Utilities (JSR 236) can be used to simplify the management of multithreaded
applications. Instances of these utilities, along with their related configuration, are managed by
WildFly. The following components are included in the ee subsystem:

Context Services: create contextual proxies from existing objects. WildFly Context Services
are also used to propagate the context from a Jakarta EE application invocation thread to the
threads internally used by the other EE Concurrency Utilities. Context Service instances may be
created using the subsystem XML configuration:

  <context-services>
  <context-service name="default" jndi-name=
"java:jboss/ee/concurrency/context/default" use-transaction-setup-provider="true" />
  </context-services>

The name attribute is mandatory, and its value should be a unique name within all Context
Services.

The jndi-name attribute is also mandatory, and defines where in the JNDI the Context Service
should be placed.

The optional use-transaction-setup-provider attribute indicates if the contextual proxies built by
the Context Service should suspend transactions in context when invoking the proxy objects; its
value defaults to true.

Management clients, such as the WildFly CLI, may also be used to configure Context Service
instances. An example to add and remove one named other:

/subsystem=ee/context-service=other:add(jndi-name=java\:jboss\/ee\/concurrency\/other)
/subsystem=ee/context-service=other:remove

Managed Thread Factories: The Managed Thread Factory allows Jakarta EE applications to create
new threads. WildFly Managed Thread Factory instances may also, optionally, use a Context Service
instance to propagate the Jakarta EE application thread’s context to the new threads. Instance
creation is done through the EE subsystem, by editing the subsystem XML configuration:

  <managed-thread-factories>
  <managed-thread-factory name="default" jndi-name=
"java:jboss/ee/concurrency/factory/default" context-service="default" priority="1" />
  </managed-thread-factories>

The name attribute is mandatory, and its value should be a unique name within all Managed
Thread Factories.

The jndi-name attribute is also mandatory, and defines where in the JNDI the Managed Thread
Factory should be placed.

The optional context-service references an existing Context Service by its name. If specified, then
threads created by the factory will propagate the invocation context present when the thread is
created.

The optional priority indicates the priority for new threads created by the factory, and defaults to 5.

Management clients, such as the WildFly CLI, may also be used to configure Managed Thread
Factory instances. An example to add and remove one named other:

/subsystem=ee/managed-thread-factory=other:add(jndi-name=java\:jboss\/ee\/factory\/other)
/subsystem=ee/managed-thread-factory=other:remove

Managed Executor Services: The Managed Executor Service is the Jakarta EE adaptation of the Java
SE Executor Service, providing to Jakarta EE applications the functionality of asynchronous task
execution. WildFly is responsible for managing the lifecycle of Managed Executor Service instances,
which are specified through the EE subsystem XML configuration:

<managed-executor-services>
  <managed-executor-service
  name="default"
  jndi-name="java:jboss/ee/concurrency/executor/default"
  context-service="default"
  thread-factory="default"
  hung-task-threshold="60000"
  core-threads="5"
  max-threads="25"
  keepalive-time="5000"
  queue-length="1000000"
  reject-policy="RETRY_ABORT" />
</managed-executor-services>

The core-threads attribute provides the number of threads to keep in the executor’s
pool, even if they are idle. If this is not defined or is set to 0, the core pool size will be calculated
based on the number of available processors.

You can optionally define a queue-length to indicate the number of tasks that can be stored in the
input queue. The default value is 0, which means the queue capacity is unlimited.

Here is how core-threads and queue-length work in combination:

• If queue-length is 0, or queue-length is Integer.MAX_VALUE (2147483647) and core-threads is 0,
a direct handoff queuing strategy will be used and a synchronous queue will be created.

• If queue-length is Integer.MAX_VALUE but core-threads is not 0, an unbounded queue will be
used.

• For any other valid value of queue-length, a bounded queue will be created.
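These rules mirror the queue selection of java.util.concurrent.ThreadPoolExecutor. The following
is a rough analogy, not WildFly code: a small sketch of the decision logic showing which queue type
each of the three cases corresponds to.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

public class QueueChoice {

    // Maps the queue-length / core-threads combination to a work queue,
    // following the three cases described above.
    static BlockingQueue<Runnable> queueFor(int queueLength, int coreThreads) {
        if (queueLength == 0 || (queueLength == Integer.MAX_VALUE && coreThreads == 0)) {
            return new SynchronousQueue<>();              // direct handoff
        } else if (queueLength == Integer.MAX_VALUE) {
            return new LinkedBlockingQueue<>();           // unbounded queue
        } else {
            return new ArrayBlockingQueue<>(queueLength); // bounded queue
        }
    }

    public static void main(String[] args) {
        System.out.println(queueFor(0, 5).getClass().getSimpleName());
        System.out.println(queueFor(Integer.MAX_VALUE, 5).getClass().getSimpleName());
        System.out.println(queueFor(1000, 5).getClass().getSimpleName());
    }
}
```

Running the sketch prints SynchronousQueue, LinkedBlockingQueue and ArrayBlockingQueue for
the three cases, respectively.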

The optional hung-task-threshold defines a runtime threshold value, in milliseconds, for tasks to
be considered hung by the executor. A value of 0 will never consider tasks to be hung.

The optional long-running-tasks is a hint to optimize the execution of long running tasks, and
defaults to false.

The optional max-threads defines the maximum number of threads used by the executor,
which defaults to Integer.MAX_VALUE (2147483647).

The optional keepalive-time defines the time, in milliseconds, that an internal thread may be idle.
The attribute default value is 60000.

The optional reject-policy defines the policy to use when a task is rejected by the executor. The
attribute value may be the default ABORT, which means an exception should be thrown, or
RETRY_ABORT, which means the executor will try to submit it once more, before throwing an
exception.

Besides the configuration settings, you can also read the Runtime metrics for the Executor service
as follows:

/subsystem=ee/managed-executor-service=default:read-resource(include-runtime=true)
{
  "outcome" => "success",
  "result" => {
  "active-thread-count" => 0,
  "completed-task-count" => 0L,
  "context-service" => "default",
  "core-threads" => undefined,
  "current-queue-size" => 0,
  "hung-task-threshold" => 60000L,
  "hung-thread-count" => 0,
  "jndi-name" => "java:jboss/ee/concurrency/executor/default",
  "keepalive-time" => 5000L,
  "long-running-tasks" => false,
  "max-thread-count" => 0,
  "max-threads" => undefined,
  "queue-length" => undefined,
  "reject-policy" => "ABORT",
  "task-count" => 0L,
  "thread-count" => 0,
  "thread-factory" => undefined,
  "thread-priority" => 5
  }
}

Here is a short description of each single Runtime attribute:

• active-thread-count: the approximate number of threads that are actively executing tasks.

• completed-task-count: the approximate total number of tasks that have completed execution.

• current-queue-size: the current size of the executor’s task queue.

• hung-thread-count: the number of executor threads that are hung.

• max-thread-count: the largest number of executor threads.

• task-count: the approximate total number of tasks that have ever been submitted for execution.

• thread-count: the current number of executor threads.
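As with the ejb3 thread pool, a single metric can be read directly with read-attribute; for example:

/subsystem=ee/managed-executor-service=default:read-attribute(name=active-thread-count)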

Managed Scheduled Executor Services: The Managed Scheduled Executor Service is the Jakarta
EE adaptation of the Java SE Scheduled Executor Service, providing to Jakarta EE applications the
functionality of scheduling task execution. WildFly is responsible for managing the lifecycle of
Managed Scheduled Executor Service instances, which are specified through the EE subsystem XML
configuration:

<managed-scheduled-executor-services>
  <managed-scheduled-executor-service
  name="default"
  jndi-name="java:jboss/ee/concurrency/scheduler/default"
  context-service="default"
  thread-factory="default"
  hung-task-threshold="60000"
  core-threads="5"
  keepalive-time="5000"
  reject-policy="RETRY_ABORT" />
</managed-scheduled-executor-services>

The settings for this service are the same as those discussed for the Executor Service. Also, just like
for the Managed Executor, you can query Runtime metrics as follows:

/subsystem=ee/managed-scheduled-executor-service=default:read-resource(include-runtime=true)

9.2.3. Managing Default bindings

The Jakarta EE Specification mandates the existence of a default instance for a set of resources such
as:

• Datasource

• Context Service

• JMS Connection Factory

• Managed Executor Service

• Managed Scheduled Executor Service

• Managed Thread Factory

The ee subsystem hooks up those resources by performing a JNDI lookup using the names in the
default bindings configuration, and then binding them to the standard JNDI names. Here are
the default JNDI bindings:

  <default-bindings
 context-service="java:jboss/ee/concurrency/context/default"
 datasource="java:jboss/datasources/ExampleDS"
 jms-connection-factory="java:jboss/DefaultJMSConnectionFactory"
 managed-executor-service="java:jboss/ee/concurrency/executor/default"
 managed-scheduled-executor-service="java:jboss/ee/concurrency/scheduler/default"
 managed-thread-factory="java:jboss/ee/concurrency/factory/default" />

Here is, for example, how to change the default jms-connection-factory:

/subsystem=ee/service=default-bindings:write-attribute(name=jms-connection-factory,value=java:jboss/MyJMSConnectionFactory)

9.3. Configuring the jaxrs subsystem


The jaxrs subsystem operates on the JAX-RS implementation of the application server (RESTEasy).
There are no specific operations you can execute against this subsystem, although you can globally
define a set of attributes which are otherwise configurable in your deployments.

You can check the global jaxrs attributes with the CLI as follows:

/subsystem=jaxrs:read-resource(include-runtime=true)
{
  "outcome" => "success",
  "result" => {
  "jaxrs-2-0-request-matching" => false,
  "resteasy-add-charset" => true,
  "resteasy-buffer-exception-entity" => true,
  "resteasy-disable-html-sanitizer" => false,
  "resteasy-disable-providers" => undefined,
  "resteasy-document-expand-entity-references" => false,
  "resteasy-document-secure-disableDTDs" => true,
  "resteasy-document-secure-processing-feature" => true,
  "resteasy-gzip-max-input" => 10000000,
  "resteasy-jndi-resources" => undefined,
  "resteasy-language-mappings" => undefined,
  "resteasy-media-type-mappings" => undefined,
  "resteasy-media-type-param-mapping" => undefined,
  "resteasy-prefer-jackson-over-jsonb" => false,
  "resteasy-providers" => undefined,
  "resteasy-rfc7232preconditions" => false,
  "resteasy-role-based-security" => false,
  "resteasy-secure-random-max-use" => 100,
  "resteasy-use-builtin-providers" => true,
  "resteasy-use-container-form-params" => false,
  "resteasy-wider-request-matching" => false
  }
}

In the current implementation (available in WildFly 19), it is possible to configure RESTEasy context parameters and providers. For example, the following command disables the secure processing of XML documents (note that the CLI attribute names are hyphenated, as in the read-resource output above):

/subsystem=jaxrs:write-attribute(name=resteasy-document-secure-processing-feature,value=false)

9.4. Configuring the singleton subsystem
The singleton subsystem defines a set of policies that determine how an HA singleton should behave. A singleton policy can be used to instrument singleton deployments or to create singleton MSC services.

The singleton subsystem is included in the HA profiles:

<subsystem xmlns="urn:jboss:domain:singleton:1.0">
  <singleton-policies default="default">
  <singleton-policy name="default" cache-container="server">
  <simple-election-policy/>
  </singleton-policy>
  </singleton-policies>
</subsystem>

The first thing you can configure in this subsystem is its election policy. Out of the box, the singleton subsystem uses the default policy, which builds a circular list of cluster members, starting from the oldest, that are elected in turn to host the singleton application. If you prefer a random policy, you can switch to it at any time from the CLI:

/subsystem=singleton/singleton-policy=random-policy/election-policy=random:add()

You can also add a simple election policy (the default policy type) to a policy of your own, tuning its position attribute:

/subsystem=singleton/singleton-policy=my-policy/election-policy=simple:add(position=-1)

In the above command we have set position to -1, which means the policy will create a circular list of cluster members starting from the youngest. (position=0, the default value, refers to the oldest node, 1 to the second oldest, and so on.)

The other important setting is the cache-container, which must reference a valid cache container from the Infinispan subsystem. If you are using the standard configuration, the default "server" cache container will be used:

<cache-container name="server" aliases="singleton cluster" default-cache="default"
  module="org.wildfly.clustering.server">
  <transport lock-timeout="60000"/>
  <replicated-cache name="default">
  <transaction mode="BATCH"/>
  </replicated-cache>
</cache-container>

9.4.1. Defining a Quorum for Singleton

You can choose to postpone the singleton provider election until a minimum number of active members is available. This is recommended if you have issues with network partitions. For example, if you want to require a quorum of at least 3 members, you can apply the following policy (here applied to a policy named foo):

/subsystem=singleton/singleton-policy=foo:write-attribute(name=quorum, value=3)

9.5. Configuring the naming subsystem


The Java Naming and Directory Interface (JNDI) is a Java API for directory services that allows Java software clients to discover and look up data and objects via a name. Like other Java APIs, JNDI is independent of the underlying implementation; it specifies a service provider interface (SPI) that allows directory service implementations to be plugged into the framework.

Enterprise resources (such as datasources and JMS destinations) are stored in the JNDI tree so that they can be consumed by applications deployed on the application server. You can also use JNDI to store attributes that will be used by your servers, much like application properties. The advantage of using JNDI instead of plain properties is that JNDI provides a tree structure for bindings, which you don't have in a simple properties file.

WildFly ships with a naming subsystem that contains the bindings element. To add JNDI bindings to the application server, just add name/value entries to it:

<subsystem xmlns="urn:jboss:domain:naming:2.0">
  <bindings>
  <simple name="java:/jndi/mykey" value="MyValue"/>
  </bindings>
  <remote-naming/>
</subsystem>

Please note that JNDI entries need to be bound in a namespace starting with one of java:global, java:jboss, or java:/.

You can achieve the same goal by using the CLI as follows:

/subsystem=naming/binding=java\:\/jndi\/mykey/:add(binding-type=simple,value=MyValue)

9.5.1. Naming Alias

A naming alias creates a link from one JNDI binding to another; you can think of it as a symbolic link on Unix systems. This can be useful, for example, if you are migrating from one application server's JNDI binding structure to another and the JNDI names are hardcoded in your application code. To create a naming alias, use the name and lookup attributes of the lookup element:

<subsystem xmlns="urn:jboss:domain:naming:2.0">
  <bindings>
  <lookup name="java:global/MyOldEJB"
  lookup="java:global/my-ear/my-ejb-module/ExampleEJB"/>
  </bindings>
  <remote-naming/>
</subsystem>

Again, you can achieve the same goal using the CLI as follows:

/subsystem=naming/binding=java\:global\/MyOldEJB/:add(binding-type=lookup,lookup=java:global/my-ear/my-ejb-module/ExampleEJB)

9.6. Configuring the batch-jberet subsystem


WildFly ships with a subsystem named batch-jberet, which manages the Batch API for Java applications (JSR 352). This specification defines a programming model for batch applications and a runtime for scheduling and executing jobs. Out of the box, the following configuration is included:

<subsystem xmlns="urn:jboss:domain:batch-jberet:2.0">
  <default-job-repository name="in-memory" />
  <default-thread-pool name="batch" />
  <job-repository name="in-memory">
  <in-memory />
  </job-repository>
  <thread-pool name="batch">
  <max-threads count="10" />
  <keepalive-time time="30" unit="seconds" />
  </thread-pool>
</subsystem>

In terms of configuration, what is worth knowing is that job executions are stored in a repository, which enables querying current and historical job status. The default job repository is in-memory, which means you can query the repository programmatically using the Batch API. On the other hand, if you want to inspect the job repository using typical administration tools, you can opt for a JDBC repository, which can then be queried using standard SQL commands.

Setting the job repository to use JDBC is just a matter of executing a CLI command:

/subsystem=batch-jberet/jdbc-job-repository=jdbc-repository:add(data-source=PostgrePool)

In this example, we are configuring the job repository to use the datasource named PostgrePool.
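After reloading, the command above should translate into an extra element in the batch-jberet subsystem; the following fragment is an approximation of the expected result:

```
<jdbc-job-repository name="jdbc-repository" data-source="PostgrePool"/>
```

Note that, to make the new repository the default one used by deployments, you may also need to point the subsystem's default-job-repository attribute at it (this extra step is an assumption, shown as a hint): /subsystem=batch-jberet:write-attribute(name=default-job-repository,value=jdbc-repository)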

Once you have a JDBC repository, its tables are created automatically the first time you run jobs on the application server. Here is the list of tables created:

postgres> \dt
 Schema | Name | Type | Owner
--------+---------------------+-------+----------
 public | bindings | table | postgres
 public | job_execution | table | postgres
 public | job_instance | table | postgres
 public | large_messages | table | postgres
 public | messages | table | postgres
 public | page_store | table | postgres
 public | partition_execution | table | postgres
 public | step_execution | table | postgres

postgres> select * from JOB_INSTANCE ;


+---------------+---------+-----------+---------------------+
| JOBINSTANCEID | VERSION | JOBNAME | APPLICATIONNAME |
+---------------+---------+-----------+---------------------+
| 1 | NULL | simpleJob | javaee7-batch-chunk |
| 2 | NULL | simpleJob | javaee7-batch-chunk |
| 3 | NULL | simpleJob | javaee7-batch-chunk |
+---------------+---------+-----------+---------------------+

On the other hand, if you want some execution details about jobs, then you can query the
JOB_EXECUTION table:

postgres> select JOBEXECUTIONID, ENDTIME, BATCHSTATUS, EXITSTATUS from JOB_EXECUTION;

+----------------+---------------------+-------------+------------+
| JOBEXECUTIONID | ENDTIME | BATCHSTATUS | EXITSTATUS |
+----------------+---------------------+-------------+------------+
| 1 | 2014-06-17 15:48:35 | FAILED | FAILED |
| 2 | 2014-06-17 15:52:40 | COMPLETED | COMPLETED |
| 3 | 2014-06-17 15:58:28 | COMPLETED | COMPLETED |
+----------------+---------------------+-------------+------------+

9.7. Configuring the mail subsystem


The WildFly mail subsystem is included in all server configurations and exposes a mail session bound at the JNDI name "java:jboss/mail/Default":

<subsystem xmlns="urn:jboss:domain:mail:3.0">
  <mail-session name="default" jndi-name="java:jboss/mail/Default">
  <smtp-server outbound-socket-binding-ref="mail-smtp"/>
  </mail-session>
</subsystem>

The mail session in turn references an SMTP host at localhost on port 25:

<outbound-socket-binding name="mail-smtp">
  <remote-destination host="localhost" port="25"/>
</outbound-socket-binding>

To configure the connection to a POP/SMTP server, we need to set the username and password in the mail-session element and, if necessary, enable SSL. The following CLI script connects to Gmail's SMTP server using an example account (adjust the user and password accordingly):

/subsystem=mail/mail-session=default/server=smtp/:write-attribute(name=username,value=myuser@gmail.com)
/subsystem=mail/mail-session=default/server=smtp/:write-attribute(name=password,value=mypassword)
/subsystem=mail/mail-session=default/server=smtp/:write-attribute(name=ssl,value=true)
/subsystem=mail/mail-session=default/:write-attribute(name=from,value=admin@mydomain.com)

Now reload your server configuration; your mail subsystem should look like this:

<subsystem xmlns="urn:jboss:domain:mail:3.0">
  <mail-session name="default" jndi-name="java:jboss/mail/Default"
  from="admin@mydomain.com">
  <smtp-server outbound-socket-binding-ref="mail-smtp" ssl="true"
  username="myuser@gmail.com" password="mypassword"/>
  </mail-session>
</subsystem>

Done with the mail session configuration, we will now set the outbound socket for your outgoing messages. This translates into setting the host and port to Gmail's (or your mail provider's) values:

/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=mail-smtp/:write-attribute(name=host,value=smtp.gmail.com)
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=mail-smtp/:write-attribute(name=port,value=465)

The expected outcome of this action on your configuration will be:

<outbound-socket-binding name="mail-smtp">
  <remote-destination host="smtp.gmail.com" port="465"/>
</outbound-socket-binding>

Important notice for Google users: If you see the following error

javax.mail.AuthenticationFailedException: 535-5.7.1 Username and Password not accepted.

It means that you need to generate an application-specific password instead of using your original password. You can generate one at https://accounts.google.com/IssuedAuthSubTokens and use the generated application-specific password in place of your original password.

10. Chapter 10: Configure Logging
Logging is a common requirement for every middleware system: it is used to trace error messages and warnings, or simply to record information and statistics. WildFly logging is substantially based on the JDK's built-in logging API, Java Util Logging (JUL), which is included in the java.util.logging package.

In this chapter, we are going to learn the following topics:

• At first, we will study the default logging configuration used by the application server

• Then we will learn how to configure Handlers, which are in charge of receiving a log message and sending it to a target

• Finally, we will learn how to associate a Handler with a Logger element that binds the log to a Java package

10.1. WildFly default logging configuration


By default, WildFly emits its logs both on the terminal console and to a file. Console logging is mostly for development purposes, as you would probably start the application server as a background process on a production system; therefore, we will mostly concentrate on logs that are traced to a file.

The default location of the server log file is dictated by the jboss.server.log.dir property, which corresponds, in a standalone installation, to the folder $JBOSS_HOME/standalone/log or, in domain mode, to the folder $JBOSS_HOME/domain/log. The number of log files used also varies according to the server mode:

• The standalone installation, by default, emits logging in the file named server.log

You can use the jboss.server.log.dir property to configure the location where logs are written. For example, on a standalone installation, the following command will write the server.log file into the /home/user/logs folder:

$ ./standalone.sh -Djboss.server.log.dir=/home/user/logs

When running in domain mode, you can customize the location of your host and process controller logs by setting the jboss.domain.log.dir system property:

• The domain installation emits the host controller logging in a file named host-controller.log, which traces the Domain Controller activities. The single processes, which are triggered by the host controller, are traced in a file named process-controller.log. Finally, each server that belongs to a domain emits its logs in JBOSS_HOME/domain/servers/[servername]/server.log. Since applications are targeted at server nodes, most of the time you will focus on the individual server logs.

$ ./domain.sh -Djboss.domain.log.dir=/home/user/domainlogs

10.2. Configuring Log Handlers


A Handler receives a log event and exports it to a destination. Out of the box, the application server defines a Console Handler, which writes logs on the server console, and a Periodic Rotating File Handler, which writes logs to a file using a time-based rotation policy.

There are eight types of Handlers that you can configure:

• Console: as we said, this traces logs on the server console

• File: writes logs to a file without a specific (time/size) constraint

• Periodic: writes logs to a file, rotating logs on a time basis

• Size: writes logs to a file, rotating logs on a size basis

• Periodic / Size: writes logs to a file, rotating logs on a time basis or when a certain size is reached

• Async: defines a handler that uses an asynchronous thread to feed its sub-handlers

• Custom: allows using your own class (one that extends java.util.logging.Handler) to trace logs

• Syslog: sends logs to the operating system's syslog service

10.2.1. Configuring the Periodic Rotating Handler

As we said, by default WildFly ships with a Periodic Rotating File Handler, which rotates the log file server.log daily. Here are the default properties of this handler:

/subsystem=logging/periodic-rotating-file-handler=FILE/:read-resource()
{
  "outcome" => "success",
  "result" => {
  "append" => true,
  "autoflush" => true,
  "enabled" => true,
  "encoding" => undefined,
  "file" => {
  "relative-to" => "jboss.server.log.dir",
  "path" => "server.log"
  },
  "filter" => undefined,
  "filter-spec" => undefined,
  "formatter" => "%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%e%n",
  "level" => "ALL",
  "name" => "FILE",
  "named-formatter" => "PATTERN",
  "suffix" => ".yyyy-MM-dd"
  }
}

Here is a description of the attributes of the Handler:

• append determines if the log file will be appended at server startup (default true).

• autoflush determines if each log event will be flushed on the log file (default true).

• suffix is a key element of the Periodic Handler configuration, as it determines how often the file is rotated. The default .yyyy-MM-dd rotates logs daily, producing file names of the form server.log.yyyy-MM-dd. If you want to change the time rotation policy, just choose a format understood by java.text.SimpleDateFormat; for example, a suffix of .yyyy-MM-dd-HH rotates the logs hourly.

• path specifies where logs are written. If you have specified a relative-to directory, the path will
be treated as a relative path, otherwise (if relative-to is blank) it will be the absolute path
where log files will be written.

• level attribute defines the log level associated with the handler. Message levels lower than this
value will be discarded. Out of the box this handler logs all the events matching the verbosity
level. If you want to apply some filters over the log events you can use the filter and filter-spec
as indicated in the section "Filtering Logs".

• Finally, the formatter element provides support for formatting LogRecords. The log formatting uses the same pattern strings as log4j's layout patterns, which were in turn inspired by C's printf function.
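Since the suffix is parsed by java.text.SimpleDateFormat, you can preview how a given pattern will name the rotated files with a few lines of plain Java. This is an illustrative sketch (it mimics the file-name construction, not the handler itself), using a fixed instant so the result is reproducible:

```java
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class SuffixPreview {

    // Builds the rotated file name for a given base name, suffix pattern
    // and instant, mimicking what the Periodic Rotating File Handler does.
    static String rotatedName(String base, String pattern, Date when) {
        SimpleDateFormat fmt = new SimpleDateFormat(pattern);
        fmt.setTimeZone(TimeZone.getTimeZone("UTC"));
        return base + fmt.format(when);
    }

    public static void main(String[] args) {
        Date epoch = new Date(0L); // fixed instant (1970-01-01T00:00:00Z)
        // Daily rotation (the default suffix)
        System.out.println(rotatedName("server.log", ".yyyy-MM-dd", epoch));
        // Hourly rotation
        System.out.println(rotatedName("server.log", ".yyyy-MM-dd-HH", epoch));
    }
}
```

Running the sketch prints server.log.1970-01-01 and server.log.1970-01-01-00, which is exactly the naming scheme the handler applies on rotation.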

10.2.1.1. Changing the path where the log is written

Changing the attributes of the handlers is pretty simple, as you can execute a write-attribute on the attribute you want to change. Varying the path where logs are written is a bit more cumbersome, as it requires changing nested attributes of the file element.

Here is how to set the daily rolling appender to use the jboss.server.log.dir/wildfly.log path:

/subsystem=logging/periodic-rotating-file-handler=FILE/:write-attribute(name=file,value={"relative-to" => "jboss.server.log.dir","path" => "wildfly.log"})

The following command sets the log file to the absolute path /tmp/wildfly.log:

/subsystem=logging/periodic-rotating-file-handler=FILE/:write-attribute(name=file,value={"path" => "/tmp/wildfly.log"})

10.2.1.2. Formatting the log output

The log output can be changed by means of the formatter element which uses a set of pattern
expressions to define the output of the logs. This section contains the list of expressions that can be
included in the formatter attribute:

Formatter quick help

• The string %d{HH:mm:ss,SSS} outputs the date of the logging event using the conversion
included in brackets.

• The string %-5p will output the priority of the logging event

• The string [%c] is used to output the category of the logging event

• The string (%t) outputs the Thread that generated the logging event

• The string (%M) outputs the method that generated the logging event

• The string %s outputs the log message

• The string %n outputs the platform dependent line separator character

10.2.1.3. Filtering the logs

The verbosity of the log output can also be changed by means of filters, which are applied to the content of the data to be logged. To do that, you can use a filter expression that includes or excludes log messages based on their text content.

For example, if you were to get rid of logs containing the text "IJ000906" then you could enter the
following expression via CLI:

/subsystem=logging/periodic-rotating-file-handler=FILE/:write-attribute(name=filter-spec,value=not(match("IJ000906")))

This will produce the following addition in your configuration:

<periodic-rotating-file-handler name="FILE" autoflush="true">
  <filter-spec value="not(match(&quot;IJ000906&quot;))"/>
</periodic-rotating-file-handler>

If, on the other hand, you were to choose to log any message containing either the text "JBAS" or
"JBWS022052" then you could opt for the "any" filtering expression:

/subsystem=logging/periodic-rotating-file-handler=FILE/:write-attribute(name=filter-spec,value=any(match("JBAS"), match("JBWS022052")))

The list of available filtering patterns is documented at: https://docs.jboss.org/author/display/WFLY10/Logging+Configuration

10.2.2. Adding a new Handler: the Size Rotating Handler

As we have seen, the default File Handler uses a rotation policy based on a time factor. If you would rather keep an eye on how much your file grows, you can define a Size Rotating Handler; this handler writes to a file, rotating the log when the size of the file grows beyond a certain point and keeping a fixed number of backups.

We will show here how to create a new Handler by means of both management instruments, the
Web console and the Command Line Interface.

10.2.2.1. Adding the handler from the Web console

Creating a new Handler is simple by means of the Web console: select the Size option from the Handler File menu and click on Add:

In the next popup window, enter the Handler Name, and click on Next. In the next window enter
the File Path and File Path relative to, using the same criteria that we have learned for the
Periodic Handler. Click Finish to persist the Handler.

As you can see from the next picture, our handler named "SIZE" has a default Rotation Size Policy
of 2MB with a single log backup being kept as dictated by the Max Backup Index:

10.2.2.2. Adding the handler from the CLI

If you prefer, the same result can be achieved by means of the CLI with the following script:

/subsystem="logging"/size-rotating-file-handler="SIZEHANDLER":add(append="true",autoflush="true",file={"relative-to"=> "jboss.server.log.dir","path" =>"largelog.log"},max-backup-index=1,rotate-on-boot=true,rotate-size=2m)

Your Size Handler is now configured; it however needs to be bound to a Logger in order to work.
See the section Configuring the Root Logger to learn how to replace the default FILE logging policy
with this one.

10.2.3. Creating a Custom Handler that writes logs to the Database

If you want full control over your logs, you can create a Custom Handler that extends the java.util.logging.Handler abstract class and overrides its abstract methods. In the following example, we will show how to trace logs into a PostgreSQL database. We will use the datasource connection from Creating a Datasource using the CLI. Before starting, we need to create the database table which will contain the logs. Here's an example which assumes that you are using PostgreSQL as the database:

CREATE TABLE log_table(
  id serial PRIMARY KEY,
  timestamp VARCHAR(255),
  log_level VARCHAR(255),
  class VARCHAR(255),
  message VARCHAR(1500));

The next step is creating our custom Handler class named com.mastertheboss.JdbcLogger, which extends the java.util.logging.Handler class. The Maven project that contains this class is available on GitHub at: http://bit.ly/2tZM9BU
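As an orientation, here is a minimal skeleton of a java.util.logging.Handler subclass. This is a simplified sketch, not the actual JdbcLogger: where the real class issues JDBC INSERTs into log_table, this version collects the formatted rows in memory so that it stays self-contained:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Minimal custom JUL handler: the three abstract methods of Handler are
// publish(), flush() and close(). Comments mark where a JDBC-backed
// implementation would talk to the database.
public class CollectingHandler extends Handler {

    final List<String> lines = new ArrayList<>();

    @Override
    public void publish(LogRecord record) {
        if (!isLoggable(record)) {
            return;
        }
        // A JDBC handler would INSERT level, class and message into log_table.
        lines.add(record.getLevel() + " " + record.getLoggerName()
                + " " + record.getMessage());
    }

    @Override
    public void flush() {
        // A JDBC handler could commit a batch of pending INSERTs here.
    }

    @Override
    public void close() {
        // A JDBC handler would release its connection here.
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("com.sample");
        CollectingHandler handler = new CollectingHandler();
        handler.setLevel(Level.INFO);
        logger.addHandler(handler);
        logger.info("hello");
        System.out.println(handler.lines);
    }
}
```

The real JdbcLogger follows the same shape, with the connection properties (driverClassName, jdbcUrl, username, password) supplied by the subsystem, as shown in the CLI script below.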

Once you have compiled and packaged the Maven project (mvn clean install), perform the following steps:

• Install the dblogger.jar as a module (We will name it org.logger.postgres).

• Create a Custom Handler which will reference the org.logger.postgres module

• Assign the Custom Handler to a Logger, for instance to the Root logger.

The safest way to execute these steps is via a CLI batch script. Here is one that will do the job for us:

batch

#Add the module to the application server


module add --name=org.logger.postgres --resources=/tmp/dblogger/target/dblogger.jar
--dependencies=javax.api,org.jboss.logging,org.postgres

#Create a Custom Handler named DBLogHandler


/subsystem=logging/custom-handler=DBLogHandler/:add(class=com.mastertheboss.JdbcLogger,module=org.logger.postgres,level=INFO,formatter="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n", properties={"driverClassName" => "org.postgresql.Driver","jdbcUrl" => "jdbc:postgresql://localhost:5432/postgres","username" => "postgres","password" => "postgres"})

# Add DBLogHandler to the Root Logger


/subsystem=logging/root-logger=ROOT:add-handler(name=DBLogHandler)

run-batch
reload

Now we’re done. Verify from your log_table that logs are being traced.

The above batch script is also available on GitHub at: http://bit.ly/3cnhmkS

10.2.4. Creating a Custom Handler that writes logs via Socket

Another custom handler can be used to send logs via TCP sockets. You can then use a TCP server to listen for incoming log events. In order to create a Socket Handler you need to perform these steps:

• Download the Socket Handler API: a Socket Handler is not native to the WildFly configuration, so we first need to download the JBoss Log Manager extension.

• Install the Log Manager extension as a module and assign it to a Custom Handler. In our case, we will add the handler to the ROOT logger.

The recommended way to execute the script is via a CLI batch file. Here is the CLI script that will do
the job:

batch

#Add the module to the application server


module add --name=org.jboss.logmanager.ext
--dependencies=org.jboss.logmanager,javax.json.api,javax.xml.stream.api
--resources=jboss-logmanager-ext-1.0.0.Alpha5.jar

#Create a Custom Handler using the SocketHandler API


/subsystem=logging/custom-handler=socket-handler:add(class=org.jboss.logmanager.ext.handlers.SocketHandler,module=org.jboss.logmanager.ext,named-formatter=PATTERN,properties={hostname=localhost, port=7080})

# Add the Custom Handler to the Root Logger


/subsystem=logging/root-logger=ROOT:add-handler(name=socket-handler)

run-batch

reload

You can find the source code for the above CLI Script at: http://bit.ly/2tVrOO9

Once you have configured the SocketHandler, all you need is a TCP server which listens on port 7080 and captures the incoming messages.

10.2.5. Configuring Handlers to be asynchronous

The Async Handler can be used to log events asynchronously. Behind the scenes, this handler uses a bounded queue to store events. Every time a log is emitted, the handler returns immediately after placing the event in the bounded queue. An internal dispatcher thread serves the events accumulated in the queue.

The Async Handler is a composite handler which attaches to other handlers to produce asynchronous logging events. In the following example we are adding an Async Handler which buffers log events in a queue of 512 events, using a BLOCK policy when the log events overflow the queue size:

/subsystem=logging/async-handler=asynchandler/:add(queue-length=512,level=ALL,overflow-action=BLOCK,subhandlers=["SIZE"])
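The queue-and-dispatcher mechanism described above can be sketched in a few lines of plain Java. This is a conceptual model only, not WildFly's actual implementation: a bounded queue makes publishers block when it is full (the BLOCK overflow policy), while a daemon thread drains events towards a stand-in for the sub-handler:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class AsyncSketch {

    private final BlockingQueue<String> queue;
    final List<String> delivered = new ArrayList<>();

    AsyncSketch(int queueLength) {
        queue = new ArrayBlockingQueue<>(queueLength);
        Thread dispatcher = new Thread(() -> {
            try {
                while (true) {
                    String event = queue.take(); // waits for the next event
                    synchronized (delivered) {
                        delivered.add(event);    // stand-in for a sub-handler
                    }
                }
            } catch (InterruptedException ignored) {
            }
        });
        dispatcher.setDaemon(true);
        dispatcher.start();
    }

    // Blocks when the queue already holds queueLength events (BLOCK policy).
    void publish(String event) throws InterruptedException {
        queue.put(event);
    }

    public static void main(String[] args) throws Exception {
        AsyncSketch handler = new AsyncSketch(512);
        handler.publish("event-1");
        handler.publish("event-2");
        Thread.sleep(200); // give the dispatcher time to drain the queue
        synchronized (handler.delivered) {
            System.out.println(handler.delivered);
        }
    }
}
```

The DISCARD overflow-action would correspond to using offer() instead of put(), dropping events when the queue is full.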

10.3. Configuring the Root Logger
The Root Logger is the ancestor of all loggers. All classes that don't have a configured logger will inherit from the Root Logger. As you can see below, by default the Root Logger has two Handlers associated with it:

• The CONSOLE handler, which prints log messages on the application server console

• The FILE handler, which writes logs to the server.log file in the jboss.server.log.dir folder.

/subsystem=logging/root-logger=ROOT/:read-resource()
{
  "outcome" => "success",
  "result" => {
  "filter" => undefined,
  "filter-spec" => undefined,
  "handlers" => [
  "CONSOLE",
  "FILE"
  ],
  "level" => "INFO" }
}

By default, the Root Logger has a log level of INFO, which means that it will print log messages with a priority of INFO or higher. You can change the default Root Logger level by executing the change-root-log-level operation, which takes the log level as an argument:

/subsystem=logging/root-logger=ROOT/:change-root-log-level(level=INFO)

Changing the Root Logger level has a severe impact on your applications, as it will alter the verbosity of all log messages from packages that have no specific Logger attached to them.

In order to remove the CONSOLE handler from the Root Logger, you can execute the following command:

/subsystem=logging/root-logger=ROOT/:remove-handler(name=CONSOLE)

On the other hand, you can restore the CONSOLE handler as follows:

/subsystem=logging/root-logger=ROOT/:add-handler(name=CONSOLE)

As an alternative, you can set the Handlers as a list, which requires a more complex syntax:

/subsystem=logging/root-logger=ROOT/:write-attribute(name=handlers,value=["CONSOLE","FILE","SIZE"])

10.4. Configuring Logging Categories


So far we have seen how to configure the Root Logger, a generic logger that intercepts all logs which do not belong to a specific Logger. Most of the time, you will rather need to define specific Loggers for your application packages, so that you can easily change your logging configuration according to the characteristics of your projects. For this purpose, you will define Logger categories, which are named entities using a package-like dot-separated name such as "com.arjuna". The namespace is hierarchical and should typically be aligned with the Java packaging namespace. The following Logger categories are available in the server configuration:

/subsystem=logging/:read-children-resources(child-type=logger,recursive=false)
{
  "result" => {
  "com.arjuna" => {
  "category" => "com.arjuna",
  "filter-spec" => undefined,
  "handlers" => undefined,
  "level" => "WARN",
  "use-parent-handlers" => true
  },
  "org.apache.tomcat.util.modeler" => {
  "category" => "org.apache.tomcat.util.modeler",
  "filter-spec" => undefined,
  "handlers" => undefined,
  "level" => "WARN",
  "use-parent-handlers" => true
  },
  "org.jboss.as.config" => {
  "category" => "org.jboss.as.config",
  "filter-spec" => undefined,
  "handlers" => undefined,
  "level" => "DEBUG",
  "use-parent-handlers" => true
  },
  "sun.rmi" => {
  "category" => "sun.rmi",
  "filter-spec" => undefined,
  "handlers" => undefined,
  "level" => "WARN",
  "use-parent-handlers" => true
  }
  }
}

You can add new Loggers to your configuration: the only mandatory piece is the logger category, which is the resource name. By adding the handlers attribute, the logger will send events to those Handlers. If use-parent-handlers is set to true, the logger will also use the handlers of its ancestors. (Every logger has at least one ancestor, the Root Logger.)

Here is how to define the com.sample Logger which refers to the FILE handler with the log level
INFO and use-parent-handlers set to false:

/subsystem=logging/logger=com.sample/:add(handlers=["FILE"],level=INFO,use-parent-handlers=false)

Please note that if the Logger level is set to a different value than the Handler's level, the Logger level will prevail, as it is the more specific attribute.
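The hierarchy semantics mirror plain java.util.logging, on which WildFly logging is based. The following standalone sketch (illustrative, not WildFly-specific) shows the parent/child relationship and the flag that corresponds to use-parent-handlers:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Plain-JUL illustration of logger categories: dot-separated names form a
// hierarchy, and useParentHandlers corresponds to the subsystem's
// use-parent-handlers attribute.
public class CategoryDemo {
    public static void main(String[] args) {
        Logger parent = Logger.getLogger("com");
        parent.setLevel(Level.WARNING);

        Logger child = Logger.getLogger("com.sample");
        child.setLevel(Level.INFO);        // the more specific level applies
        child.setUseParentHandlers(false); // keep events away from the
                                           // ancestors' handlers

        // The nearest configured ancestor of "com.sample" is "com"
        System.out.println(child.getParent().getName());
    }
}
```

Here setUseParentHandlers(false) plays the same role as use-parent-handlers=false in the CLI command above.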

10.5. Other Logging configuration files


Besides the standard logging configuration contained in your XML files, you may have noticed that there is a logging.properties file in the configuration directory. This file is actually generated by the application server from your logging subsystem configuration. Most of the time you should not attempt to modify this file at all; however, if you are inheriting a large JUL configuration from your applications, you might experiment with using this file as a replacement for the logging subsystem configuration, provided that you have removed the logging configuration from your server XML file.

10.5.1. Using Log4j to trace your application logs

Earlier JBoss users might remember that it was possible to use log4j as the default logging system; since WildFly is based on JUL, this is no longer possible. However, you can use a feature named per-deployment logging, which allows using the following configuration files on a per-application basis:

• logging.properties

• jboss-logging.properties

• log4j.properties

• log4j.xml

• jboss-log4j.xml

In order to let the deployment scanner find the log configuration files, they need
to be placed in an appropriate folder: EAR files should contain the configuration
 in the META-INF directory. On the other hand, in a WAR or JAR deployment the
configuration files can be in either the META-INF or WEB-INF/classes directories.

Besides adding the log configuration file in the appropriate folder, you need to link the log library to your application: for example, in order to let your application code use the log4j module which is bundled in the application server (though not implicitly loaded by it), you can include the following dependency in your META-INF/MANIFEST.MF file:

Dependencies: org.apache.log4j

(See Advanced Classloading policies for more information about it).

10.5.2. Disabling the core logging API

If you are planning to exclude the default logging API from your deployments, the simplest way to achieve this is by setting the add-logging-api-dependencies attribute to false:

/subsystem=logging/:write-attribute(name=add-logging-api-dependencies,value=false)

When this attribute is set to false, the logging API dependencies will no longer be added to your
deployments. Another option is to use the jboss-deployment-structure.xml file to exclude the logging
subsystem from your deployments (again, see Advanced Classloading policies for more information
about it).

10.6. Other ways to read the log files


Although log files are typically read and filtered using your operating system shell commands (such
as tail or grep), you can also use the management instruments to read the server log files.

10.6.1. Reading Logs with the Command Line Interface

Gathering information from server log files is quite intuitive; for example, in order to get the list of
the server log files, simply execute the list-log-files command on the logging subsystem:

/subsystem=logging/:list-log-files
{
  "outcome" => "success",
  "result" => [
  {
  "file-name" => "server.log",
  "file-size" => 9695L,
  "last-modified-date" => "2014-02-13T15:46:04.365+0100"
  },
  {
  "file-name" => "server.log.2014-02-12",
  "file-size" => 9695L,
  "last-modified-date" => "2014-02-12T21:32:49.759+0100"
  }
  ]
}

Another interesting option is the ability to display the actual log file content filtered by
some parameters, such as the number of lines to read (lines parameter) and the number of lines to
skip from the top of the file (skip parameter). Here’s how to read the first 10 lines of the server.log file:

/subsystem=logging/:read-log-file(name=server.log,lines=10,skip=0)
{
  "outcome" => "success",
  "result" => [
  "2014-02-13 15:04:37,017 INFO [org.wildfly.extension.undertow] (MSC service
thread 1-2) JBAS017525: Started server default-server.",
  "2014-02-13 15:04:37,041 INFO [org.wildfly.extension.undertow] (MSC service
thread 1-8) JBAS017531: Host default-host starting",
  "2014-02-13 15:04:37,327 INFO [org.wildfly.extension.undertow] (MSC service
thread 1-5) JBAS017519: Undertow HTTP listener default listening on /127.0.0.1:8080",
. . . .
  ]
}

If you want a full read of your log file, simply provide -1 as the argument to the "lines" parameter.

Additionally, it is worth mentioning that you can also check your log files through the attachment
CLI command, which has options both for displaying files and for saving them locally.

Here is how to display the content of server.log file using the attachment command:

attachment display --operation=/subsystem=logging/log-file=server.log:read-resource(include-runtime)

And here is how to save it locally:

attachment save --operation=/subsystem=logging/log-file=server.log:read-resource(include-runtime) --file=./server.log

10.6.2. Reading logs using the HTTP channel

Direct log download via HTTP is a handy option, as you can download the log files directly from a
browser.

So, assuming the server is running on localhost, here is how to trigger the download of the
server.log file by reading the stream over HTTP:

http://localhost:9990/management/subsystem/logging/log-file/server.log?operation=attribute&name=stream&useStreamAsResponse

The drawback of the above approach is that you have to enter the username and password manually
when connecting through the management API. Alternatively, you can use the curl command to
pass the username and password (admin/Password1!) in the HTTP header as follows:

curl --digest -L -D - http://127.0.0.1:9990/management?useStreamAsResponse \
  --header "Content-Type: application/json" -u admin:Password1! \
  -d '{"operation":"read-attribute","address":[{"subsystem":"logging"},{"log-file":"server.log"}],"name":"stream"}'

11. Chapter 11: Configuring JMS Services
This chapter discusses the configuration of Java Message Service (JMS) on WildFly. In the new
10.x server architecture the messaging broker is now ActiveMQ Artemis, which includes many new
features but also retains protocol compatibility with the HornetQ broker.

In order to learn all about JMS configuration we will follow these steps:

• At first we will learn the building blocks of ActiveMQ Artemis architecture

• Next we will learn how to configure JMS connections

• In the next part, we will focus on creating JMS destinations including Connection Factories and
JMS Queues/Topics

• Finally, we will have a look at clustered configuration of ActiveMQ Artemis servers as part of a
WildFly cluster

11.1. ActiveMQ Artemis overview


First of all, let’s make a few concepts clear. Until WildFly 9 the default message broker was HornetQ.
In 2014 the HornetQ project was donated to the Apache Software Foundation, and the code name used
for the new message broker is ActiveMQ Artemis. The good news is that ActiveMQ Artemis retains protocol
compatibility with the older HornetQ, therefore no code changes are necessary in your JMS
applications.

Do not confuse ActiveMQ Artemis with ActiveMQ. ActiveMQ 5 is the "classic" JMS
1.1 long established architecture serving many generations of applications. On
the other hand, ActiveMQ Artemis is the new high-performance, non-blocking
 architecture for event-driven messaging applications compatible with JMS 1.1 &
2.0. Once ActiveMQ Artemis reaches a sufficient level of feature parity with the
5.x code-base it will become ActiveMQ 6.

11.1.1. ActiveMQ Artemis architecture

Apache ActiveMQ Artemis uses a clean-cut design based on a set of Plain Old Java Objects (POJOs).
The only required core dependency is Netty, which is used for the remote transport of
messages.

Therefore ActiveMQ Artemis can be easily embedded in your own project, or instantiated in any
dependency injection framework such as Spring. The following picture depicts a high-level
overview of the Artemis architecture:

Starting from the top, ActiveMQ Artemis uses its own fast journal for persistence. This journal can
be optimized by configuring libaio on your machine (which is the default when running on Linux).
In future releases of the broker, it is planned to include JDBC as a persistence option.

At the core sits the Artemis Server, which is protocol agnostic and needs a Protocol
Manager in order to accept requests from clients.

A simple native API is provided if you want to interact with the broker without using JMS. On the
other hand, JMS clients use a JMS façade to translate requests into a format which is
understood by the Artemis Server.

ActiveMQ Artemis also provides protocol implementations for clients using different protocols such as
STOMP, OpenWire and AMQP.

11.1.2. Socket Management in ActiveMQ Artemis

In terms of libraries, ActiveMQ Artemis is made up of just a set of Plain Old Java Objects (POJOs)
which are compiled and packaged in a set of JAR archives. Therefore one advantage of this
framework is that it can be executed in many ways, either using a simple Java class or embedded
into an application server, as we will see in this chapter.

In terms of building blocks, ActiveMQ Artemis uses the terms Connector and Acceptor to describe
the connections to and from other JMS servers. In more detail:

• An acceptor defines which types of connections are accepted by the ActiveMQ Artemis server.

• A connector defines how to connect to an ActiveMQ Artemis server, and is used by the ActiveMQ
Artemis client.

Acceptors and Connectors can be of the following types:

• in-vm-connector: which can be used by a local client (i.e. one running in the same JVM as the
server)

• netty-connector: can be used by a remote client and uses Netty over TCP for the
communication (See http://netty.io for more information about Netty project)

• http-connector: can be used by a remote client and uses the Undertow web server to upgrade from
an HTTP connection

The following picture shows a sample architecture for two ActiveMQ Artemis server configurations,
the first one using in-vm connectors/acceptors and the second one running through different
virtual machines using http transport libraries.

As you can see from the above picture, the connector needs to use the same transport as the
acceptor, so an in-vm acceptor can only be contacted by a client running on the same JVM, while an
http acceptor can only accept connections from remote JVM clients. In terms of configuration,
remote connectors are the most configurable ones, as they can be used with a variety of APIs (Java NIO,
asynchronous Linux IO) and can use plain or SSL TCP sockets, or tunnel over HTTP or HTTPS.

11.1.3. Starting WildFly with JMS Services

As we said at the beginning of this book, the messaging extensions are not available with the
default standalone configuration (standalone.xml); therefore, we need to enable one of the
available messaging-aware configurations such as standalone-full.xml or standalone-full-
ha.xml.

Hence, for example, if you plan to run JMS applications on a non-clustered standalone WildFly
server, all you have to do is start it as follows:

$ ./standalone.sh -c standalone-full.xml

As for domain mode, you have to use a full or full-ha profile and a corresponding socket
binding group:

<server-group name="main-server-group" profile="full">

  <socket-binding-group ref="full-sockets"/>

</server-group>

Once the server is started, move to the next section, which will show you how to configure JMS
services.

11.2. Configuring JMS Connections


Configuring the Acceptors and Connectors is the first stop in our journey. Out of the box, the
messaging subsystem includes a set of http acceptors and connectors. Here is how to query the list
of http-acceptors, which define the ways in which connections can be made to the ActiveMQ Artemis
server over HTTP:

/subsystem=messaging-activemq/server=default:read-children-resources(child-type=http-acceptor)
{
  "outcome" => "success",
  "result" => {
  "http-acceptor" => {
  "http-listener" => "default",
  "params" => undefined,
  "upgrade-legacy" => true
  },
  "http-acceptor-throughput" => {
  "http-listener" => "default",
  "params" => {
  "batch-delay" => "50",
  "direct-deliver" => "false"
  },
  "upgrade-legacy" => true
  }
  }
}

As you can see, the list contains two kinds of acceptors:

• Standard http-acceptor: which provides a configuration completely based on defaults

• Throughput http-acceptor: which contains a specialized configuration in order to guarantee
a higher level of messaging throughput

Here is a description of the included parameters.

• batch-delay: allows the broker to batch up writes for a maximum of batch-delay milliseconds
before sending messages. This can increase overall throughput for very small messages. It does
so at the expense of an increase in average latency for message transfer.

• direct-deliver: when set to true, JMS message delivery is done on the same thread to which the
message arrived on. This can reduce latency at the expense of a lower throughput and
scalability, especially on multi-core machines.

In much the same way, you can query the connectors, which are used by a client to define how it connects
to an ActiveMQ Artemis server:

/subsystem=messaging-activemq/server=default:read-children-resources(child-type=http-connector)
{
  "outcome" => "success",
  "result" => {
  "http-connector" => {
  "endpoint" => "http-acceptor",
  "params" => undefined,
  "server-name" => undefined,
  "socket-binding" => "http"
  },
  "http-connector-throughput" => {
  "endpoint" => "http-acceptor-throughput",
  "params" => {"batch-delay" => "50"},
  "server-name" => undefined,
  "socket-binding" => "http"
  }
  }
}

11.2.1. Additional properties you can set on Connectors and Acceptors

The list of properties which can be set on your Connectors and Acceptors includes also the
following ones:

• use-nio: If this is true then Java non-blocking NIO will be used. If set to false, then old blocking
Java IO will be used.

• host: This specifies the host name or IP address to connect to (when configuring a connector) or
to listen on (when configuring an acceptor). The default value for this property is localhost.

• port: This specifies the port to connect to (when configuring a connector) or to listen on (when
configuring an acceptor). The default value for this property is 5445.

• tcp-no-delay: If this is true then Nagle’s algorithm will be disabled. The default value for this
property is true.

• tcp-send-buffer-size: This parameter determines the size of the TCP send buffer in bytes. The
default value for this property is 32768 bytes (32KB).

• tcp-receive-buffer-size: This parameter determines the size of the TCP receive buffer in bytes.
The default value for this property is 32768 bytes (32KB).

• nio-remoting-threads: When configured to use NIO, ActiveMQ Artemis will, by default, use a
number of threads equal to three times the number of cores (or hyper-threads) as reported by
Runtime.getRuntime().availableProcessors() for processing incoming packets.

Here is, for example, how to set the tcp-receive-buffer-size value to 64 KB:

/subsystem=messaging-activemq/server=default/http-connector=http-connector/:write-attribute(name=params,value={"tcp-receive-buffer-size"=> "65536"})

You need to reload your server configuration in order to enable the changes you have set.

11.2.2. Switching to Netty sockets

If you have been using earlier versions of the application server, you might be a little surprised that
the earlier netty connectors/acceptors are not included in the messaging configuration. As a matter
of fact, netty is still a vital component of the application server infrastructure and used, behind the
scenes, by several application server modules, including Undertow as well.

The general trend for the application server is, however, to reduce the number of ports to be used, so
that the configuration is simpler and you can multiplex multiple protocols over a single channel
(HTTP); that makes your environment, out of the box, cloud friendly.

For these reasons, although not deprecated, netty acceptors and connectors are no longer configured
by default. You can, at any time, restore netty acceptors or connectors in your server
configuration as follows:

batch

/subsystem=messaging-activemq/server=default/remote-acceptor=netty:add(socket-binding=messaging)

/socket-binding-group=standard-sockets/socket-binding=messaging:add(port=5445)

run-batch

You can download the above script from here: http://bit.ly/2tUfjmh

Once you reload your configuration, you will see that the netty acceptor has been included as a
remote acceptor:

/subsystem=messaging-activemq/server=default:read-children-resources(child-type=remote-acceptor)
{
  "outcome" => "success",
  "result" => {"netty" => {
  "params" => undefined,
  "socket-binding" => "messaging"
  }}
}

Executing netstat on your machine confirms that the messaging socket binding has been
started:

$ netstat -an | grep 5445


TCP 127.0.0.1:5445 0.0.0.0:0 LISTENING

You might wonder which option works the best in your case (http or netty
sockets). As we mentioned, the http solution has several administrative
advantages, which are good arguments in favor of this solution. On the other
 hand, in terms of performance the http connectors have an initial performance
penalty for upgrading the network protocol. So you should evaluate using netty
sockets if you are on the hook for extreme performance.

11.3. Creating JMS Destinations


Creating Queues and Topics is a frequent task for server administrators. For this reason, we will
show how to complete this task with both management instruments, starting from the Web
Administration console. Select the upper Configuration tab and, from there select Server >
default > Destinations > View as indicated by the following picture:

From there, you will be able to manage all JMS Destinations such as Queues, Topics, Diverts and so
on.

In order to add a new Destination, select it from the left Tab Menu and click on the Add Button. In
the following window, you should provide at least a Name for your destination and a valid JNDI
name. Valid JNDI names need to begin with either "java:/" or "java:jboss/".

 Hit "Enter" to separate each JNDI Entry.

If you are adding a new Queue, you can optionally mark it as Durable or associate it with a
Selector.

A queue which is tagged as "durable" will persist messages. This means that
messages can be delivered to the consumer even in the event of a server crash.

Click Add when done and verify that the JMS destination has been added into the main Panel:

Finally, if you are planning to use your JMS endpoint from a remote consumer, consider creating an
alias whose JNDI name begins with the "java:jboss/exported" namespace. In our case, create an
additional JNDI binding for the DemoQueue named
"java:jboss/exported/jms/queue/demoQueue".
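As a sketch of how this can be done from the CLI (assuming a queue named DemoQueue already exists), one option is to rewrite the queue's entries attribute so that it includes the exported binding; note that changes to entries only take effect after a reload:

```
/subsystem=messaging-activemq/server=default/jms-queue=DemoQueue:write-attribute(name=entries,value=["java:/jms/queue/demoQueue","java:jboss/exported/jms/queue/demoQueue"])

reload
```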

11.3.1. Built-in Queues

The default messaging subsystem includes two Queues which are necessary for handling some
special use cases.

• Dead Letter Queue: All messages that are not delivered correctly are sent to the Dead Letter
Queue (DLQ); this gives you a chance to handle the message at a later time

• Expiry Queue: Contains messages whose time-to-live has expired and which have thus been removed
from the Queue.

11.3.2. Creating Queues and Topics using the Command Line Interface

The Command Line interface has a convenient shortcut for creating new queues and topics. Here is
how you can add a new Queue in a standalone server:

[standalone@localhost:9990 /] jms-queue add --queue-address=jms.queue.DemoQueue --entries=java:/jms/queue/demoQueue

If you are running in Domain mode, you need to provide the --profile flag to specify which
configuration profile will be used:

[domain@localhost:9990 /] jms-queue add --queue-address=jms.queue.DemoQueue --entries=java:/jms/queue/demoQueue --profile=full-ha

If you want to create a new Topic, the command to be executed is jms-topic as follows:

[standalone@localhost:9990 /] jms-topic add --topic-address=jms.topic.DemoTopic --entries=java:/jms/topic/demoTopic

The counterpart in Domain mode follows here:

[domain@localhost:9990 /] jms-topic add --profile=full-ha --topic-address=jms.topic.DemoTopic --entries=java:/jms/topic/demoTopic

11.3.2.1. Creating deployable JMS destinations

JMS destinations can also be created on the fly by dropping a *-jms.xml file in the deployments
folder of your standalone server, or by packaging it along with your application. Here’s an example of
a JMS Queue and a JMS Topic:

<messaging-deployment xmlns="urn:jboss:messaging-activemq-deployment:1.0">
  <server>
  <jms-destinations>
  <jms-queue name="ExampleQueue">
  <entry name="java:/jms/queue/ExampleQueue"/>
  <durable>true</durable>
  </jms-queue>
  <jms-topic name="ExampleTopic">
  <entry name="java:/jms/topic/ExampleTopic"/>
  </jms-topic>
  </jms-destinations>
  </server>
</messaging-deployment>

Warning! Deployable resources are not manageable through the application
server management interfaces; therefore, they should be used only for development or
testing purposes.

11.3.3. Customizing JMS destinations

You can customize the destinations through Address Settings. By default there is a single Address
Settings definition, which uses the wildcard "#", meaning that its properties will be valid across all
destinations.

/subsystem=messaging-activemq/server=default/address-setting=#/:read-resource()
{
  "outcome" => "success",
  "result" => {
  "address-full-policy" => "PAGE",
  "dead-letter-address" => "jms.queue.DLQ",
  "expiry-address" => "jms.queue.ExpiryQueue",
  "expiry-delay" => -1L,
  "last-value-queue" => false,
  "max-delivery-attempts" => 10,
  "max-redelivery-delay" => 0L,
  "max-size-bytes" => 10485760L,
  "message-counter-history-day-limit" => 10,
  "page-max-cache-size" => 5,
  "page-size-bytes" => 2097152L,
  "redelivery-delay" => 0L,
  "redelivery-multiplier" => 1.0,
  "redistribution-delay" => -1L,
  "send-to-dla-on-no-route" => false,
  "slow-consumer-check-period" => 5L,
  "slow-consumer-policy" => "NOTIFY",
  "slow-consumer-threshold" => -1L
  }
}

All ActiveMQ Artemis Queues can be referenced using the "jms.queue"
namespace, while topics can be referenced using the "jms.topic" namespace.
That’s why the expression "jms.queue.#" means all defined Queues.

The properties which have the highest impact on your applications are the following:

• dead-letter-address: All messages that are not delivered correctly are stored in a particular
queue, so they can be processed at a later time

• expiry-address: Messages whose time-to-live has expired are removed from the queue and
sent to this expiry address

• redelivery-delay: The time (in ms) to wait before redelivering a cancelled message

• max-delivery-attempts: Defines how many times a cancelled message can be redelivered
before being sent to the dead-letter-address

• max-size-bytes: This is the maximum amount of memory allowed for a JMS address (before
paging starts).
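As a sketch, assuming a queue named jms.queue.DemoQueue exists, you could override these defaults for that queue only by adding a more specific address-setting (the attribute values here are just example numbers):

```
/subsystem=messaging-activemq/server=default/address-setting=jms.queue.DemoQueue:add(max-delivery-attempts=5,redelivery-delay=2000,dead-letter-address=jms.queue.DLQ)
```

Because address settings are matched by name, the jms.queue.DemoQueue definition takes precedence over the wildcard "#" definition for that queue.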

You can read a description of all Address Settings properties by means of the following CLI
command:

/subsystem=messaging-activemq/server=default/address-setting=#/:read-resource-description

11.4. Configuring Message Persistence


Configuring message persistence is a requirement for guaranteeing the reliability of messages.

There are two possible storage configurations available for your messages:

• File system Storage: using the high-performance internal journal

• JDBC Storage: using a relational Database

11.4.1. Configuring File system journal

ActiveMQ Artemis, by default, persists its messages using its own high-performance journal which,
in a Linux environment (kernel 2.6), can also benefit from using Linux’s Asynchronous IO library
(AIO).

The ActiveMQ Artemis journal is made up of a set of (append-only) files on disk. Each file is
initially created with a fixed size and filled with padding. As operations are performed on the
server (e.g. add message, update message, delete message), records are appended to the journal.
When one journal file is full, ActiveMQ Artemis creates a new one. The attributes which are related
to message persistence are the following:

/subsystem=messaging-activemq/server=default:read-resource
{
  "outcome" => "success",
  "result" => {
. . . .
  "create-journal-dir" => true,
. . . .
  "journal-buffer-size" => undefined,
  "journal-buffer-timeout" => undefined,
  "journal-compact-min-files" => 10,
  "journal-compact-percentage" => 30,
  "journal-file-size" => 102400L,
  "journal-max-io" => undefined,
  "journal-min-files" => 2,
  "journal-sync-non-transactional" => true,
  "journal-sync-transactional" => true,
  "journal-type" => "ASYNCIO",
. . . .
  "persist-delivery-count-before-delivery" => false,
  "persist-id-cache" => true,
  "persistence-enabled" => true, ①
. . . .
}

① If this is set to false messages are only persisted in Memory.

Although the list of parameters is pretty large, we would like to stress the importance of the
persistence-enabled attribute, which determines whether messages are persisted in the journal.
The journal-type parameter selects between two types of journals, which determine the input/output
library to be used for message persistence.

When choosing NIO, ActiveMQ Artemis uses the Java NIO journal. On the other
hand, when choosing ASYNCIO, it uses the Linux asynchronous IO journal. If
you choose ASYNCIO but are not running Linux (or you do not have libaio installed),
then ActiveMQ Artemis will detect this and automatically fall back to using NIO.
You can install libaio on RHEL or Fedora as the root user:

yum install libaio

The journal-min-files determines the minimum number of files the journal will maintain. When
ActiveMQ Artemis starts and there is no initial message data, ActiveMQ Artemis will pre-create
journal-min-files number of files.

The journal-file-size determines the maximum size (in bytes) for journal files. All the parameters
contained in this window can be also set via the CLI. For example, here’s how you can set the
journal file size via CLI:

/subsystem=messaging-activemq/server=default/:write-attribute(name=journal-file-size,value=102400)

The journal-file-open-timeout attribute determines the timeout for opening the journal files
(defaults to 5 seconds):

/subsystem=messaging-activemq/server=default:write-attribute(name=journal-file-open-timeout, value=10)

11.4.1.1. Changing the location where the Journal is persisted

The journal is written in the following default path:


$(jboss.server.base.dir)/data/activemq/journal

~/jboss/wildfly-20.0.0.Final/standalone:$ tree data

data
├── activemq
│   ├── bindings
│   │   ├── activemq-bindings-1.bindings
│   │   ├── activemq-bindings-2.bindings
│   │   ├── activemq-jms-1.jms
│   │   └── activemq-jms-2.jms
│   ├── journal // Default journal folder
│   │   ├── activemq-data-1.amq
│   │   ├── activemq-data-2.amq
│   │   └── server.lock
│   └── largemessages
. . . . .

You can execute the following steps in batch mode to change the default journal directory. In this
example we store data in /var/activemq (The script is available on Github at: http://bit.ly/2FIuc0a)

batch

/subsystem=messaging-activemq/server=default/path=journal-directory/:undefine-attribute(name=relative-to)

/subsystem=messaging-activemq/server=default/path=journal-directory/:write-attribute(name=path,value=/var/activemq)

run-batch

11.4.1.2. Configuring Journal’s Max Disk Usage

To control the maximum amount of data that Artemis can use for the journal, you can set the
global-max-disk-usage attribute on your server. For example:

/subsystem=messaging-activemq/server=default:write-attribute(name=global-max-disk-usage, value=75)

The global-max-disk-usage is a percentage of your disk usage. If this threshold is reached, the
Artemis server will apply a block policy to prevent further disk usage. Out of the box, the global-
max-disk-usage is set to "100%", which means no restriction on the disk usage applies.

11.4.1.3. Configuring Message Paging

Although ActiveMQ Artemis is able to support a huge amount of messages, when the system is
getting low on memory, you have the option to page them to disk. This is quite similar to ordinary
file system paging, which occurs when the amount of RAM is not enough to handle all the running
applications. The parameter which determines whether paging is used is max-size-bytes,
which is set by default to (approx.) 10 MB:

/subsystem=messaging-activemq/server=default/address-setting=#:read-attribute(name=max-size-bytes)
{
  "outcome" => "success",
  "result" => 10485760L
}

By increasing this value, you allow a larger set of data in memory, thus reducing paging:

/subsystem=messaging-activemq/server=default/address-setting=#/:write-attribute(name=max-size-bytes,value=20971520)

On the other hand, when max-size-bytes is set to -1, paging is disabled.

The paging policy is governed by the address-full-policy parameter. If the value is PAGE then
further messages (over the max-size-bytes) will be paged to disk. If the value is DROP then further
messages will be silently dropped. If the value is BLOCK then client message producers will block
when they try to send further messages. Finally, if the value is FAIL then the messages will be
dropped and the client message producers will receive an exception.
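As a sketch, here is how you could switch the default address setting to the BLOCK policy (just an example value; reload the server afterwards for the change to take effect):

```
/subsystem=messaging-activemq/server=default/address-setting=#:write-attribute(name=address-full-policy,value=BLOCK)
```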

11.4.1.4. Configuring the paging folder

Each address has an individual folder where messages are stored in multiple files (page files).

By default the page folder is configured through the following attributes which results in creating a
folder named pagingdir under the jboss.server.data.dir:

/subsystem=messaging-activemq/server=default/path=paging-directory/:write-attribute(name=path,value=pagingdir)

/subsystem=messaging-activemq/server=default/path=paging-directory/:write-attribute(name=relative-to,value=jboss.server.data.dir)

Each file will contain messages up to a max configured size (page-size-bytes). The system will
navigate on the files as needed, and it will remove the page file as soon as all the messages are
acknowledged up to that point.

The following picture recaps the typical flow of a JMS message for different values of the max-size-
bytes and address-full-policy properties:

11.4.2. Configuring JDBC Storage for messages

JDBC Storage of your messages is a good option to have a centralized and reliable repository for
your data which will be stored in Database tables.

ActiveMQ Artemis currently has support for a limited number of database vendors (older versions
may work but mileage may vary):

• PostgreSQL 9.4.x

• MySQL 5.7.x

• Apache Derby 10.11.1.1

So, for example, if you have installed a MySQL 5.7 Datasource named "MySQLPool", then you can
set the option "journal-datasource" to the value of "MySQLPool":

/subsystem=messaging-activemq/server=default:write-attribute(name=journal-datasource,value=MySQLPool)

Reload your configuration and check that the following tables have been created on the DB:

mysql> show tables;
+-----------------------+
| Tables_in_mysqlschema |
+-----------------------+
| BINDINGS              |
| JMS_BINDINGS          |
| LARGE_MESSAGES        |
| MESSAGES              |
| PAGE_STORE            |
+-----------------------+
5 rows in set (0.00 sec)

11.4.2.1. Varying the default Journal Table Names

WildFly uses separate JDBC tables to store messaging information. The names of these tables can
be configured through the following properties:

• journal-bindings-table

• journal-jms-bindings-table

• journal-messages-table

• journal-large-messages-table

• journal-page-store-table

Here is, for example, how to set the "journal-page-store-table" attribute so that the table
"PAGE_JMS" is used:

/subsystem=messaging-activemq/server=default:write-attribute(name=journal-page-store-table,value="\"PAGE_JMS\"")

11.5. Routing Messages to other destinations


One common requirement for applications which are strongly based on messaging systems is to
provide a way to route messages to other servers, without the need to perform any change in the
application client logic. ActiveMQ Artemis offers the following options:

• Divert messages from one destination to another on the same server

• Bridge messages between two JMS Brokers

Although both options can be used for the same purpose, the difference between them is that a
Bridge implies a connection between two ActiveMQ Artemis servers, whilst a message Divert
operates on the same ActiveMQ Artemis server.

11.5.1. Diverting messages to other destinations

As we just said, message diversion operates on the same ActiveMQ Artemis server and can
provide a simple yet effective way to route messages from one destination to another; besides this,
a message divert is also able to perform some filtering and to use a custom transformer class to
transform the message’s body or properties before it is diverted.

Let’s see first a simple use case: we will divert the messages arriving to the ExampleQueue to the
ExpiryQueue. This requires defining the following mandatory attributes:

• routing-name: This is the name associated with the Divert

• divert-address: This is the source of JMS Messages

• forwarding-address: This is the target destination of the JMS Messages

/subsystem=messaging-activemq/server=default/divert=DivertDemo/:add(divert-address=jms.queue.ExampleQueue,forwarding-address=jms.queue.ExpiryQueue,routing-name=DivertDemo)

Besides the mandatory attributes, you can set some additional options such as:

• exclusive: This option determines whether the divert is exclusive, meaning that the message is
diverted to the new address and does not go to the old address at all. If the divert is qualified as
non-exclusive, the message continues to go to the old address, and a copy of it is also sent to the
new address

• filter: This is an optional filter string. If specified, only messages that match the filter
expression specified will be diverted. The filter string follows the ActiveMQ Artemis filter
expression syntax described in the ActiveMQ Artemis documentation
(http://activemq.apache.org/artemis/docs/1.0.0/filter-expressions.html).

• transformer-class-name: The name of a class used to transform the message’s body or


properties before it is diverted.
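
Combining these options, an exclusive divert that reroutes only high-priority messages could be defined as follows (a sketch; the divert name and filter expression are illustrative):

```
/subsystem=messaging-activemq/server=default/divert=PriorityDivert:add(divert-
address=jms.queue.ExampleQueue,forwarding-address=jms.queue.ExpiryQueue,routing-
name=PriorityDivert,exclusive=true,filter="JMSPriority > 7")
```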

Here is an example of a Transformer, taken from the ActiveMQ Artemis repository, which sets
a property of the JMS message named "time_of_forward" to the current time when the Transformer
is triggered:

package org.apache.activemq.artemis.jms.example;

import org.apache.activemq.artemis.api.core.SimpleString;
import org.apache.activemq.artemis.core.server.ServerMessage;
import org.apache.activemq.artemis.core.server.cluster.Transformer;

public class AddForwardingTimeTransformer implements Transformer {

   public ServerMessage transform(final ServerMessage message) {
      message.putLongProperty(new SimpleString("time_of_forward"),
            System.currentTimeMillis());
      return message;
   }
}
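
To use such a class, package it so that it is visible to the messaging subsystem and reference it from the divert. A sketch of the CLI command, assuming the example class above is on the server's class path:

```
/subsystem=messaging-activemq/server=default/divert=DivertDemo:write-
attribute(name=transformer-class-name,value=org.apache.activemq.artemis.jms.example.AddForwardingTimeTransformer)
```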

11.5.2. Creating a Bridge between two ActiveMQ Artemis servers

JMS Core Bridges are logical software components that allow us to connect two ActiveMQ Artemis
servers, so that it is possible to consume messages from a source queue or topic and then send
them to a target queue or topic on another ActiveMQ Artemis server.

Do not confuse core bridges with JMS Bridges, which are a different component
 used to connect any JMS 1.1 compliant broker.

In order to set up a bridge between two JMS Servers, you should define:

• A queue-name, which is the unique name of the local queue that the bridge consumes from.

• A list of static-connectors which the bridge will use to reach the target destination.

• A forwarding-address: that is, the target destination that the message will be forwarded to. If a
forwarding address is not specified, the original destination of the message will be
retained.

As an example, we will first show how to route messages from a WildFly server running on
localhost to another WildFly server running on localhost with a port offset of 100.

11.5.2.1. ActiveMQ Artemis target configuration

First off, let’s start the target server using the full-ha profile and a port offset of 100 units:

./standalone.sh -c standalone-full-ha.xml -Djboss.node.name=server2
-Djboss.socket.binding.port-offset=100

Then, create the target Queue configuration:

/subsystem="messaging-activemq"/server="default"/jms-
queue="JMSBridgeTargetQueue":add(entries=["queue/JMSBridgeTargetQueue","java:jboss/exp
orted/jms/queues/JMSBridgeTargetQueue"])

Finally, create an Application user that will be used by the Bridge to connect to our Target server:

$ ./add-user.sh -a -u jmsuser -p password1! -g guest

It is not mandatory to create an Application user to authenticate your bridged
connection. If no Application user is specified, the default cluster password
 specified by the cluster-password attribute in the root messaging subsystem
resource will be used.

11.5.2.2. ActiveMQ Artemis source configuration

In order to configure the JMS Bridge on the source server, let’s start another WildFly server
(again with the full-ha profile):

./standalone.sh -c standalone-full-ha.xml -Djboss.node.name=server1

Then, log into your CLI shell and execute the following script, which will create the source JMS
destination, define the Remote Connection Factory used for the transport and the HTTP connector
used by the Factory:

batch

# Configure Source Queue
/subsystem=messaging-activemq/server=default/queue=JMSBridgeSourceQueue:add(queue-
address=queue/sourceQueue)

# Configure Outbound socket
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-
binding=messaging-remote/:add(host=localhost,port=8180)

# Configure Http Connector
/subsystem=messaging-activemq/server=default/http-connector=bridge-
connector/:add(endpoint=http-acceptor,socket-binding=messaging-remote)

# Configure Remote Connection Factory
/subsystem=messaging-activemq/server=default/connection-
factory=RemoteConnectionFactory:write-attribute(name=connectors,value=["bridge-
connector"])
/subsystem=messaging-activemq/server=default/connection-
factory=RemoteConnectionFactory:write-
attribute(name=entries,value=["java:jboss/exported/jms/RemoteConnectionFactory"])

# Configure Core Bridge
/subsystem=messaging-activemq/server=default/bridge=core-bridge:add(static-
connectors=["bridge-connector"],queue-name="JMSBridgeSourceQueue",forwarding-
address="jms.queue.JMSBridgeTargetQueue",user="jmsuser",password="password1!")

# Execute batch script
run-batch

You can download the above script from here: http://bit.ly/2plZPBN

The following server definition will be created:

 <server name="default">
  <cluster password="secretpassword"/>
  <statistics enabled="${wildfly.messaging-activemq.statistics-
enabled:${wildfly.statistics-enabled:false}}"/>
  <queue name="JMSBridgeSourceQueue" address="queue/sourceQueue" />

  <security-setting name="#">
  <role name="guest" send="true" consume="true" create-non-durable-queue="true"
delete-non-durable-queue="true"/>
  </security-setting>
  <address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address=
"jms.queue.ExpiryQueue" max-size-bytes="10485760" page-size-bytes="2097152" message-
counter-history-day-limit="10" redistribution-delay="1000"/>
  <http-connector name="http-connector" socket-binding="http" endpoint="http-
acceptor"/>
  <http-connector name="http-connector-throughput" socket-binding="http" endpoint=
"http-acceptor-throughput">
  <param name="batch-delay" value="50"/>
  </http-connector>
  <http-connector name="bridge-connector" socket-binding="messaging-remote"
endpoint="http-acceptor"/>
  <in-vm-connector name="in-vm" server-id="0">
  <param name="buffer-pooling" value="false"/>
  </in-vm-connector>
  <http-acceptor name="http-acceptor" http-listener="default"/>
  <http-acceptor name="http-acceptor-throughput" http-listener="default">
  <param name="batch-delay" value="50"/>
  <param name="direct-deliver" value="false"/>
  </http-acceptor>
  <in-vm-acceptor name="in-vm" server-id="0">
  <param name="buffer-pooling" value="false"/>
  </in-vm-acceptor>
  <jgroups-broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster"
connectors="http-connector"/>
  <jgroups-discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
  <cluster-connection name="my-cluster" address="jms" connector-name="http-
connector" discovery-group="dg-group1"/>
  <bridge name="core-bridge" queue-name="JMSBridgeSourceQueue" forwarding-address=
"jms.queue.JMSBridgeTargetQueue" user="jmsuser" password="password1!" static-
connectors="bridge-connector"/>
  <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
  <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>

  <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory"
connectors="in-vm"/>
  <connection-factory name="RemoteConnectionFactory" entries=
"java:jboss/exported/jms/RemoteConnectionFactory" connectors="bridge-connector" ha=
"true" block-on-acknowledge="true" reconnect-attempts="-1"/>
  <pooled-connection-factory name="activemq-ra" entries="java:/JmsXA
java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
  </server>
. . . .
  <socket-binding-group name="standard-sockets" default-interface="public" port-
offset="${jboss.socket.binding.port-offset:0}">
  . . . .
  <outbound-socket-binding name="messaging-remote">
  <remote-destination host="localhost" port="8180"/>
  </outbound-socket-binding>
  </socket-binding-group>

Now your Bridge configuration is complete. Reload your server configuration and check from the
server console that the Bridge is active:

10:02:43,742 INFO [org.apache.activemq.artemis.core.server] (Thread-4 (ActiveMQ-
server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@2ed13c78))
AMQ221027: Bridge BridgeImpl@4b89c319 [name=core-bridge,
queue=QueueImpl[name=JMSBridgeSourceQueue, postOffice=PostOfficeImpl
[server=ActiveMQServerImpl::serverUUID=950ba170-5004-11ea-9ae6-02423aae608d],
temp=false]@15aa8044 targetConnector=ServerLocatorImpl (identity=Bridge core-bridge)
[initialConnectors=[TransportConfiguration(name=bridge-connector, factory=org-apache-
activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory)
?httpUpgradeEndpoint=http-
acceptor&activemqServerName=default&httpUpgradeEnabled=true&port=8180&localAddress=127
-0-0-1&host=localhost], discoveryGroupConfiguration=null]] is connected

11.5.2.3. HA of Bridges

Bridges are resilient to source or destination unavailability, so they are especially useful on a
Wide Area Network (WAN). In such a scenario, the other server could be located at a different
location of your WAN, and your bridge can help you to reconnect when the connection becomes
available again.

In order for your Bridge to support high availability, you can configure the ha attribute as in the
following example:

<bridge name="core-bridge"
  queue-name="coreQueueA"
  forwarding-address="jms.queue.JMSBridgeTargetQueue"
  user="jmsuser"
  password="password1!"
  ha="true"
  static-connectors="bridge-connector"/>
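
The same setting can also be applied through the CLI; a sketch, assuming the core-bridge resource created earlier:

```
/subsystem=messaging-activemq/server=default/bridge=core-bridge:write-
attribute(name=ha,value=true)
```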

11.6. JMS Clustering
This section discusses clustering the ActiveMQ Artemis messaging system. Clustering in
ActiveMQ Artemis is done by providing multiple server instances acting as a single entity from
both the consumer and the producer side. The obvious advantage is increased
throughput, since messages are distributed across different JMS servers.

The other important advantage of clustering is high availability, which allows one JMS Server to
have one or more redundant servers used as a fallback in case of server failure.
In practice, this means that you will have an array of JMS Servers, called live servers, which are
used by default; each live server can then have one or more backup servers. A backup server is
owned by only one live server and is not operational until failover occurs.

Backup servers are passive JMS servers, announcing their status and waiting to
 take over the live server’s work.

When a live server crashes, its backup server will change its status from backup to live server, and
another server (if available) will be elected as the backup server.

ActiveMQ Artemis clustering strategies can be configured according to the ha-policy attribute of a
messaging server:

/subsystem=messaging-activemq/server=default/ha-policy=
live-only replication-colocated replication-master replication-slave shared-store-
colocated shared-store-master shared-store-slave

The following ha-policy are available:

• live-only: The server has no HA capabilities apart from being able to scale down.

• replication-master: The server acts as a live server with its own data directory. Data is
synchronized through the network.

• replication-slave: The server acts as a backup server with its own data directory. Data is
synchronized through the network.

• replication-colocated: When this option is selected, the live server will start a backup server in
the same JVM, using replication to synchronize data.

• shared-store-master: The server acts as a live server and shares the same data directory in the
cluster using a shared file system.

• shared-store-slave: The server acts as a backup server and shares the same data directory in
the cluster using a shared file system.

• shared-store-colocated: When this option is selected, the live server will start a backup server in
the same JVM, using a shared file system.

Colocated backup servers will inherit their configuration from the live server they
originated from, except for the name, which will be set to colocated_backup_n
(where n is the number of backups the server has created). When configuring
colocated backup servers it is important to keep in mind two things:

1. First, each server element in the configuration will need its own remote-
connector and remote-acceptor that listen on unique ports. For example, a
live server can be configured to listen on port 5445, while its backup uses
 5446. The ports are defined in socket-binding elements that must be added to
the default socket-binding-group. Cluster-related configuration elements in
each server configuration will use the new remote-connector. The relevant
configuration is included in each of the examples that follow.

2. Secondly, you need to properly configure paths for journal related directories.
For example, in a shared store colocated topology, both the live server and its
colocated backup, must be configured to share directory locations for the
binding and message journals, for large messages, and for paging.
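
As a sketch, a colocated replication policy can be added through the CLI as follows; the resource's attribute defaults apply unless you override them:

```
/subsystem=messaging-activemq/server=default/ha-policy=replication-colocated:add
```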

The following picture depicts an example of JMS Clustering using a Shared Store:

As you can see, when using a shared store, both live and backup servers share the entire
data directory using a shared file system: the paging directory, journal directory, large
messages directory, and binding journal.

When failover occurs and a backup server takes over, it will load the persistent storage from the
shared file system and clients can connect to it.

For performance reasons it is highly recommended to use Fibre Channel or
 HyperSCSI to share the journal directory, instead of a file-based protocol like NFS
or SMB/CIFS.

The following picture depicts an example of JMS Clustering using Data replication:

As you can see, when using replication, the live and the backup servers do not share the same
storage and all data synchronization is done through network traffic. Therefore, all (persistent)
data traffic received by the live server will be duplicated to the backup. In the following section we
will see how to configure a JMS Cluster using Data Replication and Shared Store.

11.6.1. JMS Cluster configuration using Data replication

In order to run this example, we will be using two standalone servers which will be unpacked in
two different folders:

$ mkdir nodeA
$ mkdir nodeB

$ unzip wildfly-20.0.0.Final.zip -d nodeA
$ unzip wildfly-20.0.0.Final.zip -d nodeB

Now start the first server using the full-ha profile:

$ ./standalone.sh -c standalone-full-ha.xml -Djboss.node.name=nodeA

Connect to the server using the CLI:

./jboss-cli.sh -c

The first change you should apply to your configuration is the cluster password, in
order to prevent unwanted remote clients from connecting to the server:

/subsystem=messaging-activemq/server=default/:write-attribute(name=cluster-
password,value=secretpassword)

Next, we need to configure the ha-policy, which determines the High Availability policy to be used
by your cluster. Here is how to elect one messaging server as "replication-master" by setting the
ha-policy accordingly:

/subsystem=messaging-activemq/server=default/ha-policy=replication-master:add

You need to reload your configuration for changes to take effect:

reload

Now let’s query the ha-policy to verify that it uses replication, with the current server set as master:

 /subsystem=messaging-activemq/server=default:read-children-resources(child-type=ha-
policy)
{
  "outcome" => "success",
  "result" => {"replication-master" => { ①
  "check-for-live-server" => true,
  "cluster-name" => undefined,
  "group-name" => undefined,
  "initial-replication-sync-timeout" => 30000L
  }}
}

① The HA Policy used by your JMS CLuster

When the check-for-live-server attribute is set to true, a check is performed at start-up for a (live)
server using our own server ID. This option is only necessary for performing 'fail-back' on
replicating servers.

The cluster-name attribute can be used to configure multiple cluster connections by setting a
unique cluster name. This setting is used by replicating backups and by live servers that may
attempt fail-back.

Finally, when group-name is set, remote backup servers will only pair with live servers with a
matching group-name.
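
These attributes can be set on the policy resource with a write-attribute operation; a sketch, with an illustrative group name:

```
/subsystem=messaging-activemq/server=default/ha-policy=replication-master:write-
attribute(name=group-name,value=pair1)
```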

Now let’s move to the second WildFly server. We will start it using a port offset to avoid a conflict
with the live server:

$ ./standalone.sh -c standalone-full-ha.xml -Djboss.node.name=nodeB
-Djboss.socket.binding.port-offset=100

Next connect to the server using the CLI and the target port (in our case we use an offset of 100
units):

./jboss-cli.sh -c --controller=localhost:10090

In order to connect to the cluster, we need to set as well the cluster-password to be consistent with
the first server:

[standalone@localhost:10090 /] /subsystem=messaging-activemq/server=default/:write-
attribute(name=cluster-password,value=secretpassword)

Now, in order to elect another server as backup using replication-slave ha-policy, we will execute
the following command:

[standalone@localhost:10090 /] /subsystem=messaging-activemq/server=default/ha-
policy=replication-slave:add

You need to reload your configuration for changes to take effect:

reload

Finally, verify that the ha-policy has been updated accordingly:

 /subsystem=messaging-activemq/server=default:read-children-resources(child-type=ha-
policy)
{
  "outcome" => "success",
  "result" => {"replication-slave" => {
  "allow-failback" => true,
  "cluster-name" => undefined,
  "group-name" => undefined,
  "initial-replication-sync-timeout" => 30000L,
  "max-saved-replicated-journal-size" => 2,
  "restart-backup" => true,
  "scale-down" => undefined,
  "scale-down-cluster-name" => undefined,
  "scale-down-connectors" => undefined,
  "scale-down-discovery-group" => undefined,
  "scale-down-group-name" => undefined
  }}
}

To confirm that the backup server has started and is waiting for the live server to fail before it
becomes active, look for the following log in the console:

10:11:08,494 INFO [org.apache.activemq.artemis.core.server] (AMQ229000: Activation
for server ActiveMQServerImpl::serverUUID=null) AMQ221109: Apache ActiveMQ Artemis
Backup Server version 2.10.1 [null] started, waiting live to fail before it gets
active
10:11:09,054 INFO [org.apache.activemq.artemis.core.server] (Thread-1 (ActiveMQ-
client-netty-threads)) AMQ221024: Backup server
ActiveMQServerImpl::serverUUID=8d2ccc85-2e0c-11ea-86ee-dc8b283ae3be is synchronized
with live-server.
10:11:12,020 INFO [org.apache.activemq.artemis.core.server] (Thread-1 (ActiveMQ-
server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$6@1859d2bc))
AMQ221031: backup announced

As proof of concept, if you crash the live server of your cluster, the backup will then become the
new master:

10:15:38,831 INFO [org.apache.activemq.artemis.core.server] (AMQ229000: Activation
for server ActiveMQServerImpl::serverUUID=null) AMQ221007: Server is now live
10:15:38,861 INFO [org.jboss.as.connector.deployment] (MSC service thread 1-5)
WFLYJCA0007: Registered connection factory java:/JmsXA
10:15:38,888 INFO [org.apache.activemq.artemis.ra] (MSC service thread 1-5)
AMQ151007: Resource adaptor started

11.6.1.1. Verifying that the backup server is synchronized with the live server

If your JMS Cluster is configured with a replicated journal, it may take some time for the backup to
synchronize with the live server. Once the backup is in sync with the live server, the following
information appears in server.log:

13:20:00,739 INFO [org.apache.activemq.artemis.core.server] (Thread-3 (ActiveMQ-
client-netty-threads-457000966)) AMQ221024: Backup server
ActiveMQServerImpl::serverUUID=bc015b34-fd73-11e5-80ca-1b35f669abb8 is synchronized
with live-server.
13:20:01,500 INFO [org.apache.activemq.artemis.core.server] (Thread-2 (ActiveMQ-
server-org.apache.activemq.artemis.core.server.impl.ActiveMQServerImpl$2@41f992ab-
83559664)) AMQ221031: backup announced

Reading server logs to check whether the backup is in sync is neither a convenient nor a
user-friendly way, though. Since WildFly 18 you can verify from the CLI whether master and slave
servers are in sync. Here is how to check whether the master is synchronized with the backup server:

 /subsystem=messaging-activemq/server=default/ha-policy=replication-master:read-
resource(include-runtime=true)
{
  "outcome" => "success",
  "result" => {
  "check-for-live-server" => true,
  "cluster-name" => undefined,
  "group-name" => undefined,
  "initial-replication-sync-timeout" => 30000L,
  "synchronized-with-backup" => true
  }
}

Conversely, from the backup server you can check against the replication-slave policy to verify that
the backup server is in sync with the master:

/subsystem=messaging-activemq/server=default/ha-policy=replication-slave:read-
resource(include-runtime=true)
{
  "outcome" => "success",
  "result" => {
  "allow-failback" => true,
  "cluster-name" => undefined,
  "group-name" => undefined,
  "initial-replication-sync-timeout" => 30000L,
  "max-saved-replicated-journal-size" => 2,
  "restart-backup" => true,
  "synchronized-with-live" => true,
  "scale-down" => undefined,
  "scale-down-cluster-name" => undefined,
  "scale-down-connectors" => undefined,
  "scale-down-discovery-group" => undefined,
  "scale-down-group-name" => undefined
  }
}

11.6.2. JMS Cluster configuration using Shared Store

In the second example, we will be configuring a JMS Cluster using a shared store. Start from a clean
WildFly installation of two standalone servers as in the first example. Start the first server using the
full-ha profile:

$ ./standalone.sh -c standalone-full-ha.xml -Djboss.node.name=nodeA

Then, connect to the server using the CLI:

./jboss-cli.sh -c

The first change you should apply to your configuration is the cluster password, in
order to prevent unwanted remote clients from connecting to the server:

/subsystem=messaging-activemq/server=default/:write-attribute(name=cluster-
password,value=secretpassword)

Next, set the ha-policy to "shared-store-master", enabling the failover-on-server-shutdown attribute:

/subsystem=messaging-activemq/server=default/ha-policy=shared-store-
master:add(failover-on-server-shutdown=true)

When the attribute failover-on-server-shutdown is set to true, the server will
 fail over when it is normally shut down. The default is false.

Then, we need to choose a shared storage location for the set of directories used by Artemis, so we
will create the following folders:

$ mkdir /home/artemis/shared/bindings
$ mkdir /home/artemis/shared/journal
$ mkdir /home/artemis/shared/largemessages
$ mkdir /home/artemis/shared/paging

Once the directory structure is ready, let’s switch the default Artemis paths to the shared
storage folders:

/subsystem=messaging-activemq/server=default/path=bindings-directory:write-
attribute(name=path,value=/home/artemis/shared/bindings)
/subsystem=messaging-activemq/server=default/path=journal-directory:write-
attribute(name=path,value=/home/artemis/shared/journal)
/subsystem=messaging-activemq/server=default/path=large-messages-directory:write-
attribute(name=path,value=/home/artemis/shared/largemessages)
/subsystem=messaging-activemq/server=default/path=paging-directory:write-
attribute(name=path,value=/home/artemis/shared/paging)

You need to reload your configuration for changes to take effect:

reload

Now query the ha-policy attribute to verify that it’s consistent with our changes:

/subsystem=messaging-activemq/server=default:read-children-resources(child-type=ha-
policy)
{
  "outcome" => "success",
  "result" => {"shared-store-master" => {"failover-on-server-shutdown" => true}}
}

Now let’s move to the second WildFly server. We will start it using a port offset to avoid a conflict
with the live server:

$ ./standalone.sh -c standalone-full-ha.xml -Djboss.node.name=nodeB
-Djboss.socket.binding.port-offset=100

Next connect to the server using the CLI and the target port (in our case we use an offset of 100
units):

./jboss-cli.sh -c --controller=localhost:10090

In order to connect to the cluster, we need to set as well the cluster-password to be consistent with
the first server:

/subsystem=messaging-activemq/server=default/:write-attribute(name=cluster-
password,value=secretpassword)

Now, configure the server to be a backup server, using a shared store:

/subsystem=messaging-activemq/server=default/ha-policy=shared-store-
slave:add(failover-on-server-shutdown=true)

The backup server will use the same sets of directories as the live server:

/subsystem=messaging-activemq/server=default/path=bindings-directory:write-
attribute(name=path,value=/home/artemis/shared/bindings)
/subsystem=messaging-activemq/server=default/path=journal-directory:write-
attribute(name=path,value=/home/artemis/shared/journal)
/subsystem=messaging-activemq/server=default/path=large-messages-directory:write-
attribute(name=path,value=/home/artemis/shared/largemessages)
/subsystem=messaging-activemq/server=default/path=paging-directory:write-
attribute(name=path,value=/home/artemis/shared/paging)

You need to reload your configuration for changes to take effect:

reload

Verify that the configuration of the ha-policy is consistent with the changes we have applied:

/subsystem=messaging-activemq/server=default:read-children-resources(child-type=ha-
policy)
{
  "outcome" => "success",
  "result" => {"shared-store-slave" => {
  "allow-failback" => true,
  "failover-on-server-shutdown" => true,
  "restart-backup" => true,
  "scale-down" => undefined,
  "scale-down-cluster-name" => undefined,
  "scale-down-connectors" => undefined,
  "scale-down-discovery-group" => undefined,
  "scale-down-group-name" => undefined
  }}
}

11.6.3. Server Discovery

Another important part of clustering is server discovery, which is a mechanism by which servers
can propagate their connection details to messaging clients and other servers.

This information, named Cluster Topology, is actually sent over normal JMS connections to
clients, and over cluster connections to other servers. This being the case, we need a way of
establishing the initial connection. This can be done either using UDP or by providing a list of
initial connectors.

Server discovery, by default, uses UDP multicast to broadcast server connection settings. If UDP is
disabled on your network you won’t be able to use this, and will have to specify servers explicitly
when setting up a cluster or using a messaging client. Server discovery uses two key
components: Broadcast Groups and Discovery Groups.

11.6.3.1. Broadcast Groups

A Broadcast Group is the means by which a server broadcasts connectors over the network. A
connector defines a way in which a client (or other server) can make connections to the server.
Let’s take a look at the default broadcast-group configuration:

/subsystem=messaging-activemq/server=default/broadcast-group=bg-group1/:read-
resource()
{
  "outcome" => "success",
  "result" => {
  "broadcast-period" => 2000L,
  "connectors" => ["http-connector"],
  "jgroups-channel" => "activemq-cluster",
  "jgroups-stack" => undefined,
  "socket-binding" => undefined
  }
}

Here is a short description about the attributes:

• broadcast-period is the period in milliseconds between consecutive broadcasts.

• connectors specifies the names of connectors that will be broadcast.

• jgroups-channel specifies the name used by a JGroups channel to join a cluster.

• jgroups-stack: if specified, is the name of a stack defined in the jgroups subsystem that is used
to form a cluster.

• socket-binding: The broadcast group socket binding.
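
For example, the broadcast interval can be tuned with a write-attribute operation (a sketch; 5000 ms is an arbitrary value chosen for illustration):

```
/subsystem=messaging-activemq/server=default/broadcast-group=bg-group1:write-
attribute(name=broadcast-period,value=5000)
```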

The jgroups-stack is undefined by default. This means that, out of the box, it will use the default
jgroups stack which is UDP. You can change it to use TCP as follows:

/subsystem=messaging-activemq/server=default/broadcast-group=bg-group1:write-
attribute(name=jgroups-stack,value=tcp)

11.6.3.2. Discovery Groups

A Discovery Group defines how connector information is received from a multicast address. To
accomplish its tasks the discovery group maintains a list of connector pairs - one for each broadcast
by a different server. As it receives broadcasts on the multicast group address from a particular
server it updates its entry in the list for that server. If it has not received a broadcast from a
particular server for a length of time it will remove that server’s entry from its list.

Here is the default configuration of the discovery-group available in the messaging-activemq
subsystem:

/subsystem=messaging-activemq/server=default/discovery-group=dg-group1/:read-
resource()
{
  "outcome" => "success",
  "result" => {
  "initial-wait-timeout" => 10000L,
  "jgroups-channel" => "activemq-cluster",
  "jgroups-stack" => undefined,
  "refresh-timeout" => 10000,
  "socket-binding" => undefined
  }
}

Here is a short description about the attributes:

• initial-wait-timeout: Period, in ms, to wait for an initial broadcast to give us at least one node
in the cluster.

• jgroups-channel: The name used by a JGroups channel to join a cluster

• jgroups-stack: if specified, is the name of a stack defined in the jgroups subsystem that is used
to form a cluster.

• refresh-timeout: Period the discovery group waits after receiving the last broadcast from a
particular server before removing that server’s connector pair entry from its list

• socket-binding: The discovery group socket binding.

The discovery group also relies on UDP by default. You can change it to use TCP by setting the
jgroups-stack attribute as follows:

/subsystem=messaging-activemq/server=default/discovery-group=dg-group1:write-
attribute(name=jgroups-stack,value=tcp)

11.6.4. Configuring a static discovery of cluster nodes

The default mechanism used by ActiveMQ Artemis for discovering cluster members relies on UDP
broadcast and discovery. This approach is preferable if you want to achieve a dynamic cluster
composition; however, in some cases a static approach is required,
for example when your cluster nodes are located on another subnetwork.

Switching to a static discovery of messaging servers requires some steps to be performed in the
server configuration which can be summarized as follows:

• Create the outbound sockets pointing to the static server nodes

• Create the cluster endpoints in your messaging subsystem, pointing to the outbound sockets

• Remove the broadcast group and discovery groups, which are UDP based

• Assign a static list of endpoints to the cluster via the static-connectors list.

All of these steps can be performed through a batch CLI script, which avoids deadlocks when
switching from the broadcast groups/discovery groups to the static-connectors:

batch

# Create outbound sockets for JMS Servers
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-
binding=node1/:add(host=192.168.10.1,port=8080)
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-
binding=node2/:add(host=192.168.10.2,port=8080)

# Map the socket bindings with an http connector and define the endpoint name
/subsystem=messaging-activemq/server=default/http-connector=cluster-
server1/:add(endpoint=http-acceptor,socket-binding=node1)
/subsystem=messaging-activemq/server=default/http-connector=cluster-
server2/:add(endpoint=http-acceptor,socket-binding=node2)

# Remove the broadcast group and discovery group
/subsystem=messaging-activemq/server=default/broadcast-group=bg-group1/:remove
/subsystem=messaging-activemq/server=default/discovery-group=dg-group1/:remove

# Unset the discovery-group attribute for the cluster
/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster:undefine-
attribute(name=discovery-group)

# Include in your cluster the list of static connectors
/subsystem=messaging-activemq/server=default/cluster-connection=my-cluster/:write-
attribute(name=static-connectors,value=["cluster-server1","cluster-server2"])

run-batch

You can download the above script from here: http://bit.ly/2FRbC27

11.6.5. JMS Cluster behind a load balancer

If your WildFly messaging cluster (Artemis cluster) sits behind an HTTP load balancer, you can
change the behavior of the JMS connection factory by setting the use-topology-for-load-balancing
attribute. Example:

/subsystem=messaging-activemq/server=default/connection-
factory=InVmConnectionFactory:write-attribute(name=use-topology-for-load-
balancing,value=false)

When this attribute is set to false, instead of using the initial connector only to obtain the cluster
topology and then connecting to the cluster through it, the client will use the initial connector for
all its connections to the cluster. This translates into disabling automatic topology updates on
clients.

You can apply this setting both to the connection-factory resource and to the
pooled-connection-factory resource.
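For instance, the same attribute can be set on the default pooled connection factory (named activemq-ra in the default configuration):

```
/subsystem=messaging-activemq/server=default/pooled-connection-factory=activemq-ra:write-attribute(name=use-topology-for-load-balancing,value=false)
```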

12. Chapter 12: Classloading and modules
This chapter is an insider's in-depth look at the application server's classloading mechanism,
which is an essential topic for application developers and deployment administrators. Since
classloading is based on the JBoss Modules project, we will first give details about the
application server's module infrastructure. In the next part of the chapter, we will learn how to
solve dependencies and apply advanced classloading policies. Finally, we will tap into a new
strategy for provisioning standard or custom server installations using the Galleon tool. Here is
our track in detail:

• An introduction to the application server modules

• How to configure modules on the application server

• Configuring dependencies on other modules using WildFly deployment descriptors

• Creating a server distribution using Galleon

12.1. What are modules?


Class loading in WildFly is quite different compared to earlier (4/5/6) versions of JBoss AS. Class
loading is now based on the JBoss Modules project, which is a standalone implementation of a
modular (non-hierarchical) class loading and execution environment for Java. In other words,
rather than a single Class loader which loads all JARs into a flat class path, each library becomes a
module which only links against the exact modules it depends on, and nothing more. It implements
a thread-safe, fast, and highly concurrent delegating class loader model, coupled to an extensible
module resolution system, which combine to form a unique, simple and powerful system for
application execution and distribution.

So returning to our initial question, a module is a logical grouping of resources (like classes and
configuration files) used for class loading and dependency management. Depending on the way
modules are packaged, we can identify two different types of modules:

• Static Modules: these modules are installed as a tree of directories in the application server’s
modules directory. Each module contains one or more JAR files and a configuration file
(module.xml) that defines its unique name. All the libraries which are contained in WildFly
distribution are static modules and include both the application server core libraries and the
Java EE APIs. You can also install third-party libraries, which are used across your
applications, as static modules.

Modules are only loaded when required. This usually only occurs when an application is deployed
that has explicit or implicit dependencies.

• Dynamic Modules: these modules are created and loaded by the application server when a
library (e.g. EAR, JAR,WAR) is deployed. The name of a dynamic module is derived from the
name of the deployed archive. Because deployments are loaded as modules, they can also
configure dependencies and be used as dependencies by other deployments.

236
12.2. Configuring static modules
WildFly statically loads its modules based on a module path environment variable named
JBOSS_MODULEPATH. This variable defaults to the JBOSS_HOME/modules folder; therefore, this is
the standard location where you can find WildFly's core modules.

By setting JBOSS_MODULEPATH, you can specify additional paths for your modules. Here's an
example (for Linux users):

JBOSS_MODULEPATH=/usr/libs/custom-modules:$JBOSS_HOME/modules

You can alternatively specify the module path when booting the server with the
-mp switch; when the switch is omitted, the server uses the path specified by
the JBOSS_MODULEPATH variable.

The above behavior makes it relatively straightforward to define a common repository for your
WildFly installations: in the following example, there’s a shared module repository in
/var/lib/modules and a corresponding symbolic link in each distribution pointing to the common
repository.
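A minimal sketch of such a layout, using illustrative paths (set SHARED_REPO and WF_HOME to your real locations, e.g. /var/lib/modules and the installation root):

```shell
# Illustrative paths; default to the current directory so the sketch is safe to run
SHARED_REPO=${SHARED_REPO:-$PWD/var/lib/modules}   # common module repository
WF_HOME=${WF_HOME:-$PWD/opt/wildfly-a}             # one WildFly installation

mkdir -p "$SHARED_REPO" "$WF_HOME/modules"

# Symbolic link from the installation into the shared repository
ln -sfn "$SHARED_REPO" "$WF_HOME/modules/shared"

# Make the server scan both the shared link and the bundled modules
export JBOSS_MODULEPATH="$WF_HOME/modules/shared:$WF_HOME/modules"
```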

12.2.1. How to install a new module

Installing a library under the modules folder is the most flexible option, as it allows you to
define the exact dependencies of your library. In addition, it allows you to install different
library versions (e.g. two different implementations of the JSF API), which are qualified as slots.

You can install new modules at the root of your JBOSS_MODULEPATH, therefore allowing a clear
distinction between the base distribution modules and your own libraries.

The first step for installing a module is obviously creating a path for your module. The choice of the
path name is up to you; however, you need to include a main folder at the end of your path structure
which will contain the module’s default libraries and its configuration file.

12.2.1.1. Example: How to install Jython library as a module

Let’s see as an example how to install the Jython libraries which can be used as a Java interpreter
for the Python language. In order to do that, you just need to have your library jar file ready
(jython-standalone-2.5.2.jar). Then launch jboss-cli script and issue the following command:

module add --name=org.jython --resources=/usr/libs/jython-standalone-2.5.2.jar
--dependencies=javax.api

This will create the following module structure under JBOSS_HOME/modules:
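Assuming the default main slot, the layout created under JBOSS_HOME/modules looks like this:

```
modules/
└── org
    └── jython
        └── main
            ├── jython-standalone-2.5.2.jar
            └── module.xml
```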

The module add command will also create the following module.xml configuration file:

<module xmlns="urn:jboss:module:1.1" name="org.jython">


  <resources>
  <resource-root path="jython-standalone-2.5.2.jar"/>
  </resources>
  <dependencies>
  <module name="javax.api"/>
  </dependencies>
</module>

This basically says that the module is bound to the name "org.jython" and has a dependency on the
javax.api module.

12.2.1.2. How to use an installed module in your application

In order to use this module in your applications, you have to trigger a dependency on the module.
This can be done by adding the following entry to the META-INF/MANIFEST.MF file:

Dependencies: [modulename]

Example:

Dependencies: org.jython

Please note that the module name does not have to match the package name of the
library. The actual module name is specified in the module.xml file by the
name attribute of the module element.

You are not limited to a single dependency in your Manifest file, as you can add multiple
dependencies separated by a comma. For example, here is how to configure a dependency on two
modules (org.jython and org.apache.log4j):

Dependencies: org.jython,org.apache.log4j

If your application is contained in an Enterprise Archive and you want the dependency to be
exported to all submodules, you can add the export keyword to your EAR's Manifest file. For
example, here is how to export the jython and log4j dependencies to the other submodules
contained in the Enterprise Archive:

Dependencies: org.jython,org.apache.log4j export
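Besides export, the Dependencies entry accepts other per-module flags; for instance, optional marks a dependency that is wired only when the module is actually present (module names as in the previous examples):

```
Dependencies: org.jython optional, org.apache.log4j
```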

12.2.1.3. How to turn your modules into global modules

The above dependency strategy is the default and recommended one for your applications. It is
however also possible to define a module as a global module, which means that it will be accessible
to all deployments without adding any entry to the application Manifest file. This can be done
through the ee subsystem which includes an element called global-modules. You need to modify
this attribute by setting the list of global modules to be used by the application server.

Here is how you can set it to include org.apache.log4j module among the global modules:

/subsystem="ee":write-attribute(name=global-
modules,value=[{name="org.apache.log4j",slot="main"}])

The outcome in your server configuration will be the following entry:

<subsystem xmlns="urn:jboss:domain:ee:4.0">
  <global-modules>
  <module name="org.apache.log4j" slot="main"/>
  </global-modules>
. . . .

This subsystem also includes other properties which affect the overall server behavior. One
example is the ear-subdeployments-isolated flag: by default (false), subdeployments can see
classes belonging to other subdeployments within the same .ear (more about it in the section
"Configuring classloading isolation").

/subsystem=ee:write-attribute(name=ear-subdeployments-isolated,value=true)

Finally, the properties jboss-descriptor-property-replacement and
spec-descriptor-property-replacement are used, respectively, to allow property replacement of
strings in the JBoss XML descriptors and in the Java EE descriptors.

Here is how to set to true the jboss-descriptor-property-replacement:

/subsystem=ee:write-attribute(name=jboss-descriptor-property-replacement,value=true)
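With replacement enabled, expressions of the form ${property:default-value} in JBoss descriptors are resolved at deployment time. A sketch for jboss-web.xml (the property name app.context is illustrative):

```xml
<!-- jboss-web.xml: resolved against system properties when
     jboss-descriptor-property-replacement is true -->
<jboss-web>
    <context-root>/${app.context:myapp}</context-root>
</jboss-web>
```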

12.2.1.4. How to use global directories for your modules

WildFly 19 has the ability to define global directories for your modules. This can be a handy
solution if the name of a shared library changes very often, or if there are many libraries you
want to share. This can be done through the 'ee' subsystem, which allows the configuration of a
global directory that will be scanned to automatically include .jar files and resources as a
single additional dependency. This module dependency is added as a system dependency on each
deployed application. You can configure a global directory using the following command:

/subsystem=ee/global-directory=my-common-libs:add(path=lib, relative-
to=jboss.home.dir)

In the above example, we are using a relative path to point to the global directory. You can also use
an absolute path:

 /subsystem=ee/global-directory=my-common-libs:add(path=/home/jboss/tools/libs)

When a global-directory is created, WildFly defines a module which has a Jar Resource loader for
each jar file included in this directory and its subdirectories. For example, suppose you have
configured one global directory pointing to the following directory tree:

/my-common-libs/commons-math3-3.6.1.jar
/my-common-libs/poi-4.1.1.jar

The JBoss Modules module generated after scanning this global-directory will be equivalent to the
following module.xml:

<module xmlns="urn:jboss:module:1.5" name="deployment.external.global-directory.my-


common-libs">
  <resources>
  <resource-root path="/home/jboss/tools/libs/commons-math3-3.6.1.jar"/>
  <resource-root path="/home/jboss/tools/libs/poi-4.1.1.jar"/>
  </resources>

  <dependencies>
  <module name="javaee.api"/>
  </dependencies>
</module>

If you have included files in your global directory, they can be loaded as well by your applications
using the context ClassLoader of your current thread:

Thread.currentThread().getContextClassLoader().getResourceAsStream("file.properties");

The module created from the shared directory is loaded as soon as the first
application is deployed after the global-directory has been created. This means
that if the server is started or restarted and there are no applications deployed,
the global directory is neither scanned nor the module loaded. Any change to the
contents of the global-directory requires a server reload to make it available to
the deployed applications.

12.2.1.5. How to deploy extension-type dependencies

The recommended strategy for creating new modules is to install them in the modules folder of the
application server. For the sake of completeness, we will mention another option, which can be
used in standalone mode to create extension-list type dependencies. This can be done by dropping
your libraries into the standalone/lib/ext folder and referencing them from the MANIFEST.MF file.

When done, the dependency to this library will be added just to the module created for your
deployment.
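Following the standard Java optional-package mechanism, the deployment's manifest declares the extension, while the jar dropped into standalone/lib/ext identifies itself with a matching Extension-Name entry in its own manifest (the extension key and names below are illustrative):

```
Extension-List: jython
jython-Extension-Name: org.jython
```

The jar in standalone/lib/ext would then carry a matching Extension-Name: org.jython entry in its META-INF/MANIFEST.MF.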

12.3. Configuring dynamic modules


Every deployment in WildFly is a module; therefore, if you deploy a library into the application
server (for example, by dropping it into the deployments folder), it is automatically turned into
a module. You will not be able to use all the flexible options available in module.xml; yet, in
some circumstances, this can be useful. As an example, you can drop a JDBC driver into the
deployments folder and it will be automatically deployed, and a module created out of it.

12.3.1. How to use dynamic modules in your applications

You can use a dynamic module in your applications in the same way we have seen for installed
modules. The only thing we need to know is the actual module name.

Here’s the rule to determine the module name: applications that are packaged as top-level archives
(such as WAR, JAR, and SAR) are assigned the following module name:

deployment.[archivename]

For example, a Web application named WebExample1.war will be deployed as module name:

deployment.WebExample1.war

Therefore, you can reference this module with the following entry in MANIFEST.MF

Dependencies: deployment.WebExample1.war

On the other hand, on applications that contain nested deployments (such as the EAR archive),
every single archive will be assigned a module name using this classification:

deployment.[ear archivename].[sub deployment archive name]

So, the same Web application, if contained in the archive EnterpriseApp.ear, will be deployed with
the name:

deployment.EnterpriseApp.ear.WebExample1.war

12.4. Configuring module Dependencies


So far we have seen how to install a new module and how to specify that an application is
dependent on that module. The previous example does not cover all aspects of application
dependencies; actually, the application server recognizes two types of dependencies: explicit and
implicit.

• The Java EE core libraries are qualified as implicit dependencies, so they are automatically
added to your application when the deployer detects their usage.

• The other module libraries need to be explicitly declared by the user in the application’s
Manifest file or in a custom descriptor file named jboss-deployment-structure.xml.

12.4.1. Implicit dependencies

Implicit dependencies include two sets of core modules, which are added to your application
without an explicit dependency assertion. The first set includes core modules which are
automatically added to all applications.

The second set of implicit dependencies is added conditionally and includes almost all modules
that are based on configuration files or annotations. For example, the javax.ejb dependency is
triggered if the user annotates a class with an EJB annotation (e.g. @Stateless) or if the
ejb-jar.xml file is included in the application.

12.4.2. Explicit dependencies

Modules which are not qualified as implicit dependencies need to be declared by the user. In our
initial example, the org.jython module is mentioned in the Manifest file, therefore the
application server will link the library to the application.

Dependencies: org.jython

If you need fine-grained control over your dependencies and classloading policies, you can use
the jboss-deployment-structure.xml file, an application server custom descriptor discussed in the
next section of this chapter.

12.5. Advanced Classloading policies


The first common usage of the jboss-deployment-structure.xml file is to set application
dependencies on modules. The advantage of using this file (compared to the Manifest entry) is
that you can define dependencies across top-level deployments and subdeployments.

In the following example, we have defined a top-level dependency (The file itextpdf-5.4.3.jar
which has been added in the deployments folder) which is exported to all submodules of an
Enterprise Archive (ear):

<jboss-deployment-structure>
  <deployment>
  <dependencies>
  <module name="deployment.itextpdf-5.4.3.jar" export="TRUE"/>
  </dependencies>
  </deployment>
</jboss-deployment-structure>

If we want a more restrictive policy, we can include the dependency just for the sub-module named
myapp.war which is included in the EAR:

<jboss-deployment-structure>
  <sub-deployment name="myapp.war">
  <dependencies>
  <module name="deployment.itextpdf-5.4.3.jar" />
  </dependencies>
  </sub-deployment>
</jboss-deployment-structure>

The above examples are using deployment-based dependencies; you can however reference your
modules installed in the modules folder as in the following example where we are referencing log4j
libraries:

<jboss-deployment-structure>
  <deployment>
  <dependencies>
  <module name="org.apache.log4j" export="TRUE"/>
  </dependencies>
  </deployment>
</jboss-deployment-structure>

If you need to provide a fine-grained control over your dependencies, you can exclude/include
some packages from your dependencies. Let’s take as an example the following application which is
composed of these artifacts:

MyApp.ear
|
|-- MyWebApp.war
|
|-- lib/itextpdf-5.4.3.jar

As it is, you don't need to configure jboss-deployment-structure.xml to use the itext classes,
which are picked up from the lib folder. However, what if you want to select which packages of
the included itext library to use? That can be done by defining the itext library as a module and
including a filter in it, which excludes, for example, the com/itextpdf/awt/geom package:

<jboss-deployment-structure>
  <sub-deployment name="MyWebApp.war">
  <dependencies>
  <module name="deployment.itextpdf-5.4.3.jar" />
  </dependencies>
  </sub-deployment>
  <module name="deployment.itextpdf-5.4.3.jar" >
  <resources>
  <resource-root path="itextpdf-5.4.3.jar" >
  <filter>
  <exclude path="com/itextpdf/awt/geom" />
  </filter>
  </resource-root>
  </resources>
  </module>
</jboss-deployment-structure>

12.5.1. How to prevent your modules from being loaded

In this section, we will show how to exclude some modules from being loaded by WildFly. Here is,
for example, how to prevent your application from using the dom4j libraries and use instead the
XOM (http://www.xom.nu/) object model, which we have installed under the module name "org.xom":

<jboss-deployment-structure>
  <deployment>
  <exclusions>
  <module name="org.dom4j" />
  </exclusions>
  <dependencies>
  <module name="org.xom" />
  </dependencies>
  </deployment>
</jboss-deployment-structure>

As a footnote, please be aware that you can use the slot parameter in the module name in order to
specify a dependency against a particular release of a module. In the following example we want to
replace the default (main) implementation of the com.mysql module with the one contained in the
slot named "1.26". Here’s the view of the com.mysql folder under your modules tree:

~/jboss/wildfly-20.0.0.Final/modules:$ tree com


com
└── mysql
  ├── 1.26
  │   ├── module.xml
  │   └── mysql-connector-java-5.1.26-bin.jar
  └── main
  ├── module.xml
  └── mysql-connector-java-5.1.31-bin.jar

This is the corresponding configuration needed to use the slot 1.26 for your JDBC Driver:

<jboss-deployment-structure>
  <deployment>
  <exclusions>
  <module name="com.mysql" />
  </exclusions>
  <dependencies>
  <module name="com.mysql" slot="1.26" /> ①
  </dependencies>
  </deployment>
</jboss-deployment-structure>

① The slot name does not have to be a version number: you can use any name you like, as long as
it exists in your module file system.
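For the slot to resolve, the module.xml inside the 1.26 folder must declare the matching slot name; a sketch (the dependency list is illustrative):

```xml
<module xmlns="urn:jboss:module:1.1" name="com.mysql" slot="1.26">
    <resources>
        <resource-root path="mysql-connector-java-5.1.26-bin.jar"/>
    </resources>
    <dependencies>
        <module name="javax.api"/>
        <module name="javax.transaction.api"/>
    </dependencies>
</module>
```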

12.5.2. How to prevent a subsystem from being loaded

Another feature of the jboss-deployment-structure.xml file is the ability to prevent the
classloader from using a subsystem included in the application server. In the following example,
we are excluding the resteasy subsystem, which is used for consuming REST messages:

<jboss-deployment-structure>
  <deployment>
  <exclude-subsystems>
  <subsystem name="resteasy" />
  </exclude-subsystems>
  </deployment>
</jboss-deployment-structure>

Please note that if exclude-subsystems is specified for the top-level archive, it will be
inherited by sub-deployments, unless the sub-deployments specify their own (possibly empty) list.

12.5.3. Configuring classloading isolation

The jboss-deployment-structure.xml file can also be used to configure the application server's
classloading policy. The default policy, which is used to solve conflicts between multiple
versions of the same class, is the following:

• The highest priority is given to modules, automatically loaded by the container (e.g. the Java EE
APIs).

• Next, libraries that are indicated by the user, either using the MANIFEST.MF mechanism
(Dependencies:) or the jboss-deployment-structure.xml file.

• Then, libraries that are packed within the application itself, such as classes contained in WEB-
INF/lib or WEB-INF/classes.

• Finally, libraries that are packed within the same EAR archive (in the EAR’s lib folder).

Now that we are aware of the default classloading policies, let's see a concrete example.

The following application (myapp.ear) includes three artifacts: an EJB archive, a Web application
and a utility library in the lib folder.

myapp.ear
 |
 |__web.war
 |
 |__ejb.jar
 |
 |__lib/utility.jar

What is the default behavior of the server’s classloader in this example?

• WEB application classes are able to use the EJB classes

• EJB classes are not able to see the WEB application classes (which are loaded by a different
classloader as per Java EE specification).

• Both WEB classes and EJB classes are able to use the utility.jar

By setting to true the ear-subdeployments-isolated (default false), you can alter this behavior:

<jboss-deployment-structure>
  <ear-subdeployments-isolated>true</ear-subdeployments-isolated>
</jboss-deployment-structure>

Here is the new behavior now:

• WEB application classes are not able to use the EJB classes

• EJB classes are still not able to see the WEB application classes (the ear-subdeployments-isolated
has no effect on this archive)

• Both WEB classes and EJB classes are still able to use the utility.jar (the ear-subdeployments-
isolated has no effect on the archives in the lib folder)

As a final note, you should be aware that, as per Java EE specification, you can alter the default
name for the shared libraries (lib) by adding a library-directory element in your application.xml:

<library-directory>mylibs</library-directory>
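For instance, a minimal application.xml carrying this element might look like the following (the module entry and schema version are illustrative):

```xml
<application xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="7">
    <module>
        <web>
            <web-uri>web.war</web-uri>
            <context-root>/web</context-root>
        </web>
    </module>
    <library-directory>mylibs</library-directory>
</application>
```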

12.5.4. Sticking to Java EE compatibility

Using the Dependencies declaration in your Manifest file is a custom classloading strategy
adopted by WildFly and JBoss AS 7. It is quite a powerful add-on; yet, if you are on the hook for
Java EE portability, you should evaluate using the Class-Path Manifest entry in your artifacts.
This can be used within an EAR to set up dependencies between sub-deployments, and also to allow
modules access to additional jars deployed in the EAR that are not sub-deployments and are not in
the EAR's lib directory. Here is, for example, how to state a dependency on the utility.jar
library which is packaged in your EAR archive:

Manifest-Version: 1.0

Class-Path: utility.jar

As you can see, one important difference with the WildFly Dependencies strategy is that Java EE’s
Classpath needs to directly reference an artifact; therefore, if you plan to upgrade your libraries
you need to keep your Manifest file in sync. Besides this, the biggest downside of this approach is
that you can only use libraries that are packaged along with your application, whilst the
Dependencies option can reference modules which are loaded anywhere from the application
server.

12.6. Provisioning WildFly using Galleon


The Galleon project can be used to provision default or custom versions of WildFly. This is
particularly useful in the era of cloud and microservices, where you want to start only with the
environment you need for your applications. Out of the box, the application server provides a set
of different configurations that cover a large set of scenarios. Nevertheless, each application
has different requirements, so there is probably no configuration that fits your needs exactly.

So far, the common way to customize the application server was to manually remove extensions and
subsystems from the configuration file, deleting as well the related libraries in the modules
folder. This approach is, however, tedious and error-prone. Let's see how we can do it faster and
better with Galleon. First off, we will learn what the building blocks of Galleon are; then we
will install it and start provisioning WildFly with it.

• Features : The minimal configurable unit of Galleon’s configuration model is called a feature. A
feature is described by a feature specification.

• Feature specs : A feature specification describes how a feature can be configured. Feature specs
are identified by their names that must be unique in the scope of the feature-pack.

• Feature-packs : A feature-pack is a released unit of software, packaged as a ZIP archive, that
can be installed or uninstalled using the Galleon tools. A feature-pack mainly contains a set of
metadata describing the default configuration(s) of the product the feature-pack represents,
dependencies on other feature-packs, and the package set that should be installed by default. For
example, the wildfly-core feature-pack includes the core runtime that is used by the WildFly
application server.

• Layers: A layer defines a specific, complete piece of configuration (that can be used
individually or in combination) and exposes it to users to compose a desired configuration. The
jaxrs layer, for example, includes all the technology (modules and configuration) needed to run
JAX-RS applications on WildFly.

12.6.1. Getting started with Galleon

To use Galleon, download the Galleon tool from https://github.com/wildfly/galleon/releases.
Unzip the tool and start it with:

./galleon.sh

This will start the Galleon command line. By typing help, you can see the list of available commands:

[bin]$ help

== Commands to achieve main provisioning use cases ==


check-updates Get available updates for a full installation or an identified feature
pack
  --dir Installation directory
  --feature-packs The feature pack producers to check update for
  --include-all-dependencies Include dependencies when checking for updates
find <arg> Find feature pack locations that match the pattern
  <arg> Feature pack location and/or layer pattern
  --layers Comma separated list of layer name patterns
  --resolved-only Look-up in resolved feature-packs only
  --universe Provide a universe id in order to search for feature packs located in
not installed universe
get-changes Display the files modified, added or removed from an installation
  --dir Installation directory
get-info Display information on an installation directory
  --dir Installation directory
  --type Type of information to display (all, configs, dependencies, layers,
options, patches, universes)
install <arg> Installs specified feature pack
--More(35%)--

We will list the available feature-packs with the command list-feature-packs :

[bin]$ list-feature-packs
=============== ============== ============
Product Update Channel Latest Build
=============== ============== ============
wildfly 18.0/final 18.0.1.Final
wildfly current/final 19.0.0.Final
wildfly 17.0/final 17.0.1.Final
wildfly-core current/final 11.0.0.Final
wildfly-core 10.0/final 10.0.3.Final
wildfly-core 9.0/final 9.0.2.Final
wildfly-servlet 18.0/final 18.0.1.Final
wildfly-servlet current/final 19.0.0.Final
wildfly-servlet 17.0/final 17.0.1.Final

As you can see, there are 3 feature-packs available out of the box: the wildfly feature-pack,
which contains the standard distribution of the application server; wildfly-core, which contains
the core runtime that is used by the WildFly application server; and wildfly-servlet
(https://github.com/wildfly/wildfly/tree/master/servlet-feature-pack), which features a subset of
WildFly used to deploy just Web applications.

Let’s see now how to provision a full WildFly 19 distribution from the command line:

[bin]$ install wildfly:current --dir=wildfly-19

In a matter of a minute, the installation will be provisioned under the folder wildfly-19:

Feature-packs resolved.
Packages installed.
JBoss modules installed.
Configurations generated.
Feature pack installed.
======= ============ ==============
Product Build Update Channel
======= ============ ==============
wildfly 19.0.0.Final current

12.6.2. Exploring Galleon command line

Just like WildFly's CLI, the Galleon command line is able to auto-complete and suggest the
available options for a particular command. For example, let's say we want to provision WildFly
from the wildfly-servlet feature-pack. Let's just type install wildfly and hit the Tab key:

[bin]$ install wildfly


wildfly-core:current wildfly-servlet:current wildfly:current

As you can see, the available alternatives are printed on the command line. So, if we want to
provision a WildFly server based on the current wildfly-servlet feature-pack, we can just do:

[bin]$ install wildfly-servlet:current --dir=undertow-server

By checking into the undertow-server folder, we will find a WildFly distribution which ships with a
standalone.xml file and a standalone-load-balancer.xml file containing just the required extensions:

  <extensions>
  <extension module="org.jboss.as.deployment-scanner"/>
  <extension module="org.jboss.as.ee"/>
  <extension module="org.jboss.as.jmx"/>
  <extension module="org.jboss.as.logging"/>
  <extension module="org.jboss.as.naming"/>
  <extension module="org.jboss.as.security"/>
  <extension module="org.wildfly.extension.core-management"/>
  <extension module="org.wildfly.extension.elytron"/>
  <extension module="org.wildfly.extension.io"/>
  <extension module="org.wildfly.extension.request-controller"/>
  <extension module="org.wildfly.extension.security.manager"/>
  <extension module="org.wildfly.extension.undertow"/>
  </extensions>

Much the same way, also the modules folder of the application, will contain only the required
modules needed to start the Web server and its dependencies:

tree -L 4 modules
modules
└── system
  └── layers
  └── base
  ├── ch
  ├── com
  ├── ibm
  ├── io
  ├── javax
  ├── org
  └── sun

12.6.3. Installing different versions of a feature-pack

As we have seen, list-feature-packs shows the currently available feature-packs. It is, however,
also possible to install older versions of a feature-pack. In order to do that, let's find all
feature-packs that are available in your current Galleon distribution:

[bin]$ find .
Search done.

Found 153 feature pack locations.

wildfly-core:10.0#10.0.0.Beta1
wildfly-core:10.0#10.0.0.Beta2
wildfly-core:10.0#10.0.0.Beta3
wildfly-core:10.0#10.0.0.Beta4
wildfly-core:10.0#10.0.0.Beta5
wildfly-core:10.0#10.0.0.Beta6
wildfly-core:10.0#10.0.0.Beta7
wildfly-core:10.0#10.0.0.Beta8
wildfly-core:10.0#10.0.0.Beta9
wildfly-core:10.0#10.0.0.CR1
wildfly-core:10.0#10.0.0.Final
 . . . . . . . .

We have truncated the output for the sake of brevity. There are however 153 feature pack locations
available. We can, for example, install WildFly 15 distribution as follows:

[bin]$ install wildfly:current#15.0.0.Final --dir=wildfly-15


Feature-packs resolved.
Packages installed.
JBoss modules installed.
Configurations generated.
Feature pack installed.
======= ============ ==============
Product Build Update Channel
======= ============ ==============
wildfly 15.0.0.Final current

12.6.4. Choosing the layers to include in the installations

Layers are meant to define a certain (specialized) part of the final configuration, defining specific
package dependencies. You can choose to include only some of the available layers by passing the
--layers parameter to the install command. For example, if we were to install only the undertow
layer, we would then execute:

[bin]$ install wildfly:current --layers=undertow --dir=my-undertow

The list of available layers can be checked from the folder galleon-
pack/src/main/resources/layers/standalone of Wildfly’s source distribution
(https://github.com/wildfly/wildfly).

$ ls

drwxrwxr-x. 2 wildfly wildfly 4096 Feb 14 01:58 cdi
drwxrwxr-x. 2 wildfly wildfly 4096 Feb 14 01:58 cloud-profile
drwxrwxr-x. 2 wildfly wildfly 4096 Feb 14 01:58 ee
drwxrwxr-x. 2 wildfly wildfly 4096 Feb 14 01:58 h2-database
drwxrwxr-x. 2 wildfly wildfly 4096 Feb 14 01:58 jaxrs
drwxrwxr-x. 2 wildfly wildfly 4096 Feb 14 01:58 jms-activemq
drwxrwxr-x. 2 wildfly wildfly 4096 Feb 14 01:58 jpa
drwxrwxr-x. 2 wildfly wildfly 4096 Feb 14 01:58 microprofile
drwxrwxr-x. 2 wildfly wildfly 4096 Feb 14 01:58 resource-adapters
drwxrwxr-x. 2 wildfly wildfly 4096 Feb 14 01:58 undertow
drwxrwxr-x. 2 wildfly wildfly 4096 Feb 14 01:58 vault

Therefore, if we wanted to install just the cdi and jaxrs layers, the following command would do it:

[bin]$ install wildfly:current --dir=my-wildfly-minimal --layers=cdi,jaxrs

Finally, it’s worth mentioning that you can also use the galleon command line in non-interactive
mode, passing the arguments to the galleon.sh script as follows:

$ galleon.sh install wildfly:current --dir=my-wildfly-server --layers=cdi,jaxrs

13. Chapter 13: Clustering
Clusters in an application server enhance scalability and availability, two closely related concerns.
In order to achieve the benefits of clustering, you need to manage the configuration of several
components: first the clustering transport, then the replication/distribution of data across cluster
members, and finally the techniques used to balance load between nodes. As you can imagine, there
is a lot of ground to cover, so we will start right away with the list of topics that follows:

• An overview of WildFly’s clustering building blocks

• Configuring application server for clustering in standalone mode and domain mode

• Configuring the cluster Transport libraries

• Configuring Cluster Caches using Infinispan subsystem

In the next chapter, we will provide some complementary information that is essential to balance
the load of Web applications.

13.1. WildFly clustering building blocks


The following picture shows the clustering building blocks from a component-centric viewpoint:

As you can see, the backbone of WildFly clustering is the JGroups library, which provides a
reliable multicast system used by cluster members to find each other and communicate.

Multicast is a protocol where data is transmitted simultaneously to a group of hosts that have
joined the appropriate multicast group. You can think of multicast as a radio or television
broadcast, where only those tuned to a particular frequency receive the stream.

Next comes Infinispan, which is a data grid platform that is used by the application server to keep
application data in sync across the cluster by means of a replicated and transactional JSR-107
compatible cache. Infinispan is used both as a cache for standard session mechanisms (HTTP
Sessions and SFSB session data) and as an advanced caching mechanism for JPA and Hibernate objects
(aka second level cache).

13.2. Clustering standalone nodes


As we already know, the application server includes the following standalone configurations that
are cluster-aware:

• standalone-ha.xml

• standalone-full-ha.xml

Therefore, in order to start a cluster of application servers you have to select one of these
configurations. Additionally, you need to specify a server node name if your cluster nodes are
going to be bound to the same IP address.

13.2.1. Clustering standalone servers on different machines

The first and simplest use case is a clustering configuration where each server is bound to a
different IP address; that’s usually the case of an installation on different machines. In the
following example, the first cluster node is started on a machine bound at the IP address
192.168.10.1 and the second one at the IP address 192.168.10.2:

$ ./standalone.sh -b 192.168.10.1 -c standalone-ha.xml

$ ./standalone.sh -b 192.168.10.2 -c standalone-ha.xml

Our cluster configuration is visually depicted by the following image:

13.2.2. Clustering standalone servers on the same machine

In the second use case, we are going to start more than one application server instance on the same
machine; in order to avoid conflicts, we need to specify a server node name for each server JVM and
a port offset for the second (and any other) node. For this purpose, we will use the
jboss.node.name and jboss.socket.binding.port-offset system properties, as shown by the following code:

$ ./standalone.sh -c standalone-ha.xml -Djboss.node.name=nodeA

$ ./standalone.sh -c standalone-ha.xml -Djboss.node.name=nodeB -Djboss.socket.binding.port-offset=150

13.3. Configuring a cluster of domain nodes


In Configuring WildFly in Domain mode, we have learned that the domain configuration is
federated in a single file named domain.xml. Within this file, there are four built-in profiles
namely:

• The default profile, which can be used for non-clustered environments

• The ha profile for clustered environments

• The full profile which includes the messaging extension to the default profile

• The full-ha profile which includes both the messaging extension and the clustering capabilities

Therefore, in order to use clustering you have to make sure that your server groups are using a
cluster-aware profile, such as "ha" or "full-ha". You can check your server group’s profile by
looking into the server-group properties:

[domain@localhost:9990 /] /server-group=main-server-group:read-resource
{
  "outcome" => "success",
  "result" => {
  "management-subsystem-endpoint" => false,
  "profile" => "full",
  "socket-binding-default-interface" => undefined,
  "socket-binding-group" => "full-sockets",
  "socket-binding-port-offset" => 0,
  . . . .
  }
}

The above server group is not fit for running a cluster; therefore, we need to set a cluster-aware
profile and a corresponding set of sockets:

batch

/server-group=main-server-group:write-attribute(name=profile,value=ha)

/server-group=main-server-group:write-attribute(name=socket-binding-group,value=ha-sockets)

run-batch

Execute the above script and reload your host if you want the changes to take effect. Supposing
your servers are located on the host named "master":

/host=master:reload

13.3.1. Enabling clustering services

WildFly clustering service is an on-demand service. This means that, even if you have started a
cluster aware configuration, the cluster service won’t start until you deploy a cluster-aware
application. Enabling clustering for your applications can be achieved in different ways depending
on the type of application you are going to deploy:

If you are deploying a Web based application, then you need to declare it as "distributable" in the
web.xml configuration file to have your HTTP session state replicated across the cluster:

<web-app>
  <distributable />
</web-app>

If you are deploying an EJB based application, clustering services will start automatically and you
do not need any special annotation or XML configuration element. In the earlier versions of the
application server, you used to demarcate your EJB with the annotation
@org.jboss.ejb3.annotation.Clustered to trigger clustering services, as in the following example:

@Stateful
@Clustered
public class ClusteredStatefulBean { ... }

This annotation is now deprecated and will be ignored by the application server (and the same
stands for the <clustered>true</clustered> element that you could include in the jboss-ejb3.xml
deployment descriptor). So ultimately, with the new release of the application server, the state of
your Stateful Bean is automatically replicated across cluster nodes without any effort on your
side.

13.3.2. Configuring HTTP Session in a cluster

As said, adding the "distributable" stanza to your web.xml is sufficient to ensure that your HTTP
Session survives a server crash/restart, provided that at least one server is still available. You can
further specialize the way the HTTP Session is managed through the distributable-web subsystem,
which has been introduced in WildFly 17 to manage a set of session management profiles that
encapsulate the configuration of a distributable session manager:

  <subsystem xmlns="urn:jboss:domain:distributable-web:2.0" default-session-management="default" default-single-sign-on-management="default">

You can check the default-session-management attribute from the CLI as follows:

 /subsystem=distributable-web:read-attribute(name=default-session-management)
{
  "outcome" => "success",
  "result" => "default"
}

The HTTP Session management is handled under the hood by Infinispan; therefore, if you want to
check its settings, you have to inspect the infinispan-session-management resource under the
distributable-web subsystem. Example:

 /subsystem=distributable-web/infinispan-session-management=default:read-resource
{
  "outcome" => "success",
  "result" => {
  "cache" => undefined,
  "cache-container" => "web",
  "granularity" => "SESSION",
  "affinity" => {"primary-owner" => undefined}
  }
}

As you can see, there are several configurable attributes for the infinispan-session-management:

• cache-container: This references the Infinispan cache-container into which session data will be
stored.

• cache: This references a cache for the related cache-container. If undefined, the default cache of
the associated cache container will be used.

• granularity: This defines the session manager mapping for the individual cache entries.

• affinity: This attribute defines the affinity that a web request should have for a given server.
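
As an example, here is how you could bind the default session management profile to a specific cache of the "web" cache container from the CLI (the cache name "my-sessions" is purely a placeholder for a cache you have previously defined in that container):

/subsystem=distributable-web/infinispan-session-management=default:write-attribute(name=cache,value=my-sessions)

After a reload, session data will be stored in that cache rather than in the cache container’s default cache.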

13.3.2.1. Configuring HTTP Session Granularity

The granularity attribute can have the following values:

• SESSION: Stores all session attributes within a single cache entry. This is generally more
expensive than ATTRIBUTE granularity, but preserves any cross-attribute object references.

• ATTRIBUTE: Stores each session attribute within a separate cache entry. This is generally more
efficient than SESSION granularity, but does not preserve any cross-attribute object references.

By default, WildFly’s distributed session manager uses SESSION granularity, meaning that all
session attributes are stored within a single cache entry. While this ensures that any object
references shared between session attributes are preserved following replication/persistence, it
means that a change to a single attribute results in the replication/persistence of all attributes.

If your application does not share any object references between attributes, users are strongly
advised to use ATTRIBUTE granularity. Using ATTRIBUTE granularity, each session attribute is
stored in a separate cache entry. This means that a given request is only required to
replicate/persist those attributes that were added/modified/removed/mutated in a given request.
For read-heavy applications, this can dramatically reduce the replication/persistence payload per
request. Here is how you can set the granularity to "ATTRIBUTE" for the default session manager:

/subsystem=distributable-web/infinispan-session-management=default:write-attribute(name=granularity,value=ATTRIBUTE)

13.3.2.2. Configuring HTTP Session Affinity

The affinity attribute defines the affinity that an HTTP request has for a given WildFly server. The
affinity of the associated web session determines the algorithm for generating the route to be
appended onto the session ID (within the JSESSIONID cookie, or when encoding URLs). Possible
values are:

• affinity=none: HTTP requests won’t have affinity to any particular node.

• affinity=local: HTTP requests will have an affinity to the server that last handled a request for a
given session. This is the standard sticky session behavior.

• affinity=primary-owner: HTTP requests will have an affinity to the primary owner of a given
session. This is the default setting. Behaves the same as affinity=local if the backing cache is
neither distributed nor replicated.

• affinity=ranked: HTTP requests will have an affinity for the first available member in a list that
includes the primary and backup owners, as well as the member that last handled a session.
Behaves the same as affinity=local if the cache is neither distributed nor replicated.

The ranked affinity supports the following attributes:

• delimiter: The delimiter used to separate the individual routes within the encoded session
identifier.

• max-routes: The maximum number of routes to encode into the session identifier.

Here is how to set HTTP Session affinity to be ranked, with max-routes set to 2:

/subsystem=distributable-web/infinispan-session-management=default/affinity=ranked:add(max-routes=2)

13.3.2.3. Using a custom session management profile

You can define a new Session Management Profile as follows:

/subsystem=distributable-web/infinispan-session-management=custom-profile:add(cache-container=web,granularity=ATTRIBUTE)

Now, in order to link the "custom-profile" Session Management Profile to your application,
several options are available. The first is to include a WEB-INF/distributable-web.xml file in your
application, which references your custom profile:

<?xml version="1.0" encoding="UTF-8"?>
<distributable-web xmlns="urn:jboss:distributable-web:1.0">
  <session-management name="custom-profile"/>
</distributable-web>

You can also link the Session Management Profile through the existing jboss-all.xml:

<?xml version="1.0" encoding="UTF-8"?>
<jboss xmlns="urn:jboss:1.0">
  <distributable-web xmlns="urn:jboss:distributable-web:1.0">
  <session-management name="custom-profile"/>
  </distributable-web>
</jboss>

13.3.2.4. Defining Session Management Profile at application level

It is also possible to use deployment-specific settings for your session management profile. This can
be done by adding an infinispan-session-management element into the WEB-INF/distributable-
web.xml file. Example:

<?xml version="1.0" encoding="UTF-8"?>
<distributable-web xmlns="urn:jboss:distributable-web:1.0">
  <infinispan-session-management cache-container="web" cache="demo-cache" granularity="ATTRIBUTE">
  <local-affinity/>
  </infinispan-session-management>
</distributable-web>

Alternatively, the file META-INF/jboss-all.xml can also contain deployment-specific settings for
your session profile:

<?xml version="1.0" encoding="UTF-8"?>
<jboss xmlns="urn:jboss:1.0">
  <distributable-web xmlns="urn:jboss:distributable-web:2.0">
  <infinispan-session-management cache-container="web" granularity="SESSION">
  <primary-owner-affinity/>
  </infinispan-session-management>
  </distributable-web>
</jboss>

13.3.2.5. Using jboss-web.xml to manage max-active-sessions

Prior to WildFly 17, the file jboss-web.xml contained the core settings for configuring HTTP Session
replication. Here is the basic structure of it:

<jboss-web>
  ...
  <max-active-sessions>...</max-active-sessions>
  ...
  <!-- THIS IS DEPRECATED SINCE WILDFLY 17 !!! -->
  <replication-config>
  <replication-granularity>...</replication-granularity>
  <cache-name>...</cache-name>
  </replication-config>
  ...
</jboss-web>

Since WildFly 17, the replication-config section has been deprecated. You can still use max-active-
sessions to impose a limit on the number of currently active sessions. Please notice that this limit
works differently depending on whether you are running the application in a cluster or not (that is,
whether you are using <distributable/> in web.xml):

• Clustered Web applications: if you reach the max-active-sessions and a new request arrives,
the request is accepted and an old session is passivated to disk using a Least Recently Used
(LRU) algorithm

• Non-Clustered Web applications: when exceeding the max-active-sessions limit, a new request
will fail with an IllegalStateException.
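
For example, a jboss-web.xml that caps the number of active sessions to 100 (the value is purely illustrative) would look like this:

<jboss-web>
  <max-active-sessions>100</max-active-sessions>
</jboss-web>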

13.3.2.6. Storing HTTP Session Data in a remote Infinispan cluster

The distributable-web subsystem can be configured to store web session data in a remote
Infinispan cluster using the HotRod protocol. Storing web session data in a remote cluster allows
the cache layer to scale independently of the application servers. Here is a sample architecture
schema:

As you can see, a remote cluster of two Infinispan servers is up and running and listening for
HotRod connections on the following ports:

• nodeA: Port 11222 (default)

• nodeB: Port 11322 (offset 100)

At first, define a socket binding towards each of the remote Infinispan servers:
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=infinispan-server-1:add(host=localhost, port=11222)
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=infinispan-server-2:add(host=localhost, port=11322)

Then, create a remote-cache-container which connects to the remote destinations:

/subsystem=infinispan/remote-cache-container=datagrid:add(default-remote-cluster=infinispan-cluster)
/subsystem=infinispan/remote-cache-container=datagrid/remote-cluster=infinispan-cluster:add(socket-bindings=[infinispan-server-1, infinispan-server-2])
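
You can verify the resulting configuration by reading the remote-cache-container resource back, for example:

/subsystem=infinispan/remote-cache-container=datagrid:read-resource(recursive=true)

The output should list the infinispan-cluster remote cluster with the two socket bindings defined above.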

Done with the remote-cache-container, now create a hotrod-session-management under the
distributable-web subsystem which connects to the remote-cache-container:

/subsystem=distributable-web/hotrod-session-management=ExampleRemoteSessionStore:add(remote-cache-container=datagrid, cache-configuration=default, granularity=ATTRIBUTE)

As an alternative, if you prefer to use a deployment-specific configuration for your applications,
then you can include in /WEB-INF/distributable-web.xml a reference to the remote-cache-container,
as in this example:

<distributable-web xmlns="urn:jboss:distributable-web:2.0">
  <hotrod-session-management remote-cache-container="datagrid" cache-configuration=
"default" granularity="ATTRIBUTE">
  <no-affinity/>
  </hotrod-session-management>
</distributable-web>

13.4. Configuring the Cluster transport


The backbone of JBoss clustering is the JGroups library, which provides the communication
between members of the cluster using a multicast transmission.

The basic building block of JGroups is the Channel, which resembles a standard socket. Basically,
channels are the means by which applications can connect and send messages to each other in a
cluster.

When a cluster-aware application is deployed, the JGroups library launches a set of Channels that
have the ability to discover each other dynamically through the exchange of multicast packets.
Nodes that join the cluster at a later time have their state automatically initialized and
synchronized by the rest of the group.

For example, when a clustered web application is deployed, the web channel is activated:

16:26:33,355 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
(ServerService Thread Pool -- 74) ISPN000078: Starting JGroups channel web
16:26:33,389 INFO [org.infinispan.remoting.transport.jgroups.JGroupsTransport]
(ServerService Thread Pool -- 74) ISPN000094: Received new cluster view for channel
web: [francesco-pc|0] (1) [francesco-pc]
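
Once a channel has been started by a clustered deployment, you can also check its current members at runtime by reading the channel’s view attribute from the CLI:

/subsystem=jgroups/channel=ee:read-attribute(name=view)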

All messages sent and received over the Channel have to pass through the Protocol stack, which is
the second main element of the JGroups framework. The protocol stack is made up of a
bi-directional chain of protocol layers.

Outgoing requests go down the JGroups stack, and incoming requests climb up in the stack. For
example, you might have in your protocol stack a fragmentation layer that might break up a
message into several smaller messages, adding a header with an ID to each fragment, and re-
assembling the fragments on the receiver’s side. Here’s an example of the default Protocol stack
which uses UDP (default) as transport protocol:

The JGroups configuration also ships with the tcp stack, which uses a multicast ping (MPING) to detect
cluster members and TCP to communicate with nodes. The following picture depicts the standard tcp stack:

13.4.1. Changing the Protocol Stack used by JGroups

As we said, out of the box the UDP protocol is used to forge your cluster. You can, however, easily
change to another network protocol, and you can do it in two ways. The first option is to set the
stack attribute of the default channel (named "ee"):

<subsystem xmlns="urn:jboss:domain:jgroups:6.0">
  <channels default="ee">
  <channel name="ee" stack="udp"/>
  </channels>
  . . . . . .
</subsystem>

Here is how to change the "ee" channel to use tcp instead, using the CLI:

/subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcp)

Another option for varying the protocol stack is by means of the default-stack attribute of the
jgroups subsystem:

/subsystem=jgroups/:write-attribute(name=default-stack,value=tcp)

If you configure both the default channel’s stack and the jgroups default-stack, the former setting
will prevail; since the channel setting is more specific, that makes sense.

13.4.2. Configuring a full TCP stack

Although the default configuration includes a stack named "tcp", this stack still relies on multicast
to let cluster members find each other. If you want to use a static cluster view, by specifying the
list of server nodes (including address and port), then you need to use TCPPING at the top of the
tcp stack (instead of MPING). The recommended way to do that is to use the <socket-discovery-protocol />
element, which points to a set of socket bindings (one for each cluster node). This decouples the
cluster definition from the JGroups configuration:

<stack name="tcpping">
  <transport type="TCP" socket-binding="jgroups-tcp"/>
  <socket-discovery-protocol type="TCPPING" socket-bindings="jgroups-host-a jgroups-host-b"/>
  <protocol type="MERGE3"/>
  <protocol type="FD_SOCK"/>
  <protocol type="FD_ALL"/>
  <protocol type="VERIFY_SUSPECT"/>
  <protocol type="pbcast.NAKACK2"/>
  <protocol type="UNICAST3"/>
  <protocol type="pbcast.STABLE"/>
  <protocol type="pbcast.GMS"/>
  <protocol type="MFC"/>
  <protocol type="FRAG3"/>
</stack>

Then in your socket-binding-group define the list of cluster members:

<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">

  <!-- other configuration here -->


  <outbound-socket-binding name="jgroups-host-a">
  <remote-destination host="localhost" port="7600"/>
  </outbound-socket-binding>
  <outbound-socket-binding name="jgroups-host-b">
  <remote-destination host="localhost" port="7750"/>
  </outbound-socket-binding>
</socket-binding-group>
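
If you prefer the CLI over editing the XML, the same outbound socket bindings can be created as follows (host and ports match the example above):

/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=jgroups-host-a:add(host=localhost, port=7600)
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=jgroups-host-b:add(host=localhost, port=7750)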

You can change the default channel’s stack to use tcpping as follows:

/subsystem=jgroups/channel=ee:write-attribute(name=stack,value=tcpping)

13.4.2.1. Legacy tcpping configuration

If you are using an older version of WildFly (8, 9, 10) then you need to rely on the legacy TCPPING
configuration that uses properties to define the cluster members ("initial_hosts"). Here is a sample
configuration based on TCPPING, which uses the default Domain set up (two nodes bound on one
local interface with a port offset of 150):

<stack name="tcpping">
  <transport type="TCP" socket-binding="jgroups-tcp"/>
  <protocol type="org.jgroups.protocols.TCPPING">
  <property name="initial_hosts">
  127.0.0.1[7600],127.0.0.1[7750]
  </property>
  <property name="port_range">
  10
  </property>
  </protocol>
  <protocol type="MERGE3"/>
  <protocol type="FD_SOCK"/>
  <protocol type="FD_ALL"/>
  <protocol type="VERIFY_SUSPECT"/>
  <protocol type="pbcast.NAKACK2"/>
  <protocol type="UNICAST3"/>
  <protocol type="pbcast.STABLE"/>
  <protocol type="pbcast.GMS"/>
  <protocol type="MFC"/>
  <protocol type="FRAG3"/>
</stack>

13.4.3. Other JGroups stacks

WildFly includes also support, at JGroups level, for Azure Cloud service. This is not included in the
default profile of the server, but you will find a demo configuration in the
$JBOSS_HOME/docs/examples/configs/standalone-azure-ha.xml. Here is the relevant configuration
which instead of the default JGroups' PING protocol has the azure.AZURE_PING protocol:

<stacks>
  <stack name="udp">
  <transport type="UDP" socket-binding="jgroups-udp"/>
  <protocol type="azure.AZURE_PING">
  <property name="storage_account_name">${jboss.jgroups.azure_ping.storage_account_name}</property>
  <property name="storage_access_key">${jboss.jgroups.azure_ping.storage_access_key}</property>
  <property name="container">${jboss.jgroups.azure_ping.container}</property>
  </protocol>
  <protocol type="MERGE3"/>
  <socket-protocol type="FD_SOCK" socket-binding="jgroups-udp-fd"/>
  <protocol type="FD_ALL"/>
  <protocol type="VERIFY_SUSPECT"/>
  <protocol type="pbcast.NAKACK2"/>
  <protocol type="UNICAST3"/>
  <protocol type="pbcast.STABLE"/>
  <protocol type="pbcast.GMS"/>
  <protocol type="UFC"/>
  <protocol type="FRAG3"/>
  </stack>
  <stack name="tcp">
  <transport type="TCP" socket-binding="jgroups-tcp"/>
  <protocol type="azure.AZURE_PING">
  <property name="storage_account_name">${jboss.jgroups.azure_ping.storage_account_name}</property>
  <property name="storage_access_key">${jboss.jgroups.azure_ping.storage_access_key}</property>
  <property name="container">${jboss.jgroups.azure_ping.container}</property>
  </protocol>
  <protocol type="MERGE3"/>
  <socket-protocol type="FD_SOCK" socket-binding="jgroups-tcp-fd"/>
  <protocol type="FD_ALL"/>
  <protocol type="VERIFY_SUSPECT"/>
  <protocol type="pbcast.NAKACK2"/>
  <protocol type="UNICAST3"/>
  <protocol type="pbcast.STABLE"/>
  <protocol type="pbcast.GMS"/>
  <protocol type="FRAG3"/>
  </stack>
</stacks>

Once you have copied the file standalone-azure-ha.xml into the configuration folder of the
application server, simply start it, including the correct values for the System Properties. Example:

$ ./standalone.sh -c standalone-azure-ha.xml \
  -Djboss.jgroups.azure_ping.storage_account_name="A" \
  -Djboss.jgroups.azure_ping.storage_access_key="B" \
  -Djboss.jgroups.azure_ping.container="C"

13.4.4. Configuring the Transport Properties

You can configure and tune JGroups at two different levels:

• At the top protocol stack level (udp, tcp)

• At the level of the single protocols which are used for the transport (e.g. FD, MFC, etc.)

In this section we will learn how to set properties at the top protocol stack level. These properties
are available by querying the transport attribute of each stack. Here is for example how to query
for the UDP protocol properties:

/subsystem=jgroups/stack=udp/transport=UDP/:read-resource()
{
  "outcome" => "success",
  "result" => {
  "default-executor" => undefined,
  "diagnostics-socket-binding" => undefined,
  "machine" => undefined,
  "module" => "org.jgroups",
  "oob-executor" => undefined,
  "properties" => {},
  "rack" => undefined,
  "shared" => false,
  "site" => undefined,
  "socket-binding" => "jgroups-udp",
  "thread-factory" => undefined,
  "timer-executor" => undefined,
  "property" => undefined,
  "thread-pool" => {
  "timer" => undefined,
  "default" => undefined,
  "internal" => undefined,
  "oob" => undefined
  }
  },
}

Most of the time, you will be fine with the default settings of your protocol stack; however, some
knowledge about the following attributes can help to improve the performance and reliability of
your cluster:

• default-executor: The thread pool executor which is used to handle the incoming cluster
messages.

• socket-binding: The socket binding specification for this protocol layer. It is used to specify IP
interfaces and ports for communication.

• diagnostics-socket-binding: This is the diagnostics socket binding used for probing the
communication in the cluster.

• shared: If true, the underlying transport is shared by all channels using this stack.

• machine: The machine (i.e. host) identifier for this node. Used by Infinispan topology-aware
consistent hash.

• site: The site (i.e. data center) identifier for this node. Used by Infinispan topology-aware
consistent hash.

• rack: The rack (i.e. server rack) identifier for this node. Used by Infinispan topology-aware
consistent hash.

Check the section Configuring a Distributed cache to see how the properties "machine", "site" and
"rack" can affect the distribution of your keys in the cluster, when using the distributed cache
mode.
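
As a sketch, here is how you might tag a node’s topology on the UDP stack from the CLI (the values "dc-east", "rack1" and "host1" are placeholders for your own topology identifiers):

/subsystem=jgroups/stack=udp/transport=UDP:write-attribute(name=site,value=dc-east)
/subsystem=jgroups/stack=udp/transport=UDP:write-attribute(name=rack,value=rack1)
/subsystem=jgroups/stack=udp/transport=UDP:write-attribute(name=machine,value=host1)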

13.4.5. Configuring the Protocol Properties

Customizing the single Protocol properties requires some exposure to the details of the single
Protocols. You can generally use this option if you want to fine tune the communication in a cluster
and generally if you are not satisfied with the default settings.

We do recommend, before changing any property, to consult the JGroups Protocol documentation,
available at http://www.jgroups.org/manual/index.html#protlist which contains an exhaustive
description of each property.

On the other hand, if you are satisfied with the short description contained in the server
meta-model, then you can query them using the CLI, through the default (ee) channel. For example,
here is how to get a description of the properties available in the PING protocol:

/subsystem=jgroups/channel=ee/protocol=PING:read-resource-description
{
  "outcome" => "success",
  "result" => {
  "description" => "PING",
  "storage" => "runtime-only",
  "attributes" => {
  "always_send_physical_addr_with_discovery_request" => {
  "type" => BOOLEAN,
  "description" => "When sending a discovery request, always send the
physical address and logical name too",
  "nillable" => false,
  "access-type" => "metric",
  "storage" => "runtime"
  },
  "async_discovery" => {
  "type" => BOOLEAN,
  "description" => "If true then the discovery is done on a separate
timer thread. Should be set to true when discovery is blocking and/or takes more than
a few milliseconds",
  "nillable" => false,
  "access-type" => "metric",
  "storage" => "runtime"
  },
. . . .
}

In the following example, we are setting the timeout property of the PING discovery protocol for
the UDP stack:

/subsystem=jgroups/stack=udp/protocol=PING/property=timeout/:add(value=100)

This results in the following update in your configuration:

<stack name="udp">
  <transport type="UDP" socket-binding="jgroups-udp"/>
  <protocol type="PING">
  <property name="timeout">100</property>
  </protocol>
. . . . .
</stack>

13.5. Configuring Clustering Caches


The second building block of WildFly clustering is Infinispan, which is an advanced Data Grid
Platform that can be used to cache and synchronize cluster data across its members. The core
interface of Infinispan is org.infinispan.Cache, which extends the standard
java.util.concurrent.ConcurrentMap with lots of additional features, such as:

• Data eviction to allow a maximum number of cache entries

• Data expiration which support the expiration of data based on a time span

• JTA transaction compatibility

• Persistence of entries to a cache store, to maintain copies that would survive server failures

Each cache has a specific setting for the above features, so let’s see in detail the list of caches
available in WildFly. The caches in the Infinispan subsystem are organized as a set of Cache
Containers: you can query for the list of available caches through the infinispan subsystem as
indicated by the following query:

/subsystem=infinispan/:read-resource()
{
  "outcome" => "success",
  "result" => {
  "cache-container" => {
  "web" => undefined,
  "server" => undefined,
  "ejb" => undefined,
  "hibernate" => undefined
  },
  "remote-cache-container" => undefined
  }
}

As you can see, out of the box there are four built-in Cache Containers:

• The web Cache Container used for replication of HTTP sessions

• The server Cache Container used as general purpose replication of objects in a cluster

• The ejb Cache Container used for replication of Stateful EJB Session data

• The hibernate Cache Container used as foundation for second-level entity cache by
JPA/Hibernate

The above cache containers are used internally by the application server; you can however define
new ones for the purpose of having your own replicated/distributed cache solution.
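
For instance, here is a minimal sketch of how you could create your own cache container with a replicated cache as its default (the names "mycontainer" and "mycache" are hypothetical; exact cache attributes may vary across WildFly versions):

batch
/subsystem=infinispan/cache-container=mycontainer:add(default-cache=mycache)
/subsystem=infinispan/cache-container=mycontainer/transport=jgroups:add()
/subsystem=infinispan/cache-container=mycontainer/replicated-cache=mycache:add()
run-batch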

13.5.1. Configuring the Cache Container top level attributes

Each Cache Container has a specific configuration; this means that you can define specialized
cache configurations in your application server. The properties of each Cache Container are evident
from the following query:

/subsystem=infinispan/cache-container=web/:read-resource(recursive=false)
{
  "outcome" => "success",
  "result" => {
  "aliases" => undefined,
  "default-cache" => "dist",
  "eviction-executor" => undefined,
  "jndi-name" => undefined,
  "listener-executor" => undefined,
  "module" => "org.wildfly.clustering.web.infinispan",
  "replication-queue-executor" => undefined,
  "start" => "LAZY",
  "statistics-enabled" => false,
  "distributed-cache" => {"dist" => undefined},
  "invalidation-cache" => undefined,
  "local-cache" => undefined,
  "replicated-cache" => undefined,
  "transport" => {
  "jgroups" => undefined,
  "TRANSPORT" => undefined
  }
. . .
}

The default-cache attribute controls the cache to be used by the Cache Container. In this example,
the default-cache is set to "dist", which points to a Distributed Cache. The
start attribute configures the cache container start mode, which can be EAGER (immediate start) or
LAZY (on-demand start).

If you are deploying an application which initializes a Cache at startup (e.g.
Hibernate 2LC), you need to set this attribute to EAGER to avoid an application
startup failure.
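
For instance, the start mode could be changed as follows (shown here for the hibernate Cache
Container; a server reload may be required afterwards):

```
/subsystem=infinispan/cache-container=hibernate:write-attribute(name=start,value=EAGER)
```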

The jndi-name attribute controls the JNDI name assigned to the cache container. Here is, for
example, how to inject the web Cache Container into your Enterprise Applications:

@Resource(lookup="java:jboss/infinispan/container/web")
private CacheContainer container;
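
Once injected, the container behaves much like a map factory; a minimal usage sketch (the keys and
values below are purely illustrative):

```java
import org.infinispan.Cache;

// Obtain the container's default cache and use it like a ConcurrentMap
Cache<String, Object> cache = container.getCache();
cache.put("some-key", "some-value");   // propagated according to the cache mode
Object value = cache.get("some-key");
```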

The eviction-executor attribute references a Thread pool executor from the threads subsystem. It
controls the allocation and execution of runnable tasks that handle evictions. You should consider
configuring this attribute if you make regular use of cache evictions, in order to keep control over
the amount of available memory.

The replication-queue-executor attribute also references a Thread pool executor and controls
the allocation and execution of runnable tasks that handle asynchronous cache operations. By
default, the web and ejb cache containers use an asynchronous replication mechanism, therefore
you should consider setting this attribute if you have a considerable amount of data to be replicated.

The listener-executor attribute also references a defined Thread pool executor and governs the
allocation and execution of the runnable tasks that notify asynchronous cache listeners. You should
consider using it if you emit frequent notifications to asynchronous listeners.

Please be aware that if you have plans to move to the JBoss Enterprise
Application Platform (which requires a subscription from Red Hat), managing
the Infinispan caches requires a separate subscription to a product named JBoss
Data Grid (www.jboss.org/products/datagrid).

13.5.2. Configuring the Cache Container Transport

At the beginning of this chapter, we learned about the core cluster transport configuration, which is
carried out by the JGroups library. Besides the core transport settings, you can also specialize the
transport settings for a single Cache Container. In order to do that, you need to operate on the
transport attribute of each Cache Container:

/subsystem=infinispan/cache-container=web/transport=jgroups:read-resource()
{
  "outcome" => "success",
  "result" => {
  "channel" => undefined,
  "cluster" => undefined,
  "executor" => undefined,
  "lock-timeout" => 60000L,
  "stack" => undefined
  }
}

The stack attribute defines the JGroups stack to be used for transport. By default UDP is used;
however, if you have requirements that are incompatible with this stack (e.g. some cluster nodes on
a different sub-network), you can fine-tune the cache transport to use TCP.

The executor attribute governs the Thread pool to be used for Cache transport.

The lock-timeout attribute configures the time-out to be used when obtaining locks for the
transport.

The cluster attribute configures the name of the group communication cluster. This is the name
which will be seen in debugging logs.

For example, here is how to change the protocol stack of the web Cache to use TCP:

/subsystem=infinispan/cache-container=web/transport=jgroups:write-
attribute(name=stack,value=tcp)

13.6. Configuring ejb and web Cache containers
Both Stateful Session Beans and Web applications are capable of holding the client’s state. In order
to enable fault tolerance of the client’s state, you need to create backup copies of it across the cluster
and keep them in sync. This is done transparently by Infinispan Cache Containers, which can use
two different modes:

• replicated-cache: This element is used by caches that replicate their state across all nodes of the
cluster.

• distributed-cache: This element is used by caches that distribute their state across a subset of
nodes of the cluster. It is the default for ejb and web applications.

Each caching mode can be synchronous or asynchronous. Asynchronous replication is faster,
because synchronous replication requires acknowledgments from all servers in the cluster, thus
increasing the round-trip time. However, when a synchronous update returns successfully, the
caller has a guarantee that the update has been successfully applied to all members.
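
A cache’s current mode can be verified from the CLI; for example, for the default web distributed
cache (resource names as shown in this chapter’s queries):

```
/subsystem=infinispan/cache-container=web/distributed-cache=dist:read-attribute(name=mode)
```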

13.6.1. Configuring a Replicated cache

In a replicated cache, all nodes in a cluster hold all keys, i.e. if a cache entry exists on one node, it
will also exist on all other nodes. The replication strategy is the simplest way to guarantee high
availability in your cluster; as soon as your application modifies a session attribute (e.g. via
session.setAttribute in a web application), the change is propagated across all nodes of the cluster.
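
Bear in mind that HTTP session replication is only activated for applications that are marked as
distributable; a minimal web.xml sketch:

```xml
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="3.1">
    <!-- enables clustered (replicated/distributed) HTTP sessions for this application -->
    <distributable/>
</web-app>
```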

The replication strategy proves to be an efficient mechanism for clustered applications that mostly
read data, or for applications that are distributed over a limited number of cluster nodes
(Infinispan recommends 10 as a reasonable upper bound on the number of replicated nodes).

13.6.1.1. Creating a replicated cache

The replicated cache strategy is no longer the default; therefore, if you want to switch to this
caching strategy, you need to first add the cache and then assign it to a specific Cache
Container. Let’s see how to do it for the ejb Cache Container:

/subsystem=infinispan/cache-container=ejb/replicated-cache=repl/:add(mode=ASYNC)

/subsystem=infinispan/cache-container=ejb/:write-attribute(name=default-
cache,value=repl)

On the other hand, here is how to perform the same on the web cache:

/subsystem=infinispan/cache-container=web/replicated-cache=repl/:add(mode=ASYNC)

/subsystem=infinispan/cache-container=web/:write-attribute(name=default-
cache,value=repl)

13.6.2. Configuring a Distributed cache

When using cache distribution, cache entries are copied to a fixed number of cluster nodes (2, by
default) regardless of the cluster size. Distribution uses a consistent hashing algorithm to determine
which nodes will store a given entry and can be used to enable your clusters to achieve "linear
scalability". The number of copies represents a trade-off between performance and durability of
data. The more copies you maintain, the lower performance will be, but also the lower the risk of
losing data due to server outages.

You can use the owners parameter (default 2) to define the number of cluster-wide replicas for
each cache entry. Here’s how to set this parameter to 3 for the "ejb" Cache container:

/subsystem=infinispan/cache-container=ejb/distributed-cache=dist/:write-
attribute(name=owners,value=3)

13.6.2.1. Providing hints to the Distributed cache

Earlier in this chapter we learned about the site, rack and machine attributes of the JGroups
transport configuration. These attributes come into play when using a distributed cache, as they are
used by Infinispan’s topology-aware consistent hash function which, when using distribution
mode, prevents replicas from being stored on the same host, rack or site.

For example, when running a cluster in domain mode, the following configuration will prevent
storing multiple copies of a cache entry on the same host:

<subsystem xmlns="urn:jboss:domain:jgroups:2.0" default-stack="udp">
  <stack name="udp">
  <transport type="UDP" socket-binding="jgroups-udp"
  machine="${jboss.host.name}" />
  . . . .
  </stack>
</subsystem>

13.6.2.2. Adding L1 cache to a distributed cache

When running in distributed mode it is possible to configure a special kind of cache named the "L1"
cache (also known as a "near cache" in competing products), which temporarily holds entries of the
cache. When an L1 cache is available, it is consulted locally before checking caches on remote
servers. L1 entries are invalidated when the entry is changed elsewhere in the cluster, so you can be
sure you don’t have stale entries cached in L1.

In order to enable the L1 cache you can set a positive value for the l1-lifespan attribute that is
available in distributed caches. Here is an example:

/subsystem=infinispan/cache-container=ejb/distributed-cache=dist/:write-
attribute(name=l1-lifespan,value=60000)

13.6.3. Configuring ejb and web Cache containers

Once the cache strategy has been defined, we can focus on the core configuration settings, which
can be used to prevent out-of-memory scenarios (too many items loaded in memory), to handle
concurrency scenarios, and to choose an appropriate storage for the cache.

13.6.3.1. Configuring cache eviction

Eviction is used to prevent your application from running out of memory. The following CLI query
shows the default eviction policy used by the ejb Cache container (same for the web cache):

/subsystem=infinispan/cache-container=ejb/distributed-
cache=dist/component=eviction/:read-resource(recursive=false)
{
  "outcome" => "success",
  "result" => {
  "max-entries" => -1L,
  "strategy" => "NONE"
  }
}

The max-entries parameter controls the maximum number of items to be held in memory, while
strategy lets you choose among the following eviction strategies: 'UNORDERED', 'FIFO', 'LRU',
'LIRS' and 'NONE' (to disable eviction).

LIRS is a variation of the LRU algorithm that addresses the weak access locality
shortcomings of LRU. For more information about it, please refer to
http://dl.acm.org/citation.cfm?id=511334.511340.

By default there is no eviction (-1) planned for the web distributed cache. You can however set, for
example, a limit on the max-entries to be 10000 units:

/subsystem=infinispan/cache-container=web/distributed-
cache=dist/component=eviction/:write-attribute(name=max-entries,value=10000)

Much the same way, you can set the strategy attribute as follows:

/subsystem=infinispan/cache-container=web/distributed-
cache=dist/component=eviction/:write-attribute(name=strategy,value=LRU)

Cache entries which are evicted from memory are passivated, which means that they can be
recovered if needed in the future. See Configuring EJB and Web application cache Storage for more
information.

13.6.3.2. Configuring cache expiration

Cache expiration allows you to attach lifespan and/or maximum idle times to entries. Entries that
exceed these times are treated as invalid and are removed. When entries expire, they are not
passivated like evicted entries; rather, they are removed globally from memory and from cache
stores, cluster-wide:

/subsystem=infinispan/cache-container=ejb/distributed-
cache=dist/component=expiration/:read-resource(recursive=false)
{
  "outcome" => "success",
  "result" => {
  "interval" => 60000L,
  "lifespan" => -1L,
  "max-idle" => -1L
  },
}

The max-idle parameter determines the maximum idle time a cache entry will be maintained in
the cache, in milliseconds. If the idle time is exceeded, the entry will be expired cluster-wide. -1
means the entries never expire.

The lifespan parameter determines the lifespan of a cache entry, after which the entry is expired
cluster-wide, in milliseconds. -1 means the entries never expire.

Finally, the interval parameter is the interval (in milliseconds) between subsequent runs to purge
expired entries from memory and any cache stores. If you wish to disable the periodic eviction
process altogether, set it to -1.

So, by default, no cache entry will expire. Here is how you can set the max-idle time to one hour
(3600000 ms) for the ejb Cache Container:

/subsystem=infinispan/cache-container=ejb/distributed-
cache=dist/component=expiration/:write-attribute(name=max-idle,value=3600000)

13.6.3.3. Configuring locking for entries

Whenever data is replicated/distributed across the cluster nodes, you must be aware that your data
can be accessed or modified concurrently by several threads. In order to manage concurrent
access, Infinispan makes use of some key features of Multi-Versioned Concurrency Control
(MVCC), a technique already adopted by relational databases and other data stores.

By implementing some MVCC key features you can leverage high-level performance, especially for
applications that mostly read data, since:

• Concurrent readers and writers are allowed

• Readers and writers do not block one another

• Write skews can be detected and handled

• Internal locks can be striped

The above locking strategies are an integral part of Infinispan’s consistency model, however some
aspects of it can be tuned. For example, the following attributes (contained in the component
element) can be varied:

/subsystem=infinispan/cache-container=ejb/distributed-
cache=dist/component=locking/:read-resource(recursive=false)
{
  "outcome" => "success",
  "result" => {
  "acquire-timeout" => 15000L,
  "concurrency-level" => 1000,
  "isolation" => "REPEATABLE_READ",
  "striping" => false
  },
}

The default isolation level used for the "ejb" and "web" cache is REPEATABLE_READ.

In this isolation level, a thread sees a consistent snapshot of any given entry, even if concurrent
threads update the entry. As such, it may see historic values of an entry, but this will be stable and
consistent with a previous lookup in the same transaction.

If you want a higher isolation level, you can set it to SERIALIZABLE, as indicated by the following CLI:

/subsystem=infinispan/cache-container=ejb/distributed-
cache=dist/component=locking/:write-attribute(name=isolation,value=SERIALIZABLE)

Another configuration attribute is striping. When set to true, a pool of shared locks is maintained
for all entries that need to be locked. Otherwise, a lock is created per entry in the cache. Lock
striping helps control memory footprint but may reduce concurrency in the system. The default
value for this attribute is false. You can set it to true as follows:

/subsystem=infinispan/cache-container=ejb/distributed-
cache=dist/component=locking/:write-attribute(name=striping,value=true)

The acquire-timeout attribute determines the maximum time to attempt a particular lock
acquisition.

Finally, you can adjust the concurrency-level according to the number of concurrent threads
interacting with the cache.
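
Both attributes can be tuned with a write-attribute operation; the values below are merely
illustrative:

```
/subsystem=infinispan/cache-container=ejb/distributed-cache=dist/component=locking/:write-attribute(name=acquire-timeout,value=30000)

/subsystem=infinispan/cache-container=ejb/distributed-cache=dist/component=locking/:write-attribute(name=concurrency-level,value=2000)
```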

13.6.3.4. Configuring EJB and Web application cache Storage

The ejb and web Cache containers both use a file system as data store. In order to query the
FileStore properties of your Cache, select the file-store element of your cache (in our example the
"dist" cache):

/subsystem=infinispan/cache-container=ejb/distributed-cache=dist/file-
store=FILE_STORE/:read-resource(recursive=false)
{
  "outcome" => "success",
  "result" => {
  "fetch-state" => true,
  "passivation" => true,
  "path" => undefined,
  "preload" => false,
  "properties" => undefined,
  "purge" => true,
  "relative-to" => "jboss.server.data.dir",
  "shared" => false,
  "singleton" => false,
  "property" => undefined,
  "write" => {"through" => undefined}
  },
  "response-headers" => {"process-state" => "reload-required"}
}

The shared attribute indicates that the file store is shared among different cache instances. Setting
this to true prevents repeated and unnecessary writes of the same data to the cache loader by
different cache instances.

The preload attribute, if set to true, causes the data stored in the cache loader to be pre-loaded
into memory when the cache starts. It can be used to provide a "warm cache" on startup; however,
there is a performance penalty, as startup time is affected by this process.

The passivation attribute, when set to true, makes the cache enforce entry passivation and
activation on eviction. Cache passivation is the process of removing an object from the in-memory
cache and writing it to a secondary data store on eviction. Cache activation is the process of
restoring an object from the data store into the in-memory cache when it needs to be used. In
both cases, when passivation is set to true, the configured cache store will be used to read from
and write to the data store.

The fetch-state attribute determines whether or not to fetch the persistent state of a cache when
joining a cluster. If the cache store is configured to be shared, fetching the persistent state is
ignored, since all caches access the same cache store.

The purge attribute empties the specified cache loader when the cache starts up.

Finally, the singleton attribute, when set to true, causes modifications to be stored by only one
node in the cluster, the coordinator. Essentially, whenever any data comes in to some node it is
always replicated (or distributed) so as to keep the caches’ in-memory states in sync; the
coordinator, though, has the sole responsibility of pushing that state to disk.

Some additional properties like path (the location where the file store is persisted) or relative-to
(the root path where the cache will be persisted) can be set using the CLI, like in the following
example where we are setting the File Store path:

/subsystem=infinispan/cache-container=ejb/distributed-cache=dist/file-
store=FILE_STORE/:write-attribute(name=path,value=filestorepath)

13.6.3.5. Using a JDBC Cache store

The default file cache store is a good approach for many basic clustering scenarios. However, bear
in mind the following limitations:

• Usage on shared filesystems such as NFS, Windows shares, and other similar technologies,
should be avoided as these do not implement proper POSIX file locking, and can cause data
corruption

• Filesystems are inherently not transactional, so when attempting to use your cache in a
transactional context, failures when writing to the file (which happens during the commit
phase) cannot be recovered

A valid alternative approach can be using a JDBC cache store, which persists data in a relational
database using a JDBC driver. There are three implementations of the JDBC cache store, which are
as follows:

• JdbcBinaryCacheStore

• JdbcStringBasedCacheStore

• JdbcMixedCacheStore

The JdbcBinaryCacheStore is a standard JDBC-based solution that can store any type of key for
your entries. This is achieved by storing all the Map buckets (slots of array elements) as rows
in the database table. This provides greater flexibility, at the price of coarse-grained access
granularity and lower performance.

The JdbcStringBasedCacheStore implementation will store each entry within a row in the table
(rather than grouping multiple entries into a row). This assures a better granularity and
performance than JdbcBinaryCacheStore, but it requires that all cache keys are Strings.

Finally, JdbcMixedCacheStore is a hybrid implementation which, based on the key type, delegates
to either JdbcBinaryCacheStore or JdbcStringBasedCacheStore, so you have the best of both worlds.

13.6.3.6. Example: Defining a JDBC Cache Store

Although the Admin console includes a subpanel which is dedicated to JDBC Cache store, it is more
convenient to use the Command Line Interface in order to quickly configure it and have access to
the different JDBC implementations available.

In the following example, we will learn how to create a Binary keyed JDBC store: the first thing
you need to do is to replace the current file store of your cache. For example, supposing you want
to operate on the "web" cache container, start by issuing this command:

/subsystem=infinispan/cache-container=web/distributed-cache=dist/store=file:remove

Next step will be adding the Binary keyed JDBC store which is bound, in this example, to the default
ExampleDS Datasource, using a set of three fields to store data:

/subsystem=infinispan/cache-container=web/distributed-cache=dist/binary-keyed-jdbc-
store=BINARY_KEYED_JDBC_STORE:add(datasource=java:jboss/datasources/ExampleDS,binary-
keyed-table={"id-column" => {"name" => "ID_COLUMN","type" => "VARCHAR(255)"},"data-
column" => {"name" => "DATA_COLUMN","type" => "BINARY"},"timestamp-column" => {"name"
=> "TIMESTAMP_COLUMN","type" => "BIGINT"}})

Reload your configuration for changes to take effect.
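
If all of your cache keys are Strings, a String keyed JDBC store can be defined much the same way;
a sketch (the column names and types are illustrative):

```
/subsystem=infinispan/cache-container=web/distributed-cache=dist/string-keyed-jdbc-store=STRING_KEYED_JDBC_STORE:add(datasource=java:jboss/datasources/ExampleDS,string-keyed-table={"id-column" => {"name" => "ID_COLUMN","type" => "VARCHAR(255)"},"data-column" => {"name" => "DATA_COLUMN","type" => "BINARY"},"timestamp-column" => {"name" => "TIMESTAMP_COLUMN","type" => "BIGINT"}})
```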

13.6.4. Controlling Passivation of HTTP Sessions

Besides using the Infinispan Cache Manager settings, Web applications can configure passivation of
HTTP sessions using some directives which can be included, on a per-application basis, in the jboss-
web.xml file. These directives have been included mostly for backward compatibility with earlier
versions of the application server, yet they introduce the concept of passivating session data, which
can be restored later. Let’s see a concrete example:

<jboss-web>
  <max-active-sessions>100</max-active-sessions>
</jboss-web>

In this example, if session creation would cause the number of active sessions to exceed 100, then
the oldest session known to the Session Manager will be passivated to make room for the
new session.

Please note that the <passivation-config/> element (and its sub-elements) used in earlier releases
of the application server has been deprecated; therefore, you shouldn’t add anything else for fine-
tuning your HTTP session passivation.

13.7. Configuring hibernate Cache Container


The hibernate Cache Container can be used by Hibernate or JPA applications; although it is
capable of holding data in a cache, much like the "ejb" and "web" caches, its purpose is not to
achieve high availability but better performance. As a matter of fact, the data which is transferred
into this cache has a "natural" storage in the database; hence we don’t use it to avoid losing data
but to reduce database trips from our applications.

The hibernate cache holds an important difference compared to the "ejb" and "web"
caches. As a matter of fact, whenever a new entity or collection is read from the
database and needs to be cached, it is only cached locally in order to reduce intra-
cluster traffic. Infinispan aims to determine, with its internal algorithm, which
node holds the cached data.

This option cannot be changed. As you might understand, this strategy is a bit more complex than
"ejb" and "web" replication and requires the interaction of several caches behind the scenes. Here
are its core elements:

• An invalidation cache is required to inform the other cluster nodes when data has been updated
on one of the server nodes

• A local cache is needed if you want to store locally the Entities loaded by a query

• A replicated cache is used for storing timestamps that keep track of the last update timestamp
for each table
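
You can verify which caches compose the hibernate Cache Container with read-children-names
queries (the cache names may vary across WildFly versions):

```
/subsystem=infinispan/cache-container=hibernate:read-children-names(child-type=invalidation-cache)
/subsystem=infinispan/cache-container=hibernate:read-children-names(child-type=local-cache)
/subsystem=infinispan/cache-container=hibernate:read-children-names(child-type=replicated-cache)
```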

13.7.1. Configuring Hibernate cache for Entities and Collections

When you are performing CRUD (Create, Read, Update, Delete) operations using Hibernate/JPA, the
hibernate invalidation cache comes into play. Actually, when you execute a change in your set of
data (e.g. an update to a row), no replication/distribution happens across the cluster. Instead, a
notification is sent to all nodes when data changes, causing them to evict their stale copies of the
updated entry (if any).
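
Remember that Entities only land in this cache if the application opts in to second-level caching; a
minimal persistence.xml sketch using standard JPA/Hibernate settings (the persistence-unit name
is illustrative):

```xml
<persistence-unit name="myPU">
    <!-- cache only entities annotated with @Cacheable -->
    <shared-cache-mode>ENABLE_SELECTIVE</shared-cache-mode>
    <properties>
        <property name="hibernate.cache.use_second_level_cache" value="true"/>
    </properties>
</persistence-unit>
```

With ENABLE_SELECTIVE, each entity that should be cached must also carry the javax.persistence.Cacheable annotation.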

By using invalidation, you can achieve the following benefits:

• Each cluster node looks up changes only when needed (e.g. when the application requests a
data fetch which has been marked as dirty).

• The network traffic does not lead to network congestion, as the invalidation messages carry a
very small data payload.

Invalidation messages are sent synchronously, so that an ack is expected from other nodes:

/subsystem=infinispan/cache-container=hibernate/invalidation-cache=entity/:read-
attribute(name=mode)
{
  "outcome" => "success",
  "result" => "SYNC"
}

In order to tune the hibernate cache to your needs, it is crucial to learn the settings related to
eviction, expiration and locking.

13.7.1.1. Configuring eviction for hibernate cache

Entities which are referenced in the hibernate cache are, by default, evicted using the LRU strategy
when the number of entries exceeds the max-entries limit of 10000 units:

/subsystem=infinispan/cache-container=hibernate/invalidation-
cache=entity/component=eviction/:read-resource(recursive=false)
{
  "outcome" => "success",
  "result" => {
  "max-entries" => 10000L,
  "strategy" => "LRU"
  }
}

Obviously, if you increase the max-entries parameter you will have higher chances to recover
Entities from the cache (reducing database round trips); this comes, however, at the cost of the
additional memory required to store your data. Here is how you can double the number of Entities
stored in the cache:

/subsystem=infinispan/cache-container=hibernate/invalidation-
cache=entity/component=eviction/:write-attribute(name=max-entries,value=20000)

13.7.1.2. Configuring expiration for hibernate cache

The invalidation cache used by the hibernate Cache Container defines a default max-idle time for
cache entries. This means that, by default, Entities referenced in the cache will expire when idle for
a certain amount of time (100 seconds). The task that purges expired entries, by default, wakes
up every 60 seconds:

/subsystem=infinispan/cache-container=hibernate/invalidation-
cache=entity/component=expiration/:read-resource(recursive=false)
{
  "outcome" => "success",
  "result" => {
  "interval" => 60000L,
  "lifespan" => -1L,
  "max-idle" => 100000L
  }
}

There is no lifespan attached to Entities referenced in the cache; this means that Entities will not
be removed from the cache unless they are idle. You can combine both options, for example by
specifying a lifespan equal to 200000 ms:

/subsystem=infinispan/cache-container=hibernate/invalidation-
cache=entity/component=expiration/:write-attribute(name=lifespan,value=200000)

13.7.1.3. Configuring locking for hibernate cache

Concurrent access to Entities stored in the cache is a key point for applications using database
storage. The default isolation level for the hibernate cache is READ_COMMITTED, which matches
the default isolation level of most databases, like Oracle or PostgreSQL:

/subsystem=infinispan/cache-container=hibernate/invalidation-
cache=entity/component=locking/:read-resource(recursive=false)
{
  "outcome" => "success",
  "result" => {
  "acquire-timeout" => 15000L,
  "concurrency-level" => 1000,
  "isolation" => "READ_COMMITTED",
  "striping" => false
  }
}

It would make sense to configure REPEATABLE_READ in case the application evicts/clears entities
from the Hibernate Session and then expects to repeatably re-read them in the same transaction.
Here is how you can alter the isolation level to use REPEATABLE_READ:

/subsystem=infinispan/cache-container=hibernate/invalidation-
cache=entity/component=locking/:write-attribute(name=isolation,value=REPEATABLE_READ)

13.7.1.4. Configuring Hibernate cache for queries

Besides the hibernate invalidation cache, there is another cache, named local-query, that is
used for caching Hibernate/JPA queries. By default, the query cache is configured so that queries
are only cached locally.
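
The query cache also has to be enabled on the application side; a sketch using the standard
Hibernate property in persistence.xml:

```xml
<!-- enables the Hibernate/JPA query cache for this persistence unit -->
<property name="hibernate.cache.use_query_cache" value="true"/>
```

Individual queries must then be marked as cacheable, for example through the
org.hibernate.cacheable JPA query hint.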

Alternatively, you can configure Hibernate/JPA query caching to use replication, if a set of
conditions is met:

• The queries used are quite expensive

• The queries are very likely to be repeated in different cluster nodes

• The queries are unlikely to be invalidated out of the cache (Note: Hibernate must aggressively
invalidate query results from the cache each time any instance of one of the Entity classes
involved in the query’s WHERE clause changes. All such query results are invalidated, even if
the change made to the Entity instance would not have affected the query result)

If you want to switch to a replicated cache for queries, you need to first create the replicated-
cache element in the hibernate Cache Container:

/subsystem=infinispan/cache-container=hibernate/replicated-cache=query-
replicated/:add(mode=ASYNC)

Once done, switch the default hibernate cache to use the replicated-cache as follows:

/subsystem=infinispan/cache-container=hibernate/:write-attribute(name=default-
cache,value=query-replicated)

13.7.1.5. Configuring the Timestamp cache

The last cache that is included in the "hibernate" Cache Container is named timestamp-cache. The
timestamp-cache keeps track of the last update timestamp for each table (this timestamp is updated
for any table modification). Any time the query cache is checked for a query, the timestamp-cache is
checked for all tables in the query. If the timestamp of the last update on a table is greater than the
time the query results were cached, the entry is removed and the lookup is a miss.

By default, the timestamps cache is configured with an asynchronous operation mode. Since all
nodes of the cluster must store all the timestamps relative to table changes, local or invalidation
cluster modes are not allowed. For the same reason, no eviction or expiration is allowed for the
timestamps cache either.

You can however vary the default operation mode. Here is how to change it to be synchronous:

/subsystem=infinispan/cache-container=hibernate/replicated-cache=timestamps/:write-
attribute(name=mode,value=SYNC)

When using the SYNC operation mode, the remote-timeout parameter comes into play, as it sets a
timeout used for waiting for the acknowledgment, after which the call is aborted and an exception
is thrown:

/subsystem=infinispan/cache-container=hibernate/replicated-cache=timestamps/:write-
attribute(name=remote-timeout,value=17500)

14. Chapter 14: Load balancing applications
This chapter discusses the other key aspect of clustering, which is load balancing. As the
point of access to a Java EE application is traditionally the web tier, we will mostly cover how to
balance requests across your Web applications using a set of software components. Later on, we
will also show how to balance requests for EJB applications. Here is in detail what we will discuss:

• At first, we will learn how to configure and install the Apache Tomcat mod_jk, which was the de
facto load balancer solution in earlier application server versions and can still be used in
WildFly clusters.

• Next, we will learn how to configure and install the mod_cluster library, which buys you
additional dynamic capabilities.

• In the last part of this chapter, we will shortly review the configuration needed for balancing
remote calls to clustered Enterprise Java Beans.

14.1. Configuring Apache mod_jk


Mod_jk has been, in past years, the most-used solution for fronting JBoss AS with the Apache web
server. All requests first come to the Apache web server. The Apache web server accepts and
processes any static resource requests, such as requests for HTML pages or graphical images. Then,
with the help of mod_jk, the Apache web server redirects requests for any JSP or Servlet component
to one or more JBoss Web server instances.

The main advantage of continuing to use this library is that it has been solidly tested in production
in countless projects and that, although it lacks dynamic capabilities (as we will see in a minute),
for a simple and static cluster configuration it is just what you need to get running.

14.1.1. Configuring Apache Web server side

In order to install mod_jk, first download the latest stable Apache mod_jk connector from
http://tomcat.apache.org/download-connectors.cgi. Once the download completes, copy the
connector to the modules folder of your Apache 2 distribution:

$ cp mod_jk.so $APACHE_HOME/modules

Mod_jk configuration will be stored in a separate file, hence include this line in your httpd.conf:

Include conf/mod-jk.conf

Now create the file mod-jk.conf in your Apache configuration folder. This file will contain the
mod_jk configuration including the web context we are going to route from Apache to WildFly.

LoadModule jk_module modules/mod_jk.so

# Where to find workers.properties
JkWorkersFile conf/workers.properties

# Where to put jk logs
JkLogFile logs/mod_jk.log

# Set the jk log level
JkLogLevel info

# Mount your applications
JkMount /myapp/* loadbalancer

JkShmFile logs/jk.shm

You can download the above script from here: http://bit.ly/2G1GfVH

Here is a description for the most important settings:

The LoadModule directive references the mod_jk library you have downloaded. The path must match
the location where you copied the library, relative to the Apache root (here, the modules directory).

The JkMount directive tells Apache which URLs it should forward to the mod_jk module. In the
above file, all requests with URL path /myapp/* are sent to the mod_jk load-balancer.
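The workers.properties file (shown below) also defines a status worker; to actually reach its management page you would add one more mount in mod-jk.conf. The /jkstatus path here is an assumption, pick any path you like:

```apache
# Expose the mod_jk status worker, useful for inspecting balancer state
JkMount /jkstatus status
```

The status page shows the state of each node and lets you temporarily disable a member of the farm.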

The JkWorkersFile directive, in turn, references the cluster configuration and thus contains the
(static) list of nodes that are part of the Web farm. The worker file, named workers.properties, follows:

worker.list=loadbalancer,status

# Define Node1

worker.node1.port=8009
worker.node1.host=localhost
worker.node1.type=ajp13
worker.node1.lbfactor=1

# Define Node2

worker.node2.port=8159
worker.node2.host=localhost
worker.node2.type=ajp13
worker.node2.lbfactor=1

# Load-balancing behavior

worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1
worker.status.type=status

The above configuration can be used, for example, on a cluster of nodes running on localhost with a
port offset of 150 for the second node (8009 + 150 = 8159). Hint: you can use a Domain mode
configuration with the ha/full-ha profile to quickly test this example.

You can download the above script from here: http://bit.ly/2IwAnSV
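As an alternative to the Domain mode hint, you could start the same two-node layout in standalone mode along these lines (the JBOSS_HOME paths and node names are assumptions; run each command in its own terminal):

```shell
# Node 1: default ports, so AJP listens on 8009
$JBOSS_HOME/bin/standalone.sh -c standalone-ha.xml -Djboss.node.name=node1

# Node 2: port offset of 150, so AJP listens on 8009 + 150 = 8159
$JBOSS_HOME/bin/standalone.sh -c standalone-ha.xml -Djboss.node.name=node2 \
  -Djboss.socket.binding.port-offset=150
```

The two offsets must match the ports declared for node1 and node2 in workers.properties.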

14.1.2. Configuring WildFly to receive AJP requests

Done with mod_jk, let's now move on to the WildFly configuration. First of all, check that your
current configuration of WildFly includes an AJP listener:

/subsystem=undertow/server=default-server/:read-resource(recursive=false)
{
  "outcome" => "success",
  "result" => {
  "default-host" => "default-host",
  "servlet-container" => "default",
  "ajp-listener" => {"ajp" => undefined},
  "host" => {"default-host" => undefined},
  "http-listener" => {"default" => undefined},
  "https-listener" => undefined
  }
}

In our case the ajp-listener is included in the configuration. If the ajp-listener does not show
up in your configuration, you can add it as follows:

/subsystem=undertow/server=default-server/ajp-listener=default-ajp:add(socket-
binding=ajp)

With the above configuration, all incoming requests matching the JkMount directive in the Apache
configuration will be transparently routed to WildFly.
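To verify the routing end to end, deploy any application under the /myapp context and request it through Apache rather than directly. The hosts and ports below are assumptions for a default local setup (Apache on port 80, first WildFly node on 8080):

```shell
# Through Apache: mod_jk forwards the request over AJP to a worker
curl -I http://localhost/myapp/

# Directly against the first WildFly node, for comparison
curl -I http://localhost:8080/myapp/
```

Both requests should return the same status code; if the first one returns a 404 from Apache itself, re-check the JkMount pattern and the workers file.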

14.2. Configuring mod_cluster


Mod_cluster is an HTTP-based load balancer which, like mod_jk, can be used to forward requests
to a set of application server instances. The key difference compared with mod_jk is that the
communication channel works in the opposite direction: instead of a single one-way connection
from the Web server to the application server, mod_cluster uses a back channel from the backend
server to httpd. As the information is pushed from the server side, it can carry critical
information such as cluster node life-cycle events and load balancing data. This in turn brings
the following benefits:

In terms of configuration:

• The httpd side does not need to know cluster topology in advance, so the configuration is
dynamic and not static

• As a consequence you need very little configuration on the httpd side

In terms of load balancing:

• Load balancing is improved, as the main calculations are done on the backend servers, where
more information is available

• You get fine-grained control over the web application lifecycle

The simplest and recommended option is to use Undertow as the load balancer in front of your
cluster. The next section will show how to do it.

14.2.1. Undertow as mod_cluster Front end

One of the new features of the application server (available since WildFly 9) is the ability to act as a
mod_cluster based front-end for your cluster. This will remove the need to use a native Web server
like Apache (with mod_cluster libs installed on it) as load balancer to a cluster of WildFly servers.

Let’s see how we can set it up practically:

293
So, from one side we have a WildFly server which acts as front-end, configured to route requests
using the Mod_cluster Management Protocol (MCMP). On the backend, we have the regular WildFly
cluster running the ha or full-ha profile. The following are the steps to configure the Front end and
the Back end.

14.2.1.1. Configuring the Back end

Back end servers obviously require an ha or full-ha profile for your cluster to work. For
example:

$ ./standalone.sh -c standalone-ha.xml

Next, make sure that clustering is activated by deploying a cluster-aware application on it:

[standalone@localhost:9990 /] deploy web-cluster-demo.war
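To confirm that the node has actually joined a cluster, you can also read the current JGroups view from the CLI. This is a sketch: the channel name "ee" is the default in recent WildFly versions, so adjust it if your configuration differs:

```
/subsystem=jgroups/channel=ee:read-attribute(name=view)
```

The result should list all cluster members; a view with a single member usually means multicast discovery is not working.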

14.2.1.2. Configuring the Front end

WildFly includes in its domain.xml configuration file a server profile named "load-balancer", which
is a minimal server configuration featuring an Undertow Web server. For the standalone server, this
profile is contained in the file standalone-load-balancer.xml. Let's start a WildFly server which uses
the load-balancer profile and is bound to the address 192.168.10.1:

$ ./standalone.sh -c standalone-load-balancer.xml -Djboss.bind.address=192.168.10.1

You should be able to see from your server's console that all Web contexts found in the cluster
have been registered by the load balancer:

17:43:17,274 INFO [io.undertow] (default task-1) UT005045: Registering context /, for
node fedora
17:43:17,276 INFO [io.undertow] (default task-1) UT005045: Registering context
/wildfly-services, for node fedora
17:47:27,149 INFO [io.undertow] (default task-1) UT005045: Registering context /web-
cluster-demo, for node fedora

In terms of configuration, as you can see from the following snippet, an Undertow filter is used
to dispatch the incoming traffic to the cluster of application servers through the MCMP protocol:

<subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" statistics-enabled="${wildfly.undertow.statistics-enabled:${wildfly.statistics-enabled:false}}">
  <buffer-cache name="default"/>
  <server name="default-server">
  <http-listener name="default" socket-binding="http" redirect-socket="https"
enable-http2="true"/>
  <http-listener name="management" socket-binding="mcmp-management" enable-
http2="true"/>
  <host name="default-host" alias="localhost">
  <filter-ref name="load-balancer"/>
  </host>
  </server>
  <servlet-container name="default"/>
  <filters>
  ①
  <mod-cluster name="load-balancer" management-socket-binding="mcmp-management"
advertise-socket-binding="modcluster" enable-http2="true" max-retries="3">
  <single-affinity/>
  </mod-cluster>
  </filters>
</subsystem>

① The Undertow filter balances the incoming requests through the advertising mechanism of
mod_cluster.
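Once the backend nodes have registered, you can inspect the balancer's runtime view of nodes and contexts from the front end CLI. The resource path follows the configuration shown above; the exact runtime children may vary across WildFly versions:

```
/subsystem=undertow/configuration=filter/mod-cluster=load-balancer:read-resource(include-runtime=true,recursive=true)
```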

14.2.1.3. Testing Undertow’s mod_cluster load balancer

Assuming that your front-end WildFly is bound to the address 192.168.10.1, you can reach the
web-cluster-demo application through the load balancer at http://192.168.10.1:8080/web-cluster-demo.
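A quick way to observe sticky routing from a client is to look at the route suffix the backend appends to the session cookie. This is a sketch: the address and context come from the example above, and a session cookie is only set if the application actually creates an HTTP session:

```shell
# The JSESSIONID value ends with ".<node-name>", revealing which backend served us
curl -s -D - -o /dev/null http://192.168.10.1:8080/web-cluster-demo/ | grep -i '^set-cookie'
```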
14.2.2. Manually configuring the Undertow filter

If you are using a version of WildFly prior to 10.1, you will need to manually add the Undertow
filter, which will use mod_cluster’s advertise ports (port=23364, multicast-address=224.0.1.105).
Here is the batch script we will need to execute on the WildFly front end CLI:

batch

/subsystem=undertow/configuration=filter/mod-cluster=modcluster:add(management-socket-
binding=http,advertise-socket-binding=modcluster)

/subsystem=undertow/server=default-server/host=default-host/filter-ref=modcluster:add

# The following is needed only if you are not running an ha profile!

/socket-binding-group=standard-sockets/socket-binding=modcluster:add(port=23364,
multicast-address=224.0.1.105)

run-batch

You can download the above script from here: http://bit.ly/2FZR8aS

Now reload your configuration for the changes to take effect.

[standalone@localhost:9990/] reload

As a result, the modcluster filter has been added to the default-host server:

<subsystem xmlns="urn:jboss:domain:undertow:7.0" default-server="default-server"
default-virtual-host="default-host" default-servlet-container="default" default-
security-domain="other">
  <buffer-cache name="default"/>
  <server name="default-server">
  <http-listener name="default" socket-binding="http" redirect-socket="https"/>
  <host name="default-host" alias="localhost">
  <location name="/" handler="welcome-content"/>
  . . . . .
  <filter-ref name="modcluster"/>
  </host>
  </server>
  <servlet-container name="default">
  <jsp-config/>
  <websockets/>
  </servlet-container>
  <handlers>
  <file name="welcome-content" path="${jboss.home.dir}/welcome-content"/>
  </handlers>
  <filters>
  <response-header name="server-header" header-name="Server" header-value=
"WildFly/9"/>
  <response-header name="x-powered-by-header" header-name="X-Powered-By" header-
value="Undertow/1"/>
  <mod-cluster name="modcluster" management-socket-binding="http" advertise-
socket-binding="modcluster"/>
  </filters>
</subsystem>

14.2.3. Advanced mod_cluster configuration

The configuration that we have learned so far assumes some defaults such as:

• A single mod_cluster configuration for your Back end of WildFly servers

• Web contexts enabled by default

• Sticky sessions

Although it might be perfectly fine to rely on these defaults, you should also be able to deal with
alternative scenarios. The following sections discuss them in detail.

14.2.4. Mod-Cluster Multiplicity

In the modcluster subsystem, a named proxy resource is coupled with an Undertow listener (and
server) by specifying load balancer discovery and a load balancer factor. WildFly 14 introduced the
concept of Mod-Cluster Multiplicity which can be used to achieve the following goals:

• You can have multiple modcluster configurations, each one associated with a different Undertow
server.

• You can have a single Undertow server which is registered with different groups of proxies.

Let’s see an example of a multi-proxy configuration for mod_cluster. In order to do that, we will
create another proxy element which references another socket binding multicast configuration:

/socket-binding-group=standard-sockets/socket-binding=modcluster-2:add(multicast-
address=224.0.1.106,multicast-port=23364)
/subsystem=modcluster/proxy=other:add(connector=default,advertise-socket=modcluster-2)

Here is the updated configuration, which shows the two named proxies, "default" and "other":

<subsystem xmlns="urn:jboss:domain:modcluster:4.0">
  <proxy name="default" advertise-socket="modcluster" listener="ajp">
  <dynamic-load-provider>
  <load-metric type="cpu"/>
  </dynamic-load-provider>
  </proxy>
  <proxy name="other" advertise-socket="modcluster-2" listener="default">
  <simple-load-provider/>
  </proxy>
</subsystem>

<socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
 . . . .
  <socket-binding name="modcluster" multicast-address=
"${jboss.modcluster.multicast.address:224.0.1.105}" multicast-port="23364"/>
  <socket-binding name="modcluster-2" multicast-address="224.0.1.106" multicast-
port="23364"/>

</socket-binding-group>

Besides multi-proxy support, it is also possible to specify additional Undertow servers (1:n) which
are available to one mod_cluster proxy.

In the following example, the default mod_cluster proxy references one Undertow listener ("ajp").
On the other hand, the other mod_cluster proxy references the "ajp-2" Undertow listener:

/socket-binding-group=standard-sockets/socket-binding=ajp-2:add(port=8010)
/subsystem=undertow/server=other-server:add
/subsystem=undertow/server=other-server/ajp-listener=ajp-2:add(socket-binding=ajp-2)
/subsystem=undertow/server=other-server/host=other-host:add(default-web-module=other.war)
/subsystem=undertow/server=other-server/host=other-host/location=other:add(handler=welcome-content)
/subsystem=undertow/server=other-server/host=other-host:write-attribute(name=alias,value=[localhost])
/subsystem=modcluster/proxy=other:add(connector=ajp-2,balancer=other-balancer,advertise-socket=modcluster)
reload

Here is the updated subsystem after the above CLI commands:

<subsystem xmlns="urn:jboss:domain:modcluster:4.0">
  <proxy name="default" advertise-socket="modcluster" listener="ajp">
  <dynamic-load-provider>
  <load-metric type="cpu"/>
  </dynamic-load-provider>
  </proxy>
  <proxy name="other" advertise-socket="modcluster" balancer="other-balancer"
listener="ajp-2">
  <simple-load-provider/>
  </proxy>
</subsystem>

<subsystem xmlns="urn:jboss:domain:undertow:7.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other">

  <server name="default-server">
  <ajp-listener name="ajp" socket-binding="ajp"/>
  <http-listener name="default" socket-binding="http" redirect-socket="
https" enable-http2="true"/>
  <https-listener name="https" socket-binding="https" security-realm=
"ApplicationRealm" enable-http2="true"/>
  <host name="default-host" alias="localhost">
  <location name="/" handler="welcome-content"/>
  <http-invoker security-realm="ApplicationRealm"/>
  </host>
  </server>
  <server name="other-server">
  <ajp-listener name="ajp-2" socket-binding="ajp-2"/>
  <host name="other-host" alias="localhost" default-web-module="other.war">
  <location name="other" handler="welcome-content"/>
  </host>
  </server>
</subsystem>

  <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}">
  <socket-binding name="ajp" port="${jboss.ajp.port:8009}"/>
  <socket-binding name="ajp-2" port="8010"/>
 </socket-binding-group>

14.2.4.1. How to configure mod_cluster to exclude a Web context

When using mod_cluster, by default all web contexts deployed on the cluster nodes are
accessible through the front end load balancer. You can disable this behavior by setting the
auto-enable-contexts attribute to false:

/subsystem=modcluster/mod-cluster-config=configuration/:write-attribute(name=auto-
enable-contexts,value=false)

Alternatively, you can selectively exclude some web contexts. For example, to exclude the
context named "mywebapp", the following operation would do it:

/subsystem=modcluster/mod-cluster-config=configuration/:write-attribute(name=excluded-
contexts,value="mywebapp")
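When auto-enable-contexts is set to false, newly deployed contexts start disabled and must be enabled explicitly at runtime. A sketch using the same mod-cluster-config path as above; verify the operation name and parameters against your WildFly version:

```
/subsystem=modcluster/mod-cluster-config=configuration:enable-context(context=/mywebapp, virtualhost=default-host)
```

A matching disable-context operation lets you take a context out of the balancer again without undeploying it.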

14.2.4.2. Configuring Sticky Sessions with mod_cluster

The term "sticky session" means routing a request for a particular session to the same physical
machine that serviced the first request for that session.

In a clustered WildFly environment, calls between nodes in the cluster are, by default, balanced
using the jvmRoute parameter, which is automatically generated for you at server startup.

Sticky sessions are enabled by default as you can see from the following CLI query:

/subsystem=modcluster/mod-cluster-config=configuration/:read-resource
{
  "outcome" => "success",
  "result" => {
  . . . . .
  "sticky-session" => true,
  "sticky-session-force" => false,
  "sticky-session-remove" => false,
  . . . .
  }

The other parameters which can vary the sticky session behavior are:

sticky-session-force: when set to "true", returns an error if the request cannot be routed according
to the jvmRoute; when set to "false", the request is routed to another node. Default: "false".

sticky-session-remove: when set to "true", session information is removed in case of failover.
Default: "false".

In the next section, we will see how to switch from the sticky session behavior to ranked load
balancing.

14.2.4.3. Configuring Ranked Loadbalancing

One of the new features included in WildFly 18 is the ability to load balance requests using a
ranked order of preference. The default session affinity algorithm in a WildFly cluster is set by the
"affinity" attribute:

/subsystem=undertow/configuration=filter/mod-cluster=load-balancer:read-resource()
{
  "outcome" => "success",
  "result" => {
  "advertise-frequency" => 10000,
  . . . .
  "affinity" => {"single" => undefined},
  "balancer" => undefined
  }
}

As you can see, by default web requests have an affinity for the member that last handled a given
session. This option corresponds to traditional sticky session behavior.

By using ranked session affinity WildFly will be able to annotate the JSESSIONID with multiple
routes, ranked in order of preference. Thus, if the primary owner of a given session is inactive, the
load balancer can attempt to route the request to the next route in the list. This ensures that
requests will be directed to the next best worker in the event that the primary owner is inactive,
and prevents requests from "spraying" across the cluster.

The load balancer must be explicitly configured to enable support for parsing ranked routing
through the following CLI command:

/subsystem=undertow/configuration=filter/mod-cluster=load-balancer/affinity=ranked:add

In this case, Web requests will have an affinity for the first available node in a list typically
comprised of: [primary owner, backup nodes, local node].
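Note that ranked routing also has a backend-side counterpart: the session manager on the cluster nodes must annotate the session cookie with the ranked routes, otherwise it will still carry a single route. On WildFly 18+ the corresponding command is along these lines (resource names assume the default distributable-web configuration; adjust them to your profile):

```
/subsystem=distributable-web/infinispan-session-management=default/affinity=ranked:add()
```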

14.2.4.4. Configuring Metrics

A key feature of mod_cluster is the ability to use server-side load metrics to determine how best to
balance requests. The built-in configuration of mod_cluster distributes HTTP requests based on the
CPU load on the nodes of the cluster:

/subsystem=modcluster/mod-cluster-config=configuration/dynamic-load-
provider=configuration/:read-resource(recursive=false)
{
  "outcome" => "success",
  "result" => {
  "decay" => 2,
  "history" => 9,
  "custom-load-metric" => undefined,
  "load-metric" => {"cpu" => undefined}
}

Formerly, the application server shipped with the metric 'busyness', which represented the thread
pool usage of the application server. This does not translate cleanly to the current Web server
(Undertow) architecture: the metric now represents the number of requests currently being
processed, and you are expected to set the capacity explicitly on this metric.

You can mix and match the metric types to achieve custom load balancing policies. The available
metrics are:

• cpu: metric based on CPU load

• mem: metric based on System memory usage

• heap: metric based on Heap memory usage as a percentage of max heap size

• sessions: metric based on the number of web sessions

• requests: metric based on the number of requests per second

• send-traffic: metric based on the amount of outgoing requests traffic

• receive-traffic: computes metric based on the amount of incoming request (POST) traffic

• busyness: computes metric based on the percentage of connector Threads from the Thread Pool
that are busy servicing requests.

• connection-pool: computes metric based on the percentage of connections from a JCA connection
pool that are in use.

To add new metrics you can use the Command Line Interface, or edit the XML configuration file
directly. For example, suppose we want to add a couple of dynamic metrics, based respectively on
CPU and heap memory:

/subsystem=modcluster/mod-cluster-config=configuration/dynamic-load-
provider=configuration/load-metric=cpu-metric/:add(type=cpu,capacity=1.0,weight=2)

/subsystem=modcluster/mod-cluster-config=configuration/dynamic-load-
provider=configuration/load-metric=heap-metric/:add(type=heap,capacity=1.0,weight=1)

The most important factors when computing load balancing are the weight and capacity
properties.

• The weight indicates the impact of a metric with respect to the other metrics. In the example
above, the cpu metric (weight 2) has twice the impact of the heap metric (weight 1).

• The capacity, on the other hand, can be used for a fine-grained control on the load metrics. By
setting a different capacity to each metric, you can actually favor one node instead of another
while preserving the metric weights.
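As a rough sketch of how weight and capacity combine into a single load value, consider a weight-weighted average of per-metric loads. This mirrors the general idea only; mod_cluster's real algorithm additionally applies the decay and history settings seen earlier:

```shell
# Two metrics, each producing a normalized load in [0,1]:
#   cpu  at 30% usage, weight 2
#   heap at 60% usage, weight 1
# The combined load is the weight-weighted average of the per-metric loads.
awk 'BEGIN {
  cpu_load = 0.30; cpu_weight = 2
  heap_load = 0.60; heap_weight = 1
  combined = (cpu_load * cpu_weight + heap_load * heap_weight) / (cpu_weight + heap_weight)
  printf "combined load: %.2f\n", combined   # prints: combined load: 0.40
}'
```

Raising a metric's capacity effectively scales its reported load down, which is how you can favor one node over another while keeping the weights unchanged.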

To make it easier to add new server-side metrics, you can use the CLI in graphical mode, which
greatly helps in choosing the metric type and its attributes.

14.2.4.5. Configuring Initial Load

Since WildFly 16, it is possible to configure a gradual load (0-99%) for individual nodes using the
initial-load parameter. This helps to avoid hitting mod_cluster nodes with full load from the
beginning. A value of 0 allows an immediate full load, while a value of -1 disables this behavior.
Here is how to set the initial-load to 50%:

/subsystem=modcluster/mod-cluster-config=configuration/dynamic-load-
provider=configuration:write-attribute(name=initial-load,value=50)

This parameter affects only the single cluster node. It does not propagate to the cluster;
therefore, if you want it cluster-wide, you have to execute the command on every node.

14.2.5. Configuring mod_cluster on Apache httpd

So far we have assumed that mod_cluster is used as a pure Java solution, running on top of a
WildFly server. It is, however, possible to run the front-end side of mod_cluster on top of Apache
httpd. Within Apache httpd, mod_cluster is implemented as a set of modules for httpd with
mod_proxy enabled. Much of the logic comes from mod_proxy; for instance, mod_proxy_ajp provides
all the AJP logic needed by mod_cluster.

The httpd modules for Apache are currently available for Windows at
https://github.com/modcluster/mod_cluster/releases. Fedora packages are available in the updates
repository for Fedora 28, 29 and 30. If you are running a different distribution, you have to build
the httpd modules from the source code (at the time of writing:
https://github.com/modcluster/mod_cluster/archive/1.3.11.Final.zip).

Assuming that you have downloaded or built the httpd mod_cluster libraries, the next step is to
copy the modules into the httpd modules folder:

cp mod_slotmem.so $HTTPD_HOME/modules/
cp mod_manager.so $HTTPD_HOME/modules/
cp mod_proxy_cluster.so $HTTPD_HOME/modules/
cp mod_advertise.so $HTTPD_HOME/modules/

Done with that, open the Apache configuration file (httpd.conf) and make sure the following
modules are included:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
LoadModule slotmem_module modules/mod_slotmem.so
LoadModule manager_module modules/mod_manager.so
LoadModule proxy_cluster_module modules/mod_proxy_cluster.so
LoadModule advertise_module modules/mod_advertise.so

The first two modules should already be part of your Apache configuration, so just make sure they
are not commented out. The other modules are the ones we have downloaded or built.

On the other hand, verify that the mod_proxy_balancer module is commented out; otherwise it
will conflict with the mod_cluster load balancer:

# LoadModule proxy_balancer_module modules/mod_proxy_balancer.so

With the modules in place, we will add the mod_cluster configuration, which is essentially broken
into two parts:

• One part that defines the virtual IP address and port to be used by Mod-Cluster Management
Protocol (MCMP). In the default bundle this uses the loopback address and port 6666

• One part which is used for the administration interface of mod_cluster

<IfModule manager_module>
  Listen 127.0.0.1:6666
  ManagerBalancerName mycluster
  <VirtualHost 127.0.0.1:6666>
  <Location />
  Order deny,allow
  Deny from all
  Allow from 127.0.0
  </Location>

  KeepAliveTimeout 300
  MaxKeepAliveRequests 0
  AdvertiseFrequency 5
  EnableMCPMReceive

  <Location /mod_cluster_manager>
  SetHandler mod_cluster-manager
  Order deny,allow
  Deny from all
  Allow from 127.0.0
  </Location>

  </VirtualHost>
</IfModule>

Here are some details about the core configuration parameters:

ManagerBalancerName: the name of the balancer to use when the application server does not
provide a balancer name (default: mycluster).

AdvertiseFrequency: the time between the multicast messages advertising the IP and port (default:
10 seconds).

EnableMCPMReceive: allows the VirtualHost to receive Mod_cluster Protocol Messages (MCPM)
from the nodes. You need one EnableMCPMReceive in your VirtualHost configuration for
mod_cluster to work.

Restart Apache httpd for the changes to take effect:

$ sudo /opt/jboss/httpd/sbin/apachectl restart

If you followed these guidelines, chances are that you are happily running mod_cluster; just in
case you are not that lucky, the troubleshooting section later in this chapter covers the most
common issues.

14.2.5.1. Using a static list of httpd proxies

By default, the list of httpd proxies is dynamically generated by mod_cluster by means of the
advertisement mechanism. If multicast is not available, or server advertisement is disabled, the
application server will not be able to discover the Apache httpd server running the mod_cluster
libraries.

The key attribute that you need to set is proxy-list, which contains the comma-separated list of
httpd proxies. In the following example, we assume that the httpd proxy is bound to the IP address
192.168.10.1 and port 6666:

/subsystem=modcluster/mod-cluster-config=configuration/:write-attribute(name=proxy-
list,value=192.168.10.1:6666)

14.2.6. Troubleshooting mod_cluster

The first and most obvious thing you should check is that you don't have a firewall preventing
your multicast messages from being received. The ports to investigate are UDP port 23364 and the
multicast address 224.0.1.105.

If you are running a Linux/Unix box, it is likely that you have stricter security requirements, so
chances are that iptables or SELinux is blocking your messages. You have several options; start by
disabling (as root) iptables to see if that is the problem:

$ sudo /etc/init.d/iptables stop

Next, we will weaken the SELinux security policy by setting it to permissive mode. In Fedora and
Red Hat Enterprise Linux, edit (as root) the file /etc/selinux/config and look for the following line:

SELINUX=enforcing

This needs to be set to permissive in order to enable traffic from other machines:

SELINUX=permissive

Reboot your machine for the change to take effect. If mod_cluster works with the permissive
policy, you can then refine the security policies further so that you only enable mod_cluster and
WildFly traffic:

/sbin/iptables -I INPUT 5 -p udp -d 224.0.1.0/24 -j ACCEPT -m comment --comment "mod_cluster traffic"
/sbin/iptables -I INPUT 6 -p udp -d 224.0.0.0/4 -j ACCEPT -m comment --comment "WildFly Cluster traffic"

Additionally, you also need to allow intra-cluster communication. For example, supposing your
cluster nodes are bound to the 192.168.1.0/24 subnet and are using the (default) UDP stack:

/sbin/iptables -I INPUT 9 -p udp -s 192.168.1.0/24 -j ACCEPT -m comment --comment "cluster subnet for inter-node communication"

14.2.6.1. Check multicast communication

If your firewall rules are properly configured, you should then verify that multicast is working
correctly. That can be done by starting a multicast test chat, which is part of the JGroups
distribution:

cd modules/system/layers/base/org/jgroups/main

Now execute the McastReceiverTest class passing as argument the multicast address and port:

java -classpath jgroups-4.1.6.Final.jar org.jgroups.tests.McastReceiverTest -mcast_addr 224.0.1.105 -port 23364

Once the receiver has started, start the McastSenderTest class from another shell:

java -classpath jgroups-4.1.6.Final.jar org.jgroups.tests.McastSenderTest -mcast_addr 224.0.1.105 -port 23364

The McastSenderTest will start a prompt where you can type in messages. If multicast is working,
you should see the messages printed in the McastReceiverTest window.

If the multicast communication test fails, chances are that your platform does not support
multicast; this is a known issue on some Windows systems such as Vista. In such a scenario, you
should stick to the static configuration detailed in the section [Using a static list of httpd proxies].

14.2.6.2. Switch additional display

If you are running Apache httpd as the front end to your cluster, you can enable a debugging
option in your configuration by setting AllowDisplay to on:

AllowDisplay On

With this option on, you will see details about the individual modules that are needed to run
mod_cluster, so you can identify the potential source of an issue.

14.3. Load balancing EJB clients
Since WildFly 11, the recommended client configuration file is wildfly-config.xml, which contains
information about both the remote endpoints and the authentication rules. Within this file, you can
specify any of the servers of the cluster; the client will automatically fail over to another server
node, picked from the cluster view. Here is a sample wildfly-config.xml:

<configuration>
  <authentication-client xmlns="urn:elytron:1.0">
  <authentication-rules>
  <rule use-configuration="default"/>
  </authentication-rules>
  <authentication-configurations>
  <configuration name="default">
  <sasl-mechanism-selector selector="DIGEST-MD5"/>
  <set-user-name name="ejbuser"/>
  <credentials>
  <clear-password password="Password1!"/>
  </credentials>
  </configuration>
  </authentication-configurations>
  </authentication-client>
  <jboss-ejb-client xmlns="urn:jboss:wildfly-client-ejb:3.0">
  <connections>
  <connection uri="remote+http://127.0.0.1:8080"/>
  </connections>
  </jboss-ejb-client>
</configuration>

Within this file, the connections element specifies the EJB client connections; it can contain any
number of <connection /> elements.

You can download the above example wildfly-config.xml from here: http://bit.ly/2FQPdC0
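The client picks wildfly-config.xml up automatically when the file is on its classpath; alternatively, you can point the client JVM at it explicitly with the wildfly.config.url system property. The jar names and main class below are placeholders:

```shell
java -Dwildfly.config.url=file:///path/to/wildfly-config.xml \
     -cp wildfly-client-all.jar:myclient.jar com.example.ejb.ClientMain
```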

Before WildFly 11, the recommended way to configure load balancing for remote EJB applications
was by means of the file jboss-ejb-client.properties (which must be available in the client
application's classpath), used to specify the list of server nodes where the clustered EJBs are
available. This file contains the list of nodes that will be used to route the client requests.

Supposing that we have started a cluster of two nodes, reachable at 192.168.10.1 and 192.168.10.2
on the default port 8080, here is a suggested jboss-ejb-client.properties:

remote.clusters=ejb
remote.cluster.ejb.connect.options.org.xnio.Options.SASL_POLICY_NOANONYMOUS=false
remote.cluster.ejb.connect.options.org.xnio.Options.SSL_ENABLED=false
username=ejbuser
password=ejbpassword
remote.connectionprovider.create.options.org.xnio.Options.SSL_ENABLED=false
remote.connections=one,two
remote.connection.one.host=192.168.10.1
remote.connection.one.port=8080
remote.connection.two.host=192.168.10.2
remote.connection.two.port=8080

Behind the scenes, the EJB client API project does the necessary plumbing to request that Undertow
switch to the JBoss Remoting protocol while communicating on that HTTP port. Undertow thus
transparently switches the protocol to Remoting, and the rest of the communication happens as if it
were the usual invocation on the Remoting port.

15. Chapter 15: Securing WildFly with
Elytron
WildFly 11 introduced Elytron as the new single unified framework that can manage and
configure security for both standalone servers and managed domains. Elytron can also be used to
configure security access for the applications deployed on WildFly.

Although you are still able to run your existing server configurations and deployments that use the
legacy security subsystem (which is based on PicketBox), you are encouraged to migrate to Elytron.
The WildFly Elytron project provides a single unified security framework across the entire
application server. As a single framework, it is usable both for configuring management access to
the server and for applications installed on the server; there is no need to learn a different
security framework for host controllers in a domain compared to configuring a standalone server.

The project covers these main areas:

• Authentication and Authorization

• SSL / TLS

• Secure Credential Storage

Before diving into each of these areas, we will introduce the building blocks of Elytron.

15.1. Elytron building blocks


The building blocks of the elytron subsystem are still called Security Realms and Security
Domains, just like in the older legacy subsystem. Other key components of the architecture are
Mappers and Authentication Factories.

Let’s see more in detail the role of each component:

• Security Domain: this is the entry point to all security operations available in your server
infrastructure. It contains a high-level view of the security policies and resources associated with
your IT domain. An Elytron domain can be composed of a single application or multiple
applications which share the same security policies. In terms of responsibilities, a security
domain is in charge of authenticating and authorizing the principals that access those
applications, delegating to its security realms where needed.

• Security Realm: this component encapsulates and integrates the application server with an
identity store or repository (such as an LDAP server or a database). Hence, it is mainly
responsible for obtaining or verifying credentials, obtaining the attributes associated with a
given identity and, last but not least, creating an internal representation based on this
information that will be used by a security domain to authenticate and perform role and
permission mappings for a given principal. A security realm can be compared to the legacy
login modules; unlike the JAAS login modules, however, Elytron also provides the concept of
modifiable realms, which are capable of performing basic write operations
against a specific repository.

 The relation between Security Domain and Security Realms is 1:n .

In fact, a single Security Domain can reference multiple Security Realms and can
manage multiple identities that are stored in different repositories. Elytron identifies
which realm a principal belongs to by using a Mapper.

• Authentication Factory: This component handles authentication and role mapping for the
Security Domain. Two core Authentication Factories are provided out of the box:

• HTTP Authentication Factory: which is obviously used for Web applications performing HTTP
Authentication

• SASL Authentication Factory: which is used for other network protocols, including standard
protocols such as LDAP, IMAP, etc., but also JBoss Remoting which is the EJB primary transport.

• Realm Mapper: this is a key element of Elytron authentication: it takes the principal being
authenticated and maps it to a realm name, which identifies the Security Realm to use to load
the identity. The configuration is inspected for the first realm mapper that can
be found. If a RealmMapper is identified but returns null when mapping the
principal, the default-realm specified on the Security Domain is used instead. If no
RealmMapper is available, the default-realm on the Security Domain is used as well.
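For example, a constant-realm-mapper always maps authenticated principals to the same realm. A sketch (the mapper name my-realm-mapper is illustrative) could be:

```
/subsystem=elytron/constant-realm-mapper=my-realm-mapper:add(realm-name=ApplicationRealm)
/subsystem=elytron/security-domain=ApplicationDomain:write-attribute(name=realm-mapper,value=my-realm-mapper)
```

The second command attaches the mapper to the Security Domain through its realm-mapper attribute.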

15.1.1. Default Security Domain and Security Realms

Out of the box, WildFly ships with two default Security Domains and Security Realms.

• The ApplicationDomain Security Domain uses ApplicationRealm and groups-to-roles for
authentication. It also uses default-permission-mapper to assign the login permission.

• The ManagementDomain Security Domain uses two security realms for authentication:
ManagementRealm with groups-to-roles and local with super-user-mapper. It also uses default-
permission-mapper to assign the login permission.

Here is the default Security Domain configuration:

 <security-domains>
   <security-domain name="ApplicationDomain" default-realm="ApplicationRealm" permission-mapper="default-permission-mapper">
     <realm name="ApplicationRealm" role-decoder="groups-to-roles"/>
     <realm name="local"/>
   </security-domain>
   <security-domain name="ManagementDomain" default-realm="ManagementRealm" permission-mapper="default-permission-mapper">
     <realm name="ManagementRealm" role-decoder="groups-to-roles"/>
     <realm name="local" role-mapper="super-user-mapper"/>
   </security-domain>
   . . . . .
 </security-domains>

• The ApplicationRealm security realm is a properties realm that authenticates principals using
application-users.properties and assigns roles using application-roles.properties. These files
are located under jboss.server.config.dir, which by default maps to
$JBOSS_HOME/standalone/configuration. They are also the same files used by the legacy security
default configuration.

• The ManagementRealm security realm is a properties realm that authenticates principals
using mgmt-users.properties and assigns roles using mgmt-groups.properties. These files are
located under jboss.server.config.dir, which by default maps to
$JBOSS_HOME/standalone/configuration. They are also the same files used by the legacy security
default configuration.
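The values in these files are not clear text passwords: the add-user script stores HEX(MD5("username:realm:password")), the pre-computed form used by the HTTP Digest and SASL DIGEST-MD5 mechanisms. A small sketch of that computation (the class and method names here are ours, not part of WildFly):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class PropertiesHash {

    // Value stored in mgmt-users.properties / application-users.properties:
    // username=HEX( MD5( "username:realm:password" ) )
    static String digest(String user, String realm, String password) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            byte[] hash = md.digest((user + ":" + realm + ":" + password)
                    .getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : hash) {
                sb.append(String.format("%02x", b & 0xff));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        // One line ready to paste into mgmt-users.properties
        System.out.println("admin=" + digest("admin", "ManagementRealm", "mypassword"));
    }
}
```

The printed line can be pasted into mgmt-users.properties as-is.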

The following are the Security Realms available in the default configuration of WildFly:

<security-domains>
  . . . . .
</security-domains>
<security-realms>
  <identity-realm name="local" identity="$local"/>
  <properties-realm name="ApplicationRealm">
    <users-properties path="application-users.properties" relative-to="jboss.server.config.dir" digest-realm-name="ApplicationRealm"/>
    <groups-properties path="application-roles.properties" relative-to="jboss.server.config.dir"/>
  </properties-realm>
  <properties-realm name="ManagementRealm">
    <users-properties path="mgmt-users.properties" relative-to="jboss.server.config.dir" digest-realm-name="ManagementRealm"/>
    <groups-properties path="mgmt-groups.properties" relative-to="jboss.server.config.dir"/>
  </properties-realm>
</security-realms>

15.2. How to enable Elytron for Authentication


The elytron subsystem is included by default in all server configurations. However, in order to
enable the application server to use Elytron security domains in its core subsystems (e.g. undertow,
ejb3, remoting, batch-jberet), you have to execute some CLI commands.

Within the $JBOSS_HOME/docs/examples folder you will find a CLI script named enable-elytron.cli
which enables Elytron for all server configurations. You can run it from $JBOSS_HOME/bin as follows:

$ ./jboss-cli.sh --file=../docs/examples/enable-elytron.cli

If you want to enable Elytron only for some subsystems, open the CLI script and comment out the
commands for the subsystems you want to leave on the "legacy" security configuration.

Here is, for example, how to enable Elytron for all subsystems except ejb3:

embed-server --server-config=standalone.xml

/subsystem=undertow/application-security-domain=other:add(http-authentication-
factory=application-http-authentication)
#/subsystem=ejb3/application-security-domain=other:add(security-
domain=ApplicationDomain)
/subsystem=batch-jberet:write-attribute(name=security-domain, value=ApplicationDomain)

/subsystem=remoting/http-connector=http-remoting-connector:write-attribute(name=sasl-
authentication-factory, value=application-sasl-authentication)
/subsystem=remoting/http-connector=http-remoting-connector:undefine-
attribute(name=security-realm)

/core-service=management/access=identity:add(security-domain=ManagementDomain)
/core-service=management/management-interface=http-interface:write-
attribute(name=http-upgrade,value={enabled=true, sasl-authentication-
factory=management-sasl-authentication})
/core-service=management/management-interface=http-interface:write-
attribute(name=http-authentication-factory,value=management-http-authentication)
/core-service=management/management-interface=http-interface:undefine-
attribute(name=security-realm)
/core-service=management/security-realm=ManagementRealm:remove
/core-service=management/security-realm=ApplicationRealm/authentication=local:remove
/core-service=management/security-
realm=ApplicationRealm/authentication=properties:remove
/core-service=management/security-
realm=ApplicationRealm/authorization=properties:remove

stop-embedded-server
reload

15.3. Elytron Realms


An Elytron Security Realm encapsulates and integrates the application server with an
identity store or repository (such as an LDAP server or a database). The following built-in
realms are available for authentication/authorization purposes:

• aggregate-realm: A realm definition that is an aggregation of two realms, one for the
authentication steps and one for loading the identity for the authorization steps.

• caching-realm: A realm definition that adds caching in front of another security realm. The
caching strategy is Least Recently Used: the least accessed entries are discarded when the
maximum number of entries is reached.

• custom-modifiable-realm: A custom realm configured as modifiable is expected to implement
the ModifiableSecurityRealm interface. By configuring a realm as modifiable, management
operations are made available to manipulate the realm.

• custom-realm: A custom realm definition can implement either the SecurityRealm interface or
the ModifiableSecurityRealm interface. Regardless of which interface is implemented,
management operations will not be exposed to manage the realm. However, other services that
depend on the realm will still be able to perform a type check and cast to gain access to the
modification API.

• filesystem-realm: A simple security realm definition backed by the filesystem.

• identity-realm: A security realm definition where identities are represented in the
management model.

• jdbc-realm: A security realm definition backed by a database using JDBC.

• key-store-realm: A security realm definition backed by a keystore.

• ldap-realm: A security realm definition backed by LDAP.

• properties-realm: A security realm definition backed by properties files.

• token-realm: A security realm definition capable of validating and extracting identities from
security tokens.

• trust-managers: A trust manager definition for creating the TrustManager list used to
create an SSL context.
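As a quick illustration of one of these, a caching-realm can wrap a slower realm (such as an LDAP realm) with an LRU cache. A sketch, assuming a realm named demoLdapRealm already exists and with illustrative cache sizes:

```
/subsystem=elytron/caching-realm=cachedRealm:add(realm=demoLdapRealm,maximum-entries=256,maximum-age=60000)
```

Services that would otherwise reference demoLdapRealm directly can then reference cachedRealm instead.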

Let’s see some examples for the most common types of Realms.

15.3.1. Configuring a File System Security Realm

The most basic example of a security realm is the FileSystem Realm, which stores the identity
information on a filesystem, persisting each identity in an XML file containing its credentials and
Roles.
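For reference, such a per-identity XML file looks roughly like the following. This is an approximate sketch: the exact element names, attributes and schema version depend on the Elytron release, so inspect a generated file rather than writing one by hand:

```xml
<identity version="1.0">
    <credentials>
        <password algorithm="clear" format="base64">cGFzc3dvcmQxMjM=</password>
    </credentials>
    <attributes>
        <attribute name="Roles" value="Admin"/>
        <attribute name="Roles" value="Guest"/>
    </attributes>
</identity>
```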

Start the application server and connect to it from a CLI.

First of all, before starting the FileSystem Realm batch, we need to add a Simple Role Decoder
which maps the application Roles from the attribute Roles in the file system. It’s recommended to
use a conditional execution of this statement, in case the Role Decoder has already been created:

if (outcome != success) of /subsystem=elytron/simple-role-decoder=from-roles-attribute:read-resource
/subsystem=elytron/simple-role-decoder=from-roles-attribute:add(attribute=Roles)
end-if

Now let’s define a new filesystem-realm named demoFsRealm and its respective path on the file system:

batch

/subsystem=elytron/filesystem-realm=demoFsRealm:add(path=demofs-realm-users,relative-
to=jboss.server.config.dir)

Next, we will be adding some identities to the Realm:

/subsystem=elytron/filesystem-realm=demoFsRealm:add-identity(identity=frank)

/subsystem=elytron/filesystem-realm=demoFsRealm:set-
password(identity=frank,clear={password="password123"})

/subsystem=elytron/filesystem-realm=demoFsRealm:add-identity-
attribute(identity=frank,name=Roles,value=["Admin","Guest"])

Done with that, we will then create a new Security Domain which maps our Realm:

/subsystem=elytron/security-domain=fsSD:add(realms=[{realm=demoFsRealm,role-
decoder=from-roles-attribute}],default-realm=demoFsRealm,permission-mapper=default-
permission-mapper)

Now we have both a Realm and a Security Domain available and correctly mapped. As we will test
our Realm with a Web application, we need an Http Authentication Factory which references our
Security Domain:

/subsystem=elytron/http-authentication-factory=example-fs-http-auth:add(http-server-
mechanism-factory=global,security-domain=fsSD,mechanism-configurations=[{mechanism-
name=BASIC,mechanism-realm-configurations=[{realm-name=RealmUsersRoles}]}])

Finally, a Security Domain in the undertow’s subsystem will be associated with our Http
Authentication Factory:

/subsystem=undertow/application-security-domain=httpFsSD:add(http-authentication-
factory=example-fs-http-auth)

Run the above batch with the run-batch command and check that it executes successfully.

You can download the CLI script to create the Realm from here: http://bit.ly/2G3kK7a

15.3.1.1. Testing Elytron Security Realm

The above example uses an HTTP Authentication Factory to reference the Security Domain of
Elytron. Therefore, in order to test our File System Realm we will be using a simple Web
Application which contains a secured Servlet.

Using a Declarative approach to define the Roles Allowed to the Servlet, we will restrict access only
to users belonging to the "Admin" Role:

@WebServlet("/secure")
@ServletSecurity(@HttpConstraint(rolesAllowed = { "Admin" }))
public class SecuredServlet extends HttpServlet {
. . .
}

Finally, we will link the Web application to our Security Domain by setting the security-domain
attribute in jboss-web.xml:

<jboss-web>
  <security-domain>httpFsSD</security-domain> ①
</jboss-web>

① This must match with undertow’s "application-security-domain"

Alternatively, you can specify a default security domain for all applications using the undertow
subsystem. This way you can avoid using jboss-web.xml to configure a security domain for an
individual application:

/subsystem=undertow:write-attribute(name=default-security-domain,value="httpFsSD")

You can find this example application on GitHub at: http://bit.ly/2Iu9W02

Once deployed, if you reach the "secure" Servlet you will be challenged with an HTTP Basic
Authentication Form. Enter one of the Identities you have configured to display the Servlet output.

15.3.1.2. Using other options for storing the password

In our simple example, we have used a clear text password to store our identities. However,
several other algorithms are available:

• bcrypt: A password using the Bcrypt algorithm.

• clear: A password in clear text.

• simple-digest: A simple digest password.

• salted-simple-digest: A salted simple digest password.

• digest: A digest password.

• otp: A one-time password, used by the OTP SASL mechanism.

So, for example, if you want to use an MD5 digest password instead of a clear text password,
you would set your identity’s password as follows:

/subsystem=elytron/filesystem-realm=demoFsRealm:set-
password(identity=frank,digest={algorithm=digest-
md5,password="password123",realm=demoFsRealm})

15.3.1.3. Using Elytron in parallel with the Legacy security subsystem

As we have discussed, both the elytron and the legacy security subsystem can be configured in
WildFly 11 and later. This means that you can define authentication in both the elytron and legacy
security subsystems and use them in parallel.

If you use both jboss-web.xml and default-security-domain in the undertow subsystem, WildFly
will first try to match the configured security domain in the elytron subsystem. If a match is not
found, then WildFly will attempt to match the security domain with one configured in the legacy
security subsystem. If the elytron and legacy security subsystem each have a security domain with
the same name, the elytron security domain is used.

15.3.2. Converting legacy property files into Elytron FileSystemRealm

Since WildFly 16, the Elytron Tool (named elytron-tool.sh ) provides a command for converting
your legacy properties files into an Elytron FileSystemRealm. Let’s see how to use it to convert the
files users.properties and roles.properties available in the conf folder. Start by moving to the bin
folder of $JBOSS_HOME where the elytron-tool.sh is available. Then execute:

cd $JBOSS_HOME/bin

./elytron-tool.sh filesystem-realm -u conf/users.properties -r conf/roles.properties --output-location realms/FsRealmExample --summary -f demoFsRealm

WARNING: No name provided for security-domain, using default security-domain name for
conf/users.properties.
----------------------------------------------------------------------
Summary for execution of Elytron-Tool command FileSystemRealm
----------------------------------------------------------------------
Options were specified via CLI, converting single users-roles combination
Added roles: {admin} for user frank.
Added roles: {developer,tester} for user joe.
Configured script for WildFly named demoFsRealm.sh at /home/jboss/wildfly-
20.0.0.Final/bin/realms/FsRealmExample.
The script is using the following names:
Name of filesystem-realm: demoFsRealm
WARNING: No name provided for security-domain, using default security-domain name for
conf/users.properties.

In the above command, we have provided the users properties file (-u), the roles properties file
(-r), the output location where the FileSystemRealm is created (--output-location) and the realm
name (-f). The --summary option prints some details about the execution status.

As you can see from the summary, a default security-domain has been created for
 your realm. If you want to provide a custom name for it, use the option
--security-domain-name/-s <name>

The legacy property files in our example contained two users ("frank" and "joe"), therefore the
following structure will be created in your realm:

FsRealmExample/
├── demoFsRealm.sh
├── f
│   └── r
│   └── frank-MZZGC3TL.xml
└── j
  └── o
  └── joe-NJXWK.xml
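The suffix in each file name appears to be the unpadded Base32 encoding of the user name (MZZGC3TL decodes to "frank", NJXWK to "joe"). A small sketch reproducing the encoding (the class name is ours, and WildFly’s exact implementation may differ):

```java
import java.nio.charset.StandardCharsets;

public class FsRealmFileName {

    private static final String ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";

    // Unpadded RFC 4648 Base32 of the user name, as seen in the
    // <name>-<BASE32>.xml file names of a filesystem-realm.
    static String base32(String name) {
        byte[] in = name.getBytes(StandardCharsets.UTF_8);
        StringBuilder out = new StringBuilder();
        int buffer = 0, bits = 0;
        for (byte b : in) {
            buffer = (buffer << 8) | (b & 0xff);
            bits += 8;
            while (bits >= 5) {
                out.append(ALPHABET.charAt((buffer >> (bits - 5)) & 0x1f));
                bits -= 5;
            }
        }
        if (bits > 0) { // left-align the remaining bits into a final character
            out.append(ALPHABET.charAt((buffer << (5 - bits)) & 0x1f));
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(base32("frank")); // MZZGC3TL
        System.out.println(base32("joe"));   // NJXWK
    }
}
```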

As you can see, along with the XML files containing the Realm data, a shell script named
demoFsRealm.sh has also been created in the top folder. This script contains the CLI commands to
add the Realm and an Elytron security domain that references it:

/subsystem=elytron/filesystem-realm=demoFsRealm:add(path=/home/jboss/wildfly-
20.0.0.Final/bin/realms/FsRealmExample)
/subsystem=elytron/security-domain=converted-properties-security-
domain:add(realms=[{realm=demoFsRealm}],default-realm=demoFsRealm,permission-
mapper=default-permission-mapper)

15.3.2.1. Using the Elytron tool against a Descriptor File

In case you have multiple configuration files to be converted, it can be convenient to use a
descriptor file which contains the list of configuration files. This file can also include parameters
such as the output-location or the security-domain-name. See the following descriptor file:

users-file:conf/users-single.properties
roles-file:conf/roles-single.properties
output-location:realms/example-single
security-domain-name:my-security-domain

You can pass the descriptor file as argument to elytron-tool.sh via the -b parameter:

./elytron-tool.sh filesystem-realm -b conf/my-descriptor.properties --silent

15.3.3. Configuring a JDBC Realm

The Elytron JDBC Realm is a Java Database Connectivity-based (JDBC) based realm that supports
authentication and role mapping. You can use this login module if you have your username,

319
password and role information stored in a relational database, which is accessible by means of a
Datasource.

Prerequisites: a Datasource for connecting to the database. Follow the section Creating a Datasource
using the CLI to complete this step.

Once the above step is completed, you will have in your configuration a Datasource bound under the
JNDI name "java:/PostGreDS". Assuming the database is running as a Docker container with the ID
70c98059541b, we will connect to it this way:

$ sudo docker exec -it 70c98059541b /bin/bash

root@70c98059541b:/# su - postgres

postgres@70c98059541b:~$ psql postgres postgres

Now issue the following SQL commands, which create a table and insert one user (username
"admin", password "admin") bound to the "Admin" role:

CREATE TABLE USERS(login VARCHAR(64) PRIMARY KEY, password VARCHAR(64), role VARCHAR
(64));

INSERT into USERS (login,password,role) values('admin','admin','Admin');

Your Database configuration is completed.

In order to configure the Elytron JDBC Realm, let’s first make sure we have a Role Decoder for our
Roles:

if (outcome != success) of /subsystem=elytron/simple-role-decoder=from-roles-attribute:read-resource
/subsystem=elytron/simple-role-decoder=from-roles-attribute:add(attribute=Roles)
end-if

Next, we will define the JDBC Realm and a matching Security Domain in the elytron subsystem:

# Define the JDBC Realm
/subsystem=elytron/jdbc-realm=demoJdbcRealm:add(principal-query=[{sql="SELECT
password,role FROM USERS WHERE login=?",data-source=PostgrePool,clear-password-
mapper={password-index=1},attribute-mapping=[{index=2,to=groups}]}])

# Define the Security Domain
/subsystem=elytron/security-domain=jdbcSD:add(realms=[{realm=demoJdbcRealm,role-
decoder=groups-to-roles}],default-realm=demoJdbcRealm,permission-mapper=default-
permission-mapper)

Next we will create an HTTP Authentication Factory which we will bind to Undertow:

/subsystem=elytron/http-authentication-factory=db-http-auth:add(http-server-mechanism-
factory=global,security-domain=jdbcSD,mechanism-configurations=[{mechanism-
name=BASIC,mechanism-realm-configurations=[{realm-name=RealmUsersRoles}]}])

# Configure Undertow to use this Authentication Factory
/subsystem=undertow/application-security-domain=httpJdbcSD:add(http-authentication-
factory=db-http-auth)

As you can see, in the above example queries are defined using a principal-query element
containing the sql and data-source attributes. Both are mandatory, as they specify the SQL SELECT
statement and the datasource used to execute the query, respectively. The next element, the
clear-password-mapper, is responsible for mapping a specific column, or set of columns, to a
password.
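Note that principal-query accepts a list of queries, so credentials and roles can also be loaded from different tables. The following sketch (the realm name and the USER_ROLES table are hypothetical) splits the two lookups:

```
/subsystem=elytron/jdbc-realm=splitJdbcRealm:add(principal-query=[{sql="SELECT password FROM USERS WHERE login=?",data-source=PostgrePool,clear-password-mapper={password-index=1}},{sql="SELECT role FROM USER_ROLES WHERE login=?",data-source=PostgrePool,attribute-mapping=[{index=1,to=groups}]}])
```

Each query is executed with the principal name as its positional parameter, and the results are merged into the resulting identity.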

The full CLI Script is available at GitHub: http://bit.ly/2VDdSVy

Before testing the JDBC Realm, update the security-domain attribute in jboss-web.xml:

<jboss-web>
  <security-domain>httpJdbcSD</security-domain>
</jboss-web>

Here’s a tip to do it, using the deployment-overlay command. First create a new jboss-web.xml:

$ echo "<jboss-web><security-domain>httpJdbcSD</security-domain></jboss-web>" > /tmp/jboss-web.xml

Then connect to the CLI and create a deployment overlay to update the jboss-web.xml file:

deployment-overlay add --name=myAppOverlay --content=/WEB-INF/jboss-web.xml=/tmp/jboss-web.xml --deployments=demo-security.war --redeploy-affected

15.3.3.1. Alternative Password Mappers

The JDBC Realm we have used in the above examples relies on a clear-password-mapper, which
loads a clear text password directly from the database. The clear-password-mapper supports a
single attribute, password-index, which is the index of the column containing the clear text
password. In production environments, however, it is recommended to use a more secure password
mapper for storing your user credentials. The other available mappers are the following:

• bcrypt-password-mapper: The bcrypt-password-mapper can be used for passwords to be
loaded using the bcrypt algorithm; as this is an iterated salted password type, the iteration count
and salt are also loaded from the database query. The following attributes need to be set:

◦ password-index: The index of the column containing the encoded password.

◦ salt-index: The index of the column containing the encoded salt.

◦ iteration-count-index: The index of the column containing the iteration count.

• salted-simple-digest-mapper: The salted-simple-digest-mapper supports the password types
hashed with a salt as described in Salted Digest; for this type of password, the encoded form of
the password is loaded in addition to the salt.

◦ algorithm - The algorithm of the password type, the supported values are listed at Salted
Digest.

◦ password-index - The index of the column containing the encoded password.

◦ salt-index - The index of the column containing the encoded salt.

• simple-digest-mapper: The simple-digest-mapper supports the loading of passwords which
have been simply hashed without any salt, as described in Simple Digest.

◦ algorithm - The algorithm of the password type, the supported values are listed at Simple
Digest.

◦ password-index - The index of the column containing the encoded password.

• scram-mapper: The scram-mapper supports the loading of SCRAM passwords, which use both a
salt and an iteration count as described in Scram.

◦ algorithm - The algorithm of the password type, the supported values are listed at Scram.

◦ password-index - The index of the column containing the encoded password.

◦ salt-index - The index of the column containing the encoded salt.

◦ iteration-count-index - The index of the column containing the iteration count.

As an example, we will show here how to use the bcrypt-password-mapper. First of all, we
need to compute the bcrypt hash and the encoded salt. There are various tools for computing a
bcrypt hash, such as Apache’s htpasswd tool:

htpasswd -bnBC 10 "" myPassword

Alternatively, here is a pure Java implementation using the Elytron APIs:

import java.security.Provider;
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Base64.Encoder;

import org.wildfly.security.WildFlyElytronProvider;
import org.wildfly.security.password.PasswordFactory;
import org.wildfly.security.password.interfaces.BCryptPassword;
import org.wildfly.security.password.spec.EncryptablePasswordSpec;
import org.wildfly.security.password.spec.IteratedSaltedPasswordAlgorithmSpec;

public class BCryptHashGenerator {

    static final Provider ELYTRON_PROVIDER = new WildFlyElytronProvider();

    static final String TEST_PASSWORD = "myPassword";

    public static void main(String[] args) throws Exception {

        PasswordFactory passwordFactory = PasswordFactory.getInstance(
                BCryptPassword.ALGORITHM_BCRYPT, ELYTRON_PROVIDER);

        int iterationCount = 10;

        // Generate a random salt of the size expected by bcrypt
        byte[] salt = new byte[BCryptPassword.BCRYPT_SALT_SIZE];
        SecureRandom random = new SecureRandom();
        random.nextBytes(salt);

        IteratedSaltedPasswordAlgorithmSpec iteratedAlgorithmSpec =
                new IteratedSaltedPasswordAlgorithmSpec(iterationCount, salt);
        EncryptablePasswordSpec encryptableSpec =
                new EncryptablePasswordSpec(TEST_PASSWORD.toCharArray(), iteratedAlgorithmSpec);

        BCryptPassword original =
                (BCryptPassword) passwordFactory.generatePassword(encryptableSpec);

        byte[] hash = original.getHash();

        Encoder encoder = Base64.getEncoder();
        System.out.println("Encoded Salt = " + encoder.encodeToString(salt));
        System.out.println("Encoded Hash = " + encoder.encodeToString(hash));
    }
}

In our example, the following output will be produced:

Encoded Salt = 3bFOQwRU75to+yJ8Cv0g8w==
Encoded Hash = x9P/0cxfNz+Pf3HCinZ3dLCbNMnBeiU=

You will then insert your users with the encoded Salt and Hash into the users' database table:

INSERT USERS(username, password, salt, iteration_count) VALUES ('user',


'x9P/0cxfNz+Pf3HCinZ3dLCbNMnBeiU=', '3bFOQwRU75to+yJ8Cv0g8w==', 10);

Having completed this step as well, you can now add the JDBC Realm using the bcrypt-mapper in
place of the clear-password-mapper:

/subsystem=elytron/jdbc-realm=demoJdbcRealm:add(
  principal-query=[{data-source=PostgrePool,
  sql="select password, salt, iteration_count from USERS where login = ?",
  bcrypt-mapper={password-index=1, salt-index=2, iteration-count-index=3}}])

15.3.4. Configuring an LDAP Realm

The last example we will include in this chapter covers the LdapRealm, which defines the
standard set of elements we have seen in an earlier chapter to secure the management interfaces
with LDAP.

For the sake of simplicity, we will start a containerised version of OpenLDAP, which is available on
Docker Hub, using wildfly.org as the BASE DN:

$ docker run --env LDAP_ORGANISATION="wildfly" --env LDAP_DOMAIN="wildfly.org" --env LDAP_ADMIN_PASSWORD="admin" --detach osixia/openldap

As an alternative, you can set the BASE DN in your slapd.conf and set the default admin password.

Check your IP Address from your container:

$ docker inspect --format '{{ .NetworkSettings.IPAddress }}' $(docker ps -q)
172.17.0.2

Now verify the connection with any LDAP Browser.

Assuming that the connection worked, now upload a sample ldif file which contains one user
named "frank" who is granted the Role "Admin". The example ldif file is available on GitHub at:
http://bit.ly/2pjbwZY
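Such an ldif file might look like the following sketch (the DNs follow the realm configuration used later in this section; attribute values are illustrative and the actual file from the link may differ):

```
dn: ou=Users,dc=wildfly,dc=org
objectClass: organizationalUnit
ou: Users

dn: uid=frank,ou=Users,dc=wildfly,dc=org
objectClass: inetOrgPerson
uid: frank
cn: frank
sn: frank
userPassword: password123

dn: ou=Roles,dc=wildfly,dc=org
objectClass: organizationalUnit
ou: Roles

dn: cn=Admin,ou=Roles,dc=wildfly,dc=org
objectClass: groupOfNames
cn: Admin
member: uid=frank,ou=Users,dc=wildfly,dc=org
```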

You should be able to see the updated Directory from your LDAP Browser.

Now let’s move to the WildFly CLI and first make sure we have a Role Decoder for our Roles:

if (outcome != success) of /subsystem=elytron/simple-role-decoder=from-roles-
attribute:read-resource
/subsystem=elytron/simple-role-decoder=from-roles-attribute:add(attribute=Roles)
end-if

To connect to the LDAP server from WildFly, you need to configure a dir-context that provides the
URL as well as the principal used to connect to the server:

/subsystem=elytron/dir-context=exampleDC:add(url="ldap://172.17.0.2:389",principal="cn=admin,dc=wildfly,dc=org",credential-reference={clear-text="admin"})

Next, we will define the LdapRealm:

/subsystem=elytron/ldap-realm=demoLdapRealm:add(dir-context=exampleDC,identity-
mapping={search-base-dn="ou=Users,dc=wildfly,dc=org",rdn-identifier="uid",user-
password-mapper={from="userPassword"},attribute-mapping=[{filter-base-
dn="ou=Roles,dc=wildfly,dc=org",filter="(&(objectClass=groupOfNames)(member={1}))",fro
m="cn",to="Roles"}]})

Next, we will define a Security Domain for elytron, mapping it to the LdapRealm we have created:

/subsystem=elytron/security-domain=ldapSD:add(realms=[{realm=demoLdapRealm,role-
decoder=from-roles-attribute}],default-realm=demoLdapRealm,permission-mapper=default-
permission-mapper)

We finally complete the configuration by creating the Http Authentication Factory and setting it
into undertow subsystem:

/subsystem=elytron/http-authentication-factory=example-ldap-http-auth:add(http-server-
mechanism-factory=global,security-domain=ldapSD,mechanism-configurations=[{mechanism-
name=BASIC,mechanism-realm-configurations=[{realm-name=RealmUsersRoles}]}])

/subsystem=undertow/application-security-domain=httpLdapSD:add(http-authentication-
factory=example-ldap-http-auth)

The CLI script required to install the Ldap realm is available here: http://bit.ly/2FGYnVG

Before testing the Ldap Realm, as usual, update the security-domain attribute in jboss-web.xml:

<jboss-web>
  <security-domain>httpLdapSD</security-domain>
</jboss-web>

15.3.5. Configuring a SASL Based Authentication

In the examples so far we have been using the HTTP Authentication Factory which is obviously
used for Web applications performing HTTP Authentication. Let’s see how we can define a SASL
Authentication Factory which is used for other network protocols, including standard protocols
such as LDAP, IMAP, etc., but also JBoss Remoting which is the EJB primary transport.

In the following example, we will show how to re-create the File System Realm and its Security
Domain to secure access to an EJB application.

Once connected to the application server’s CLI, let’s first make sure we have a Role Decoder for our
Roles:

if (outcome != success) of /subsystem=elytron/simple-role-decoder=from-roles-attribute:read-resource
/subsystem=elytron/simple-role-decoder=from-roles-attribute:add(attribute=Roles)
end-if

Then we can define the File System Realm through the CLI as follows:

/subsystem=elytron/filesystem-realm=demoFsRealmEJB:add(path=demofs-realm-ejb-
users,relative-to=jboss.server.config.dir)

Next, we will add one Identity to the Realm as follows:

/subsystem=elytron/filesystem-realm=demoFsRealmEJB:add-identity(identity=ejbuser)
/subsystem=elytron/filesystem-realm=demoFsRealmEJB:set-
password(identity=ejbuser,clear={password="password123"})
/subsystem=elytron/filesystem-realm=demoFsRealmEJB:add-identity-
attribute(identity=ejbuser,name=Roles, value=["guest","manager"])

Done with that, we will create a Security Domain which is bound to this Realm:

/subsystem="elytron"/security-domain="fsSDEJB":add(default-
realm="demoFsRealmEJB",permission-mapper="default-permission-
mapper",realms=[{realm="demoFsRealmEJB",role-decoder="from-roles-
attribute"},{realm="local"}])

As we will be using SASL as the authentication mechanism, we will need a SASL Authentication
Factory for it:

/subsystem="elytron"/sasl-authentication-factory="fs-application-sasl-
authentication":add(mechanism-configurations=[{mechanism-name="JBOSS-LOCAL-
USER",realm-mapper="local"},{mechanism-name="DIGEST-MD5",mechanism-realm-
configurations=[{realm-name="demoFsRealmEJB"}]}],sasl-server-
factory="configured",security-domain="fsSDEJB")

Done with Elytron, the last step is adding an application security domain named "other" to the
"ejb3" subsystem, referencing our Elytron Security Domain:

/subsystem=ejb3/application-security-domain=other:add(security-domain=fsSDEJB)

Finally, as the EJB call initially lands on the HTTP Connector, we will add a reference to the SASL
authentication factory we have created:

/subsystem=remoting/http-connector=http-remoting-connector:write-attribute(name=sasl-
authentication-factory,value=fs-application-sasl-authentication)

Grab the script to install this example from here: http://bit.ly/2GzRgLT

15.3.5.1. Configuring the EJB Server side

Done with the server side, we will configure our EJBs to use the "other" Security Domain (which in
turn references Elytron’s "fsSDEJB" Security Domain):

@Stateless
@Remote(SecuredEJBRemote.class)
@RolesAllowed({ "guest" })
@SecurityDomain("other")
public class SecuredEJB implements SecuredEJBRemote {

  @Resource
  private SessionContext ctx;

  public String getSecurityInfo() {

  Principal principal = ctx.getCallerPrincipal();


  return principal.toString();
  }

  @RolesAllowed("manager")
  public boolean secured() {
  return true;
  }
}

As you can see, the "guest" role is required to access this class, except for the "secured" method,
which requires the "manager" role.

15.3.5.2. Configuring the EJB Client side

From the EJB Client side, the only requirement is to provide a reference to the Authentication
Configuration, specifying in wildfly-config.xml the user that is configured in the Realm:

<configuration>
  <authentication-client xmlns="urn:elytron:1.0">
  <authentication-rules>
  <rule use-configuration="default-config"/>
  </authentication-rules>
  <authentication-configurations>
  <configuration name="default-config">
  <set-user-name name="ejbuser"/>
  <credentials>
  <clear-password password="password123"/>
  </credentials>
  <sasl-mechanism-selector selector="DIGEST-MD5"/>
  <providers>
  <use-service-loader />
  </providers>
  </configuration>
  </authentication-configurations>
  </authentication-client>
</configuration>

The full code for this application is available on Github at: http://bit.ly/2FGXgoY
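A side note on the sasl-mechanism-selector above: with DIGEST-MD5 the clear password never crosses the wire; both sides prove knowledge of it through MD5 digests whose first component, per RFC 2831, is MD5(username:realm:password). A minimal sketch of that pre-digested form, using the user and realm from our example (illustrative code, not the Elytron API):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Illustrative sketch: the pre-digested form DIGEST-MD5 is built on,
// per RFC 2831, is MD5(username ":" realm ":" password).
public class DigestMd5Sketch {

    static String ha1Hex(String user, String realm, String password) {
        try {
            byte[] d = MessageDigest.getInstance("MD5")
                    .digest((user + ":" + realm + ":" + password).getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : d) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // user and realm from the File System Realm example above
        System.out.println(ha1Hex("ejbuser", "demoFsRealmEJB", "password123"));
    }
}
```

This is also why the realm name matters: a digest computed for one realm cannot be replayed against another.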

15.3.5.2.1. Masking the user’s password

 This feature requires WildFly 18 or newer.

Instead of using clear text password, it is also possible to specify masked passwords as credential
passwords for client authentication. A masked password consists of the following attributes:

• algorithm The algorithm used to encrypt the password. If this attribute is not specified, the
default value is "masked-MD5-DES".

• key-material The initial key material used to encrypt the password. If this attribute is not
specified, the default value is "somearbitrarycrazystringthatdoesnotmatter".

• iteration-count The iteration count used to encrypt the password. This attribute is required.

• salt The salt used to encrypt the password. This attribute is required.

• masked-password The base64 encrypted password (without the "MASK-" prefix).

• initialization-vector The initialization vector used to encrypt the password. This attribute is
optional.

In order to generate a masked password, you can use the following sample code, which generates a
masked password using the algorithm "masked-MD5-DES" and encodes it in base64:

import org.wildfly.common.iteration.ByteIterator;
import org.wildfly.security.WildFlyElytronProvider;
import org.wildfly.security.password.PasswordFactory;
import org.wildfly.security.password.interfaces.MaskedPassword;
import org.wildfly.security.password.spec.*;

import java.security.Provider;

public class Converter {

  static final Provider ELYTRON_PROVIDER = new WildFlyElytronProvider();

  public static void main(String[] args) throws Exception {

  char[] keyMaterial = "somearbitrarycrazystringthatdoesnotmatter".toCharArray();
  byte[] salt = "12345678".getBytes();
  int iterationCount = 12;
  String clearPassword = "password123";

  PasswordFactory passwordFactory = PasswordFactory.getInstance(
  MaskedPassword.ALGORITHM_MASKED_MD5_DES, ELYTRON_PROVIDER);

  MaskedPasswordAlgorithmSpec maskedAlgorithmSpec =
  new MaskedPasswordAlgorithmSpec(keyMaterial, iterationCount, salt);
  EncryptablePasswordSpec encryptableSpec =
  new EncryptablePasswordSpec(clearPassword.toCharArray(), maskedAlgorithmSpec);

  MaskedPassword original =
  (MaskedPassword) passwordFactory.generatePassword(encryptableSpec);
  byte[] masked = original.getMaskedPasswordBytes();
  MaskedPasswordSpec maskedPasswordSpec =
  new MaskedPasswordSpec(keyMaterial, iterationCount, salt, masked);

  // Get the masked password as a string
  String maskedPassword = ByteIterator.ofBytes(
  maskedPasswordSpec.getMaskedPasswordBytes()).base64Encode().drainToString();
  System.out.println("Masked Password: " + maskedPassword);

  // Verify the masked password is the encryption of the clear password
  MaskedPassword restored =
  (MaskedPassword) passwordFactory.generatePassword(maskedPasswordSpec);
  System.out.println(String.format("Password Verified '%b'",
  passwordFactory.verify(restored, clearPassword.toCharArray())));
  }
}

In our example, the base64 masked password for "password123" is "j37uUs8kG9slKIvwFAxsBQ==",
therefore we can change the wildfly-config.xml as follows:

<configuration>
  <authentication-client xmlns="urn:elytron:client:1.4">
  <authentication-rules>
  <rule use-configuration="masked-config" />
  </authentication-rules>
  <authentication-configurations>
  <configuration name="masked-config">
  <set-user-name name="ejbuser"/>
  <credentials>
  <masked-password iteration-count="12" salt="12345678" masked-password=
"j37uUs8kG9slKIvwFAxsBQ=="/>
  </credentials>
  <sasl-mechanism-selector selector="DIGEST-MD5"/>
  <providers>
  <use-service-loader />
  </providers>
  </configuration>
  </authentication-configurations>
  </authentication-client>
</configuration>

15.3.5.2.2. Verifying the client identity with a keystore

You can also use an SSL Context in your Authentication process, provided that you have generated
the client keystore and truststore as discussed in Configuring Mutual SSL Authentication for
WildFly applications. In this case, you can add a key-stores and an ssl-context element to the
wildfly-config.xml file to specify the path to the client’s keystore and truststore that will be used
to verify the identity of the caller. In the following example, replace the file name path with the
actual path to the client’s keystore and truststore:

<configuration>
  <authentication-client xmlns="urn:elytron:client:1.5">
  <authentication-rules>
  <rule use-configuration="auth-config" />
  </authentication-rules>
  <key-stores>
  <key-store name="truststore" type="JKS">
  <file name="/path/to/client.truststore.jks"/>
  <key-store-clear-password password="secret"/>
  </key-store>
  <key-store name="keystore" type="JKS">
  <file name="/path/to/client.keystore.jks"/>
  <key-store-clear-password password="secret"/>
  </key-store>
  </key-stores>
  <ssl-contexts>
  <ssl-context name="client-context">
  <trust-store key-store-name="truststore"/>
  <key-store-ssl-certificate key-store-name="keystore" alias="client">
  <key-store-clear-password password="secret"/>
  </key-store-ssl-certificate>
  </ssl-context>
  </ssl-contexts>
  <ssl-context-rules>
  <rule use-ssl-context="client-context"/>
  </ssl-context-rules>
  <authentication-configurations>
  <configuration name="auth-config">
  <set-user-name name="ejbuser"/>
  <credentials>
  <masked-password iteration-count="12" salt="12345678" masked-password=
"j37uUs8kG9slKIvwFAxsBQ=="/>
  </credentials>
  <sasl-mechanism-selector selector="DIGEST-MD5"/>
  <providers>
  <use-service-loader />
  </providers>
  </configuration>
  </authentication-configurations>
  </authentication-client>
</configuration>

15.3.5.3. Securing SOAP Web services with Elytron

Since WildFly 19, the Elytron security layer can also be applied to JAX-WS Web services. This means
you can include the wildfly-config.xml file along with your Client SOAP application. Let’s see in
practice how to set up a JAX-WS application to use Elytron. On the server side, once you have
configured your Elytron Security Domain as seen in Configuring a File System Security Realm,
include in your jboss-web.xml a reference to it:

<jboss-web>
  <security-domain>httpFsSD</security-domain> ①
</jboss-web>

On the client side, the wildfly-config.xml file can be placed by default in the resources/META-INF
of your project:

<configuration>
  <authentication-client xmlns="urn:elytron:client:1.5">
  <authentication-rules>
  <rule use-configuration="auth-config" />
  </authentication-rules>
  <authentication-configurations>
  <configuration name="auth-config">
  <set-user-name name="frank"/>
  <credentials>
  <clear-password password="password123"/>
  </credentials>
  <webservices>
  <set-http-mechanism name="BASIC" />
  </webservices>
  </configuration>
  </authentication-configurations>
  </authentication-client>
</configuration>

 You can also apply masked passwords and verify against the client’s truststore as discussed in the Configuring the EJB Client side section.

In addition, you will need to include in the pom.xml file, along with your JAX-WS Client BOM, the
Elytron client libraries required to run your client project:

<dependency>
  <groupId>org.wildfly</groupId>
  <artifactId>wildfly-jaxws-client-bom</artifactId>
  <version>19.0.0.Final</version>
  <type>pom</type>
</dependency>

<dependency>
  <groupId>org.wildfly.security</groupId>
  <artifactId>wildfly-elytron-client</artifactId>
  <version>1.11.0.Final</version>
</dependency>

Finally, in order to pick up your security configuration, use the
org.jboss.wsf.stack.cxf.client.configuration.CXFClientConfigurer class as shown in this snippet:

 public void test() {
   QName serviceName = new QName("HelloService");
   Service service = Service.create(serviceName);
   HelloService helloService = service.getPort(HelloService.class);
   BindingProvider bindingProvider = (BindingProvider) helloService;
   bindingProvider.getRequestContext().put(BindingProvider.ENDPOINT_ADDRESS_PROPERTY,
     "https://localhost:8443/ws-client-integration/HelloService");
   CXFClientConfigurer cxfClientConfigurer = new CXFClientConfigurer();
   cxfClientConfigurer.setConfigProperties(bindingProvider, null, null);
   Assert.assertNotNull(helloService);
   Assert.assertEquals("Hello world!", helloService.sayHello());
 }

15.3.6. Using Client attributes to determine a Role

It is possible to make use of the IP address of a remote client in order to assign a user a particular
role. This can be useful, for example, for remote EJB clients which are connecting from different
networks.

This can be done using a source-address-role-decoder, which specifies that a user should be
assigned a specific Role when establishing a connection to the server from a given IP address. For
example, in order to grant the "Admin" role to a remote client connecting from the IP address
192.168.10.1, you could add the following source-address-role-decoder to your configuration:

/subsystem=elytron/source-address-role-decoder=decoder1:add(source-
address="192.168.10.1", roles=["Admin"])

It is also possible to configure an aggregate-role-decoder in the mappers configuration of the
Elytron subsystem. This consists of references to two or more configured role decoders; the
aggregate role decoder combines the roles obtained using each role decoder.

For example:

/subsystem=elytron/source-address-role-decoder=decoder1:add(source-
address="192.168.10.1", roles=["Admin"])
/subsystem=elytron/source-address-role-decoder=decoder2:add(source-
address="192.168.20.1", roles=["Developer"])
/subsystem=elytron/aggregate-role-decoder=aggregateDecoder:add(role-
decoders=[decoder1, decoder2])
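To make the union behavior concrete, here is a plain-Java sketch of what an aggregate decoder does with the two decoders above (class and method names are illustrative only, not the Elytron API):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of aggregate role decoding: each source-address decoder maps an
// address to a set of roles; the aggregate returns the union of all results.
public class AggregateRoleDecoderSketch {

    static final Map<String, Set<String>> DECODER_1 = Map.of("192.168.10.1", Set.of("Admin"));
    static final Map<String, Set<String>> DECODER_2 = Map.of("192.168.20.1", Set.of("Developer"));

    // Union of the roles produced by each configured decoder for this address
    static Set<String> rolesFor(String sourceAddress) {
        Set<String> roles = new HashSet<>();
        for (Map<String, Set<String>> decoder : List.of(DECODER_1, DECODER_2)) {
            roles.addAll(decoder.getOrDefault(sourceAddress, Set.of()));
        }
        return roles;
    }

    public static void main(String[] args) {
        System.out.println(rolesFor("192.168.10.1")); // [Admin]
        System.out.println(rolesFor("10.0.0.1"));     // []
    }
}
```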

15.3.7. Troubleshooting Authentication issues

If you are unable to log in against your Elytron Realm, it is essential to gain some insight into the
trace information emitted by the "org.wildfly.security" namespace. Therefore it’s recommended to
raise the logger level for this category to "TRACE":

<logger category="org.wildfly.security">
  <level name="TRACE"/>
</logger>

On the other hand, if you are dealing with SASL, you should also collect more information from the
"org.jboss.remoting" namespace:

<logger category="org.jboss.remoting">
  <level name="TRACE"/>
</logger>
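The same loggers can also be added from the CLI, without editing the XML (these are standard logging subsystem operations; remember to remove the loggers afterwards, as TRACE output is verbose):

```
/subsystem=logging/logger=org.wildfly.security:add(level=TRACE)
/subsystem=logging/logger=org.jboss.remoting:add(level=TRACE)
```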

Finally, it’s worth noting that you can gather more information on LDAP connections by setting the
system property com.sun.jndi.ldap.connect.pool.debug to one of these values:

• "fine" In order to trace connection creation and removal

• "all" In order to trace all available debugging information.

15.4. Securing Management interfaces


Once you have defined an Authentication Factory, which is associated with a Security Realm, then
you can simply associate the management interfaces with this Authentication Factory.

Assuming that you have defined an HTTP Authentication Factory named "example-fs-http-auth" as
described in the section Configuring a File System Security Realm, here is how to secure your
Web console with that Realm:

/core-service=management/management-interface=http-interface:write-
attribute(name=http-authentication-factory,value=example-fs-http-auth)

On the other hand, if you want to secure your management interface against the JDBC Realm as
discussed in the section Configuring a JDBC Realm, execute the following CLI command:

/core-service=management/management-interface=http-interface:write-
attribute(name=http-authentication-factory,value=db-http-auth)

15.5. Configuring SSL/TLS


The Elytron subsystem is able to handle both the Authentication and Authorization mechanisms and
the encryption of the communication between client and server.

Generally speaking, to configure SSL/HTTPS you can either use the pure JSSE implementation (and
the keytool utility) or a native implementation based on OpenSSL. We will first cover the JSSE
implementation with keytool.

The keytool utility stores keys and certificates in a file called a keystore, a repository of
certificates used for identifying a client or a server. Typically, a keystore contains a single client
or server identity, which is protected by a password.

In order to have a quick test of SSL/HTTPS, the ApplicationRealm contains a default keystore
definition which wraps a self-signed demo certificate:

<security-realm name="ApplicationRealm">
  <server-identities>
  <ssl>
  <keystore path="application.keystore" relative-to=
"jboss.server.config.dir" keystore-password="password" alias="server" key-password=
"password" generate-self-signed-certificate-host="localhost"/>
  </ssl>
  </server-identities>
  <authentication>
  <local default-user="$local" allowed-users="*" skip-group-loading="true"/>
  <properties path="application-users.properties" relative-to=
"jboss.server.config.dir"/>
  </authentication>
  <authorization>
  <properties path="application-roles.properties" relative-to=
"jboss.server.config.dir"/>
  </authorization>
</security-realm>

This keystore, named application.keystore, is generated on demand. Once you have requested
an https connection (e.g. https://localhost:8443), the certificate will be created and its SHA
fingerprint will be dumped in your logs:

12:11:25,590 WARN [org.jboss.as.domain.management.security] (default I/O-4)
WFLYDM0113: Generated self signed certificate at /opt/wildfly-
20.0.0.Final/standalone/configuration/application.keystore. Please note that self
signed certificates are not secure, and should only be used for testing purposes. Do
not use this self signed certificate in production.
SHA-1 fingerprint of the generated key is
fd:3f:61:c6:fe:f5:e0:75:3a:ab:d8:f7:c1:00:3f:70:ab:c7:95:15
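If you want to check that a certificate you hold matches the logged fingerprint, you can reproduce the colon-separated formatting yourself. The sketch below hashes arbitrary bytes for illustration; in practice you would pass the encoded form of the certificate (certificate.getEncoded()) loaded from the keystore:

```java
import java.security.MessageDigest;

public class Fingerprint {

    // SHA-1 digest rendered as colon-separated lowercase hex, matching the log format
    static String sha1(byte[] encoded) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(encoded);
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) {
                if (sb.length() > 0) sb.append(':');
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // stand-in bytes; real use: Fingerprint.sha1(certificate.getEncoded())
        System.out.println(sha1("not a real certificate".getBytes()));
    }
}
```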

15.5.1. Creating your own certificates

Aside from the default certificate, you can obviously create your own certificates to be installed on
the server. Let’s start by creating a certificate for your server using the following command:

$ keytool -genkeypair -alias localhost -keyalg RSA -keysize 2048 -validity 365
-keystore server.keystore -dname "cn=Server Administrator,o=Acme,c=GB" -keypass secret
-storepass secret

This command creates a keystore named server.keystore in the working directory, protected by the
password "secret". It generates a public/private key pair for the entity whose "distinguished name"
has a common name of Server Administrator, organization of Acme and two-letter country code of
GB.

 The message "The JKS keystore uses a proprietary format" simply reminds you
that JKS is a format specific to Java, while PKCS12 is a standardized and language-
neutral way of storing encrypted private keys and certificates. You can generate a
PKCS12 keystore using the option -storetype PKCS12, or convert the JKS file into
a PKCS12 one with the command keytool -importkeystore -srckeystore
server.keystore -destkeystore server.keystore -deststoretype pkcs12
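The same JKS-versus-PKCS12 distinction is visible from the java.security.KeyStore API that keytool uses under the hood. A minimal in-memory sketch (no connection to the server.keystore file created above):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.security.KeyStore;

public class Pkcs12RoundTrip {

    // Create an empty PKCS12 keystore in memory, store it, and reload it,
    // showing that the PKCS12 type is handled by the standard KeyStore API.
    static KeyStore roundTrip(char[] storePass) {
        try {
            KeyStore ks = KeyStore.getInstance("PKCS12");
            ks.load(null, null);                      // initialize an empty store
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ks.store(out, storePass);                 // equivalent of -storetype PKCS12
            KeyStore reloaded = KeyStore.getInstance("PKCS12");
            reloaded.load(new ByteArrayInputStream(out.toByteArray()), storePass);
            return reloaded;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) throws Exception {
        KeyStore ks = roundTrip("secret".toCharArray());
        System.out.println(ks.getType() + " entries=" + ks.size());
    }
}
```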

Now let’s store the server keystore into the configuration folder of the application server:

$ cp server.keystore $JBOSS_HOME/standalone/configuration

If you only need a one-way authentication (Server→Client) then you are done.

Otherwise, if you need a two-way authentication (Server←→Client), then we also need to create
the client certificate and export both certificates to create the truststores.

The following command will create the client certificate, which is used to authenticate against the
server when accessing a resource through SSL:

$ keytool -genkeypair -alias client -keyalg RSA -keysize 2048 -validity 365 -keystore
client.keystore -dname "CN=client" -keypass secret -storepass secret

Now export the certificates from both the client and the server keystores:

$ keytool -exportcert -keystore server.keystore -alias localhost -keypass secret
-storepass secret -file server.crt

$ keytool -exportcert -keystore client.keystore -alias client -keypass secret
-storepass secret -file client.crt

Finally, import the certificates into the server’s and client’s truststores:

$ keytool -importcert -keystore server.truststore -storepass secret -alias client
-trustcacerts -file client.crt -noprompt

$ keytool -importcert -keystore client.truststore -storepass secret -alias localhost
-trustcacerts -file server.crt -noprompt

The above commands are contained in a shell script which is available at: http://bit.ly/2GB4PKV

Done with certificates, we will also store the client.truststore into the configuration
folder of the application server:

$ cp client.truststore $JBOSS_HOME/standalone/configuration

15.5.2. Configuring One-Way SSL / HTTPS for WildFly applications

When using WildFly 11 or newer you can use either Elytron or the legacy SSL configuration. To
verify which one is the default, check whether the https-listener is configured to use a legacy
security realm for its SSL configuration:

/subsystem=undertow/server=default-server/https-listener=https:read-
attribute(name=security-realm)
{
  "outcome" => "success",
  "result" => "ApplicationRealm"
}

The above command shows that the https-listener is configured to use the legacy ApplicationRealm
for its SSL configuration. Therefore we will undefine the security-realm attribute in the https-
listener as Undertow cannot reference both a legacy security realm and an ssl-context in Elytron.

The following CLI batch script will add the keystore, the key manager and the SSL context
configuration to the elytron subsystem, removing the legacy security realm reference from
Undertow’s https-listener.

batch
# Configure Server Keystore
/subsystem=elytron/key-store=demoKeyStore:add(path=server.keystore,relative-
to=jboss.server.config.dir, credential-reference={clear-text=secret},type=JKS)
# Key manager for the server keystore
/subsystem=elytron/key-manager=demoKeyManager:add(key-store=demoKeyStore,credential-
reference={clear-text=secret})
# Server SSL context with the enabled protocols
/subsystem=elytron/server-ssl-context=demoSSLContext:add(key-
manager=demoKeyManager,protocols=["TLSv1.2"])
# This is only needed if WildFly uses by default the Legacy security realm
/subsystem=undertow/server=default-server/https-listener=https:undefine-
attribute(name=security-realm)
# Store SSL Context information in undertow
/subsystem=undertow/server=default-server/https-listener=https:write-
attribute(name=ssl-context,value=demoSSLContext)

run-batch

reload

 You can also define a default SSL Context to be used by the Elytron subsystem, by setting the attribute default-ssl-context to reference the SSLContext which should be globally registered as the default.

You can find the above CLI script at: http://bit.ly/2FGCMNg

That’s all. Now if you try to access a Web application through the https://localhost:8443 address, you
will be informed that you are using a self-signed certificate. If you add an exception to the browser,
you will be running through the SSL channel with your certificate.

In terms of configuration, here is the tls section which has been added to WildFly:

<tls>
  <key-stores>
  <key-store name="demoKeyStore">
  <credential-reference clear-text="secret"/>
  <implementation type="JKS"/>
  <file path="server.keystore" relative-to="jboss.server.config.dir"/>
  </key-store>
  </key-stores>
  <key-managers>
  <key-manager name="demoKeyManager" key-store="demoKeyStore">
  <credential-reference clear-text="secret"/>
  </key-manager>
  </key-managers>
  <server-ssl-contexts>
  <server-ssl-context name="demoSSLContext" protocols="TLSv1.2" key-manager=
"demoKeyManager"/>
  </server-ssl-contexts>
</tls>

And here is the corresponding undertow section:

 <subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server"
   default-virtual-host="default-host" default-servlet-container="default"
   default-security-domain="other"
   statistics-enabled="${wildfly.undertow.statistics-enabled:${wildfly.statistics-enabled:false}}">
  <buffer-cache name="default"/>
  <server name="default-server">
  <http-listener name="default" socket-binding="http" redirect-socket=
"https" enable-http2="true"/>
  <https-listener name="https" socket-binding="https" ssl-context=
"demoSSLContext" enable-http2="true"/>
  <host name="default-host" alias="localhost">
  <location name="/" handler="welcome-content"/>
  <filter-ref name="server-header"/>
  <filter-ref name="x-powered-by-header"/>
  <http-invoker security-realm="ApplicationRealm"/>
  </host>
  </server>

15.5.2.1. Using the CLI security command to configure One-Way SSL / HTTPS

If you prefer, a simpler way to enable SSL for the HTTP server is by means of the security enable-
ssl-http-server CLI command. This command has the advantage of combining the definition of the
key-store, key-manager and ssl-context in just one command. So, assuming that the file
server.keystore is in the same folder as the jboss-cli.sh script, you can enable SSL for the HTTP
server with just this command:

[standalone@localhost:9990 /] security enable-ssl-http-server --key-store
-path=server.keystore --key-store-password=secret

Server reloaded.
SSL enabled for default-server
ssl-context is ssl-context-server.keystore
key-manager is key-manager-server.keystore
key-store is server.keystore

Your One-Way SSL configuration is ready, and the server has been reloaded to reflect the
changes. You can also use the --interactive option, which also lets you create the keystore. Here
is a transcript of a sample SSL configuration for the HTTP server, which will eventually create the
keystore file wildfly.keystore, the certificate file wildfly.pem and the wildfly.csr file in the server
configuration directory.

[standalone@localhost:9990 /] security enable-ssl-http-server --interactive

Please provide required pieces of information to enable SSL:

Certificate info:
Key-store file name (default default-server.keystore): wildfly.keystore
Password (blank generated): password
What is your first and last name? [Unknown]: John Smith
What is the name of your organizational unit? [Unknown]: QA
What is the name of your organization? [Unknown]: Acme
What is the name of your City or Locality? [Unknown]: London
What is the name of your State or Province? [Unknown]:
What is the two-letter country code for this unit? [Unknown]: UK
Is CN=John Smith, OU=QA, O=Acme, L=London, ST=Unknown, C=UK correct y/n [y]?y
Validity (in days, blank default):
Alias (blank generated): jsmith
Enable SSL Mutual Authentication y/n (blank n):n

SSL options:
key store file: wildfly.keystore
distinguished name: CN=John Smith, OU=QA, O=Acme, L=London, ST=Unknown, C=UK
password: password
validity: default
alias: jsmith
Server keystore file wildfly.keystore, certificate file wildfly.pem and wildfly.csr
file will be generated in server configuration directory.

Do you confirm y/n :y


Server reloaded.
SSL enabled for default-server
ssl-context is ssl-context-24f3d44b-a511-4b54-9610-ac414a8b6143
key-manager is key-manager-24f3d44b-a511-4b54-9610-ac414a8b6143
key-store is key-store-24f3d44b-a511-4b54-9610-ac414a8b6143

15.5.2.2. Enabling TLS 1.3

WildFly 19 (or newer) allows us to use TLS 1.3 when running against JDK 11 or higher.

 TLS 1.3 helps to speed up encrypted connections with features such as TLS false start and Zero Round Trip Time (0-RTT). To put it simply, with TLS 1.2 two round-trips were needed to complete the TLS handshake; TLS 1.3 requires only one, which in turn cuts the encryption latency in half.

In order to enable TLS 1.3, we need to update our ssl-context configuration to specify the cipher-
suite-names attribute as a colon-separated list of the TLS 1.3 cipher suites that we want to enable.

First off, set the protocols attribute of the server-ssl-context to be TLSv1.3:

/subsystem=elytron/server-ssl-context=demoSSLContext:write-
attribute(name=protocols,value=[TLSv1.3])

Next, we will set the cipher-suite-names to
TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256:

/subsystem=elytron/server-ssl-context=demoSSLContext:write-attribute(name=cipher-
suite-names,value=TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256:TLS_AES_128_GCM_SHA256)

Reload the configuration for changes to take effect.
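Before writing suite names into cipher-suite-names, you can check which TLS 1.3 suites your JVM actually supports with a quick JSSE probe (plain Java, no WildFly dependency; requires JDK 11 or newer):

```java
import javax.net.ssl.SSLContext;
import java.util.ArrayList;
import java.util.List;

public class Tls13Probe {

    // List the TLS 1.3 cipher suites supported by the local JVM; the names
    // passed to cipher-suite-names must come from this set.
    static List<String> supportedTls13Suites() {
        try {
            SSLContext ctx = SSLContext.getInstance("TLSv1.3");
            ctx.init(null, null, null);
            List<String> suites = new ArrayList<>();
            for (String s : ctx.getSupportedSSLParameters().getCipherSuites()) {
                // TLS 1.3 suites all start with TLS_AES_ or TLS_CHACHA20_
                if (s.startsWith("TLS_AES_") || s.startsWith("TLS_CHACHA20_")) {
                    suites.add(s);
                }
            }
            return suites;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        supportedTls13Suites().forEach(System.out::println);
    }
}
```

Note that TLS_CHACHA20_POLY1305_SHA256 may be absent on older JDK 11 builds; drop it from the list if your JVM does not report it.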

15.5.3. Configuring Mutual SSL Authentication for WildFly applications

Mutual SSL provides the same security as SSL, with the addition of authentication and non-
repudiation of the client, using digital signatures. When mutual authentication is used, the server
requests the client to provide a certificate in addition to the server certificate issued to the client.
Mutual authentication requires an extra round trip for the client certificate exchange, and the
client must purchase and maintain a digital certificate. We can secure our war application deployed
on WildFly with mutual (two-way) client certificate authentication and provide access permissions
or privileges to legitimate users.

 It is assumed that you have already completed the One-Way SSL configuration for the server as discussed in Configuring One-Way SSL / HTTPS for WildFly applications.

In order to update your One-Way configuration to Mutual SSL, we need an SSL Context which also
includes the client truststore and a TrustManager in its configuration:

batch

# Add the Truststore and TrustManager to a SSL Context configuration
/subsystem=elytron/key-store=demoTrustStore:add(path=client.truststore,relative-
to=jboss.server.config.dir,type=JKS,credential-reference={clear-text=secret})

/subsystem=elytron/trust-manager=demoTrustManager:add(key-store=demoTrustStore)

/subsystem=elytron/server-ssl-context=twoWaySSL:add(key-manager=demoKeyManager,trust-
manager=demoTrustManager,protocols=[TLSv1.2],need-client-auth=true)

# This is only needed if WildFly uses by default the Legacy security realm
/subsystem=undertow/server=default-server/https-listener=https:undefine-
attribute(name=security-realm)

# Store SSL Context information in undertow
/subsystem=undertow/server=default-server/https-listener=https:write-
attribute(name=ssl-context,value=twoWaySSL)

run-batch

reload

You can find the above CLI script at: http://bit.ly/2pkbSzu
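Under the hood, the demoTrustManager definition corresponds to a standard JSSE trust manager initialized from a truststore. A rough plain-Java sketch of the equivalent wiring (here the JDK's default truststore stands in for client.truststore, so the snippet runs anywhere):

```java
import javax.net.ssl.TrustManagerFactory;
import javax.net.ssl.X509TrustManager;
import java.security.KeyStore;

public class TrustStoreSketch {

    // Build an X509TrustManager from a truststore, as the trust-manager
    // resource does; a null store falls back to the JDK's default cacerts.
    static X509TrustManager trustManagerFor(KeyStore trustStore) {
        try {
            TrustManagerFactory tmf =
                    TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(trustStore);
            return (X509TrustManager) tmf.getTrustManagers()[0];
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // Real use: load client.truststore with KeyStore.load(inputStream, "secret".toCharArray())
        System.out.println("trusted issuers: " + trustManagerFor(null).getAcceptedIssuers().length);
    }
}
```

Only certificates whose issuer appears among the accepted issuers will pass the need-client-auth=true handshake.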

 It is also possible to configure mutual SSL authentication in one step using the security enable-ssl-http-server command as follows:

 security enable-ssl-http-server --key-store-path=server.keystore --key-store-password=secret --trusted-certificate-path=client.crt --trust-store-file-password=secret --no-trusted-certificate-validation

In terms of configuration, you can enforce mutual SSL authentication in an application by declaring
its transport as confidential in web.xml:

<security-constraint>
  <web-resource-collection>
  <url-pattern>/*</url-pattern>
  </web-resource-collection>

  <user-data-constraint>
  <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>

15.5.3.1. Importing Client certificates on your browser

If you want to use an application which uses mutual SSL, then you have to import the client
certificate into your browser. In order to do that, we will use the keytool utility to export the
client certificate into PKCS12 format.

$ keytool -importkeystore -srckeystore client.keystore -srcstorepass secret
-destkeystore clientCert.p12 -srcstoretype JKS -deststoretype PKCS12 -deststorepass
secret

Entry for alias client successfully imported.
Import command completed: 1 entries successfully imported, 0 entries failed or
cancelled

Now let’s import the pkcs12 file into the browser. Every browser uses a different menu path to
import certificates. For Chrome, click the Chrome menu icon (3 dots) in the upper right of the
browser toolbar and choose Settings.

Scroll to the bottom of the page and click on the Advanced link to reveal the advanced settings.
Search for the Manage Certificates line under "Privacy and Security" and then click on it.

In the Manage certificates screen, select the Your Certificates tab and click on the Import button.
Choose to browse all file types and import your keystore which was exported in pkcs12 format. You
will be prompted to enter the password: secret.

The Client Certificate is now installed in your browser.

15.6. Configuring SSL for Management interfaces


The management interfaces, by default, are available through port 9990 using clear text
connections. In order to encrypt the management communication, we need to create an SSL context
and connect it to the management interfaces. The simplest way to do it is through the security
CLI command with the enable-ssl-management argument. We will use this command with the
--interactive option, which prompts for the required pieces of information:

[standalone@localhost:9990 /] security enable-ssl-management --interactive
Please provide required pieces of information to enable SSL:

Certificate info:
Key-store file name (default management.keystore):
Password (blank generated): wildfly
What is your first and last name? [Unknown]: Admin
What is the name of your organizational unit? [Unknown]: Administrators
What is the name of your organization? [Unknown]: Acme
What is the name of your City or Locality? [Unknown]: London
What is the name of your State or Province? [Unknown]:
What is the two-letter country code for this unit? [Unknown]: UK
Is CN=Admin, OU=Administrators, O=Acme, L=London, ST=Unknown, C=UK correct y/n [y]?y
Validity (in days, blank default):
Alias (blank generated): admin
Enable SSL Mutual Authentication y/n (blank n):n

SSL options:
key store file: management.keystore
distinguished name: CN=Admin, OU=Administrators, O=Acme, L=London, ST=Unknown, C=UK
password: wildfly
validity: default
alias: admin
Server keystore file management.keystore, certificate file management.pem and
management.csr file will be generated in server configuration directory.

Do you confirm y/n :y


Unable to connect due to unrecognised server certificate
Subject - CN=Admin,OU=Administrators,O=Acme,L=London,ST=Unknown,C=UK
Issuer - CN=Admin, OU=Administrators, O=Acme, L=London, ST=Unknown, C=UK
Valid From - Wed Jan 08 10:15:46 CET 2020
Valid To - Tue Apr 07 10:15:46 CEST 2020
MD5 : 7c:63:76:48:ec:8d:e2:2c:96:74:4d:19:7d:81:e1:6d
SHA1 : de:3e:ba:f5:9b:c1:9c:4c:e5:48:ca:cf:f4:e2:71:63:d3:20:19:1a

Accept certificate? [N]o, [T]emporarily, [P]ermanently : P


Server reloaded.
SSL enabled for http-interface
ssl-context is ssl-context-eb0e29ad-6cf6-4c28-aab2-ffba55eb1d86
key-manager is key-manager-eb0e29ad-6cf6-4c28-aab2-ffba55eb1d86
key-store is key-store-eb0e29ad-6cf6-4c28-aab2-ffba55eb1d86

[standalone@localhost:9993 /]

As you can see, the certificate has been installed in the configuration folder and the CLI connection
already switched to port 9993. In terms of configuration, the following ssl-context has been added
to the management interface:

 <management-interfaces>
  <http-interface ssl-context="ssl-context-eb0e29ad-6cf6-4c28-aab2-
ffba55eb1d86" security-realm="ManagementRealm">
  <http-upgrade enabled="true"/>
  <socket-binding http="management-http" https="management-https"/>
  </http-interface>
 </management-interfaces>

The SSL context, in turn, is defined in the Elytron TLS section, which contains the key-store
definition, the related key-manager and the server-ssl-context:

<tls>
  <key-stores>
  <key-store name="key-store-eb0e29ad-6cf6-4c28-aab2-ffba55eb1d86">
  <credential-reference clear-text="wildfly"/>
  <implementation type="JKS"/>
  <file required="false" path="management.keystore" relative-to=
"jboss.server.config.dir"/>
  </key-store>
  </key-stores>
  <key-managers>
  <key-manager name="key-manager-eb0e29ad-6cf6-4c28-aab2-ffba55eb1d86" key-
store="key-store-eb0e29ad-6cf6-4c28-aab2-ffba55eb1d86">
  <credential-reference clear-text="wildfly"/>
  </key-manager>
  </key-managers>
  <server-ssl-contexts>
  <server-ssl-context name="ssl-context-eb0e29ad-6cf6-4c28-aab2-ffba55eb1d86"
cipher-suite-filter="DEFAULT" protocols="TLSv1.2" want-client-auth="false" need-
client-auth="false" authentication-optional="false" use-cipher-suites-order="false"
key-manager="key-manager-eb0e29ad-6cf6-4c28-aab2-ffba55eb1d86"/>
  </server-ssl-contexts>
</tls>

As a proof of concept, you can now connect to the Management interface through
https://localhost:9993

15.7. Using certificates from Let’s Encrypt in WildFly
Let’s Encrypt is a CA which provides free-of-charge certificates for your website’s domain via
Let’s Encrypt’s Automatic Certificate Management Environment (ACME) protocol. There are two
steps to this process:

• First, your agent proves to the CA that the web server controls a domain.

• Then, you can request, renew, and revoke certificates for that domain. A certificate emitted by
Let’s Encrypt is valid for 90 days.

In more detail, the process works like this:

The first time you interact with Let’s Encrypt using an agent application, you will need to generate a
new key pair and prove to the Let’s Encrypt CA that the server controls one or more domains. The
Let’s Encrypt CA will look at the domain name being requested and issue one or more sets of
challenges. These are different ways that the agent can prove control of the domain. For example,
the CA might give the agent a choice of either:

• Provisioning a DNS record under the domain (e.g. example.com)

• Provisioning an HTTP resource under a well-known URI on the domain

Along with the challenges, the Let’s Encrypt CA also provides a nonce that the agent must sign with
its private key to prove that it controls the key pair.

The agent software completes one of the provided sets of challenges. Let’s say it is able to
accomplish the second task above: it creates a file on a specified path on the http://example.com
site. The agent also signs the provided nonce with its private key. Once the agent has completed

these steps, it notifies the CA that it’s ready to complete validation.

Then, it’s the CA’s job to check that the challenges have been satisfied. The CA verifies the signature
on the nonce, and it attempts to download the file from the web server and make sure it has the
expected content. If the signature over the nonce is valid, and the challenges check out, then the
agent identified by the public key is authorized to do certificate management for example.com. We
call the key pair the agent used an authorized key pair for example.com.
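For the HTTP-based challenge described above, the provisioned resource lives under a well-known URI on the domain. The layout can be sketched as follows; the token and the account thumbprint are made-up placeholders, not real ACME values:

```shell
# Sketch of the HTTP-01 challenge layout: the agent serves the key
# authorization (token plus account key thumbprint) at a well-known URI.
mkdir -p webroot/.well-known/acme-challenge
TOKEN=evaGxfADs6pSRb2LAv9IZ
printf '%s.%s\n' "$TOKEN" "example-account-thumbprint" \
  > "webroot/.well-known/acme-challenge/$TOKEN"

# The CA then fetches http://example.com/.well-known/acme-challenge/<token>
# over port 80 and checks the content against the account key.
cat "webroot/.well-known/acme-challenge/$TOKEN"
```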

15.7.1. Using WildFly CLI as agent to request a certificate from Let’s Encrypt

When using WildFly, you can kickstart the above process by passing the --lets-encrypt option to the
security enable-ssl-management command, which has been recently added to the Command Line
Interface.

This option is available when you launch the security enable-ssl-management
command using the --interactive option.

Let’s see how it works in action. In order to request the certificate from Let’s Encrypt, a set of
information has to be provided:

• Account Key-store: The account key that will be used to communicate with the ACME server.

• Account Key-store password: The account password that will be used to communicate with the
ACME server.

• URL of the ACME server: This is the URL of the ACME server that will be contacted for the
certificate (by default https://acme-v02.api.letsencrypt.org/directory)

• Certificate authority account name: A reference to the certificate authority account name that
should be used to obtain the certificate.

• Contact url(s): an email account that should be used to obtain the certificate.

• Certificate authority account password: the certificate authority account password that
should be used to obtain the certificate.

• Certificate authority account alias: the certificate authority account alias that should be used
to obtain the certificate.

• Domain name(s): This is the domain for which the certificate will be issued.

• Key-store: A reference to the Elytron key-store that will host the account key.

• Key-Store alias: The alias that identifies the account key entry in the KeyStore.

Here is, in a nutshell, what the enable-ssl-management --lets-encrypt process will do for you:

1. Generate a key pair and a Certificate Signing Request (CSR) using the key pair

2. Create an account with the CA if an account hasn’t been created yet as specified by the ACME
protocol

3. Send the CSR to the CA as specified by the ACME protocol

4. Answer challenges from the CA to prove ownership of the domain name(s) requested in the
certificate as specified by the ACME protocol

5. Obtain the resulting signed certificate from the CA as specified by the ACME protocol

6. Import the signed certificate into the KeyStore

7. Save the changes to the file that backs the KeyStore

Here is an example of the certificate interactive setup:

[standalone@localhost:9990 /] security enable-ssl-management --interactive --lets-encrypt
Please provide required pieces of information to enable SSL:

Let's Encrypt account key-store:
File name (default accounts.keystore.jks):
Password (blank generated):

Let's Encrypt certificate authority account:
Account name [admin]:
Contact email [admin@mastertheboss.com]:
Password (blank generated):
Alias (blank generated):

Certificate Authority URL (default https://acme-v02.api.letsencrypt.org/directory):

Let's Encrypt TOS (https://community.letsencrypt.org/tos)
Do you agree to Let's Encrypt terms of service? y/n (blank n): y

Certificate info:
Key-store file name (default management.keystore):
Password (blank generated):
Domain name (must be accessible by the Let's Encrypt server at 80 & 443 ports)
[mastertheboss.com]:
Alias (blank generated):

Enable SSL Mutual Authentication y/n (blank n): n

At this point, the shell wizard will recap the options you have selected and request confirmation:

Let's Encrypt options:
Account key store: accounts.keystore.jks
Password:xxx
Account keystore file X will be generated in server configuration directory.
Let's Encrypt Certificate authority account name: admin
Contact email: admin@mastertheboss.com
Password:xxxx
alias: alias-123
certificate authority URL: https://acme-v02.api.letsencrypt.org/directory
You provided agreement to Let's Encrypt terms of service.

SSL options:
key store file: a
domain name: mastertheboss.com
password: xxxxxxxx
alias: alias-42723f73-ec17-4c84-9c20-160180490cf8
Certificate will be obtained from Let's Encrypt server and will be valid for 90 days.
Server keystore file a will be generated in server configuration directory.

Do you confirm y/n :y

Upon successful verification, the certificate information will be printed on the console and SSL will
be enabled for the management interface using the certificate:

Subject - CN=mastertheboss.com
Issuer - CN=Let's Encrypt Authority X3, O=Let's Encrypt, C=US
Valid From - Thu Nov 08 12:36:16 CET 2018
Valid To - Wed Feb 06 12:36:16 CET 2019
MD5 : 83:e0:41:16:5e:f1:5b:b8:b3:4a:6f:94:5e:36:cd:03
SHA1 : a2:98:38:82:9e:79:2c:11:3c:d4:2c:76:28:3e:6d:16:1c:7c:6f:25

Subject - CN=Let's Encrypt Authority X3, O=Let's Encrypt, C=US
Issuer - CN=DST Root CA X3, O=Digital Signature Trust Co.
Valid From - Thu Mar 17 17:40:46 CET 2016
Valid To - Wed Mar 17 17:40:46 CET 2021
MD5 : b1:54:09:27:4f:54:ad:8f:02:3d:3b:85:a5:ec:ec:5d
SHA1 : e6:a3:b4:5b:06:2d:50:9b:33:82:28:2d:19:6e:fe:97:d5:95:6c:cb

Accept certificate? [N]o, [T]emporarily, [P]ermanently : t


Server reloaded.
SSL enabled for http-interface
ssl-context is ssl-context-7129ee02-add4-4acd-a39a-103a8c1ba495
key-manager is key-manager-7129ee02-add4-4acd-a39a-103a8c1ba495
key-store is key-store-7129ee02-add4-4acd-a39a-103a8c1ba495

15.8. Configuring OpenSSL as SSL provider
When using Elytron for SSL/HTTPS you can opt for two different providers:

<providers>
  <aggregate-providers name="combined-providers">
  <providers name="openssl"/>
  <providers name="elytron"/>
  </aggregate-providers>
  <provider-loader name="elytron" module="org.wildfly.security.elytron"/>
  <provider-loader name="openssl" module="org.wildfly.openssl"/>
</providers>

In order to switch to OpenSSL, first set your security realm to use OpenSSL TLS as its protocol:

/core-service=management/security-realm=ApplicationRealm/server-identity=ssl:write-
attribute(name=protocol,value=openssl.TLS)

The server console will then show that the OpenSSL library has been loaded:

12:08:00,027 INFO [org.wildfly.openssl.SSL] (MSC service thread 1-7) WFOPENSSL0002
OpenSSL Version OpenSSL 1.0.2j-fips 26 Sep 2016

Next, we need to change the ordering of the providers in the elytron combined-providers, which
means that OpenSSL will now take precedence:

/subsystem=elytron/aggregate-providers=combined-providers:list-add(index=0,
name=providers, value=openssl)
/subsystem=elytron/aggregate-providers=combined-providers:list-remove(index=2,
name=providers)
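To verify the new ordering, you can read back the providers attribute of the aggregate resource; after the two commands above, the list should report openssl first:

```
/subsystem=elytron/aggregate-providers=combined-providers:read-attribute(name=providers)
```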

15.9. Configuring Server Name Indication (SNI)


Since WildFly 15, the elytron subsystem allows configuring an SSL context which supports SNI.

Server Name Indication (SNI) is an extension of the TLS protocol. The client
specifies which hostname it wants to connect to using the SNI extension in the
TLS handshake. This allows a server to select the corresponding private key and
certificate chain that are required to establish the connection from a list or
database while hosting all certificates on a single IP address. When SNI is used,
the hostname of the server is included in the TLS handshake, which enables
HTTPS websites to have unique TLS certificates, even if they are on a shared IP
address.

By supporting SNI, if an SNI host name is available while the SSLSession is being negotiated, a host-
specific SSLContext will be selected. If no host-specific SSLContext is selected, either because no
host name was received or because there is no match, a default SSLContext will be used instead.
Selecting a host-specific SSLContext means that a certificate appropriate for that host can be
used.

The following command demonstrates how an SNI-aware SSLContext can be added:

[standalone@localhost:9990 /] /subsystem=elytron/server-ssl-sni-context=test-
sni:add(default-ssl-context=demoSSLContext,host-context-map={localhost=localhost,
jboss.com=jboss})
{"outcome" => "success"}

To run the above command, you must first define three SSLContexts (demoSSLContext,
localhost and jboss), as discussed previously in Configuring One-Way SSL / HTTPS for WildFly
applications.

During negotiation of the SSLSession if the SNI host name received is localhost then the localhost
SSLContext will be used, if the SNI host name is jboss.com then the jboss SSLContext will be used. If
no SNI host name is received or if we receive a name that does not match, the default
demoSSLContext will be used instead.

The resulting resource looks like this:

[standalone@localhost:9990 /] /subsystem=elytron/server-ssl-sni-context=test-sni:read-
resource
{
  "outcome" => "success",
  "result" => {
  "default-ssl-context" => "demoSSLContext",
  "host-context-map" => {
  "localhost" => "localhost",
  "jboss.com" => "jboss"
  }
  }
}
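Assuming the SNI context has been assigned to an HTTPS listener on port 8443 (a hypothetical setup), you can check which certificate each host name receives with openssl:

```
openssl s_client -connect localhost:8443 -servername jboss.com </dev/null 2>/dev/null \
  | openssl x509 -noout -subject
```

Repeating the command with -servername localhost, or with no -servername at all, should print the subjects of the other two contexts' certificates.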

15.10. Configuring Java Authentication Service
Provider Interface (JASPI)
JASPI (JSR 196) defines a standard service provider interface (SPI) with which a message level
authentication agent can be developed for Java EE containers on either the client side or the server
side. These agents may establish the authenticated identities used by the containers allowing:

• A server side agent to verify security tokens or signatures on incoming requests and extract
principal data or assertions before adding them to the client security context.

• A client side agent to add security tokens to outgoing requests, sign messages, and interact with
the trusted authority to locate targeted web service providers.

Since WildFly 15, an implementation of the Servlet profile from the JASPI standard is also provided
by the WildFly Elytron subsystem. In order to enable the JASPI integration for a web application,
the web application needs to be associated with any of these components:

• an Elytron http-authentication-factory

• a security-domain

e.g.

/subsystem=undertow/application-security-domain=MyAppSecurity:add(security-
domain=ApplicationDomain)

or

/subsystem=undertow/application-security-domain=MyAppSecurity:add(http-authentication-
factory=application-http-authentication)
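Either command results in an application-security-domain entry inside the undertow subsystem configuration, along these lines (a sketch of the first variant):

```xml
<application-security-domains>
    <application-security-domain name="MyAppSecurity" security-domain="ApplicationDomain"/>
</application-security-domains>
```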

15.11. Using Credential Stores to store sensitive data


The elytron subsystem allows using credential stores as secure storage for your credentials. A
credential store is a replacement for the legacy password vault mechanism to store passwords
and other sensitive strings. Credential stores allow for easier credential management within
WildFly, without having to use an external tool. It is, however, still possible to use an external script
named elytron-tool.sh to manage your stored passwords from the shell. The default
credential store implementation uses a Java Cryptography Extension KeyStore (JCEKS) file to store
credentials. When creating a new credential store, the default implementation also allows you to
reference an existing keystore file or have WildFly automatically create one for you. Currently, the
default implementation only allows you to store clear-text passwords.

15.11.1. Example: securing your Datasource password

In this example, we will show how to secure the password used to connect to a PostgreSQL
datasource. First of all, we will create a credential store. This can be done either using a shell script

(elytron-tool.sh) or with WildFly CLI:

/subsystem=elytron/credential-store=my_store:add(location="credentials/csstore.jceks",
relative-to=jboss.server.data.dir, credential-reference={clear-
text=mypassword},create=true)

The above command has created a credential store in a file named csstore.jceks in the
jboss.server.data.dir/credentials folder, protected by the clear-text password "mypassword".

To add entries into a credential store, you have to associate an alias with the sensitive string that you
want to store.

For example, to add a password with the alias database-pw to the store we have just created:

/subsystem=elytron/credential-store=my_store:add-alias(alias=database-pw, secret-
value="secret")

Let’s check that our alias has been correctly included:

/subsystem=elytron/credential-store=my_store:read-aliases
{
  "outcome" => "success",
  "result" => ["database-pw"]
}

Perfect. Now just create a datasource without specifying the "password" as a datasource property,
but rather including a "credential-reference" which points to your alias:

data-source add --jndi-name=java:/PostGreDSSec --name=PostgrePoolSec --connection
-url=jdbc:postgresql://localhost/postgres --driver-name=postgres --user-name=postgres
--credential-reference={store=my_store, alias=database-pw}
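The resulting datasource definition in standalone.xml then carries a credential-reference instead of a clear-text password, along these lines (a sketch; element layout may differ slightly between WildFly versions):

```xml
<datasource jndi-name="java:/PostGreDSSec" pool-name="PostgrePoolSec">
    <connection-url>jdbc:postgresql://localhost/postgres</connection-url>
    <driver>postgres</driver>
    <security>
        <user-name>postgres</user-name>
        <credential-reference store="my_store" alias="database-pw"/>
    </security>
</datasource>
```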

As you can see from the above script, you can use credential-reference as an
alternative to providing a password or other sensitive string in most places
throughout the WildFly configuration.

You can grab the CLI script required to create the Credential Store from http://bit.ly/3cpDB9S.

Finally, let’s check that the connection with the database works:

/subsystem=datasources/data-source=PostGreDSSec:test-connection-in-pool()
{
  "outcome" => "success",
  "result" => [true]
}

15.11.2. A shortcut to add entries in your Credential Store

WildFly 20 has added the ability to automatically add a credential to a previously defined
credential store by specifying both the store and clear-text attributes for a credential-reference.
Let’s go back to our initial example:

/subsystem=elytron/credential-store=my_store:add(location=credentials/csstore.jceks,
relative-to=jboss.server.config.dir, credential-reference={clear-text=mypassword},
create=true)

Having defined our Credential Store, now it’s possible to configure a key-store with a credential-
reference that specifies the store, alias, and clear-text attributes as follows:

/subsystem=elytron/key-store=newKS:add(relative-to=jboss.server.config.dir,
path=new.keystore, type=JCEKS, credential-reference={store=my_store, alias=myNewAlias,
clear-text=myNewPassword})

The above command will result in a new entry being added to our credential store, my_store, with
alias myNewAlias and credential myNewPassword.
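You can confirm the side effect by listing the aliases again (note that the store may record the alias name in lower case, and database-pw will only be present if you also ran the earlier example):

```
/subsystem=elytron/credential-store=my_store:read-aliases
```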

15.11.3. Configuring the Credential Store Offline

Elytron credential stores can also be configured offline. This is a valid alternative if you have to
include your credential store configuration as part of other provisioning scripts. The tool for this
purpose is elytron-tool.sh, which is available in $JBOSS_HOME/bin. This shell script allows you
to create and modify a credential store for an offline, or stopped, WildFly server. Here
is, for example, how to create a credential store using elytron-tool.sh:

$ elytron-tool.sh credential-store --create --location "../credentials/csstore.jceks"
--password mypassword

Once created, then you can start adding entries to your store:

$ elytron-tool.sh credential-store --location "../credentials/csstore.jceks"
--password mypassword --add database-pw --secret secret
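To verify the entry from the shell, the same tool can list the aliases in the store (the --aliases option is available in recent WildFly releases):

```
$ elytron-tool.sh credential-store --location "../credentials/csstore.jceks"
--password mypassword --aliases
```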

15.12. An overview of Jakarta EE Security API
WildFly 20 is a fully compliant Jakarta EE 8 and Java EE 8 application server. The latest addition to the
Enterprise specification is JSR 375, which is now part of the Jakarta EE Security API. The principal
issues that this JSR aimed to resolve are:

• The existing security mechanisms did not take advantage of using a common programming
feature such as CDI.

• There was no standard way to control how authentication was managed on the backend across
containers.

• There was no standard way to control identity stores across containers.

• The Java EE API did include methods to manage security in a programmable way but with
subtly different syntax (e.g. A Servlet would use HttpServletRequest.isUserInRole(String role)
while an EJB uses EJBContext.isCallerInRole(String roleName) to check a user’s role)

The advantage of the Java EE 8 Security API is that it provides portable APIs for authentication,
identity stores, roles and permissions, but it does not replace existing security mechanisms.
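For instance, with the Security API the role check above can be written once and used from any container-managed component (a sketch; it assumes a CDI-enabled deployment):

```java
@Inject
private SecurityContext securityContext;  // javax.security.enterprise.SecurityContext

public boolean isAdmin() {
    // Same call whether invoked from a Servlet, an EJB or a CDI bean
    return securityContext.isCallerInRole("Admin");
}
```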

15.12.1. Secure web authentication with Jakarta EE Security API

A Jakarta EE 8 container must provide HttpAuthenticationMechanism implementations for the three
authentication mechanisms defined in the Servlet 4.0 specification. The three
implementations are:

• Basic HTTP authentication

• Form-based authentication

• Custom-form authentication

Each authentication mechanism so far was managed essentially through the web.xml configuration
file. Now the concrete implementation can be triggered by the presence of its associated
annotation:

• @BasicAuthenticationMechanismDefinition

• @FormAuthenticationMechanismDefinition

• @CustomFormAuthenticationMechanismDefinition

Let’s see them in detail:

The @BasicAuthenticationMechanismDefinition annotation can be used to provide basic HTTP
authentication as defined by Servlet 4.0. The following code shows an example. The only
configuration parameter is optional, and allows a realm to be specified.

In the following example, we are using the
@javax.security.enterprise.authentication.mechanism.http.BasicAuthenticationMechanismDefinition
annotation to define the authentication mechanism to be used in your enterprise
component, for example a Servlet:

@WebServlet(name = "secureservlet", urlPatterns = { "/secureservlet" })
@BasicAuthenticationMechanismDefinition(realmName = "userRealm")
@ServletSecurity(@HttpConstraint(rolesAllowed = { "Admin" }))
public class SecureServlet extends HttpServlet { . . . }

Another available option is to use the
@javax.security.enterprise.authentication.mechanism.http.FormAuthenticationMechanismDefinition
annotation to trigger form-based authentication as per the Servlet specification. Within this
annotation, we have the option to specify the login and error pages; otherwise the default ones
(/login and /login-error) will be used:

@FormAuthenticationMechanismDefinition(
  loginToContinue = @LoginToContinue(
  loginPage = "/loginpage.html",
  errorPage = "/login-error.html"))
@ApplicationScoped
public class ApplicationConfigurationBean{ . . . }

Besides the standard @FormAuthenticationMechanismDefinition, which allows defining the
authentication mechanism, it is also possible to use a
@CustomFormAuthenticationMechanismDefinition, which delegates to a backing bean to define
the authentication process. For example:

<input type="submit" value="Login" jsf:action="#{loginBean.login}"/>

Here is the LoginBean definition, which receives the injected SecurityContext used to perform
the authentication:

@Named
@RequestScoped
public class LoginBean {

  @Inject
  private SecurityContext securityContext;

  @NotNull private String username;

  @NotNull private String password;

  public void login() {


  Credential credential = new UsernamePasswordCredential(
  username, new Password(password));
  AuthenticationStatus status = securityContext
  .authenticate(
  getHttpRequestFromFacesContext(),
  getHttpResponseFromFacesContext(),
  withParams().credential(credential));
  // ...
  }

  // ...
}

To be enabled, the @CustomFormAuthenticationMechanismDefinition needs to be declared
in your application, for example on an application-scoped component:

@CustomFormAuthenticationMechanismDefinition(
  loginToContinue = @LoginToContinue(loginPage = "/login.xhtml"))
@ApplicationScoped
public class AppConfig {
}

As mentioned at the beginning of this chapter, if you are using WildFly 14, the
Java EE 8 API is still based on the legacy PicketBox implementation. Therefore you
have to reference the jaspitest security domain which is defined in the legacy
security subsystem.


 <jboss-web>
  <security-domain>jaspitest</security-domain>
 </jboss-web>

15.12.2. Managing Identity Stores with Jakarta EE 8

As discussed in this chapter, an Identity Store is essentially a database that stores user identity

information such as usernames and group memberships. The Jakarta EE Security API provides an
identity store abstraction for it called IdentityStore. Using IdentityStore and
HttpAuthenticationMechanism together enables an application to control the identity stores it uses
for authentication in a portable and standard way, and is recommended for most use-case
scenarios.

There are some built-in identity store implementations, such as
@DatabaseIdentityStoreDefinition and @LdapIdentityStoreDefinition, which can be used to
validate a user’s credentials against an identity store with some simple annotations.

In the following example, we can see how Database login can be rewritten using its IdentityStore:

@DatabaseIdentityStoreDefinition(
  dataSourceLookup = "java:/PostGreDS",
  callerQuery = "select password from USERS where login=?",
  groupsQuery = "select role, 'Roles' from USERS where login=?",
  priority=30)
@ApplicationScoped
public class BeanConfig {
}

The DatabaseIdentityStoreDefinition does not use plain text to verify identities on
the database but instead uses hashed passwords. You can customize the hash
algorithm to be used by setting additional parameters in the
@DatabaseIdentityStoreDefinition:

@DatabaseIdentityStoreDefinition(
  dataSourceLookup = "java:/PostGreDS",
  callerQuery = "select password from USERS where login=?",
  groupsQuery = "select role, 'Roles' from USERS where login=?",
  hashAlgorithm = Pbkdf2PasswordHash.class,
  priorityExpression = "#{100}",
  hashAlgorithmParameters = {
    "Pbkdf2PasswordHash.Iterations=3072",
    "${applicationConfig.dyna}" })
@ApplicationScoped
@Named
public class ApplicationConfig {

  public String[] getDyna() {
    return new String[] {
      "Pbkdf2PasswordHash.Algorithm=PBKDF2WithHmacSHA256",
      "Pbkdf2PasswordHash.SaltSizeBytes=64" };
  }
}
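To populate the USERS table for such a store, you need PBKDF2 hashes. The following standalone sketch derives one with the JDK’s own SecretKeyFactory; the class name and the Base64 encoding are demo choices, since the exact stored format produced by Pbkdf2PasswordHash is implementation-specific:

```java
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import java.security.GeneralSecurityException;
import java.util.Base64;

public class Pbkdf2Demo {

    // Derive a PBKDF2-with-HMAC-SHA256 hash, the algorithm family configured
    // above via Pbkdf2PasswordHash.Algorithm.
    public static String hash(String password, byte[] salt, int iterations) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password.toCharArray(), salt, iterations, 256);
            byte[] derived = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(spec).getEncoded();
            return Base64.getEncoder().encodeToString(derived);
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] salt = new byte[64];   // fixed all-zero salt, for the demo only
        System.out.println(hash("secret", salt, 3072));
    }
}
```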

The LDAP configuration is just as simple: with the @LdapIdentityStoreDefinition annotation you
configure the identity store by passing the connection and search data:

@LdapIdentityStoreDefinition(
  url = "ldap://172.17.0.2:389",
  callerBaseDn = "ou=Users,dc=wildfly,dc=org",
  groupSearchBase = "ou=Roles,dc=wildfly,dc=org",
  groupSearchFilter = "(member={1})")
@ApplicationScoped
public class BeanConfig {
}

In the above example, we provide the URL of an external LDAP server, how to search for the caller in
the LDAP directory, and how to retrieve the caller’s groups.

16. Chapter 16: WildFly’s legacy security
model
This chapter discusses WildFly’s legacy security infrastructure, which is based on several
frameworks (the core one being PicketBox).

You can still use the legacy Security model to secure your applications, although you are
encouraged to migrate to Elytron in the near term.

To summarize, this is the list of topics discussed in this chapter:

• First, we will discuss the building blocks of the legacy Security Model

• Then we will learn how to create Login modules, which are associated with a Security Domain.

• Next, we will introduce the new Java EE 8 Security API, which currently builds on the legacy
Security Framework

• Finally, we will discuss configuring Secure Sockets Layer (SSL) to encrypt the transmission of
your HTTP channel.

16.1. Security building blocks


The core building blocks of WildFly legacy security are the same as those we discussed for the
Elytron subsystem; they are just used to manage security aspects with different scopes. So
the first building block is the Security Domain.

A Security Domain consists of configurations for authentication, authorization, security mapping,
and auditing. It implements Java Authentication and Authorization Service (JAAS) declarative
security.

A security domain therefore performs all the authorization and authentication checks before a
request reaches the borders of the application server. In order to work, the security domain relies
on the concept of a Security Realm. The realm’s job is to respond to callbacks based on a supplied
username and return either the user’s password or a hash of it, allowing the transport-specific
checks to be done. We will explore Security Realms and Security Domains with the
following schedule:

• Configuring Security Realms and Security Domains

• Creating login modules for securing your applications

• Configuring Secure Sockets Layer (SSL) to encrypt the transmission of your HTTP channel.

16.1.1. Configuring Security Realms

Since a Security Domain relies on the concept of a Security Realm, we will first learn how Security
Realms are configured. Out of the box, the application server defines the following Security Realms:

• The ManagementRealm: which is used to secure access to the management interfaces (CLI / Web
console)

• The ApplicationRealm: which is used to secure access to your applications

In the next section, we will enter into the details of each Realm and we will learn how to customize
its configuration.

16.1.1.1. The Management Realm

The ManagementRealm is used to control the management instruments of the application server.
Out of the box, the Management Realm is based on a simple authentication and authorization
mechanism which stores user credentials in the file mgmt-users.properties and group
mappings in the file mgmt-groups.properties, as shown by the following configuration snippet:

<security-realm name="ManagementRealm">
  <authentication>
  <local default-user="$local" />
  <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"
/>
  </authentication>
  <authorization map-groups-to-roles="false">
  <properties path="mgmt-groups.properties" relative-to="jboss.server.config.dir"
/>
  </authorization>
</security-realm>

The CLI management interface relies on the local mechanism, which means that any user
connecting from the local host is granted guest access as the user "$local", which does not require
a password in order to access the CLI.

The Web interface and remote CLI clients, on the other hand, require username / password
authentication in order to access the Management interfaces. The user’s details will be loaded by
default from the file mgmt-users.properties which is located in the
$JBOSS_HOME/standalone/configuration or $JBOSS_HOME/domain/configuration depending on the
running mode of the server.

16.1.1.2. The Application Realm

The ApplicationRealm is used to secure applications that expose services, such as EJBs. Here is the
default configuration of the ApplicationRealm:

<security-realm name="ApplicationRealm">
  <server-identities>
    <ssl>
      <keystore path="application.keystore" relative-to="jboss.server.config.dir"
                keystore-password="password" alias="server" key-password="password"
                generate-self-signed-certificate-host="localhost"/>
    </ssl>
  </server-identities>
  <authentication>
    <local default-user="$local" allowed-users="*" skip-group-loading="true"/>
    <properties path="application-users.properties" relative-to="jboss.server.config.dir"/>
  </authentication>
  <authorization>
    <properties path="application-roles.properties" relative-to="jboss.server.config.dir"/>
  </authorization>
</security-realm>

The ApplicationRealm configuration is slightly more complex than the ManagementRealm since it
is used both for authentication and authorization of service invocations.

• The authentication process is pretty similar to its ManagementRealm counterpart, since it
allows a guest authentication (named $local) for local application clients. The only difference
is that users are taken from the file named application-users.properties located in
$JBOSS_HOME/standalone/configuration or $JBOSS_HOME/domain/configuration depending on the
running mode of the server.

• The authorization process, on the other hand, relies on the file named
application-roles.properties, which is also located in the server’s configuration folder.

Since Application Realms are able to define an identity for the server, they can be used for both
inbound connections to the server and outbound connections being established by the server.

A typical example is the Remoting subsystem, which is used to drive your connections towards
the EJB container and, by default, is bound to the ApplicationRealm:

<subsystem xmlns="urn:jboss:domain:remoting:4.0">
  <http-connector name="http-remoting-connector" connector-ref="default"
                  security-realm="ApplicationRealm"/>
</subsystem>

You can add users to the default ApplicationRealm by running the add-user.sh script and selecting
"Application User" as the option.

16.2. WildFly Security Domains


The other key component of the application server security is the Security Domain, which defines
all the authentication and authorization policies to be used by the application server. The security
domain in turn contains login modules that implement the Security Domain’s principal
authentication and role-mapping behavior. Security domains are defined in the security
subsystem which, out of the box, already contains a few security domains. Here is the first one:

<subsystem xmlns="urn:jboss:domain:security:2.0">
  <security-domains>
    <security-domain name="other" cache-type="default">
      <authentication>
        <login-module code="Remoting" flag="optional">
          <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
        <login-module code="RealmDirect" flag="required">
          <module-option name="password-stacking" value="useFirstPass"/>
        </login-module>
      </authentication>
    </security-domain>
    . . . . .
  </security-domains>
</subsystem>

The "other" security domain is a basic security domain that can be used for our first experiments
with the ApplicationRealm. As you can see, this domain defines a couple of login modules named,
respectively, "Remoting" and "RealmDirect".

• The Remoting login module is used internally when authentication requests are received over a
Remoting connection (usually an EJB call).

• The RealmDirect is triggered when the request did not arrive over a Remoting connection (e.g.
when logging into a Web application).

Besides the "other" security domain, you can also find two other security domains named,
respectively, jboss-ejb-policy and jboss-web-policy. These domains define the default
authorization modules that should be used if none is found in the application security domain. As
they are used internally by the security framework (PicketBox), most of the time you shouldn’t
care about these domains at all. We include them here for your reference:

<security-domain name="jboss-web-policy" cache-type="default">
  <authorization>
    <policy-module code="Delegating" flag="required"/>
  </authorization>
</security-domain>
<security-domain name="jboss-ejb-policy" cache-type="default">
  <authorization>
    <policy-module code="Delegating" flag="required"/>
  </authorization>
</security-domain>

16.2.1. Security under the hood

If you paid attention to the definition of the built-in login modules, you should have noticed the flag
attribute which controls the overall behavior of the authentication stack:

<login-module code="RealmDirect" flag="required">

This attribute can be set to the following values:

• Required: The LoginModule is required to succeed. Whether it succeeds or fails, authentication
continues to proceed down the LoginModule list.

• Requisite: The LoginModule is required to succeed. If it succeeds, authentication continues
down the LoginModule list. Should it fail, control immediately returns to the application
(authentication does not proceed down the LoginModule list).

• Sufficient: The LoginModule is not required to succeed. If it does succeed, control immediately
returns to the application (authentication does not proceed down the LoginModule list). Should
it fail, authentication continues down the LoginModule list.

• Optional: The LoginModule is not required to succeed. Whether it succeeds or fails, authentication
continues to proceed down the LoginModule list.

The overall authentication succeeds only if all required and requisite LoginModules are successful.
If no required or requisite LoginModules are configured for an application, then at least one
sufficient or optional LoginModule must succeed. In the following sections, we will show how to
add a Security Domain for securing your applications using some built-in login modules such as:

• RealmDirect login module: used to store users and roles in simple text files

• Database login module: used to store credentials in a relational database

• LDAP login module: used to store the users and roles in a directory tree such as OpenLdap.
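To make the flag semantics concrete, here is a minimal, self-contained Java sketch that simulates how a stack of login modules is evaluated according to the rules above. This is an illustration of the JAAS evaluation rules only, not WildFly’s actual implementation:

```java
import java.util.List;

public class FlagSemantics {

    enum Flag { REQUIRED, REQUISITE, SUFFICIENT, OPTIONAL }

    // One entry in the authentication stack: a flag plus whether
    // the corresponding login module succeeded.
    record Module(Flag flag, boolean succeeded) {}

    // Evaluates the stack according to the JAAS rules described above.
    static boolean authenticate(List<Module> stack) {
        boolean sawRequired = false;   // any required/requisite module configured
        boolean requiredOk = true;     // all required/requisite modules succeeded so far
        boolean anyOptionalOk = false; // at least one sufficient/optional succeeded

        for (Module m : stack) {
            switch (m.flag()) {
                case REQUIRED -> {
                    sawRequired = true;
                    requiredOk &= m.succeeded(); // failure remembered, stack continues
                }
                case REQUISITE -> {
                    sawRequired = true;
                    if (!m.succeeded()) return false; // failure stops the stack
                }
                case SUFFICIENT -> {
                    if (m.succeeded()) return requiredOk; // success short-circuits
                    // failure: just continue down the stack
                }
                case OPTIONAL -> anyOptionalOk |= m.succeeded();
            }
        }
        return sawRequired ? requiredOk : anyOptionalOk;
    }

    public static void main(String[] args) {
        // Same shape as the "other" domain: optional Remoting + required RealmDirect
        System.out.println(authenticate(List.of(
                new Module(Flag.OPTIONAL, false),
                new Module(Flag.REQUIRED, true)))); // prints true
    }
}
```

This mirrors why the "other" domain still authenticates Web requests when the optional Remoting module fails: the required RealmDirect module alone decides the outcome.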

16.2.2. Using the RealmDirect login module

The RealmDirect is one of the simplest login modules and is used, as we said, for authenticating
the current request if it was not already authenticated by the Remoting login module.

<login-module code="RealmDirect" flag="required">
  <module-option name="password-stacking" value="useFirstPass"/>
</login-module>

The advantage of using this login module is that you don’t need to provide any backing store
configuration, as the Security Domain will simply delegate to the Realm. In fact, the user and role
storage for RealmDirect is contained in the ApplicationRealm:

<security-realm name="ApplicationRealm">
  <authentication>
  <local default-user="$local" allowed-users="*"/>
  <properties path="application-users.properties"
  relative-to="jboss.server.config.dir"/>
  </authentication>
  <authorization>
  <properties path="application-roles.properties"
  relative-to="jboss.server.config.dir"/>
  </authorization>
</security-realm>

The RealmDirect login module contains just one option, password-stacking, which can be
used if multiple login modules are chained together in a stack.

If you don’t want to delegate to the ApplicationRealm for the user/roles file names location, you can
add the usersProperties and rolesProperties to your login module as shown by the following
example:

<module-option name="usersProperties"
  value="${jboss.server.config.dir}/custom-users.properties"/>
<module-option name="rolesProperties"
  value="${jboss.server.config.dir}/custom-roles.properties"/>

16.2.2.1. Adding new Application users

Having described the available configuration options, it is now time to add some users to our
ApplicationRealm so that the RealmDirect login module will use them for authentication. In order
to do that, you can use the add-user.sh script (or add-user.bat on Windows), choosing to create a
new Application User and entering the credentials and the group (in our case the "Admin" group) to
which the user belongs:

$ ./add-user.sh

What type of user do you wish to add?
 a) Management User (mgmt-users.properties)
 b) Application User (application-users.properties)
(a): b

Enter the details of the new user to add.
Using realm 'ApplicationRealm' as discovered from the existing property files.
Username : francesco
Password recommendations are listed below. To modify these restrictions edit the
add-user.properties configuration file.
 - The password should not be one of the following restricted values {root, admin, administrator}
 - The password should contain at least 8 characters, 1 alphabetic character(s), 1 digit(s), 1 non-alphanumeric symbol(s)
 - The password should be different from the username
Password :
Re-enter Password :
What groups do you want this user to belong to? (Please enter a comma separated
list, or leave blank for none)[ ]: Admin

The above user will be added into the application-users.properties using the following format:

username=HEX( MD5( username ':' realm ':' password))

so, in our example it will be:

francesco=b6deaea47caf6a533cc8c60fe372063d

And its role into will be added as clear text into application-roles.properties :

francesco=Admin
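For clarity, the hash format above can be reproduced with a short, self-contained Java sketch; the password used here ("secret1!") is a made-up placeholder, not the value entered in the add-user session:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PropertyFileHash {

    // HEX(MD5(username ':' realm ':' password)), the format used by
    // application-users.properties and mgmt-users.properties.
    static String hash(String username, String realm, String password) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest((username + ":" + realm + ":" + password)
                        .getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        // "secret1!" is a hypothetical password for illustration only
        System.out.println("francesco=" + hash("francesco", "ApplicationRealm", "secret1!"));
    }
}
```

Note that the realm name is part of the digest, which is why the property files cannot simply be copied between realms with different names.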

16.2.2.2. Defining the roles into your applications

Once you have defined the server security stack, it is time to secure your applications with it. The
steps required will differ depending on whether you are securing a Web application or an EJB
application.

Web applications:

Web applications configure their security in two different places: in the Java EE configuration file
(web.xml) you declare which roles are required for a set of URL patterns, while in the
jboss-web.xml configuration file you declare which Security Domain will be used to verify the
login credentials entered by the user (in our example, the "other" Security Domain). Here’s the
web.xml:

<web-app>
  . . .
  <security-constraint>
  <web-resource-collection>
  <web-resource-name>HtmlAuth</web-resource-name>
  <description>application security constraints</description>
  <url-pattern>/*</url-pattern>
  <http-method>GET</http-method>
  <http-method>POST</http-method>
  </web-resource-collection>
  <auth-constraint>
  <role-name>Admin</role-name>
  </auth-constraint>
  </security-constraint>
  <login-config>
  <auth-method>BASIC</auth-method>
  <realm-name>UserRoles simple realm</realm-name>
  </login-config>
  <security-role>
  <role-name>Admin</role-name>
  </security-role>
</web-app>

And here’s the jboss-web.xml configuration file which needs to be placed into the WEB-INF folder of
your web application:

<jboss-web>
  <security-domain>other</security-domain>
</jboss-web>

If you don’t include any reference to a security domain in your application, the "other" security
domain will be used by default by the application server.

EJB applications:

In order to secure EJB applications, you can either use the jboss-ejb3.xml configuration file or apply
Java EE annotations such as:

• @javax.annotation.security.PermitAll: Indicates that the given method or all business
methods of the given EJB are accessible by everyone.

• @javax.annotation.security.DenyAll: Indicates that the given method in the EJB cannot be
accessed by anyone.

• @javax.annotation.security.RolesAllowed: Indicates that the given method or all business
methods in the EJB can be accessed by users that are associated with the list of roles.

• @javax.annotation.security.DeclareRoles: Defines roles for security checking. To be used by
EJBContext.isCallerInRole, HttpServletRequest.isUserInRole, and
WebServiceContext.isUserInRole.

• @javax.annotation.security.RunAs: Specifies the RunAs role for the given components.

By combining the Java EE annotations with @org.jboss.ejb3.annotation.SecurityDomain you can
specify the Security Domain and the conditions which will be used to protect your applications.

In the following example, we are restricting access to the EJB named SafeEJB using the Admin role
that we defined earlier:

import javax.annotation.security.RolesAllowed;
import javax.ejb.Stateless;

import org.jboss.ejb3.annotation.SecurityDomain;

@Stateless
@SecurityDomain("other")
@RolesAllowed({ "Admin" })
public class SafeEJB {
. . . . . .
}

Annotations can also be applied at method level as shown in the following example:

@RolesAllowed( { "Admin" })
public void persistData() {
. . . .
}

In WildFly 18, the presence of any security metadata (like @RolesAllowed,
@PermitAll, @DenyAll, @RunAs, @RunAsPrincipal) on the bean or on any business
method of the bean makes the bean secure, even in the absence of an explicitly
configured security domain. In such cases, the security domain name defaults to
"other". Users can explicitly configure a security domain for the bean if they
want to, using either the annotation or the deployment descriptor approach
explained earlier.

Additionally, since WildFly 18, if you haven’t specified any security policy for a method, the
@DenyAll policy will be applied. You can change this behaviour at the ejb3 subsystem level:

<subsystem xmlns="urn:jboss:domain:ejb3:1.4">
...
  <default-missing-method-permissions-deny-access value="true"/>
...
</subsystem>

This behaviour can be controlled via the jboss-ejb3.xml deployment descriptor at a per bean level
or a per deployment level as follows:

<jboss:jboss
  xmlns="http://java.sun.com/xml/ns/javaee"
  xmlns:jboss="http://www.jboss.com/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:s="urn:security:1.1"
  xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-ejb3-2_0.xsd http://java.sun.com/xml/ns/javaee http://www.jboss.org/j2ee/schema/jboss-ejb3-spec-2_0.xsd"
  version="3.1" impl-version="2.0">

  <assembly-descriptor>
    <s:security>
      <!-- Even wildcard * is supported where * is equivalent to all EJBs in the deployment -->
      <ejb-name>SafeEJB</ejb-name>
      <s:missing-method-permissions-deny-access>false</s:missing-method-permissions-deny-access>
    </s:security>
  </assembly-descriptor>
</jboss:jboss>

16.2.3. Database Login module

The Database Login Module is a Java Database Connectivity-based (JDBC) login module that
supports authentication and role mapping. You can use this login module if you have your
username, password and role information stored in a relational database, which is accessible by
means of a Datasource.

Prerequisites:

• A Datasource for connecting to the Database. Follow the steps described in Creating a
Datasource using the CLI to complete this step.

• Create the required Tables on the Database as described in Configuring a JDBC Realm .

Having completed all the required steps, we will now add a legacy Security Domain to your security
subsystem. The following CLI commands will add a Security Domain named DBLogin:

/subsystem=security/security-domain=DBLogin:add()
/subsystem=security/security-domain=DBLogin/authentication=classic:add()
/subsystem=security/security-domain=DBLogin/authentication=classic/login-module=Database:add(code=Database, flag=required, module-options={"dsJndiName" => "java:/PostGreDS", "principalsQuery" => "select password from USERS where login=?", "rolesQuery" => "select role, 'Roles' from USERS where login=?"})
reload

Once you reload your configuration, you will see the following XML fragment in the security
subsystem of your server:

<subsystem xmlns="urn:jboss:domain:security:1.2">
  <security-domains>
    . . . . . .
    <security-domain name="DBLogin">
      <authentication>
        <login-module code="Database" flag="required">
          <module-option name="dsJndiName" value="java:/PostGreDS"/>
          <module-option name="principalsQuery"
                         value="select password from USERS where login=?"/>
          <module-option name="rolesQuery"
                         value="select role, 'Roles' from USERS where login=?"/>
        </login-module>
        . . . . .
      </authentication>
    </security-domain>
  </security-domains>
</subsystem>

You can download the CLI script required to create the DBLogin Security Domain from here.
http://bit.ly/2PEnP10

As you can see, your login module references the Datasource we have previously created, where
credentials are stored. The application security configuration follows the same guidelines as with
RealmDirect: you will state your URL/role mappings in web.xml, and in your jboss-web.xml you
will reference the DBLogin security domain:

<jboss-web>
  <security-domain>DBLogin</security-domain>
</jboss-web>

The example Web application which uses DBLogin Security Domain is available at:
http://bit.ly/2FQimND

On the other hand, EJB applications will use the @org.jboss.ejb3.annotation.SecurityDomain
annotation to reference your DBLogin module:

@Stateless
@SecurityDomain("DBLogin")
@RolesAllowed( { "Admin" })
public class SafeEJB { // . . . .}

16.2.3.1. Using encrypted database passwords

The login module shown so far stores passwords in the database in clear text. For greater
security, rather than storing passwords in plain text, a one-way hash of the password can be stored
(using an algorithm such as MD5), in a similar fashion to the /etc/passwd file on a UNIX system. This
has the advantage that anyone reading the hash won’t be able to use it to log in. However, there is
no way of recovering the password should the user forget it, and it also makes administration
slightly more complicated because you also have to calculate the password hash yourself to put it in
your security database. This is not a major problem though. To enable password hashing in the
database login module, you need to include the following highlighted module options:

<login-module code="Database" flag="required">
  <module-option name="dsJndiName" value="java:/PostGreDS"/>
  <module-option name="principalsQuery"
                 value="select password from USERS where login=?"/>
  <module-option name="rolesQuery"
                 value="select role, 'Roles' from USERS where login=?"/>
  <module-option name="password-stacking" value="useFirstPass"/>
  <module-option name="hashAlgorithm" value="MD5"/>
  <module-option name="hashEncoding" value="base64"/>
</login-module>

This indicates that we want to use MD5 hashes and base64 encoding to convert the binary hash
value to a string. The application server will now calculate the hash of the supplied password using
these options before authenticating the user, so it’s important that we store the correctly hashed
information in the database. If you’re on a Linux/UNIX system, you can use openssl to hash the
value. For example, supposing you want to hash the password "admin":

$ echo -n "admin" | openssl dgst -md5 -binary | openssl base64

ISMvKXpXpadDiUoOSoAfww==

As an alternative, you can use the Base64Encoder class that is part of PicketBox modules as
follows:

$ cd $JBOSS_HOME/modules/system/layers/base/org/picketbox/main

$ java -classpath picketbox-5.0.3.Final.jar org.jboss.security.Base64Encoder admin MD5

[ISMvKXpXpadDiUoOSoAfww==]

Now you can update your database password so to use an encrypted password:

update USERS set password = 'ISMvKXpXpadDiUoOSoAfww==' where login = 'admin';
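If neither openssl nor the PicketBox jar is at hand, the same base64-encoded MD5 digest can also be computed with plain JDK classes; the following is a minimal sketch:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class DatabasePasswordHash {

    // base64(MD5(password)): the encoding expected by the Database login
    // module when hashAlgorithm=MD5 and hashEncoding=base64 are set.
    static String md5Base64(String password) throws Exception {
        byte[] digest = MessageDigest.getInstance("MD5")
                .digest(password.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(digest);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(md5Base64("admin")); // prints ISMvKXpXpadDiUoOSoAfww==
    }
}
```

The output matches the value produced by the openssl pipeline shown above, so either tool can be used to prepare the database rows.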

16.2.4. LDAP Login module configuration

In this section, we will show how to use OpenLDAP as repository for authentication.

Prerequisites: An OpenLDAP service active, with the required Directory entries in it. Follow the
steps described in Configuring an LDAP Realm to install an LDAP Server with some entries in it.

Now in order to use LDAP for Authentication, you can use the LdapExtended Login module,
entering the values of the bindDN and bindCredential contained in slapd.conf or in your Docker
environment variables:

<security-domain name="LDAPAuth">
  <authentication>
    <login-module code="LdapExtended" flag="required">
      <module-option name="java.naming.factory.initial" value="com.sun.jndi.ldap.LdapCtxFactory"/>
      <module-option name="java.naming.provider.url" value="ldap://172.17.0.2:389"/>
      <module-option name="java.naming.security.authentication" value="simple"/>
      <module-option name="bindDN" value="cn=admin,dc=wildfly,dc=org"/>
      <module-option name="bindCredential" value="admin"/>
      <module-option name="baseCtxDN" value="ou=Users,dc=wildfly,dc=org"/>
      <module-option name="baseFilter" value="(uid={0})"/>
      <module-option name="rolesCtxDN" value="ou=Roles,dc=wildfly,dc=org"/>
      <module-option name="roleFilter" value="(member={1})"/>
      <module-option name="roleAttributeID" value="cn"/>
      <module-option name="searchScope" value="ONELEVEL_SCOPE"/>
      <module-option name="allowEmptyPasswords" value="true"/>
    </login-module>
  </authentication>
</security-domain>

The CLI script required to generate the above Security Domain is available at: http://bit.ly/2pjIES7

Within our login module, we first need to specify the organization unit containing the users,
through the baseCtxDN option, as well as the organization unit which contains the roles, through
the rolesCtxDN option.

The baseFilter option is a search filter used to locate the context of the user to authenticate.

The roleFilter is as well a search filter used to locate the roles associated with the authenticated
user.

The searchScope option sets the search scope; ONELEVEL_SCOPE searches directly under the
named roles context.

Finally, allowEmptyPasswords is a flag indicating whether empty (length==0) passwords should be
passed to the LDAP server.

16.2.5. Login not working?

If, for some reason, you are not able to log in, the first remedy is to set the verbosity of the
org.jboss.security packages to TRACE to see more details about your error:

<logger category="org.jboss.security">
  <level name="TRACE"/>
</logger>
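Another quick check, when LDAP authentication fails, is to verify the bindDN and bindCredential outside WildFly. The following standalone sketch builds the same JNDI environment as the LdapExtended options shown earlier and performs a simple bind using only the JDK; the URL and credentials are the example values from this chapter and must be adapted to your environment:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.directory.InitialDirContext;

public class LdapBindCheck {

    // Builds the JNDI environment equivalent to the LdapExtended options above.
    static Hashtable<String, String> env(String url, String bindDN, String credential) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, bindDN);
        env.put(Context.SECURITY_CREDENTIALS, credential);
        return env;
    }

    public static void main(String[] args) throws Exception {
        if (args.length == 3) {
            // Throws javax.naming.AuthenticationException on bad credentials
            new InitialDirContext(env(args[0], args[1], args[2])).close();
            System.out.println("bind OK");
        }
    }
}
```

Running it as `java LdapBindCheck ldap://172.17.0.2:389 cn=admin,dc=wildfly,dc=org admin` attempts the same initial bind that the login module performs, which quickly tells you whether the failure is in the credentials or in the filter options.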

16.2.6. Auditing Security Domains

Security auditing enables tracing events that happen within the security subsystem. The auditing
mechanism is part of the Security Domain, along with the authentication and authorization
mechanisms that we have already covered.

Out of the box, auditing is not enabled in the server configuration. You can dig into the
management core-service resource to gather information about the audit-log:

/core-service=management/access=audit/logger=audit-log/:read-resource(recursive=false)
{
  "outcome" => "success",
  "result" => {
  "enabled" => false,
  "log-boot" => true,
  "log-read-only" => false,
  "handler" => {"file" => undefined}
  }
}

You can enable it by setting to true the "enabled" attribute:

/core-service=management/access=audit/logger=audit-log/:write-attribute(name=enabled,value=true)

In the default configuration, the auditing is stored in the audit-log.log file under the server’s
data directory:

/core-service=management/access=audit/file-handler=file/:read-resource
{
  "outcome" => "success",
  "result" => {
  "formatter" => "json-formatter",
  "max-failure-count" => 10,
  "path" => "audit-log.log",
  "relative-to" => "jboss.server.data.dir"
  }
}

16.3. Management Security with Login Modules

So far we have learned how to configure several login modules which can be used to store user
names and passwords in a relational database or in a directory service. The same JAAS-based login
modules can also be used as the authentication schema for your management interfaces. For
example, suppose you want to use the DBLogin security domain to control access to the
management interfaces. As you can see, it’s simply a matter of replacing the authentication
schema contained in the ManagementRealm with a jaas element pointing to the DBLogin security
domain:

<security-realms>
  <security-realm name="ManagementRealm">
  <authentication>
  <jaas name="DBLogin"/>
  </authentication>
  </security-realm>
</security-realms>

One side effect of removing the "local" authentication schema from your ManagementRealm is that
you will be prompted for a username and password even when connecting from a local CLI client:

[jboss@localhost bin]$ ./jboss-cli.sh -c
Username: admin
Password:

16.4. Management Security with LDAP


If you don’t want to rely on JAAS based login modules, you can directly specify your LDAP
Connection settings from within your Security Realm. This has the evident advantage that you can
apply any extra level of security on your Realm, for example by encrypting the communication.

As a proof of concept, we will show how to create a Security Realm which is based on the LDAP
directory tree that we have formerly used for the JAAS based login module:

<security-realms>
  . . . . . .
  <security-realm name="LdapRealm">
  <authentication>
  <ldap connection="ldap_connection" base-dn="ou=People,dc=jboss,dc=com">
  <username-filter attribute="uid" />
  </ldap>
  </authentication>
  </security-realm>
. . . . .
</security-realms>

As a first step, we have created a new Security Realm named "LdapRealm". Within the
authentication section, we have included the base-dn for connecting to the Directory service and
the attribute used to filter the username ("uid").

What happens under the hood is that a first connection is made to LDAP to perform a search, using
the supplied user name, to identify the distinguished name of the user. Then a subsequent
connection is made to the server using the password supplied by the user; if this second
connection succeeds, then authentication succeeds.

The Security Realm named LdapRealm is then referenced in the management-interfaces section,
which needs an outbound connection to the LDAP Server and a search base connection string:

<management>
  . . . . . . .
  <management-interfaces>
  <http-interface security-realm="LdapRealm" http-upgrade-enabled="true">
  <socket-binding http="management-http"/>
  </http-interface>
  </management-interfaces>
  . . . . . . .
  <outbound-connections>
  <ldap name="ldap_connection" url="ldap://127.0.0.1:389" search-dn=
"uid=admin,ou=People,dc=jboss,dc=com" search-credential="secret" />
  </outbound-connections>
</management>

16.5. Enabling the Secure Socket Layer on WildFly


We have already introduced Configuring SSL/TLS with Elytron. We will now learn how to install
these certificates on WildFly using the Legacy Security framework.

Assuming that you have created the required certificates and copied them into the configuration
folder of WildFly, the next step will be defining a Security Realm which will contain the keystore
and truststore references:

/core-service=management/security-realm=SSLRealm:add

Next, for one-way SSL, set the keystore path, its location relative to the configuration directory, and the keystore password:

/core-service=management/security-realm=SSLRealm/server-identity=ssl:add(keystore-path="server.keystore", keystore-relative-to="jboss.server.config.dir", keystore-password="secret")

If you are using two-way SSL, you will need to set the path to the truststore, along with its relative
location and password:

/core-service=management/security-realm=SSLRealm/authentication=truststore:add(keystore-password="secret", keystore-path="server.truststore", keystore-relative-to="jboss.server.config.dir")

Finally, set the value of Undertow’s https listener to your Security Realm:

/subsystem=undertow/server=default-server/https-listener=default-https:write-attribute(name=security-realm, value=SSLRealm)

The script to install one-way SSL on WildFly is available at: http://bit.ly/2FT6NFi

The script to install two-way SSL on WildFly is available at: http://bit.ly/2pkw18N

Having completed the certificate installation, we will see how to use it for securing the two basic
application types, that is, Web applications and EJB applications.

16.5.1. Securing Web applications with SSL

As you might guess, since we are going to tweak the Web server configuration, we will need to
operate on the Undertow subsystem. Having our SSLRealm already configured, we need just one
step to enable secure communication on WildFly, which is adding a new https-listener in the
undertow subsystem as shown below:

<subsystem xmlns="urn:jboss:domain:undertow:7.0" default-server="default-server"
           default-virtual-host="default-host" default-servlet-container="default"
           default-security-domain="other">
  . . .
  <server name="default-server">
    <http-listener name="default" socket-binding="http"/>
    <https-listener name="default-https" socket-binding="https"
                    security-realm="SSLRealm"/>
    <host name="default-host" alias="localhost">
      <location name="/" handler="welcome-content"/>
    </host>
  </server>
  . . .
</subsystem>

To declare that HTTPS should be used for a URL in your application, you can set up a security
constraint in the web.xml deployment descriptor with a <user-data-constraint> whose
<transport-guarantee> is CONFIDENTIAL, as follows:

<security-constraint>
  <web-resource-collection>
  <web-resource-name>secure-profile</web-resource-name>
  <url-pattern>/secure/*</url-pattern>
  </web-resource-collection>
  <user-data-constraint>
  <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>

Requests using HTTP (non-secure) for URLs whose transport guarantee is set to CONFIDENTIAL are
automatically redirected to the same URL using HTTPS.

You can verify that your server is running on a secure socket layer by requesting your applications
through the https protocol and using the default port (8443). For example, if you were to deploy
your application named secure.war, then in order to access the welcome page of your application
you would issue: https://localhost:8443/secure

Having a quick look with the WireShark network tool (http://www.wireshark.org/) reveals that the
data being returned by the server is now encrypted.

16.5.1.1. How to secure the application server with a CA signed certificate

If you try to connect via https to your site using a self-signed certificate, the browser security
sandbox will warn the user about the potential security threat. That’s correct as the certificate has
not been signed by any recognized CA.

Having your certificate signed requires issuing a Certificate Signing Request (CSR) to a CA that
will return a signed certificate to be installed on your server. This implies a cost for your
organization, which depends on how many certificates you are requesting, the encryption strength
and other factors. We will document here all the steps that need to be performed:

At first, generate a Certificate Signing Request (CSR) using the keystore. This step has already
been shown in the earlier section Creating your own certificates:

keytool -certreq -alias serverkey -keystore server.keystore -file server.csr
  -keypass mypassword -storepass mypassword

This will create a new certificate signing request named server.csr, bearing the format:

-----BEGIN NEW CERTIFICATE REQUEST-----
. . . . . .
-----END NEW CERTIFICATE REQUEST-----

Now you need to transmit this request to a CA; you can request a trial certificate at Verisign
(http://www.verisign.com), for example.

At the end of the enrollment phase, the CA will return a signed certificate that needs to be
imported into your keystore. Supposing that you have saved your CA certificate in a file named
root.ca:

keytool -import -keystore server.jks -alias testkey1 -storepass mypassword
  -keypass mypassword -file root.ca

Now your web browser will recognize your new certificate as being signed by a CA, so it won’t
complain that it cannot validate the certificate.

16.6. Encrypting the Management Interfaces channel


The certificates that we have created so far can also be used for encrypting the communication
of the management interfaces.

In order to do that, you need to apply an ssl element to the Security Realm used by the management
interface. The ssl element will contain a reference to the keystore created by the keytool utility.
Let’s see how to encrypt the communication channel of your LdapRealm:

<security-realm name="LdapRealm">
  <server-identities>
  <ssl>
  <keystore path="server.keystore" relative-to="jboss.server.config.dir"
keystore-password="mypassword" alias="serverkey" />
  </ssl>
  </server-identities>
  <authentication>
  <ldap connection="ldap_connection" base-dn="ou=People,dc=jboss,dc=com">
  <username-filter attribute="uid" />
  </ldap>
  </authentication>
</security-realm>

As you can see, the <ssl> element has been added to the Realm with a reference to the keystore. You
need one more step, as the management interfaces need to be bound to the management-https
socket binding (instead of the default management-http):

<management-interfaces>
  <http-interface security-realm="LdapRealm" http-upgrade-enabled="true">
  <socket-binding https="management-https"/>
  </http-interface>
</management-interfaces>
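The same two changes can also be applied from the CLI rather than by editing the XML. This is a sketch using the legacy security realm model and the names from the example above; adjust the realm, keystore and alias to your installation:

```
/core-service=management/security-realm=LdapRealm/server-identity=ssl:add(keystore-path=server.keystore, keystore-relative-to=jboss.server.config.dir, keystore-password=mypassword, alias=serverkey)

/core-service=management/management-interface=http-interface:write-attribute(name=secure-socket-binding, value=management-https)

reload
```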

16.7. WildFly support for HTTP/2


This section provides a short overview of the new version of the HTTP protocol, namely the
HTTP/2 protocol. Why do we need a new protocol for Web communication? The standard HTTP/1.1
protocol has served the Web well for more than fifteen years, but its age is starting to show.
Loading a Web page is more resource-intensive than ever, and loading all of those assets efficiently
is difficult, because HTTP/1.1 effectively allows only one outstanding request per TCP connection.
As a consequence, if too many requests are made, performance suffers.

HTTP/2 was developed by the IETF’s HTTP Working Group, which maintains the HTTP protocol. At a
high level, here are the highlights of HTTP/2:

• HTTP/2 is binary, instead of textual. Binary protocols are far more efficient to parse, more
compact "on the wire", and less error prone.

• HTTP/2 is fully multiplexed. Multiplexing allows multiple request and response messages to be
in flight at the same time; it’s even possible to mix parts of one message with another on the
wire.

• HTTP/2 uses header compression to reduce overhead: even mild compression on headers allows
those requests to get onto the wire within one roundtrip; maybe even one packet.

• HTTP/2 allows servers to "push" responses proactively into client caches: when a browser
requests a page, the server sends the HTML in the response, and then needs to wait for the
browser to parse the HTML and issue requests for all of the embedded elements before it can
start sending the JavaScript, images and CSS. Server Push allows the server to avoid this round
trip of delay by "pushing" the responses it thinks the client will need into its cache.

16.7.1. Setting up HTTP/2

Since WildFly 11, HTTP/2 support is built in. For older application server versions, on the other
hand, you need to enable it yourself and provide a valid certificate to encrypt the
communication. You can check that your server has enabled support for HTTP/2 by inspecting the
enable-http2 attribute of the HTTPS listener:

/subsystem=undertow/server=default-server/https-listener=https:read-attribute(name=enable-http2)
{
  "outcome" => "success",
  "result" => true
}

The current implementation of HTTP/2 requires the use of encryption (TLS).
 Besides that, no browser currently supports unencrypted HTTP/2.

If you have installed a certificate on your default Security Realm, that one will be used to encrypt
the communication; otherwise, the default (self-signed) certificate will be used.

Let’s check that HTTP/2 is working by invoking a page that uses https (e.g. https://localhost:8443).

There is no UI element that tells you that you’re talking HTTP/2. One way to figure it out, for
example in Firefox, is to open "Web Developer → Network" and check the response headers
returned by the server. The response version will read "HTTP/2.0", and Firefox also inserts its own
header called "X-Firefox-Spdy:", as shown in the screenshot below:
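If you prefer the command line, a curl binary built with HTTP/2 support can confirm the negotiated protocol as well; the -k flag skips validation of the self-signed certificate:

```
$ curl -skI --http2 https://localhost:8443/
```

If the negotiation succeeded, the first line of the response headers reads HTTP/2 instead of HTTP/1.1.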

17. Chapter 17: RBAC and other Constraints
In this chapter we will discuss cross-cutting topics such as Role Based Access Control and Security
Constraints, which can be configured regardless of which security framework you have adopted
(Elytron or the legacy security domain).

• In the first part of this chapter we will learn how to configure Role Based Access Control to
secure your management interfaces

• Next we will learn how to apply further customization to management users by configuring
fine-grained Constraints on them

17.1. Configuring Role Based Access Control


When you create a Management user in WildFly, by default the user is entitled to full
management of the application server’s resources. WildFly, however, allows you to configure a more
sophisticated approach called Role-Based Access Control (RBAC), where administration users can
be mapped to one or more standard roles.

In security terms, by role we mean a set of permissions. Permissions, in turn, include a
set of actions and constraints (on data and resources) that can be allowed or denied to users. The
element which controls the current management policy is the access-control stanza,
which by default is set to use the "simple" access control style:

<management>
  <access-control provider="simple">
  <role-mapping>
  <role name="SuperUser">
  <include>
  <user name="$local"/>
  </include>
  </role>
  </role-mapping>
  </access-control>
</management>

With the default settings, all management users are granted the SuperUser role which has
complete access to all resources and operations of the server with no restrictions.

Turning on RBAC allows a "separation of duties" for management users, making it easy for an
organization to spread responsibility between individuals or groups without granting unnecessary
privileges.

Out of the box, seven Roles are defined, each with different permissions on the server
resources. The following table contains the list of the server roles and the related permissions:

Role Description

Monitor Users of the Monitor role have the fewest permissions; the role is meant for
users who need to track and report the performance of the server. Monitor
users cannot modify the server configuration, nor can they access sensitive data
or operations.

Operator Users of the Operator role have all the permissions of the Monitor role plus
the ability to start/stop servers or pause/resume JMS destinations. The
Operator role is a good choice for users who are responsible for the
physical/virtual hosts where application servers are running. Operators
cannot modify server configuration or access sensitive data or operations.

Deployer The Deployer role has the same permissions as the Monitor, but can modify
configuration and state for deployments and any other resource type
classified as an application resource.

Maintainer Users of the Maintainer role have access to view and modify the runtime state
of the server, plus the ability to configure resources and execute operations
that are not classified as sensitive. Thus, the Maintainer role
is the general-purpose role that does not have access to sensitive data and
operations.

Administrator The Administrator role has unrestricted access to all resources and
operations on the server except the audit logging system. Administrator is
the only role (besides the SuperUser) that has access to sensitive data and
operations. This role can also configure the access control system.

Auditor The Auditor role has all the permissions of the Monitor role and can also
view (but not modify) sensitive data, and has full access to the audit logging
system. The Auditor role is the only role other than SuperUser that can
access the audit logging system. Auditors cannot modify sensitive data or
resources.

SuperUser The SuperUser role has all permissions on all resources and operations of the
application server.

17.1.1. Enabling RBAC

Now that we have covered the basics of roles and permissions, we will turn on Role Based Access
Control. In order to do that, you can either update the server’s XML file manually, replacing:

<access-control provider="simple">

With:

<access-control provider="rbac">

or (suggested approach) use the CLI and issue the following command:

/core-service=management/access=authorization/:write-attribute(name=provider,value=rbac)

Now, create a few management (-m) users by specifying just username (-u) and password (-p):

./add-user.sh -m -u f.marchioni -p password1!

./add-user.sh -m -u wildmonitor -p password1!

./add-user.sh -m -u wilddeployer -p password1!

The expected outcome, in your mgmt-groups.properties file, is the following list of users with
no groups associated yet:

f.marchioni=

wildmonitor=

wilddeployer=

Now we will map the above users to some server Roles. Let’s start with the user named
"f.marchioni" and, in order to boost my ego, let’s bind it to the SuperUser Role. This can be done by
digging into the /core-service=management/access=authorization path:

/core-service=management/access=authorization/role-mapping=SuperUser/include=f.marchioni/:add(type=USER,name=f.marchioni)

Next, in order to assign the Monitor and Deployer roles, we will at first define them as follows:

/core-service=management/access=authorization/role-mapping=Monitor/:add

/core-service=management/access=authorization/role-mapping=Deployer/:add

Finally, we will grant to the other two users respectively the Monitor and Deployer Role:

/core-service=management/access=authorization/role-mapping=Monitor/include=wildmonitor/:add(type=USER,name=wildmonitor)

/core-service=management/access=authorization/role-mapping=Deployer/include=wilddeployer/:add(type=USER,name=wilddeployer)
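You can double-check the resulting mappings directly from the CLI, without opening the XML file; these are standard read operations on the authorization resource:

```
/core-service=management/access=authorization:read-children-names(child-type=role-mapping)

/core-service=management/access=authorization/role-mapping=Monitor:read-resource(recursive=true)
```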

If you don’t feel like typing the above CLI commands, you can get them online at:
http://bit.ly/2pmdklR

Reload your configuration before logging in. After committing the above changes your
configuration should look like this:

<access-control provider="rbac">
  <role-mapping>
  <role name="SuperUser">
  <include>
  <user name="$local" />
  <user alias="f.marchioni" name="f.marchioni" />
  </include>
  </role>
  <role name="Deployer">
  <include>
  <user alias="wilddeployer" name="wilddeployer" />
  </include>
  </role>
  <role name="Monitor">
  <include>
  <user alias="wildmonitor" name="wildmonitor"/>
  </include>
  </role>
  </role-mapping>
</access-control>

Now log in to the Admin Console in order to verify the changes. As a first attempt, log in as the
SuperUser (f.marchioni). Once logged in, check your current Role in the upper right corner:

You can also check, from the Access Control upper tab, the list of available users and the Roles that
have been granted to them:

Being a SuperUser grants you every permission over the server configuration: take a quick tour of
the console to verify it. Besides this, as a SuperUser you are also able to switch to any other Role by
clicking on the "Run as .." link. This will lead you to the following screen, where you can
temporarily switch your user to a different Role:

Now log out and log in again with the wildmonitor user. Again, verify the Role in the upper
right corner:

A user with the Monitor Role gets just an overview of the server status: for example, by
selecting the upper Runtime tab you can check the JVM settings from the Server option, which shows
the status of your server JVM:

You will find that the Monitor Role grants you some additional information, such as Environment,
Datasource and Transaction statistics. On the other hand, some critical data, such as the JNDI tree
view, is forbidden, and obviously any change to the server configuration is forbidden as well.

Now let’s test our third user, "wilddeployer". Log out with the former user and then log in as
"wilddeployer". Again, check your Role in the upper right corner:

As you can see from a quick look into Deployments tab, this user is capable of managing
deployments of your applications and any other resource type enabled as an application resource.

17.1.2. Using groups

So far, we have defined individual users and assigned a role to each of them. Even if a user has not
been granted a role directly, they can still operate with that role, provided they are part of a group
that has been granted it. Let’s see a concrete example:

$ ./add-user.sh -m -u wildmaintain -p password1! -g "junior-admin"

In the example above, we have created a user named "wildmaintain" and included it in the
"junior-admin" group (-g). Now let’s log in to the Admin Console as SuperUser and select the
Groups tab, which is available once you have selected the Access Control upper tab.

Fine, now click on the Add button, which will let you define a new Group. Enter "junior-admin" as
the name and choose to include the "Maintainer" Role:

The expected outcome in our configuration will be the following role, which is now related to the
group "junior-admin":

<role name="Maintainer">
  <include>
  <group name="junior-admin"/>
  </include>
 </role>
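The same group-to-role mapping can also be created from the CLI instead of the console. As a sketch, assuming the Maintainer role mapping does not exist yet (the first command fails harmlessly if it does):

```
/core-service=management/access=authorization/role-mapping=Maintainer:add

/core-service=management/access=authorization/role-mapping=Maintainer/include=junior-admin:add(type=GROUP,name=junior-admin)
```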

Now log in to the console using the "wildmaintain" user and check that you have been granted the
Maintainer role:

As a Maintainer user, you will be able to manage the runtime state of your server and its
deployments, yet you will not be able to read or write sensitive information in your
configuration; for example, if you jump into the DataSource security settings, you won’t be able to
read the username/password credentials for the DB:

17.1.3. Defining Scoped Roles for Domain mode

So far, we have just seen a standalone server view of RBAC. When running in Domain mode, you
generally speak in terms of Server Groups and Hosts, and RBAC is no exception. As a
matter of fact, when running in Domain mode you can configure Scoped roles, which are
administrative roles based on the standard roles but constrained to a particular set of
managed domain hosts or server groups. In more detail:

• Host-scoped roles: a role that is host-scoped restricts the permissions of that role to one or
more hosts. This means access is provided to the relevant /host=*/ resource trees but resources
that are specific to other hosts are hidden.

• Server-Group-scoped roles: a role that is server-group-scoped restricts the permissions of that
role to one or more server groups. Additionally, the role permissions will also apply to the
profile, socket binding group, server config and server resources that are associated with the
specified server groups. Any sub-resources within any of those that are not logically related to
the server group will not be visible to the user.

In order to define scoped roles, you need to start your server in Domain mode and enable RBAC
as well for your current profile by issuing:

/core-service=management/access=authorization/:write-attribute(name=provider,value=rbac)

Next, elect one of the users defined so far as SuperUser:

/core-service=management/access=authorization/role-mapping=SuperUser/include=f.marchioni/:add(type=USER,name=f.marchioni)

17.1.3.1. Server Group-scoped roles

We will now define a Server-Group scoped role to allow users of that group to administer just one
Server Group. In order to do that, log in to the Admin console using your SuperUser (only users in
the SuperUser or Administrator roles can manage Scoped Roles!) and select the upper Access
Control tab. Now select the Roles tab, click on the upper (+) button and choose Server Group
Scoped Role. The following screen will be displayed:

Enter main-group-superuser as the name and choose SuperUser as the Base Role. Then, pick the
Server Groups that will be included in this scoped Role, for example the main-server-group.
Click Save. A server reload will be required.
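The console steps above can also be performed with a single CLI command against the server-group-scoped-role resource; the role name mirrors the one chosen in the console:

```
/core-service=management/access=authorization/server-group-scoped-role=main-group-superuser:add(base-role=SuperUser, server-groups=[main-server-group])
```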

Once reloaded, you can either assign a user to the main-group-superuser role or, if you prefer, as a
quick test of your Server Group Role, just choose Run As and select main-group-superuser, as we did
in the following picture:

Now if you navigate to the Runtime tab, for example, you can see that you can only manage the
main-server-group servers, while the other-server-group is not included in the overview panel.

Accordingly, if you choose the Configuration upper tab, you will be able to see just the profiles that
are mapped to the main-server-group.

17.1.3.2. Host-scoped roles

The other available option, Host-scoped roles, restricts access to a particular host.
In the following example, we are running a domain composed of a "master" and a "slave" host. By
selecting "master" as the Scope, we configure the Role as SuperUser just for that host:

As you can see from the Domain view, you can now only select the "master" host from the list of
hosts that are part of the domain:
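As with server-group scoped roles, a host-scoped role can be created from the CLI as well; the role name below is just an example:

```
/core-service=management/access=authorization/host-scoped-role=master-superuser:add(base-role=SuperUser, hosts=[master])
```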

17.1.4. Configuring Constraints

The finest-grained customization that you can apply to your management users is configuring
constraints on individual application server resources. Such constraints can tailor which resources
are considered "sensitive". It is possible to define two types of constraints:

Sensitivity Constraints are a set of resources that are considered "sensitive". A sensitive resource
is generally one that either should be secret, like passwords, or one that will have serious impact on
the server, like networking, JVM configuration, or system properties.

Application Resource Constraints are a set of resources, attributes and operations that are
usually associated with the deployment of applications and services.

17.1.4.1. Configuring Sensitivity Constraints

A sensitive resource is generally one that is not shared with every management user, such as
passwords, network settings or system properties. Resource sensitivity limits which roles are able
to read, write or manage a specific resource.

Sensitivity constraint configuration can be reached from the CLI path at

/core-service=management/access=authorization/constraint=sensitivity-classification

Within the management model, each Sensitivity Constraint is identified as a classification. The
classifications are then grouped into types. Just expand the type element with Tab to discover the
available types:

/core-service=management/access=authorization/constraint=sensitivity-classification/type=
core          jmx       remoting            undertow
datasources   mail      resource-adapters
jdr           naming    security

To configure a sensitivity constraint, use the write-attribute operation to set the
configured-requires-read, configured-requires-write, or configured-requires-addressable
attribute. To make that type of operation sensitive, set the value of the attribute to true; to
make it non-sensitive, set it to false.

Let’s see an example of how to configure a sensitivity constraint for your users. For this purpose, we
will return to our initial configuration, where we defined a user named "wildmaintain" as an
application server Maintainer. If you log in with that user (or any user with the Maintainer role),
you will see that some resources, such as Datasource security settings or Socket bindings, are not
editable:

What we are going to do now is specify that a configured resource, for example a socket binding,
is not write-sensitive, so that it does not require Administrator or SuperUser privileges in order to
write it:

/core-service=management/access=authorization/constraint=sensitivity-classification/type=core/classification=socket-config/:write-attribute(name=configured-requires-write,value=false)

Now move to your server configuration and verify that you are able to edit the socket bindings with
the Maintainer role:
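You can also verify the change from the CLI: reading the classification back shows both the default value and the one you configured:

```
/core-service=management/access=authorization/constraint=sensitivity-classification/type=core/classification=socket-config:read-resource
```

In the output, default-requires-write should still be true while configured-requires-write is now false; the configured value takes precedence.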

17.1.4.2. Configuring Application Constraints

Each Application Resource Constraint defines a set of resources, attributes and operations that are
usually associated with the deployment of applications and services. When an application resource
constraint is enabled, management users with the Deployer role are granted access to the resources
that it applies to.

Application constraint configuration can be reached from the CLI path at

/core-service=management/access=authorization/constraint=application-classification/.

Within the management model, each Application Resource Constraint is identified as a
classification. The classifications are then grouped into types. Just expand the type element with
Tab to discover the available types:

/core-service=management/access=authorization/constraint=application-classification/type=
core          logging   naming    security
datasources   mail      resource-adapters

By default, the only Application Resource classification that is enabled is the core classification,
which includes deployments, deployment overlays, and the deployment operations.

In order to enable an Application Resource, use the write-attribute operation to set the configured-
application attribute of the classification to true. To disable an Application Resource, set this
attribute to false.

For example, let’s see how to enable editing of the Logging subsystem for a Role that, by default,
is not able to do it, such as the Deployer Role. Just log in with a user that is bound to the Deployer
role, or choose Run As Deployer from the SuperUser profile. If you move to the logging
subsystem, you should see that the configuration is not editable:

In order to enable the writing of this subsystem you have to set the configured-application value
to true for the logging type as follows:

/core-service=management/access=authorization/constraint=application-classification/type=logging/classification=logging-profile/:write-attribute(name=configured-application,value=true)

Now, refresh your console and verify that the user is able to edit the log configuration options:

17.2. Configuring Security Manager on WildFly
Every Jakarta EE application server must now be capable of running with a Java security manager
that enforces Java security permissions and prevents application components from executing
operations for which they have not been granted the required permissions.

In Java terms, a permission represents access to a system resource. When running in a Security
Manager context, in order for a resource access to be allowed the corresponding permission must
be explicitly granted to the code attempting the access. A sample policy file entry that grants code
from the /home/jboss directory read access to the file /tmp/abc is:

grant codeBase "file:/home/jboss/" {
  permission java.io.FilePermission "/tmp/abc", "read";
};

17.2.1. Running WildFly with a Security Manager

The first step for using a Security Manager in the application server is activating it. In order to do
that, you can either pass the -secmgr flag to the startup script or set the SECMGR variable to true, by
uncommenting the following line in your standalone.conf:

#Uncomment this to run with Security Manager enabled
SECMGR="true"

Start the application server. Now try to deploy a sample Servlet which tries to save a file on
disk:

PrintWriter writer = new PrintWriter("file.txt", "UTF-8");
writer.println("The first line");
writer.println("The second line");
writer.close();

Once you deploy the Servlet, you will meet the Security Manager for the first time:

java.security.AccessControlException: WFSM000001: Permission check failed (permission
"("java.io.FilePermission" "file.txt" "write")" in code source
"(vfs:/content/Security.war/WEB-INF/classes )" of "null")

So how do we grant our application permission to write? Much like we would with a Java SE
application, we declare the list of permissions in a file. This file, named permissions.xml, needs to be
placed in the META-INF folder of your application:

<permissions xmlns="http://xmlns.jcp.org/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
  http://xmlns.jcp.org/xml/ns/javaee/permissions_7.xsd" version="7">
  <permission>
  <class-name>java.io.FilePermission</class-name>
  <name>*</name>
  <actions>read,write</actions>
  </permission>
</permissions>

Notice the read,write actions, which authorize every class that is part of the deployment to perform
read and write operations on files. Deploy the application and check that it now works.
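As a quick packaging check, the descriptor must end up at META-INF/permissions.xml at the top level of the archive (for a WAR this is the WAR’s own META-INF, not WEB-INF/classes/META-INF). Listing the archive should show it (the archive name is just the one used in the example above):

```
$ jar tf Security.war | grep permissions.xml
META-INF/permissions.xml
```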

Another kind of resource that is typically shielded by the Security Manager is reading/writing a
System Property. You can check for yourself that reading a System Property now throws a security
exception:

2016-10-10 09:33:26,306 ERROR [io.undertow.request] (default task-59) UT005023:
Exception handling request to /Security/test: java.security.AccessControlException:
WFSM000001: Permission check failed (permission "("java.util.PropertyPermission"
"java.home" "read")" in code source "(vfs:/content/Security.war/WEB-INF/classes )" of
"null")

Just as we did for the file, we can let an application read/write a System Property via a permission
block:

<permission>
  <class-name>java.util.PropertyPermission</class-name>
  <name>*</name>
  <actions>read,write</actions>
</permission>

17.2.2. Coding Permissions in the configuration file

Besides adding a deployment descriptor, it is also possible to define security policies at server level;
this way, they can apply to multiple applications. Out of the box, the application server contains a
subsystem named security-manager, which by default defines just the upper limit of the permission
policies that can be granted to deployments:

<subsystem xmlns="urn:jboss:domain:security-manager:1.0">
  <deployment-permissions>
  <maximum-set>
  <permission class="java.security.AllPermission"/>
  </maximum-set>
  </deployment-permissions>
</subsystem>

In order to define permissions for a specific class, we need to populate the minimum-permissions
attribute (rendered as the minimum-set element in the XML): the following CLI command
reproduces the same permission you needed to read/write a file:

/subsystem=security-manager/deployment-permissions=default:write-attribute(name=minimum-permissions,value=[{class="java.io.FilePermission",name="*",actions="read,write"}])

The result will be the following security-manager configuration:

<subsystem xmlns="urn:jboss:domain:security-manager:1.0">
  <deployment-permissions>
  <minimum-set>
  <permission class="java.io.FilePermission" name="*" actions="read,write"/>
  </minimum-set>
  <maximum-set>
  <permission class="java.security.AllPermission"/>
  </maximum-set>
  </deployment-permissions>
</subsystem>

17.2.3. Restricting permissions at module level

Finally, it is worth mentioning that the permissions you have defined via permissions.xml or the
configuration file can be narrowed through the module.xml file, which describes a set of classes
loaded by the application server. For example, let’s say we want to impose a write
restriction at module level for our Servlet; then, within the module.xml file of the javax.servlet.api
module, we would need to code the following permissions block:

<module xmlns="urn:jboss:module:1.3" name="javax.servlet.api">
  <resources>
  <resource-root path="jboss-servlet-api_3.1_spec-1.0.2.Final.jar"/>
  </resources>
  <permissions>
  <grant permission="java.io.FilePermission" name="/home/jboss/wildfly-
20.0.0.Final/modules/system/layers/base/javax/servlet/api/main/jboss-servlet-
api_3.1_spec-1.0.2.Final.jar" actions="read"/>
  </permissions>
</module>

As you can see from the following log, if you try to violate the restriction imposed by the module
permission, a different error message, related to the specific module, will be issued:

2016-10-10 09:38:51,295 ERROR [io.undertow.request] (default task-28) UT005023:
Exception handling request to /Security/test: java.security.AccessControlException:
WFSM000001: Permission check failed (permission "("java.io.FilePermission" "file-
name.txt" "write")" in code source "(jar:file:/home/jboss/wildfly-
20.0.0.Final/modules/system/layers/base/jboss-servlet-api_3.1_spec-1.0.2.Final.jar!/
)" of "null")

18. Chapter 18: Taking WildFly in the cloud
The JBoss middleware stack is designed to help enterprises move their Java server applications from
bare-metal servers to the cloud with a lightweight, highly modular, cloud-native platform. Newer
releases of WildFly have been optimized for cloud environments, with faster bootstrap and
adjustable feature-pack provisioning. Cloud applications can be developed and run on OpenShift,
which is Red Hat’s cloud development Platform as a Service (PaaS). This open source cloud-based
platform allows developers to create, test and run their applications and deploy them to the cloud
in a snap.

The basic units of OpenShift Container Platform applications are called containers. Linux
container technologies are lightweight mechanisms for isolating running processes so that they are
limited to interacting with only their designated resources.

Many application instances can run in containers on a single host without visibility into
each other’s processes, files, network, and so on. Typically, each container provides a single service
(often called a "micro-service"), such as a web server or a database, though containers can be used
for arbitrary workloads.

The Linux kernel has been incorporating capabilities for container technologies for years. More
recently the Docker project has developed a convenient management interface for Linux containers
on a host. OpenShift Container Platform and Kubernetes add the ability to orchestrate Docker-
formatted containers across multi-host installations. In this chapter, we will learn how to create a
fast-paced WildFly cloud environment covering the following topics:

• The basics of Docker engine

• How to utilize (pull) and create (build) images of WildFly

• How to install and use an OpenShift Cluster using Red Hat Code Ready Containers technology
(CRC)

• Deploying a complete Enterprise application using CRC

18.1. Getting Started with Docker


Docker is an open-source project that can be used to automate the provisioning of applications
inside components called "containers". This provides an additional layer of abstraction and
automation of applications running on an operating system.

Basically, Docker is a tool to create and work with containers. The word "container" is not new to
Linux users: we can define a Linux container as an operating-system-level virtualization method
for running multiple isolated Linux systems (containers) on a single control host.

Most of you are certainly aware of virtualization tools like VMware or VirtualBox. Containers,
however, are extremely lightweight compared to virtual machines.

A virtual machine runs a full-fledged operating system on your host OS. This means that if you plan
to use 3 VMs, each with 2 GB of RAM and 5 GB of disk space, then you will require 6 GB of RAM
and 15 GB of disk space just to run your VMs! Containers, on the other hand, share resources with
the operating system. They just have isolated process spaces and they use a layered file system
called AuFS. This means that containers can share the common base layers between them. All this
makes them much lighter than virtual machines.

18.1.1. Installing Docker

Docker is available in two versions:

• Community Edition (CE): The Docker CE is ideal for developers and small teams looking for a
quick start with Docker and container-based applications. We will be using this version in our
examples.

• Enterprise Edition (EE): The EE features additional capabilities such as a certified
infrastructure, image management, and image security scanning

The installation of Docker is fully documented at: https://docs.docker.com/install/ .

You can follow several installation techniques, depending on your needs:

• In a medium-term perspective, you might want to ease the upgrade of Docker, so most users choose
to set up Docker’s repositories and install and upgrade from there, as documented here:
https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-using-the-repository

• Another option, which turns out to be pretty useful if you are installing Docker on a machine which
is offline, requires downloading the package and installing it manually, handling upgrades
manually as well. ( https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-from-a-package )

• Finally, for a quick and easy installation, you can use the automated script, which will detect
your operating system and install Docker accordingly. For the sake of simplicity, we will
choose this option and proceed with the Docker installation using the following steps:

The automated script can be downloaded from https://get.docker.com as follows:

$ curl -fsSL https://get.docker.com -o get-docker.sh

Then execute it with:

$ sh get-docker.sh

If you would like to run Docker as a non-privileged user, you should now consider adding your user
to the docker group by executing:

$ sudo usermod $(whoami) -G docker -a

For the change to take effect, you will need to log out and log back in. We can check that,
effectively, our user is now in the docker group by looking at the output of this command:

$ groups $(whoami)

The output should include docker in the list of your groups. Now you can verify that you can run
Docker commands without a root user (or sudo ):

$ docker run hello-world

The preceding command pulls the hello-world test image from the Docker registry and runs it in a
container. When the container runs, it prints an informational message and exits:

Status: Downloaded newer image for hello-world:latest


Hello from Docker!

This message shows that your installation appears to be working correctly.

Having verified the installation, it is time to start the Docker daemon. You can use the systemctl
tool for this purpose:

$ sudo systemctl start docker

If you want Docker to start at boot, you should also enable the service:

$ sudo systemctl enable docker

As an alternative, if you are running an older distribution, you may need to use the service
command to start Docker and chkconfig to register it to start at boot:

$ sudo service docker start
$ sudo chkconfig docker on

18.2. Running WildFly images


Running WildFly with Docker is easier than you think: as a matter of fact, several pre-built
images are already available. Other options include starting from an operating system image (like
Fedora) and extending it with the OpenJDK and WildFly components. Official WildFly Docker
images are released on https://quay.io/organization/wildfly . From there, you will find:

• WildFly S2I image: Build a WildFly application as a reproducible Docker image using source-to-
image. The resulting image can be run using Docker.

• WildFly Runtime image: An image that contains the minimal dependencies needed to run
WildFly with a deployed application. This image is not runnable on its own; it is meant to be
used to chain a docker build with an image created by the WildFly S2I builder image.

• WildFly Operator: used to run the Kubernetes/OpenShift Operator for WildFly Application
Server

Source-to-Image (S2I) is a toolkit and workflow for building reproducible
 container images from source code. S2I produces ready-to-run images by
injecting source code into a container image and letting the container prepare
that source code for execution. By creating self-assembling builder images, you
can version and control your build environments exactly like you use container
images to version your runtime environments.

Right now we will start by pulling the "wildfly-centos7" image from quay.io which, as the name
suggests, runs on top of a CentOS 7 distribution:

$ docker pull quay.io/wildfly/wildfly-centos7

Now check that the image has been included in your repository:

$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
quay.io/wildfly/wildfly-centos7 latest 4 months ago 1.06 GB

Now let’s test our image with the run command:

$ docker run -it quay.io/wildfly/wildfly-centos7

The console should display a standard WildFly boot sequence:

14:41:33,725 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0060: Http
management interface listening on http://127.0.0.1:9990/management
14:41:33,725 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0051: Admin console
listening on http://127.0.0.1:9990
14:41:33,725 INFO [org.jboss.as] (Controller Boot Thread) WFLYSRV0025: WildFly Full
20.0.0.Final (WildFly Core 12.0.1.Final) started in 5854ms - Started 391 of 576
services (324 services are lazy, passive or on-demand)

What happened? The application server just started, embedded in a container. A container id has
been assigned to the docker process:

$ docker ps
CONTAINER ID   IMAGE                             COMMAND                  CREATED          STATUS          PORTS                NAMES
e2c86310ce41   quay.io/wildfly/wildfly-centos7   "container-entrypo..."   46 seconds ago   Up 45 seconds   8080/tcp, 8778/tcp   silly_montalcini

The container id is useful for several purposes, such as finding the IP address of the virtual
interface created by Docker. You can find it by using the docker inspect command, filtering on the
NetworkSettings.IPAddress parameter and including the container id:

$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' e2c86310ce41
172.17.0.2

Now you can verify that WildFly is running by opening the browser at the bound address on port
8080: http://172.17.0.2:8080. The Welcome page of WildFly should appear.

If you are making experiments with Docker, it is a good idea to run the
 Docker images with the --rm option, which automatically removes the
container when its process completes:

$ docker run --rm -it quay.io/wildfly/wildfly-centos7

18.3. Extending WildFly’s image


The above example is just fine if you want to provision a container image to be used as-is by your
customers. On the other hand, if you are planning to configure the application server or deploy
applications on top of it, you will need to customize the default image.

The simplest way to customize your container is by means of a Dockerfile. A Dockerfile is a
special file which contains a set of commands that can be used to customize the Docker image. A
docker build using the Dockerfile will result in a new image, ready to be used.

Let’s have a sneak peek at a basic Dockerfile. Create a file named "Dockerfile" and include the
following content:

FROM quay.io/wildfly/wildfly-centos7

RUN /opt/wildfly/bin/add-user.sh -m -u admin -p Password1! --silent

CMD ["/opt/wildfly/bin/standalone.sh", "-b", "0.0.0.0", "-bmanagement", "0.0.0.0"]

Here is the meaning of the above commands:

The FROM command sets the base image for the build process, which will be extended by the next
commands.

The RUN command allows executing commands at build time. In our case we use it to create a
management user to allow remote management of the application server.

The CMD instruction is similar to the RUN instruction as it allows executing commands as well.
The difference, however, is that the commands supplied through the RUN instruction are executed
at build time, whereas the commands specified through the CMD instruction are executed when
the container is launched from the newly created image. Therefore, the CMD instruction provides
a default execution for this container.

Save the Dockerfile. Now execute a docker build, including a tag option that will allow you to
quickly identify your custom image in the local repository, and the path (".") to the directory
containing the Dockerfile:

$ docker build --tag=wildfly-admin .

Now verify that the new image is enlisted in your docker images output:

$ docker images
REPOSITORY      TAG      IMAGE ID       CREATED          VIRTUAL SIZE
wildfly-admin   latest   487adbf6b245   10 seconds ago   1.06 GB

Finally, run it with:

$ docker run -it -p 9990:9990 wildfly-admin

Once executed, the main difference from the former example is that you will be able to connect to
the management Web console or CLI of the container. This is because we have forwarded port
9990 of the container to port 9990 of the host machine. So, assuming that the virtual interface of
the container is available on the IP address 172.17.0.2, you can connect from your local WildFly
installation as follows:

$ ./jboss-cli.sh -c controller=172.17.0.2:9990

Provide the credentials which you added in the Dockerfile via the add-user.sh script. In much the
same way, you can use the Web console by pointing your browser to http://172.17.0.2:9990

When you are done, you can issue the following command to stop all running containers:

$ docker stop $(docker ps -a -q)

18.3.1. Deploying applications on top of the WildFly image

If you have an available management channel, you can actually deploy applications on your
WildFly image. However, exposing the management channels when you are provisioning
applications to your customers may not be what you want. As an alternative, the simplest and
possibly best way is to use the ADD command to inject your applications into the container by
means of the deployment scanner.

In order to do that, you just need to extend the jboss/wildfly image by creating a new one. Place
your application inside the deployments directory with the ADD command. You can also include
changes to the configuration (if any) as additional steps (RUN commands).

Here is, for example, how to set up a Dockerfile which includes a web application in the container
distribution:

FROM jboss/wildfly

ADD my-app.war /opt/wildfly/standalone/deployments/

Place your my-app.war file in the same directory as your Dockerfile. Run the build with:

$ sudo docker build --tag=wildfly-admin .

Now you can run the container with:

$ sudo docker run -it wildfly-admin

The application will then be deployed at container boot.
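The Dockerfile can also apply configuration changes at build time. A common pattern (a sketch; the file name config.cli and its contents are illustrative) is to run a CLI script against an embedded server during the build:

```dockerfile
FROM jboss/wildfly
# Deploy the application through the deployment scanner
ADD my-app.war /opt/wildfly/standalone/deployments/
# Copy a CLI script containing our configuration changes
ADD config.cli /tmp/config.cli
# Apply the changes at build time; the script is expected to start with
# "embed-server" and end with "stop-embedded-server"
RUN /opt/wildfly/bin/jboss-cli.sh --file=/tmp/config.cli
```

The resulting image then boots with both the application and the configuration already in place, without exposing any management channel.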

18.4. Getting started with Red Hat OpenShift


Red Hat OpenShift is a hybrid cloud, enterprise Kubernetes application platform. Building on top
of the Kubernetes API, OpenShift offers you the ability to easily deploy your application code
directly using a set of pre-defined image builders, or you can bring your own Docker images. With
support in OpenShift for features such as persistent volumes, you are not limited to running 12-
factor or cloud-native applications: you can also deploy databases and many legacy applications
which you otherwise would not be able to run on a traditional PaaS.

OpenShift is available in several flavors:

• Red Hat OpenShift Container Platform (requires subscription): It provides a supported
Kubernetes platform which will let you build, deploy, and manage your container-based
applications consistently across cloud and on-premises infrastructure.

• Red Hat OpenShift Dedicated (requires subscription): It provides a supported, private, high-
availability Red Hat OpenShift cluster hosted on Amazon Web Services or Google Cloud
Platform.

• Red Hat OpenShift Online (several plans available): It provides on-demand access to Red Hat
OpenShift to manage containerized applications.

• Origin Community Distribution of Kubernetes (OKD): It is the upstream version of Red Hat
OpenShift Container Platform that you can freely use in any environment.

• Red Hat CodeReady Containers (CRC): It provides a minimal, preconfigured OpenShift 4.x
single-node cluster on your laptop/desktop computer for development and testing purposes.
CodeReady Containers is delivered as a Red Hat Enterprise Linux virtual machine that supports
native hypervisors for Linux, macOS, and Windows 10.

In this book, we will be using CRC, which is the quickest way to get started building OpenShift
clusters and to emulate the cloud development environment locally with all the tools needed to
develop container-based apps.

18.4.1. Installing Red Hat Code Ready Containers

CRC is available on the Linux, macOS and Windows operating systems. In this section we will
cover the Linux installation; you can refer to the quick-start guide for information about the other
operating systems: https://code-ready.github.io/crc/

On Linux, CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or
newer (including 8.x versions) and on the two latest stable Fedora releases (at the time of writing,
Fedora 30 and 31). CodeReady Containers requires the libvirt and NetworkManager packages,
which can be installed as follows on Fedora/RHEL distributions:

$ sudo dnf install qemu-kvm libvirt NetworkManager

Next, download the latest release of CodeReady Containers for your platform.

 You need to register with a Red Hat account to access and download this product.

Once downloaded, create a folder named .crc in your home directory:

$ mkdir $HOME/.crc

Then unzip the CRC archive in that location and rename it for your convenience:

$ tar -xf crc-linux-amd64.tar.xz -C $HOME/.crc
$ cd $HOME/.crc
$ mv crc-linux-1.6.0-amd64 crc-1.6.0

Next, add it to your system PATH:

$ export PATH=$HOME/.crc/crc-1.6.0:$HOME/.crc/bin:$PATH
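Note that the export above only lasts for the current shell session. To make it permanent, you might append it to your shell profile (a sketch, assuming bash and the CRC 1.6.0 path used above):

```shell
# Persist the PATH change across sessions (assumes bash and CRC 1.6.0)
echo 'export PATH=$HOME/.crc/crc-1.6.0:$HOME/.crc/bin:$PATH' >> ~/.bashrc
# Confirm the line was appended
grep 'crc-1.6.0' ~/.bashrc
```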

Verify that the crc binary is now available:

$ crc version
crc version: 1.6.0+8ef676f
OpenShift version: 4.3.0 (embedded in binary)

Great, your environment is ready. It’s time to start it!

18.4.1.1. Starting OpenShift cluster

The crc setup command performs operations to set up the environment of your host machine for
the CodeReady Containers virtual machine.

This procedure will create the ~/.crc directory if it does not already exist.

Set up your host machine for CodeReady Containers:

$ crc setup
INFO Checking if oc binary is cached
INFO Checking if CRC bundle is cached in '$HOME/.crc'
INFO Checking if running as non-root
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt is enabled
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking for obsolete crc-driver-libvirt
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
Setup is complete, you can now run 'crc start' to start the OpenShift cluster

When the setup is complete, start the CodeReady Containers virtual machine:

$ crc start

When prompted, supply your user’s pull secret, which is available at:
https://cloud.redhat.com/openshift/install/crc/installer-provisioned . The cluster will then start:

INFO Checking if oc binary is cached
INFO Checking if running as non-root
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt is enabled
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
INFO Starting CodeReady Containers VM for OpenShift 4.3.0...
INFO Verifying validity of the cluster certificates ...
INFO Check internal and public DNS query ...
INFO Check DNS query from host ...
INFO Starting OpenShift cluster ... [waiting 3m]
INFO
INFO To access the cluster, first set up your environment by following 'crc oc-env'
instructions
INFO Then you can access it by running 'oc login -u developer -p developer
https://api.crc.testing:6443'
INFO To login as an admin, run 'oc login -u kubeadmin -p 7z6T5-qmTth-oxaoD-p3xQF
https://api.crc.testing:6443'
INFO
INFO You can now run 'crc console' and use these credentials to access the OpenShift
web console
Started the OpenShift cluster

So, out of the box, two users have been created for you: an admin user (kubeadmin) and a
developer user. Their credentials are displayed in the above log. Now reach the OpenShift Web
console with:

crc console

You will be notified that the connection is insecure, as no trusted certificate is associated with
that address. Choose to add an exception in your browser and continue.

After you have entered the username and password, you will be redirected to the OpenShift
Dashboard, which features the default project.

18.4.1.1.1. Troubleshooting CRC installation

Depending on your DNS/network settings, there are some things that can possibly go wrong. A
common issue, which produces the error "Failed to query DNS from host", is normally caused by a
misconfiguration of your DNS in the file /etc/resolv.conf. Check that it contains the following
entries:

search redhat.com
nameserver 127.0.0.1

This issue is discussed in more detail in the following thread: https://github.com/code-ready/crc/issues/976

Another common issue is signaled by the following error message: "Failed to connect to the crc VM
with SSH".

This is often caused by a misconfiguration of your virtual network. It is usually fixed by releasing
any resources currently in use by it and re-creating them through crc setup. Here is the script to
perform these tasks:

crc stop
crc delete
sudo virsh undefine crc --remove-all-storage
sudo virsh net-destroy crc
sudo rm -f /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf \
  /etc/NetworkManager/dnsmasq.d/crc.conf
crc setup
crc start

More details about this are available here: https://github.com/code-ready/crc/issues/711

In general terms, if you find an issue with your CRC cluster, it is recommended to start crc in
debug mode to collect logs:

crc start --log-level debug

Consider sharing the collected logs on http://gist.github.com/ when reporting the issue.

18.4.2. OpenShift quick reference

In order to get started with OpenShift, it is necessary to know some key concepts. Luckily, there is
one command in the 'oc' client tool which provides a reference to all relevant key concepts of the
platform:

$ oc types

Concepts:

* Containers:
  A definition of how to run one or more processes inside of a portable Linux
  environment. Containers are started from an Image and are usually isolated
  from other containers on the same machine.

* Image:
  A layered Linux filesystem that contains application code, dependencies,
  and any supporting operating system libraries. An image is identified by
  a name that can be local to the current cluster or point to a remote Docker
  registry (a storage server for images).

* Pods [pod]:
  A set of one or more containers that are deployed onto a Node together and
  share a unique IP and Volumes (persistent storage). Pods also define the
  security and runtime policy for each container.

The above output has been truncated for brevity. A quicker way to get a description of what can
be set is to use the 'oc explain' command, which can be used with the name of a resource, or with
the path to a specific setting.

For example, you can quickly get the pod documentation this way:

$ oc explain pod
DESCRIPTION:
Pod is a collection of containers that can run on a host. This resource is created by
clients and scheduled onto hosts.

18.5. Deploying WildFly on CRC


As a next step, we will deploy a sample Web application which uses an Enterprise stack of
components (JSF/JPA) to insert and remove records from a database. The first step is to create a
project for your application using the oc new-project command:

$ oc new-project wildfly-demo

The database we will be using in this example is PostgreSQL. A template for this Database is
available under the postgresql name in the Registry used by CRC. Therefore, you can create a new
PostgreSQL application as follows:

$ oc new-app -e POSTGRESQL_USER=wildfly -e POSTGRESQL_PASSWORD=wildfly \
  -e POSTGRESQL_DATABASE=sampledb postgresql

Notice the -e parameters, which are used to set the database attributes using environment
variables. Now, check that the Pod for postgresql is running:

$ oc get pods
NAME READY STATUS RESTARTS AGE
postgresql-1-2dp7m 1/1 Running 0 38s
postgresql-1-deploy 0/1 Completed 0 47s

Done with PostgreSQL, we will add the WildFly application. For this purpose, we need to load the
wildfly-centos7 image stream into our project. That requires admin permissions, therefore log in
as kubeadmin:

$ oc login -u kubeadmin -p 7z6T5-qmTth-oxaoD-p3xQF https://api.crc.testing:6443

Now you can load the wildfly-centos7 image stream in our project:

$ oc create -f https://raw.githubusercontent.com/wildfly/wildfly-s2i/wf-20.0/imagestreams/wildfly-centos7.json

Done with the image stream, you can return to the developer user:

$ oc login
Authentication required for https://api.crc.testing:6443 (openshift)
Username: developer
Password:
Login successful.
You have one project on this server: "wildfly-demo"

Using project "wildfly-demo".

Now everything is ready to launch our WildFly application. We will use an example available on
GitHub at https://github.com/fmarchioni/openshift-jee-sample . To launch the application, we will
pass some environment variables to let WildFly create a PostgreSQL datasource using the correct
settings:

$ oc new-app wildfly~https://github.com/fmarchioni/openshift-jee-sample
--name=openshift-jee-sample -e DATASOURCE=java:jboss/datasources/PostgreSQLDS -e
POSTGRESQL_DATABASE=sampledb -e POSTGRESQL_USER=wildfly -e POSTGRESQL_PASSWORD=wildfly

You might have noticed that we have also passed an environment variable named DATASOURCE.
This variable is used specifically by our application. If you check the content of the file
https://github.com/fmarchioni/openshift-jee-sample/blob/master/src/main/resources/META-INF/persistence.xml,
it should be clear how it works:

<persistence version="2.0"
  xmlns="http://java.sun.com/xml/ns/persistence"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="
  http://java.sun.com/xml/ns/persistence
  http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">
  <persistence-unit name="primary">
  <jta-data-source>${env.DATASOURCE:java:jboss/datasources/ExampleDS}</jta-data-source>
  <properties>
  <property name="hibernate.hbm2ddl.auto" value="create-drop" />
  <property name="hibernate.show_sql" value="false" />
  </properties>
  </persistence-unit>
</persistence>

So, when the environment variable named DATASOURCE is passed, the application will be bound to
that datasource; otherwise, the ExampleDS database will be used as a fallback. Getting back to our
example, the following log will be displayed when you have created the WildFly application:

--> Found image 38b29f9 (4 months old) in image stream "wildfly-demo/wildfly" under
tag "latest" for "wildfly"

  WildFly 20.0.0.Final
  --------------------
  Platform for building and running JEE applications on WildFly 20.0.0.Final

  Tags: builder, wildfly, wildfly18

  * A source build using source code from https://github.com/fmarchioni/openshift-jee-sample will be created
  * The resulting image will be pushed to image stream tag "openshift-jee-sample:latest"
  * Use 'oc start-build' to trigger a new build
  * This image will be deployed in deployment config "openshift-jee-sample"
  * Ports 8080/tcp, 8778/tcp will be load balanced by service "openshift-jee-sample"
  * Other containers can access this service through the hostname "openshift-jee-sample"

--> Creating resources ...
  imagestream.image.openshift.io "openshift-jee-sample" created
  buildconfig.build.openshift.io "openshift-jee-sample" created
  deploymentconfig.apps.openshift.io "openshift-jee-sample" created
  service "openshift-jee-sample" created
--> Success
  Build scheduled, use 'oc logs -f bc/openshift-jee-sample' to track its progress.
  Application is not exposed. You can expose services to the outside world by
executing one or more of the commands below:
  'oc expose svc/openshift-jee-sample'
  Run 'oc status' to view your app.

We need to expose our application, so that it can be accessed remotely:

oc expose svc/openshift-jee-sample
route.route.openshift.io/openshift-jee-sample exposed

In a few minutes, the application will be running as you can see from the list of Pods:

$ oc get pods
NAME READY STATUS RESTARTS AGE
openshift-jee-sample-1-95q2g 1/1 Running 0 90s
openshift-jee-sample-1-build 0/1 Completed 0 3m17s
openshift-jee-sample-1-deploy 0/1 Completed 0 99s
postgresql-1-2dp7m 1/1 Running 0 3m38s
postgresql-1-deploy 0/1 Completed 0 3m47s

Let’s have a look at the logs of the running Pod of openshift-jee-sample:

$ oc logs openshift-jee-sample-1-95q2g

17:44:25,786 INFO [org.jboss.as.server.deployment] (MSC service thread 1-1)


WFLYSRV0027: Starting deployment of "ROOT.war" (runtime-name: "ROOT.war")
17:44:25,793 INFO [org.wildfly.extension.undertow] (MSC service thread 1-2)
WFLYUT0018: Host default-host starting
17:44:25,858 INFO [org.wildfly.extension.undertow] (MSC service thread 1-1)
WFLYUT0006: Undertow HTTP listener default listening on 10.128.0.70:8080
17:44:25,907 INFO [org.jboss.as.ejb3] (MSC service thread 1-2) WFLYEJB0493: EJB
subsystem suspension complete
17:44:26,025 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread
1-2) WFLYJCA0001: Bound data source [java:jboss/datasources/PostgreSQLDS]
17:44:26,026 INFO [org.jboss.as.connector.subsystems.datasources] (MSC service thread
1-2) WFLYJCA0001: Bound data source [java:jboss/datasources/ExampleDS]
 . . . . .

The interesting bit is that java:jboss/datasources/PostgreSQLDS has been successfully bound.
Now reach the application, which is available at the following route address:

$ oc get routes
NAME                   HOST/PORT                                            SERVICES               PORT
openshift-jee-sample   openshift-jee-sample-wildfly-demo.apps-crc.testing   openshift-jee-sample   8080-tcp

A simple Web application will be displayed, which lets you add and remove records that are then
shown in a JSF table.

You can check that your records have been actually committed to the database by logging into the
postgresql Pod:

$ oc rsh postgresql-1-2dp7m

From there, we will use the psql command to list the available databases:

sh-4.2$ psql
psql (10.6)
Type "help" for help.

postgres=# \l
  List of databases
  Name | Owner | Encoding | Collate | Ctype | Access privileges
 ----------+----------+----------+------------+------------+-----------------------
 postgres | postgres | UTF8 | en_US.utf8 | en_US.utf8 |
 sampledb | wildfly | UTF8 | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
  | | | | | postgres=CTc/postgres
 template1 | postgres | UTF8 | en_US.utf8 | en_US.utf8 | =c/postgres +
  | | | | | postgres=CTc/postgres
(4 rows)

Next, let’s use the sampledb database:

postgres=# \c sampledb
You are now connected to database "sampledb" as user "postgres".

Query the list of tables available in this database:

sampledb=# \dt
  List of relations
 Schema | Name | Type | Owner
 -------+----------------+-------+---------
 public | simpleproperty | table | wildfly
(1 row)

The simpleproperty Table has been automatically created thanks to the hibernate.hbm2ddl.auto
setting which has been set to create-drop. Here is the list of records contained:

sampledb=# select * from simpleproperty;


 id | value
 ----+-------
 foo | bar
(1 row)

We have just demonstrated how to deploy a non-trivial example of an Enterprise application
using a database backend, by leveraging Red Hat CodeReady Containers technology.

19. Chapter 19: Configuring MicroProfile
capabilities
The Eclipse MicroProfile project aims at optimizing Enterprise Java for the microservices
architecture. Its APIs are available in WildFly as MicroProfile extensions. In the current release of
WildFly, the following APIs are available:

MicroProfile Config: The goal of this API is to provide a uniform way to configure applications
using MicroProfile extensions.

MicroProfile Fault Tolerance: The goal of this API is to handle the unavailability of a service by
using a set of well-defined policies.

MicroProfile Health Check: The goal of this API is to provide a standard way to check application
readiness and liveness.

MicroProfile JWT Authentication: The goal of this API is to use JSON Web Token (JWT) as an
encoding standard for applications authenticating with tokens and JSON data payload that can be
signed and encrypted.

MicroProfile Metrics: This is a specification that provides a standard way for application servers
to expose metrics, as well as an API for developers to build their own application metrics.

MicroProfile OpenAPI: This API provides a way to document your REST endpoint using
annotations or a pre-generated JSON in a standard way.

MicroProfile OpenTracing: This specification defines behaviors and an API for accessing an
OpenTracing compliant org.eclipse.microprofile.opentracing.Traced object within your JAX-RS
application.

MicroProfile Rest Client: The goal of this API is to provide a type-safe way to invoke REST services
in a Microservices architecture.

In terms of configuration, MicroProfile Config and MicroProfile Health Check expose some
management functions, therefore in the next section we will learn how to access them from the
Command Line Interface.

If you want to learn how to develop Enterprise applications with WildFly and the
 MicroProfile stack, we recommend reading the Practical Enterprise Development
guide.

19.1. Managing the MicroProfile Config


In the era of microservices, it is essential to be able to externalize and inject both static and
dynamic configuration properties for your services. As a matter of fact, microservices are designed
to be moved across different environments, so a portable externalization of their configuration is
essential. Since WildFly 14, this can be done with the SmallRye implementation of Eclipse
MicroProfile Config. In terms of configuration, MicroProfile Config
has been implemented in WildFly with the following subsystem:

<subsystem xmlns="urn:wildfly:microprofile-config-smallrye:1.0"/>

This subsystem in turn requires the following extension:

<extension module="org.wildfly.extension.microprofile.config-smallrye"/>

When this subsystem is activated, the configuration will be injected into the
org.eclipse.microprofile.config.Config object. This configuration object contains the information
collected from several configuration locations, called ConfigSources, each represented by an
org.eclipse.microprofile.config.spi.ConfigSource. By default, the following ConfigSources are
available:

1. System.getProperties()
2. System.getenv()
3. All META-INF/microprofile-config.properties files on the ClassPath

4. All entries in microprofile-config-smallrye subsystem

If the same property is defined in multiple ConfigSources, the one with the highest ordinal
overwrites the lower ones. So, for example, if you start the application server with a System
Property named "foo", it will override another property named foo defined in
microprofile-config.properties.
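To make the ordering concrete, here is a sketch (the property name greeting is purely illustrative; the MicroProfile Config specification assigns default ordinals of 400 to system properties, 300 to environment variables and 100 to microprofile-config.properties):

```
# META-INF/microprofile-config.properties (default ordinal 100)
greeting=Hello from the properties file
```

Starting the server with -Dgreeting="Hello from a system property" (ordinal 400) means the injected value will come from the system property, not from the file.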

Now let’s see all supported ConfigSources locations:

19.1.1. ConfigSources in microprofile-config-smallrye subsystem

The first option is to store properties in a ConfigSource by adding them to the
microprofile-config-smallrye subsystem. Let’s see an example:

/subsystem=microprofile-config-smallrye/config-source=props:add(properties={"property1" = "value1", "property2" = "value2"})

In terms of XML configuration, this is the outcome:

<subsystem xmlns="urn:wildfly:microprofile-config-smallrye:1.0">
  <config-source name="props">
  <property name="property1" value="value1"/>
  <property name="property2" value="value2"/>
  </config-source>
</subsystem>

You can also reference properties as files from a directory. For example, take the folder
/var/config, which contains the following files:

$ ls -al /var/config
-rw-r--r--. 1 jboss jboss 19 Sep 18 12:46 text1
-rw-r--r--. 1 jboss jboss 34 Sep 18 12:46 text2

We can create a config-source using the folder /var/config as its target:

/subsystem=microprofile-config-smallrye/config-source=file-props:add(dir={path=/var/config})

Now our config-source contains two entries, text1 and text2, whose values are the contents of the
files. This results in the following XML configuration:

<subsystem xmlns="urn:wildfly:microprofile-config-smallrye:1.0">
  <config-source name="file-props">
  <dir path="/var/config"/>
  </config-source>
</subsystem>
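As a sketch of how such a directory is typically populated (the file names and values here are illustrative), each file name becomes a property name and the file content becomes its value:

```shell
# One file per property: the file name is the property name,
# the file content is the property value
mkdir -p /tmp/config-demo
printf 'value1' > /tmp/config-demo/text1
printf 'value2' > /tmp/config-demo/text2
cat /tmp/config-demo/text1   # the value that would be injected for "text1"
```

This file-per-property layout is also convenient on Kubernetes/OpenShift, where ConfigMaps and Secrets are commonly mounted as directories of exactly this shape.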

With that configuration, any application deployed in WildFly can reference the above
ConfigSource entries and use them as properties. Here is a Servlet example which reads property1
and property2:

package com.mastertheboss.mp;

import java.io.IOException;

import javax.inject.Inject;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.microprofile.config.inject.ConfigProperty;

@WebServlet(name = "config", urlPatterns = { "/config" })
public class ConfigServlet extends HttpServlet {

    @Inject
    @ConfigProperty(name = "property1")
    String prop1;

    @Inject
    @ConfigProperty(name = "property2")
    String prop2;

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {

        response.getWriter().append("Got property1 with: ").append(prop1);
        response.getWriter().append("Got property2 with: ").append(prop2);
    }

    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {

        doGet(request, response);
    }
}
Please note that applications deployed in WildFly must have CDI enabled (e.g. with a
META-INF/beans.xml file or a CDI bean-defining annotation) to be able to use MicroProfile
Config in their code.

If you don’t want to inject each ConfigProperty individually, you can also inject the generic
org.eclipse.microprofile.config.Config object:

@Inject Config config;

This allows you to retrieve a single property with the getValue method:

String p = config.getValue("property1", String.class);

You can also use the getPropertyNames() method to iterate over the available property names:

Iterable<String> i = config.getPropertyNames();
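In addition, if a property might be absent, the Config API provides the getOptionalValue method, which returns a java.util.Optional so you can supply a fallback value instead of failing (the property name below is just an example):

String p2 = config.getOptionalValue("property3", String.class).orElse("default");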

In order to compile your project, you need to include, besides the EE dependencies, the
Eclipse MicroProfile Config dependency. At the time of writing, the latest stable version is 1.3:

<dependency>
  <groupId>org.eclipse.microprofile.config</groupId>
  <artifactId>microprofile-config-api</artifactId>
  <version>1.3</version>
</dependency>

One advantage of the MicroProfile approach is that property injection is validated at
deployment time. For example, here is what happens if a Config Property is missing:


Caused by: org.jboss.weld.exceptions.DeploymentException: Error while
validating Configuration: No Config Value exists for prop1
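If you prefer the deployment to succeed even when a property is not defined in any ConfigSource, the @ConfigProperty annotation also accepts a defaultValue attribute (the property name below is illustrative):

@Inject
@ConfigProperty(name = "property3", defaultValue = "fallback")
String prop3;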

19.1.2. ConfigSource from Class

You can also provide a custom class which implements
org.eclipse.microprofile.config.spi.ConfigSource and register it as a source by creating a
config-source resource with a class attribute.

For example, here is an implementation of org.eclipse.microprofile.config.spi.ConfigSource
named com.mastertheboss.mp.MyConfigSource:

package com.mastertheboss.mp;

import java.io.FileInputStream;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.Set;

import org.eclipse.microprofile.config.spi.ConfigSource;

public class MyConfigSource implements ConfigSource {

    String fileLocation = System.getProperty("user.dir") + "/resources/config.properties";

    @Override
    public int getOrdinal() {
        return 400;
    }

    @Override
    public Set<String> getPropertyNames() {
        return getProperties().keySet();
    }

    @Override
    public String getValue(String key) {
        return getProperties().get(key);
    }

    @Override
    public String getName() {
        return "Custom Config Source: file:" + this.fileLocation;
    }

    @Override
    public Map<String, String> getProperties() {
        Map<String, String> map = new HashMap<String, String>();
        Properties properties = new Properties();

        try (InputStream in = new FileInputStream(this.fileLocation)) {
            properties.load(in);
        } catch (Exception e) {
            e.printStackTrace();
        }

        for (final String name : properties.stringPropertyNames()) {
            map.put(name, properties.getProperty(name));
        }
        return map;
    }
}

The class needs to be packaged in a library and installed as a module. For example, if the class is
packaged in a library installed as the org.mp module:

/subsystem=microprofile-config-smallrye/config-source=my-config-source:add(class={name=com.mastertheboss.mp.MyConfigSource, module=org.mp})

This results in the XML configuration:

<subsystem xmlns="urn:wildfly:microprofile-config-smallrye:1.0">
  <config-source name="my-config-source">
    <class name="com.mastertheboss.mp.MyConfigSource" module="org.mp"/>
  </config-source>
</subsystem>

19.1.3. ConfigSources in microprofile-config.properties

Finally, the application is able to extract the ConfigSource by including them into the META-
INF/microprofile-config.properties:
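For example, a minimal microprofile-config.properties might contain (property names are illustrative):

property1=value1
property2=value2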

ConfigSources from the property file have priority (in case of a clash between properties) over the
ConfigSources from the subsystem. However, they are overridden by System Properties and
environment variables.
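This precedence is driven by each ConfigSource’s ordinal: when several sources define the same property, the source with the highest ordinal wins (by default, System Properties use 400, environment variables 300 and microprofile-config.properties 100). The resolution rule can be sketched in plain Java; the source names and values below are illustrative, not part of the MicroProfile API:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;

public class OrdinalDemo {

    // Hypothetical in-memory stand-in for a ConfigSource:
    // just a name, an ordinal and a property map.
    static class Source {
        final String name;
        final int ordinal;
        final Map<String, String> props;

        Source(String name, int ordinal, Map<String, String> props) {
            this.name = name;
            this.ordinal = ordinal;
            this.props = props;
        }
    }

    // The lookup rule: among all sources that define the key,
    // the one with the highest ordinal wins.
    static String resolve(String key, List<Source> sources) {
        return sources.stream()
                .filter(s -> s.props.containsKey(key))
                .max(Comparator.comparingInt(s -> s.ordinal))
                .map(s -> s.props.get(key))
                .orElse(null);
    }

    public static void main(String[] args) {
        List<Source> sources = List.of(
                new Source("system-properties", 400, Map.of("foo", "from-system")),
                new Source("subsystem-config-source", 100, Map.of("foo", "from-subsystem")));
        // system properties (ordinal 400) override the subsystem source (100)
        System.out.println(resolve("foo", sources)); // prints: from-system
    }
}
```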

An example using microprofile-config.properties is available at:
https://github.com/fmarchioni/mastertheboss/tree/master/micro-services/mp-config-example

19.2. Managing MicroProfile Health Checks
The MicroProfile Health API defines a set of interfaces that can be implemented by an application
developer to check the healthiness of its parts. The overall healthiness of the application is then
determined by the aggregation of all the procedures provided by the application. More specifically,
the following Health checks are defined:

• Readiness checks: This check can indicate that a service is temporarily unable to serve traffic.
This can be due, for example, to the fact that an application might be loading some
configuration or data. In such cases, you don’t want to shut down the application but, at the
same time, you don’t want to send it requests either. Readiness Health checks are available
through the endpoint http://localhost:9990/health/ready

• Liveness checks: Services running 24/7 can sometimes transition into a broken state, for
example because they have hit an OutOfMemoryError, and cannot recover except by being
restarted. You can, however, be notified of this scenario by defining a liveness check which
probes the liveness of the service. Liveness Health checks are available through the endpoint
http://localhost:9990/health/live

In order to implement both checks, you can decorate your application with the
@org.eclipse.microprofile.health.Liveness and @org.eclipse.microprofile.health.Readiness
annotations.
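Here is a minimal sketch of what such a check can look like; the class name, check name and threshold are illustrative, and the builder methods shown are from the MicroProfile Health 2.x API:

package com.mastertheboss.mp;

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

@Liveness
@ApplicationScoped
public class MemoryLivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        long freeMemory = Runtime.getRuntime().freeMemory();
        // report DOWN when less than 1 MB of heap is free
        return HealthCheckResponse.named("memory-check")
                .withData("freeMemory", freeMemory)
                .state(freeMemory > 1024 * 1024)
                .build();
    }
}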

19.2.1. Health Checks from the Command Line Interface

Besides the REST API, Health checks are also available through the Command Line Interface, by
interacting with the microprofile-health-smallrye subsystem. For example, here is how to check
the readiness of applications which use the @org.eclipse.microprofile.health.Readiness
annotation:

[standalone@localhost:9990 /] /subsystem=microprofile-health-smallrye:check-ready
{
    "outcome" => "success",
    "result" => {
        "status" => "UP",
        "checks" => [{
            "name" => "RESTReadinessCheckReadiness",
            "status" => "UP",
            "data" => {"services" => "available"}
        }]
    }
}

On the other hand, you can verify the status of applications implementing the
@org.eclipse.microprofile.health.Liveness check as follows:

[standalone@localhost:9990 /] /subsystem=microprofile-health-smallrye:check-live

{
  "status": "UP",
  "checks": [
    {
      "name": "system-load",
      "status": "UP",
      "data": {
        "name": "Linux",
        "processors": 8,
        "loadAverage": "0.66",
        "version": "4.18.16-300.fc29.x86_64",
        "loadAverage max": "0.6",
        "architecture": "amd64",
        "loadAverage per processor": "0.0825"
      }
    }
  ]
}
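The same checks can also be queried over HTTP from the management interface, for example with curl (assuming the default management port 9990):

$ curl http://localhost:9990/health/live
$ curl http://localhost:9990/health/ready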
20. Appendix
In the appendix of this book you will find some reference examples of the XML descriptors used by
WildFly.

20.1. jboss-deployment-structure.xml
The file jboss-deployment-structure.xml can be used to set application dependencies against
modules. The advantage of using this file (compared to the manifest’s Dependencies entry) is that
you can define dependencies across top-level deployments and subdeployments.

Location: META-INF or WEB-INF of the top level deployment

Here is an example of how to add a module (deployment.itextpdf-5.4.3.jar) as a dependency of a
deployment (MyWebApp.war). At the same time, we are filtering the module’s resources so that,
for example, the com/itextpdf/awt/geom package is excluded:

<jboss-deployment-structure>
  <sub-deployment name="MyWebApp.war">
    <dependencies>
      <module name="deployment.itextpdf-5.4.3.jar" />
    </dependencies>
  </sub-deployment>
  <module name="deployment.itextpdf-5.4.3.jar" >
    <resources>
      <resource-root path="itextpdf-5.4.3.jar" >
        <filter>
          <exclude path="com/itextpdf/awt/geom" />
        </filter>
      </resource-root>
    </resources>
  </module>
</jboss-deployment-structure>
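The same descriptor can also be used to exclude a module that the server would otherwise add automatically to the deployment’s class path (the module name below is just an example):

<jboss-deployment-structure>
  <deployment>
    <exclusions>
      <module name="org.slf4j" />
    </exclusions>
  </deployment>
</jboss-deployment-structure>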

20.2. jboss-ejb3.xml
This is the EJB deployment descriptor. It can be used to override settings from ejb-jar.xml, and to
set some EJB 3-specific settings.

Location: WEB-INF of a war, or META-INF of an EJB jar

Example: how to declare an EJB 3.x and set a custom Transaction Timeout:

<jboss:ejb-jar xmlns:jboss="http://www.jboss.com/xml/ns/javaee"
  xmlns="http://java.sun.com/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:tx="urn:trans-timeout"
  xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee
http://www.jboss.org/j2ee/schema/jboss-ejb3-2_0.xsd
  http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd"
  version="3.1"
  impl-version="2.0">
  <enterprise-beans>
  <session>
  <ejb-name>DescriptorGreeter</ejb-name>
  <ejb-class>org.jboss.as.test.integration.ejb.descriptor.DescriptorGreeterBean</ejb-class>
  <session-type>Stateless</session-type>
  </session>
  </enterprise-beans>
  <assembly-descriptor>
  <container-transaction>
  <method>
  <ejb-name>DescriptorGreeter</ejb-name>
  <method-name>*</method-name>
  <method-intf>Local</method-intf>
  </method>
  <tx:trans-timeout>
  <tx:timeout>10</tx:timeout>
  <tx:unit>Seconds</tx:unit>
  </tx:trans-timeout>
  </container-transaction>
  </assembly-descriptor>
</jboss:ejb-jar>

Example 2: How to define an MDB and link it to a JMS Destination:

<jboss:ejb-jar xmlns:jboss="http://www.jboss.com/xml/ns/javaee"
  xmlns="http://java.sun.com/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee
http://www.jboss.org/j2ee/schema/jboss-ejb3-2_0.xsd
  http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd"
  version="3.1"
  impl-version="2.0">
  <enterprise-beans>
  <message-driven>
  <ejb-name>ReplyingMDB</ejb-name>
  <ejb-class>org.jboss.as.test.integration.ejb.mdb.messagedestination.ReplyingMDB</ejb-class>
  <activation-config>
    <activation-config-property>
      <activation-config-property-name>destination</activation-config-property-name>
      <activation-config-property-value>java:jboss/mdbtest/messageDestinationQueue</activation-config-property-value>
    </activation-config-property>
  </activation-config>
  </message-driven>
  </enterprise-beans>
</jboss:ejb-jar>

Example 3: How to link an EJB with a Security Domain:

<jboss:ejb-jar xmlns:jboss="http://www.jboss.com/xml/ns/javaee"
  xmlns="http://java.sun.com/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:s="urn:security"
  xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee
http://www.jboss.org/j2ee/schema/jboss-ejb3-2_0.xsd http://java.sun.com/xml/ns/javaee
http://java.sun.com/xml/ns/javaee/ejb-jar_3_1.xsd"
  version="3.1" impl-version="2.0">

  <assembly-descriptor>
  <s:security>
  <ejb-name>Hello</ejb-name>
  <s:security-domain>MySecurityDomain</s:security-domain>
  </s:security>
  </assembly-descriptor>
</jboss:ejb-jar>

Example 4: How to link an EJB with a Container Interceptor:

<jboss xmlns="http://www.jboss.com/xml/ns/javaee"
  xmlns:jee="http://java.sun.com/xml/ns/javaee"
  xmlns:ci ="urn:container-interceptors:1.0">

  <jee:assembly-descriptor>
    <ci:container-interceptors>
      <!-- Class level container-interceptor -->
      <jee:interceptor-binding>
        <ejb-name>AnotherFlowTrackingBean</ejb-name>
        <interceptor-class>org.jboss.as.test.integration.ejb.container.interceptor.ClassLevelContainerInterceptor</interceptor-class>
      </jee:interceptor-binding>
      <!-- Method specific container-interceptor -->
      <jee:interceptor-binding>
        <ejb-name>AnotherFlowTrackingBean</ejb-name>
        <interceptor-class>org.jboss.as.test.integration.ejb.container.interceptor.MethodSpecificContainerInterceptor</interceptor-class>
        <method>
          <method-name>echoWithMethodSpecificContainerInterceptor</method-name>
        </method>
      </jee:interceptor-binding>
    </ci:container-interceptors>
  </jee:assembly-descriptor>
</jboss>

20.3. jboss-web.xml
JBoss Web deployment descriptor. This can be used to override settings from web.xml, and to set
WildFly-specific options.

Location: WEB-INF

Example 1: How to use a Security Domain:

<jboss-web>
  <security-domain>ejb3-tests</security-domain>
</jboss-web>

Example 2: How to define the number of maximum active Sessions:

<jboss-web version="14.1" xmlns="http://www.jboss.com/xml/ns/javaee"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <max-active-sessions>1</max-active-sessions>
</jboss-web>

Example 3: How to use a custom worker from the io subsystem:

<jboss-web>
  <executor-name>test-worker</executor-name>
</jboss-web>

20.4. jboss-app.xml
WildFly application deployment descriptor. It can be used to override settings in application.xml,
and to set application-specific settings.

Location: META-INF of an EAR

Example: Setting Security Roles for a Security Domain:

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE jboss-app PUBLIC "-//JBoss//DTD J2EE Application 4.2//EN"
"http://www.jboss.org/j2ee/dtd/jboss-app_4_2.dtd">
<jboss-app>
  <security-domain>mydomain</security-domain>
  <security-role>
  <role-name>Administrator</role-name>
  <principal-name>j2ee</principal-name>
  </security-role>
  <security-role>
  <role-name>Manager</role-name>
  <principal-name>javajoe</principal-name>
  </security-role>
</jboss-app>

20.5. jboss-permissions.xml
This file allows you to specify the permissions needed by the deployment; it can override those
available in permissions.xml.

Location: META-INF

Example: Setting Permission for a Specific Class name referenced by the deployment unit:

<permissions version="7">
  <permission>
  <class-name>java.util.PropertyPermission</class-name>
  <name>*</name>
  <actions>read</actions>
  </permission>
</permissions>

20.6. ironjacamar.xml
Deployment descriptor for Resource Adapter deployments.

Location: META-INF of a rar archive

Example: Define a ConnectionFactory:

<ironjacamar>
  <connection-definitions>
    <connection-definition class-name="org.jboss.as.test.smoke.rar.HelloWorldManagedConnectionFactory"
                           jndi-name="java:/eis/HelloWorld"/>
  </connection-definitions>
</ironjacamar>

20.7. jboss-client.xml
The WildFly specific deployment descriptor for application client deployments.

Location: META-INF of an application client jar

Example: Setting an environment entry for a client deployment:

<jboss-client>
  <env-entry>
  <env-entry-name>stringValue</env-entry-name>
  <env-entry-type>java.lang.String</env-entry-type>
  <env-entry-value>OverridenEnvEntry</env-entry-value>
  </env-entry>
</jboss-client>

20.8. jboss-webservices.xml
The JBossWS 4.0.x specific deployment descriptor for JAX-WS Endpoints.

Location: META-INF for EJB webservice deployments or WEB-INF for POJO webservice
deployments/EJB webservice endpoints bundled in .war

Example: how to set a JAX-WS property at application level:

<webservices xmlns="http://www.jboss.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://www.jboss.com/xml/ns/javaee
http://www.jboss.org/j2ee/schema/jboss_web_services_1.2.xsd">
  <property>
  <name>org.jboss.ws.cxf.disableHandlerAuthChecks</name>
  <value>true</value>
  </property>
</webservices>

20.9. JMS Deployment descriptors (*-jms.xml)

Can be used to define application scoped JMS destinations.

Location: deployments folder of the application server, or META-INF or WEB-INF of the application.

Example:

<messaging-deployment xmlns="urn:jboss:messaging-activemq-deployment:1.0">
  <server>
  <jms-destinations>
  <jms-queue name="queue1">
  <entry name="java:/queue1"/>
  <durable>true</durable>
  </jms-queue>
  <jms-topic name="topic1">
  <entry name="java:/topic1"/>
  </jms-topic>
  </jms-destinations>
  </server>
</messaging-deployment>

20.10. Datasource Deployment descriptors (*-ds.xml)

Can be used to define application scoped Datasources.

Location: deployments folder of the application server, or META-INF or WEB-INF of the application.

Example:

<datasources xmlns="http://www.jboss.org/ironjacamar/schema">
  <datasource jndi-name="java:jboss/datasources/DeployedDS" enabled="true"
              use-java-context="true" pool-name="H2DS">
    <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE</connection-url>
    <driver>h2</driver>
    <pool></pool>
    <security>
      <user-name>sa</user-name>
      <password>sa</password>
    </security>
  </datasource>
  <xa-datasource jndi-name="java:/H2XADS" pool-name="H2XADS">
    <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class>
    <xa-datasource-property name="URL">jdbc:h2:mem:test;DB_CLOSE_DELAY=-1;DB_CLOSE_ON_EXIT=FALSE</xa-datasource-property>
    <driver>h2</driver>
    <security>
      <user-name>sa</user-name>
      <password>sa</password>
    </security>
  </xa-datasource>
</datasources>
