WebSphere Liberty autoscale clustering with Docker


I often meet customers who have hundreds of large applications running on WebSphere. If you break those large applications into microservices, you can easily end up with several hundred or even thousands of microservices. Quite often these environments are provisioned for peak load, which means many instances of each microservice, so you can easily end up with many thousands of JVMs. However, at any given time only a small percentage of those applications are running at peak (some may peak only once a year). Whether there is active load on the system or not, you are still running all those JVMs and consuming system resources. Why keep all of those JVMs up when they average around 10% utilization? This approach wastes memory, CPU, networking, power and licensed software. Add to that the memory needed for hypervisors, operating systems, monitoring and other supporting software, and the total can easily reach hundreds of terabytes of RAM. RAM is relatively cheap, but it still costs money, especially for server-class hardware. Waste is bad because it is expensive and because it draws resources and attention away from other things you could be doing.

How can you solve this problem for on-prem environments: serve peaks when they come (once every few hours, days or weeks) without wasting huge amounts of resources? Liberty clusters with autoscale capability to the rescue! (Or you could use a good PaaS platform, such as Bluemix Local, but that is a subject for another article.)

In one of my earlier posts I described cluster autoscaling with WebSphere Application Server Liberty using the JVM approach. It is a useful capability, but it requires the user to install the Liberty binaries on each target host in advance and to define a fixed number of server configurations. What if you have hundreds of servers? These tasks can be scripted, but wouldn't it be nice to have autoscale without having to pre-install and pre-create servers? In other words, can the Liberty Collective Controller provision servers to remote hosts dynamically? This capability has been available via the IBM Bluemix Liberty Buildpack, as I described in an earlier post. In September 2015 IBM introduced beta-level Liberty support for elastic Docker containers, and it is now generally available and supported. Before I dive into the technical discussion and comparison with competitive products, please watch this demo:

How does it compare to WebLogic, JBoss and Tomcat?

Oracle WebLogic, Red Hat JBoss, Apache Tomcat, and the commercial Tomcat flavors are statically provisioned and manually clustered. With JBoss EAP and Tomcat, administrators can define static JVMs, provisioned in advance, to be part of the cluster. For a large-scale deployment this could mean lots and lots of JVMs running at all times regardless of the workload. If the workload is at peak, those resources are used to their full capacity. However, workloads tend to come in peaks and valleys, and most of the time you are far from the maximum load on the system. Most systems average about 10% utilization compared to their total capacity. In other words, your company paid for 100% of the hardware, networking, software, power, etc., only to use 10% of that investment. This is one of the reasons why pay-per-use models in the cloud are gaining momentum.

But what if you are not ready to move the workload into the cloud? What if you must run in your own datacenter behind your own firewall? One way to (partially) solve this problem is to use virtualization with PowerVM, KVM, VMware, or another hypervisor. This allows for much greater compute density and more sharing of resources. However, one of the issues with virtualization is that 90% of the time it is memory bound. In other words, your processors may be idle, but memory is at 100% and you are once again forced into buying new servers.

This is where we come back to static clustering. WebLogic versions prior to 12.2.1 and all versions of JBoss and Tomcat use a static provisioning model. How can you solve this static over-provisioning problem, that is, serve peaks when they come (once every few hours, days or weeks) without wasting huge amounts of resources?

WAS ND classic and WAS ND Liberty can handle this use case and help you dramatically reduce hardware and software footprint. WebLogic 12.2.1 recently added this capability as well. These products can automatically and dynamically provision and start/stop new instances of application server JVMs across a set of hosts. This is called "dynamic clustering" or "auto scaling" and provides the ability to meet Service Level Agreements (SLAs) when multiple applications compete for limited compute resources. The boundaries of the auto-scaled cluster for any particular application can be computed dynamically based on the rules defined by the system administrator.
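
As a concrete illustration, in Liberty the auto scaling rules live in the collective controller's server.xml. The sketch below shows roughly what such a policy could look like; the collectiveController-1.0 and scalingController-1.0 feature names are real, but the scalingDefinitions/defaultScalingPolicy elements and their attributes are shown here only as an assumption of the general shape, so consult the Liberty documentation for the exact schema:

    <!-- Illustrative sketch only: verify element and attribute names against the Liberty docs -->
    <featureManager>
        <feature>collectiveController-1.0</feature>
        <feature>scalingController-1.0</feature>
    </featureManager>

    <scalingDefinitions>
        <!-- Assumed shape: minimum and maximum number of instances per auto-scaled cluster;
             CPU, heap and memory thresholds are configured on the policy as well -->
        <defaultScalingPolicy enabled="true" min="1" max="4"/>
    </scalingDefinitions>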

| Capability | WAS ND traditional | WAS ND Liberty | Oracle WLS EE | JBoss EAP | Apache Tomcat |
| --- | --- | --- | --- | --- | --- |
| Static clusters (pre-provisioned) | Yes | Yes | Yes | Yes | Yes |
| Manually add or remove servers to/from a running cluster | Yes | Yes | Yes | Yes | Yes |
| Centralized management of cluster members | Yes | Yes | Yes | Yes | No |
| Dynamically creates/starts/stops servers when load changes | Yes | Yes | Yes | No | No |
| Provisions new app servers to hosts when workload increases | Yes | Yes | No | No | No |
| Provisions new Docker containers when workload increases | No | Yes | No | No | No |
| Scaling policy allows for min and max number of servers | Yes | Yes | Yes | No | No |
| Scaling policy based on CPU, heap or memory use | Yes | Yes | Yes | No | No |
| Scaling based on service policies (URL + response time, etc.) | Yes | No | No | No | No |
| Applications have relative priorities when servers are allocated | Yes | No | No | No | No |
| Auto vertical stacking on a node | Yes | No | Yes | No | No |
| Cluster isolation groups | Yes | No | No | No | No |
| Lazy application start | Yes | No | Yes | No | No |

How to build Liberty autoscale with Docker

If you would like to build the autoscale environment shown in my demo, here is what to do:

  1. Go to wasdev.net to download the latest Liberty runtime with embedded JDK
  2. Create four Linux hosts named host1, host2, host3 and httphost (make sure they can see each other on the network and have ssh enabled)
  3. Download bash scripts for installation and configuration from this GitHub repository into the project folder $PROJECT_HOME. This folder has to be shared across all four hosts mentioned above at the same mount point
  4. On host1, host2 and host3, install the Docker runtime (feel free to use my one-step script install_docker.sh; a sketch of the kind of commands involved appears after this list)
  5. Update setenv.sh and install_and_setup.sh with your own values (paths, host names, ports, etc.); an illustrative setenv.sh snippet appears after this list
  6. On host1, host2, and host3, build the Docker image using this script: $PROJECT_HOME/docker/build.sh (for production environments use Docker Trusted Registry); see the build sketch after this list
  7. Run install_and_setup.sh on host1. This will install and configure the Liberty Collective Controller and register all hosts into the collective (the sketch after this list shows the kind of collective commands it automates)
  8. If you want to uninstall Liberty and completely clean up your environment, run the cleanup.sh script. However, before you run the installation again (as in the previous step), update the CONTAINER_NAME variable in the setenv.sh file to a unique value
  9. On httphost, install IBM HTTP Server (or another HTTP server) and the WebSphere plugin. Once installed, run the plugin configuration script I provided: $PROJECT_HOME/http/config.sh, then restart the HTTP server to pick up the new configuration (a sample of the resulting httpd.conf directives appears after this list)
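
For steps 4 and 6, here is a rough sketch of what the Docker installation and image build boil down to. The install_docker.sh and build.sh scripts in the GitHub repository are the source of truth; the commands and the image name liberty-autoscale below are only placeholders for illustration:

    # Step 4 (sketch): install the Docker runtime on host1, host2 and host3.
    # Docker's convenience script is one common approach; install_docker.sh may do it differently.
    curl -fsSL https://get.docker.com | sh
    sudo systemctl enable --now docker

    # Step 6 (sketch): build the Liberty image on each Docker host.
    # The real Dockerfile and image name come from $PROJECT_HOME/docker/build.sh.
    cd "$PROJECT_HOME/docker"
    docker build -t liberty-autoscale .
    docker images | grep liberty-autoscale   # verify the image was created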
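
For steps 5 and 8, the values in setenv.sh are specific to your environment. Apart from CONTAINER_NAME (named in step 8) and PROJECT_HOME (the shared project folder), the variable names below are hypothetical; use whatever names the downloaded setenv.sh actually defines:

    # Hypothetical setenv.sh values -- adjust to the variables the downloaded script actually uses.
    export PROJECT_HOME=/shared/liberty-autoscale       # shared mount point on all four hosts
    export CONTROLLER_HOST=host1                        # where the collective controller will run
    export CONTROLLER_HTTPS_PORT=9443
    export DOCKER_HOSTS="host1 host2 host3"
    export CONTAINER_NAME=libertyNode1                  # set to a unique value before each reinstall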
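
Step 7 is fully automated by install_and_setup.sh, but under the covers it drives the standard Liberty collective tooling. The sketch below shows the kind of wlp/bin commands involved; the install path, ports, user names and passwords are placeholders:

    # Sketch of the collective setup that install_and_setup.sh automates (placeholder values).
    WLP=/opt/ibm/wlp

    # Create the controller server on host1 and turn it into a collective controller
    $WLP/bin/server create controller
    $WLP/bin/collective create controller --keystorePassword=secret
    # (paste the XML that the command prints into the controller's server.xml, as instructed)

    # Register each Docker host so the controller can provision Liberty containers onto it
    for h in host1 host2 host3; do
        $WLP/bin/collective registerHost $h --host=host1 --port=9443 \
            --user=admin --password=adminpwd \
            --rpcUser=root --rpcUserPassword=rootpwd
    done

    # Start the controller
    $WLP/bin/server start controller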
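
For step 9, config.sh wires the WebSphere plugin into the HTTP server. The end result is a pair of httpd.conf directives similar to the sketch below; the module file name and paths vary by platform and install location:

    # Sketch of the httpd.conf directives the plugin configuration produces (paths are examples).
    LoadModule was_ap22_module /opt/IBM/WebSphere/Plugins/bin/64bits/mod_was_ap22_http.so
    WebSpherePluginConfig /opt/IBM/WebSphere/Plugins/config/webserver1/plugin-cfg.xml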

You are all set. Now you can hit the apps (see video above for details). The scripts I provided above will not only install Liberty, but also automatically generate the proper environment variables and server.xml files for the Liberty Collective Controller. Please give it a try and let me know what you think.
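
A quick way to watch the autoscaling in action (beyond the video) is to generate some load through the HTTP server and watch new Liberty containers appear on the Docker hosts. The URL and load tool below are placeholders; substitute your own application context root:

    # Drive load at the app through the HTTP server (URL is a placeholder), e.g. with Apache Bench
    ab -n 100000 -c 50 http://httphost/yourApp/

    # On any Docker host, new Liberty containers should appear while the load is sustained
    ssh host2 docker ps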

PS.
I would like to thank David Adcox and Steven Clay (IBM) for their help in developing this demo.



Categories: Technology
