WebSphere eXtreme Scale – a better choice than Infinispan


There are many commercial and open-source software offerings in the in-memory data grid (IMDG) market nowadays.  An IMDG offers many benefits, some of which are:

  • Consistent and predictable performance for business-critical applications
  • Elastic scalability
  • High availability of data
  • Reduced load on back-end systems
  • Prevention of downtime due to outages of systems of record (when systems-of-record data are cached in the IMDG)

To successfully deliver these benefits, an IMDG must meet many criteria, such as high performance and effective use of server resources. So I decided to put these criteria to the test for WebSphere eXtreme Scale, IBM’s in-memory data grid offering, and Infinispan, an open-source IMDG, which happens to be what JBoss Data Grid uses under the hood – e.g. JBoss Data Grid 6.4.0 is based on Infinispan 6.2.0. I carried out performance tests for these two products and here are my findings:

  • WXS was able to handle more throughput than Infinispan for out-of-the-box tests:
    • WXS can handle between 43% and 128% more throughput than Infinispan 6.0.2
    • WXS can handle between 29% and 75% more throughput than Infinispan 7.0.3
  • Infinispan 6.0.2 grid generates “SuspectException” execution errors as the client load increases
  • Infinispan client code must use the HotRod API, which talks to a server module implementing Infinispan’s custom binary protocol designed for faster client/server interactions. In contrast, IBM XIO (eXtreme IO) does not require any specific API in your code, so developers can focus on writing business logic instead of worrying about which API to use and how (see the client sketch after this list)
  • The Infinispan 7.0.3 client library is very CPU intensive, especially at the 50k and 100k message sizes. As a result, Infinispan could not handle as many objects per client as WXS
  • For 50k and 100k messages, Infinispan 6.0.2 consumed more CPU than WXS and Infinispan 7.0.3 despite processing a smaller number of transactions than both
  • At high client CPU consumption rates, WXS handled higher throughput than Infinispan 7.0.3
  • Infinispan 6.0.2 grid did more context switching than WXS for out-of-the-box tests despite handling a lower throughput
  • There is no backward compatibility for the clustered.xml configuration file between Infinispan 6.0.2 and 7.0.3. This means that when upgrading from 6.0.2 to 7.0.3, a customer has to rewrite configuration files, adding to the upgrade maintenance effort
  • The Infinispan client needs to keep track of the IMDG server list, which adds to the development and maintenance effort
  • The Infinispan grid needs significantly more OS resources to do equivalent work: the maximum number of processes per user had to be increased from 1,024 to 8,271,048 for Infinispan, while WXS does fine with 1,024
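For reference, here is a minimal sketch of what an Infinispan HotRod client looks like. The server hosts, port, cache name, and key/value types are illustrative assumptions, not taken from the actual test harness; note how the client itself must be configured with the list of grid servers:

    import org.infinispan.client.hotrod.RemoteCache;
    import org.infinispan.client.hotrod.RemoteCacheManager;
    import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

    public class HotRodClientSketch {
        public static void main(String[] args) {
            // The client must be told about the grid servers explicitly;
            // keeping this list current is part of the maintenance effort noted above.
            ConfigurationBuilder builder = new ConfigurationBuilder();
            builder.addServer().host("grid-host-1").port(11222);
            builder.addServer().host("grid-host-2").port(11222);

            RemoteCacheManager manager = new RemoteCacheManager(builder.build());
            RemoteCache<String, byte[]> cache = manager.getCache("testCache");

            cache.put("key-1", new byte[1024]);   // insert/update
            byte[] value = cache.get("key-1");    // read
            cache.remove("key-1");                // delete

            manager.stop();
        }
    }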

To a customer, the findings above translate into lower license costs, less hardware, and less effort (development and maintenance) when choosing IBM over Infinispan.

As a result of this study, these are the high-level main differentiators between WXS and Infinispan:

[Figure: WXS vs. Infinispan high-level differentiators]

The following paragraphs describe the tests in detail.

Simple grid use case

The goal of this test was to see how many transactions per second WXS and Infinispan could handle with XDF/XIO and JBoss Marshalling/HotRod respectively. XDF and XIO-on-heap were enabled for WXS, and JBoss Marshalling and HotRod were used for Infinispan. Aside from these settings, no tuning was done for either product and all configurations were out-of-the-box.
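By contrast with the HotRod client sketch above, a WXS client is written against the standard ObjectGrid API regardless of the transport in use, and it only needs to know the catalog service endpoint, since the catalog tracks the container servers. A minimal sketch follows; the catalog endpoint, grid name, and map name are illustrative assumptions:

    import com.ibm.websphere.objectgrid.ClientClusterContext;
    import com.ibm.websphere.objectgrid.ObjectGrid;
    import com.ibm.websphere.objectgrid.ObjectGridManager;
    import com.ibm.websphere.objectgrid.ObjectGridManagerFactory;
    import com.ibm.websphere.objectgrid.ObjectMap;
    import com.ibm.websphere.objectgrid.Session;

    public class WxsClientSketch {
        public static void main(String[] args) throws Exception {
            ObjectGridManager ogm = ObjectGridManagerFactory.getObjectGridManager();

            // Connect to the catalog service; the catalog tracks the containers,
            // so the client keeps no server list of its own.
            ClientClusterContext ccc = ogm.connect("cataloghost:2809", null, null);
            ObjectGrid grid = ogm.getObjectGrid(ccc, "Grid");

            Session session = grid.getSession();
            ObjectMap map = session.getMap("Map1");

            map.insert("key-1", new byte[1024]);         // insert
            byte[] value = (byte[]) map.get("key-1");    // read
            map.update("key-1", new byte[1024]);         // update
            map.remove("key-1");                         // delete
        }
    }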

The test was composed of one client server and two physical container servers (the in-memory data grid). The client server ran 20 JVM instances, each with a number of client threads (which varied per test scenario) and a target number of objects to store in the grid per client (which also varied per test scenario). Each of the two servers running the data grid ran 16 JVM instances (the container nodes). All tests were run on identical hardware for both products.

The objects to serialize were randomized byte data (byte arrays), which allowed for the automation of the large number of tests that needed to be run. Although object types vary with every customer’s data load, this randomization served as a good apples-to-apples input for the performance comparison.
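A sketch of how such randomized payloads can be generated for each message size category (assuming 1K = 1,024 bytes); the helper below is illustrative rather than the actual test driver:

    import java.util.Random;

    public class PayloadGenerator {
        private static final Random RANDOM = new Random();

        // Message size categories used in the tests: 1K, 10K, 50K and 100K bytes.
        static final int[] MESSAGE_SIZES = {1_024, 10_240, 51_200, 102_400};

        // Returns a byte array of the requested size filled with random content.
        static byte[] randomPayload(int sizeInBytes) {
            byte[] payload = new byte[sizeInBytes];
            RANDOM.nextBytes(payload);
            return payload;
        }
    }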

Here’s a pictorial representation of the architecture used for this testing:

[Figure: Simple grid test environment architecture]

For all these scenarios, a battery of tests was run for each message size category: 1K, 10K, 50K, and 100K. For each message size category, the number of client threads per JVM and the target number of objects to store in the grid per client were varied until the highest throughput was achieved without any grid or client errors. Once the optimal numbers were identified, two runs were executed and the results averaged. The client operations executed on the grid were 10% insert, 60% read, 20% update, and 10% delete.
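That 10/60/20/10 operation mix can be sketched per client thread roughly as follows (the actual load driver is not shown; this only illustrates how an operation can be picked according to those ratios):

    import java.util.concurrent.ThreadLocalRandom;

    public class OperationMix {
        enum Op { INSERT, READ, UPDATE, DELETE }

        // Picks an operation according to the mix used in the tests:
        // 10% insert, 60% read, 20% update, 10% delete.
        static Op nextOperation() {
            int roll = ThreadLocalRandom.current().nextInt(100);
            if (roll < 10) return Op.INSERT;   // 0-9   -> 10%
            if (roll < 70) return Op.READ;     // 10-69 -> 60%
            if (roll < 90) return Op.UPDATE;   // 70-89 -> 20%
            return Op.DELETE;                  // 90-99 -> 10%
        }
    }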

The following sections go over my findings in detail.

WXS XDF/XIO vs. Infinispan JBoss Marshalling/HotRod Performance

In this section, WXS v8.6 performance is compared to Infinispan 6.0.2 and Infinispan 7.0.3. WebSphere eXtreme Scale is faster than Infinispan in all tests in this scenario. Here is the graph showing the results for each message size:

[Figure: WXS vs. Infinispan throughput for each message size]

The following table summarizes the percentage by which WXS performance exceeded that of Infinispan 6.0.2 and 7.0.3 for each message size run.

[Table: WXS performance advantage over Infinispan 6.0.2 and 7.0.3 by message size]

The Infinispan 6.0.2 grid generated “SuspectException” errors, which caused nodes to leave the cluster and made the entire grid unstable. The Infinispan 6.0.2 test results shown above are the highest ones that did not generate these errors. The Infinispan 7.0.3 grid did not generate this type of error and was able to handle a higher number of transactions per second than Infinispan 6.0.2.

WXS XDF/XIO vs. Infinispan JBoss Marshalling/HotRod Client CPU Utilization

Here, the WXS and Infinispan clients’ CPU utilization is compared. The following graph depicts the test results:

[Figure: WXS vs. Infinispan average client processor utilization]

Testing uncovered that Infinispan was not able to handle the number of client threads that WXS could, so this number had to be lowered for Infinispan until no errors were generated for all test cases. For example, for the 1k-message run, WXS was able to keep up with 192 threads/client and a target of 817,060 objects/client, whereas Infinispan 6.0.2 and 7.0.3 could keep up with only 40 threads/client and 22,369 objects/client, and 37 threads/client and 22,369 objects/client, respectively.

For 1k and 10k messages, Infinispan 6.0.2 and WXS both had higher CPU consumption than for their respective 50k and 100k runs, where their CPU utilization fell. In addition, in the preceding performance section, the Infinispan 7.0.3 grid performed slightly better than Infinispan 6.0.2 for the 1k and 10k messages (12% and 11% respectively) and modestly better for the 50k and 100k messages (30% and 50% respectively). However, the Infinispan 7.0.3 client consistently showed high CPU utilization across all of its runs, which indicates that the Infinispan 7.0.3 HotRod client library is very CPU hungry.

In summary, the CPU utilization of the WXS clients correlates with the performance of the WXS grid. The same can be said of Infinispan 6.0.2 but not of Infinispan 7.0.3.

WXS XDF/XIO vs. Infinispan JBoss Marshalling/HotRod IMDG CPU Utilization

Here is the graphical result of the CPU utilization of WXS and Infinispan:

[Figure: WXS vs. Infinispan average grid processor utilization]

For 1k messages, although the WXS IMDG consumed more CPU on average than Infinispan, WXS also processed between 28.52% and 43.39% more transactions than Infinispan, which would explain its higher CPU consumption. Likewise, for 10k messages, WXS consumed more CPU than Infinispan but processed from 64.21% to 82.92% more transactions. For 50k and 100k messages, Infinispan 6.0.2 consumed more CPU than WXS and Infinispan 7.0.3 despite processing fewer transactions than both, which indicates that Infinispan 7.0.3 introduced CPU utilization improvements over Infinispan 6.0.2. Finally, WXS consumed more CPU than Infinispan 7.0.3 for these tests, but it also processed between 36.11% and 75.43% more transactions.

WXS XDF/XIO vs. Infinispan JBoss Marshalling/HotRod IMDG Context Switching

Here, the WXS and Infinispan context switching is compared. The following shows the results in graphical form:

[Figure: WXS vs. Infinispan average grid context switching]

WXS had higher context switching than Infinispan 7.0.3, attributable to its handling between 28.52% and 75.43% more transactions per second than Infinispan 7.0.3. On the other hand, Infinispan 6.0.2, despite being slower than Infinispan 7.0.3, showed the highest context switching, which slowed it down. This also demonstrates that Infinispan 7.0.3 introduced design improvements that made it more efficient and lowered its context switching relative to Infinispan 6.0.2.

Number of processes consumed by Infinispan

The maximum number of processes per user had to be increased for the Infinispan 6.0.2 and 7.0.3 IMDG from 1,024, which is all WXS needs, to a higher number, as follows:

[Table: maximum number of processes per user required by the Infinispan grid]
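On Linux, this per-user limit is typically raised in /etc/security/limits.conf (or checked with ulimit -u); the user name and value below are placeholders for illustration, not the exact settings used in these tests:

    # /etc/security/limits.conf -- illustrative entries only
    griduser  soft  nproc  1048576
    griduser  hard  nproc  1048576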

If the maximum number of processes per user was not adjusted, the following error was generated by the cluster member (and the grid would become unstable):

java.lang.OutOfMemoryError: unable to create new native thread

The numbers in the table above were obtained by using the same maximum number of processes for each node as nodes were added to the cluster until the grid failed. At that point, the maximum number of processes for each node was doubled and the exercise restarted. Note that these are the settings needed just to get the grid started; no client was connected to the grid at all during this exercise. In addition, once the Infinispan 7.0.3 grid entered an unstable state, it would not respond to Ctrl-C signals sent from its parent X-term process. The following error would be generated with each attempt:

Java HotSpot(TM) 64-Bit Server VM warning: Exception java.lang.OutOfMemoryError occurred dispatching signal SIGINT to handler- the VM may need to be forcibly terminated

The only way to kill the grid was to open another command prompt window and send a “kill -9” to each cluster member in the grid.

Start-up times

Like WXS, each Infinispan 6.0.2 server started in about 3 seconds. Each Infinispan 7.0.3 server took about 4 seconds to start, which is slower than a WXS container start-up.

Conclusion

These tests revealed that WebSphere eXtreme Scale has better overall performance and makes more effective use of machine resources than Infinispan. To a customer looking for an IMDG solution, these benefits translate into lower license costs, less hardware, and less effort (development and maintenance) when choosing IBM over Infinispan.


