During the past week, I was able to visit with several InRule customers during the annual User Community Meeting (IUCM). Although marketing emphasis tends to focus on flashy user interfaces and buzzwords, I found once again that many clients are using InRule to perform complex calculations and decision-making using heavy, back-end, batch processing.
These batch processing applications usually run on a schedule, say once a day or once a month, and may have no user interface. They grind through enormous sets of enterprise data performing various line-of-business functions, such as analyzing medical claims, updating trade account statements, or recalculating the best performing business locations based on real-time feeds of new customer data.
Although these applications lack the glamour of their user-facing counterparts, they perform much of the mission-critical data work on which our modern way of life has now come to depend.
Several of our clients asked about best practices when building batch processing applications that include the InRule business rule engine. Given the large amounts of data involved, their concerns centered on reducing batch processing time and providing an easy way to scale out capacity as data set sizes continue to grow.
The list and diagrams below contain some high-level points for consideration:
Design for Stateless Requests Wherever Possible
By definition, a batch contains more than one request. If the data in each request can be processed independently from every other request, then a “stateless” approach can be used for rule processing. In the stateless scenario, rule processing for each given record does not depend on a built-up history or “session” that needs to be stored in memory.
Not all business problems can be stateless, and some batch processes require that certain records are correlated to each other or are processed in a specific order, which forces dependencies.
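To make the idea concrete, here is a minimal sketch of a stateless rule evaluation in Python. The function, field names, and rates are all hypothetical (the post does not show InRule rule definitions); the point is only that the result depends on nothing but the input record, so records can be processed in any order:

```python
def process_claim(record):
    """Hypothetical stateless rule evaluation: the output depends only on
    the input record, not on any session or previously processed request."""
    deductible = record["charges"] * record["deductible_rate"]
    return {"claim_id": record["claim_id"],
            "payable": record["charges"] - deductible}

batch = [
    {"claim_id": 1, "charges": 1000.0, "deductible_rate": 0.2},
    {"claim_id": 2, "charges": 500.0, "deductible_rate": 0.1},
]

# Because no state is shared, the records could be shuffled, split across
# threads, or sent to different servers and still yield the same results.
results = [process_claim(r) for r in batch]
```

Because nothing is carried between calls, this style of processing is what later makes concurrency and farm-based scale-out straightforward.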
Use Multi-threaded Execution on the Server and the Consumer
Batch processing often takes place on powerful servers that have multiple CPUs and processor cores. Designing services and client consumers to work with requests concurrently will make the best use of the hardware that is available.
The stateless design works well with concurrent execution. Since the records do not need to be processed in any given order, they can be handled on any thread and processed as quickly as possible.
Modern web services, such as WCF (Windows Communication Foundation) services, provide multi-threaded request handling on the server side with almost no configuration or coding. In addition, the InRule rule engine integrated with these services is designed for heavy multi-threaded request processing. The InRule engine shares memory between concurrent requests where doing so saves memory and processing time, while isolating memory where values must be unique to each request. InRule automatically processes rules this way without developers having to write any additional code.
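The pattern of fanning stateless records across a pool of worker threads can be sketched in a few lines of Python (a real deployment would use WCF's built-in threading rather than this; `evaluate_rules` is a hypothetical stand-in for a rule engine call):

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate_rules(record):
    # Hypothetical stand-in for invoking a rule engine; it is stateless,
    # so any thread may process any record.
    return record["value"] * 2

records = [{"id": i, "value": i} for i in range(100)]

# Because requests are stateless, results are correct regardless of which
# thread handles which record or in what order the threads complete.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(evaluate_rules, records))
```

Note that `pool.map` returns results in input order even though the work itself is interleaved across threads, which keeps concurrent processing easy to reason about.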
Don’t Forget the Consumer
An important consideration that many clients overlook is the consuming client itself. After creating a rule processing web service that can handle concurrent execution, they often do not load the service using a client that submits concurrent requests. Designing for concurrency should be considered at the client as well as the server.
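A client that submits requests one at a time will leave a multi-threaded server mostly idle. A minimal sketch of a concurrent consumer, with `call_rule_service` standing in for a real network call (the name and payload are illustrative assumptions, not an InRule API):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def call_rule_service(request):
    # Placeholder for a network call to the rule processing service
    # (e.g. an HTTP or SOAP client in a real deployment).
    return {"id": request["id"], "status": "processed"}

requests = [{"id": i} for i in range(50)]

# Submit many requests at once so the server's worker threads stay busy;
# responses are collected as they complete, in no particular order.
with ThreadPoolExecutor(max_workers=16) as pool:
    futures = [pool.submit(call_rule_service, r) for r in requests]
    responses = [f.result() for f in as_completed(futures)]
```

With in-flight requests overlapping, server capacity is actually exercised instead of waiting on a single-threaded caller.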
Architecture for Efficient Batch Concurrency
Below is a diagram of a possible software architecture for efficient batch concurrency. In this case, an open source library called Quartz is used for job scheduling in the client application. A batch of requests is queried from a data store and then divided into smaller batches. Each smaller batch is processed concurrently by both the client and server farm.
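Quartz would handle the scheduling side; the batch-splitting step itself is simple enough to sketch in stdlib Python (the function name and batch size are illustrative choices):

```python
def partition(batch, size):
    """Split a large batch into smaller batches of at most `size` records,
    each of which can then be dispatched to the server farm concurrently."""
    return [batch[i:i + size] for i in range(0, len(batch), size)]

# A batch of 10 record IDs split into sub-batches of up to 3 records.
batch = list(range(10))
sub_batches = partition(batch, 3)
# -> [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]
```

Choosing the sub-batch size is a tuning exercise: smaller batches spread load more evenly across servers, while larger ones reduce per-request overhead.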
Scale Out with Farms of Servers as Demand Grows
Once a design supports stateless data and concurrent processing, the path to affordable scalability becomes clearer. When a request is processed by the system, it can be routed to any thread waiting on any given server in a farm. It can be processed as quickly as possible without accounting for other requests in the batch or other servers in the farm. Implementation of load balancing across a farm of servers becomes relatively simple, where any request can be routed to the least busy server in a farm of identically configured machines.
With the proper hardware and virtual server platform for deployment, the result is a system that can be designed and built once, with investment made incrementally as demand for processing and reliability grows. Each time more capacity is required, another identically configured virtual server can be created and added to the farm.
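The "route to the least busy server, add identical servers on demand" idea can be sketched as a toy simulation (server names and the in-flight-count heuristic are illustrative; a real farm would sit behind a hardware or software load balancer):

```python
# In-flight request counts per identically configured server in the farm.
servers = {"server-a": 0, "server-b": 0, "server-c": 0}

def route(servers):
    """Route a request to the least-busy server: any request can go to
    any server because processing is stateless."""
    name = min(servers, key=servers.get)
    servers[name] += 1
    return name

def add_server(servers, name):
    """Scaling out is just registering another identical server; no code
    changes and no re-balancing of existing work are needed."""
    servers[name] = 0

for _ in range(3):
    route(servers)                  # each existing server takes one request

add_server(servers, "server-d")     # demand grows, so a new server joins
next_up = route(servers)            # the idle newcomer is chosen next
```

Because every server is interchangeable, adding capacity changes only the farm's membership list, which is exactly what makes incremental investment possible.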
Architecture for Scale Out
The diagram below depicts a high-level design of three farms of servers that are configured to scale out on demand. Features of cloud-based platforms such as Windows Azure or on-premise hypervisors can be used to create multiple sets of communicating servers that quickly scale out by adjusting only simple configuration settings, instead of changing code or manually re-configuring and deploying new hardware.