Non-functional requirements (NFRs) always seem to come as a surprise: "The system should be available throughout the year, and service response time should be in nanoseconds." From a business point of view the requirements definitely make sense, but from a technical point of view they pose a mounting challenge.
In most cases in the integration space, the business identifies bottlenecks such as high response times, an inability to process multiple parallel requests at the same time, or security lapses in some modules.
The following are some of the questions that need answers through performance engineering:
- How should one manage the NFRs in an enterprise without getting into ‘obvious’ troubles?
- How can we use the NFRs to predict performance even before the solution reaches the design or build phase?
- What if, after doing this exercise, the resulting prediction itself leads to unexpected values?
- Can we chart out a plan to reach the end goal with the right justification for the cost?
If one can take the liberty to say so, 'all' enterprises face this key challenge in their IT projects – getting the 'right' performance from their solution, whether that means response time, processing high transaction volumes, predicting response delays when more users are on the system, failover time, data centre switchover, processing huge payloads through the integration layer, or other aspects of availability.
As part of requirement analysis, the following inputs and steps will help us manage, govern and address the non-functional requirements:
- Gather the requirements in a top-down approach:
- List the services that make up a business flow according to their gradation under the enterprise SOA standards – whether they fall under business services, technical services, foundation, mediation, etc.
- List the requirements for each of these services – let us say a response time of 1 second, as expected by the business.
- Capture them in granular detail that helps in capacity planning and predicting resource utilization:
- At what times do the services need to be available?
- What are the usual peak times when the services will have high hit volume?
- Are they real-time or scheduled?
- Volume of information being transacted, volume of users, and the size in bytes of each transaction.
- Put in the 'brakes' for these services (these negative clauses help in predicting a closer-to-accurate picture):
- Data access services have a dependency on the speed of the database.
- Assume a factor for parallel processing, which gives the throttling criteria.
- Assume a factor for deterioration in a shared environment.
- Network latency: this can be of two types: (1) within a data centre or the same geographic location; (2) between systems that are separated, such as in different countries and continents.
- Any other brakes or bottlenecks that are applicable to the enterprise.
- Standard measures for the environment:
- Assume a standard environment, based on the existing infrastructure footprint, on which these services are expected to be hosted.
- Include all parameters such as memory utilization, CPU speed, server capacity, and the number of services that will go on each server, including load-balanced and stand-by instances of the services.
Based on the above parameters, the response time and utilization can be predicted; the answers help in estimating the total cost of the infrastructure.
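As an illustrative sketch of such a prediction (all numbers below are assumptions, not measurements), a simple additive model with a deterioration factor can turn these inputs into a first-cut figure:

```
base service processing time          = 400 ms   (assumed)
database dependency ("brake")         = 300 ms   (assumed)
network latency, intra data centre    = 100 ms   (assumed)
shared-environment deterioration      = x 1.2    (assumed factor)

predicted response time = (400 + 300 + 100) x 1.2 = 960 ms
```

Against the business expectation of 1 second per service, this prediction just fits – and if it did not, the breakdown would show which input to attack first.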
In practice, we mostly end up diagnosing applications only after they are designed and developed in order to improve on non-functional requirements. The following are some best practices to check and validate in Mule applications:
- Usage of Session variables
- Reduce the number of session variables used in a Mule flow, as the session scope is serialised and deserialised every time a message goes through an endpoint, including VM endpoints.
- Keep session variables small instead of copying huge payloads into them.
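As a sketch of that second point (the flow and variable names here are illustrative, not from the original), store only the small identifier a downstream flow needs rather than the whole payload:

```xml
<!-- Avoid: copies the entire payload into session scope, which is
     serialised at every endpoint crossing -->
<set-session-variable variableName="originalOrder" value="#[payload]" />

<!-- Prefer: keep only the small value the downstream flow needs -->
<set-session-variable variableName="orderId" value="#[payload.orderId]" />
```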
- Payload considerations – When dealing with payload formats Mule is flexible and versatile in dealing with different payload formats. It is good to deal with Bean payloads as they tend to be faster compared to other formats wherever possible.
- Data Extraction
- For XML, use XPath.
- Use MEL instead of other scripting languages, as most scripting languages are dynamically typed and may even be interpreted at runtime.
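Both recommendations can be combined, since Mule 3.6+ exposes XPath through the xpath3() MEL function. A sketch (the element names are assumed for illustration):

```xml
<!-- Extract a single value from an XML payload using XPath via MEL -->
<set-variable variableName="customerId"
              value="#[xpath3('//order/customer/@id')]" />
```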
- Use synchronous process flows, which help avoid context switches and contention when the payload moves across thread pools.
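In Mule 3, a flow can be pinned to a single thread with the synchronous processing strategy; the flow contents below are illustrative only:

```xml
<!-- The whole flow runs on the receiver thread: no queueing or
     thread-pool hand-offs between processors -->
<flow name="orderFlow" processingStrategy="synchronous">
    <vm:inbound-endpoint path="orders" exchange-pattern="request-response" />
    <logger level="INFO" message="#[payload]" />
</flow>
```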
- HTTP Connections
- Use the HTTP listener instead of the HTTP inbound endpoint. The HTTP listener uses non-blocking I/O and, unlike the HTTP endpoint, does not dedicate one thread per client.
- HTTP Keep-Alive persists connections, which helps improve performance.
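A listener-based configuration looks roughly like this (host, port, path and payload are placeholders); for HTTP/1.1 clients, keep-alive is the default connection behaviour:

```xml
<http:listener-config name="httpConfig" host="0.0.0.0" port="8081" />

<flow name="apiFlow">
    <!-- Non-blocking NIO listener: threads are not tied up per client -->
    <http:listener config-ref="httpConfig" path="/api/orders" />
    <set-payload value="OK" />
</flow>
```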
- Use flow references instead of VM endpoints – flow references are a direct way to communicate with other flows within an application. They inject the payload into the target flow without the overhead that VM endpoints carry.
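Side by side, the two styles look like this (the path and flow name are illustrative):

```xml
<!-- VM endpoint: the message is handed off via an in-memory queue -->
<vm:outbound-endpoint path="enrich" exchange-pattern="request-response" />

<!-- Flow reference: a direct in-process call, no queue in between -->
<flow-ref name="enrichFlow" />
```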
- Logging – in typical application development, logging is always ignored or not fine-tuned to help applications perform better. By leveraging Log4j 2 with recent Mule versions (3.6 and above), it is easy to make logging asynchronous.
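A log4j2.xml fragment along these lines (the appender name and file path are assumptions) routes application logging through Log4j 2's asynchronous loggers:

```xml
<Configuration>
    <Appenders>
        <File name="file" fileName="logs/app.log">
            <PatternLayout pattern="%d %p %c - %m%n" />
        </File>
    </Appenders>
    <Loggers>
        <!-- AsyncRoot hands log events to a background thread,
             so flow threads are not blocked on disk I/O -->
        <AsyncRoot level="INFO">
            <AppenderRef ref="file" />
        </AsyncRoot>
    </Loggers>
</Configuration>
```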
The tips explained above are some, though not all, of the ways to improve the performance of Mule flows. Along with flow-tuning tricks, it is also suggested to look into JVM settings, the messaging capabilities within the application, Mule ESB instance clustering and thread pooling to help meet the non-functional requirements defined by the business users.
If you would like to find out more about how Systems Integration could help you make the most out of your current infrastructure while enabling you to open your digital horizons, do give us a call at +44 (0)203 475 7980 or email us at email@example.com.