Performance and benchmarking
Overview
We have performed extensive automated load testing of our platform to measure performance, stability, scalability, availability, and cost under various types of load. Our automated load-testing tool, flowTester, is publicly available on GitHub. The load testing was performed on our single-instance (small) deployment of CXFabric, which is optimized for development and testing rather than for performance, stability, scalability, or availability.
Quick-look numbers from our benchmark testing:

- 1,000 parallel flow executions
- 6.2-second average runtime
- 16,000 flow executions in 100 seconds
- 100% success rate
Setup
The test deployment consists of two t3a.large AWS EC2 instances inside AWS EKS. Load testing took place with a "warm start" of the deployed platform, i.e., the platform had already been operating under some load before the load testing began, but it was not used for other purposes during the tests. Cloud cost examples can be viewed here.
Our load tests were parameterized, using different types of flows and different concurrency rates (parallel flow executions). Our latest load tests performed 25,000 executions of configured flows at concurrency rates between 25 and 1,000. In these tests, flowTester triggered the execution of configured flows over the public Internet from a consumer-grade workstation.
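The burst-style triggering described above can be sketched as follows. This is an illustrative stand-in, not the actual flowTester code: `trigger_flow`, `run_burst`, and the flow URL are assumptions, and the HTTP round trip is stubbed with a short sleep where a real runner would POST to the flow's REST trigger endpoint.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def trigger_flow(flow_url: str) -> tuple[bool, float]:
    """Trigger one flow execution via its REST endpoint and time it.

    Stubbed with a short sleep; a real runner would issue an HTTP POST
    to flow_url and treat a 2xx response as success.
    """
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the HTTP round trip
    return True, (time.perf_counter() - start) * 1000  # (success, duration ms)

def run_burst(flow_url: str, concurrency: int) -> list[tuple[bool, float]]:
    """Fire `concurrency` parallel triggers and wait for all to complete."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(trigger_flow, [flow_url] * concurrency))

# Hypothetical endpoint; repeat bursts at different concurrency levels
results = run_burst("https://example.invalid/flows/demo/trigger", 25)
```

Repeating `run_burst` at each concurrency level (25 up to 1,000) and recording the per-request results yields the raw data behind the tables below.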
We tracked the following metrics in our load testing:
- Number of successfully executed flows in an average 100- or 60-second time window
- Average flow execution time (ms)
- Percentage of flows successfully executed within 100 or 60 seconds of triggering execution
- Maximum CPU consumption (%)
- Maximum memory consumption (%)
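The timing-based metrics in this list can be computed from per-execution records. A minimal sketch, assuming each record is a `(start_s, end_s, success)` tuple with timestamps relative to the start of the run (CPU and memory maxima come from platform monitoring and are not covered here):

```python
def summarize(executions: list[tuple[float, float, bool]],
              window_s: float = 100.0) -> dict:
    """Compute load-test metrics from (start_s, end_s, success) records."""
    durations_ms = [(end - start) * 1000 for start, end, _ in executions]
    total_time = max(end for _, end, _ in executions)
    completed = sum(1 for _, _, ok in executions if ok)
    return {
        # Average flow execution time (ms)
        "avg_ms": sum(durations_ms) / len(durations_ms),
        # Percentage of successfully executed flows
        "success_pct": 100.0 * completed / len(executions),
        # Successful executions per `window_s`-second window, on average
        "per_window": completed * window_s / total_time,
    }

# Toy data: 3 successful flows finishing within 2 s of the run start
stats = summarize([(0.0, 0.5, True), (0.0, 1.0, True), (0.5, 2.0, True)])
# avg_ms = 1000.0, success_pct = 100.0, per_window = 150.0
```

The "per window" figure is an average rate, which is why the table values below are fractional.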
Load Testing with 10 Different Flows, Each with 35 Nodes, with a REST Trigger
| #concurrent flows | avg flow execution time (ms) | % flows successfully executed | #flows executed on average in a 100 s window |
|---|---|---|---|
| 1000 | 6218 | 100 | 16,082.34 |
| 750 | 5733 | 100 | 13,082.16 |
| 500 | 2418 | 100 | 20,678.25 |
| 250 | 1325 | 100 | 18,867.92 |
| 125 | 643 | 100 | 19,440.12 |
| 100 | 493 | 100 | 20,283.98 |
| 75 | 440 | 100 | 17,045.45 |
| 50 | 259 | 100 | 19,305.02 |
| 25 | 211 | 100 | 11,848.34 |
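The throughput column appears to follow directly from the first two columns: with a burst of `concurrency` flows completing every `avg_ms` milliseconds on average, a 100-second window fits `concurrency * 100_000 / avg_ms` completions. A quick check against three table rows, assuming that relationship holds:

```python
# Selected table rows: (concurrent flows, avg execution time in ms,
# reported #flows executed on average per 100 s window)
rows = [
    (1000, 6218, 16082.34),
    (500, 2418, 20678.25),
    (50, 259, 19305.02),
]

for concurrency, avg_ms, reported in rows:
    # One "wave" of `concurrency` flows completes every avg_ms milliseconds,
    # so a 100 s (= 100,000 ms) window fits this many completions:
    derived = concurrency * 100_000 / avg_ms
    assert abs(derived - reported) < 0.01, (concurrency, derived, reported)
```

This also explains why throughput does not rise monotonically with concurrency: it is the ratio of concurrency to average execution time, and both grow together.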
Interpretation
The platform sustains high throughput, measured as the number of flow executions in a given time interval (100 or 60 seconds), for complex as well as simple flows. For simple flows (3 nodes), the platform executes more than 1,500 flows in 100 seconds when a high number of concurrent requests is launched (>= 50). For complex flows (35 nodes), the platform executes more than 500 flows in a 60-second interval when a high number of concurrent requests is launched (>= 50).
At the same time, we observe high platform stability, measured as the percentage of successfully executed flows, even though the load tests ran on a development deployment that is not tuned for maximum performance or stability.
Cross Platform Comparison
CXFabric is quite similar to other iPaaS platforms such as n8n, a commercially successful iPaaS platform. n8n publishes benchmarking information that makes a comparison useful. On its benchmarking webpage, n8n states that "n8n can handle up to 220 workflow executions per second on a single instance." To demonstrate this, n8n provides, as a graph, data from benchmark testing of a very simple two-node flow on a "basic" single instance (an ECS c5a.large instance with 4 GB RAM). Architecturally, n8n uses a queuing model: when all 220 "servers" are occupied, incoming requests are queued and then worked off in FIFO order as workflows complete and servers become available.
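That queuing behavior can be illustrated with a fixed-size worker pool, which admits up to its capacity immediately and holds further submissions in FIFO order until a worker frees up. This is a simplified sketch of the model described above, not n8n's actual implementation; `CAPACITY` and `run_workflow` are illustrative stand-ins.

```python
import time
from concurrent.futures import ThreadPoolExecutor

CAPACITY = 4  # stand-in for n8n's reported 220 concurrent "servers"

def run_workflow(i: int) -> int:
    time.sleep(0.05)  # placeholder for executing a simple workflow
    return i

# The first CAPACITY submissions start immediately; the remaining ones
# wait in FIFO order and start as workers become available, so a burst
# larger than CAPACITY drains in successive "waves".
with ThreadPoolExecutor(max_workers=CAPACITY) as pool:
    order = list(pool.map(run_workflow, range(10)))
```

A burst of 10 submissions against a pool of 4 therefore completes in roughly three waves rather than all at once, which is why queued results arrive "within" a window instead of in real time.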
The CXFabric benchmark testing described above is broadly similar to the n8n benchmarking: a burst of requests is launched, all requests are allowed to process and complete, the number of successful completions within a given time window is recorded, and another burst is launched. This pattern of request batches is repeated to verify that the platform performs at the same level in each batch and does not degrade progressively. To compare "concurrent" executions, the comparable tests are those that use a burst of 220 or more requests. Since there is no queuing in the CXFabric platform, comparing the "failure" rate in the CXFabric tests is instructive but not exactly comparable. The best direct comparison is shown below:
| | n8n | CXFabric |
|---|---|---|
| Basic configuration | ECS c5a.large w/ 4 GB | 2 × (EC2 t3a.large w/ 8 GB) |
| Workflow | 2 steps | 3 steps |
| Concurrent flows | 220 | 1000 |
| Execution time | Within 100 seconds (queued) | Real-time average 6 seconds |