Swift/Performance Metrics
Tests pre-deploy
test 1 (reads, 5 nodes)
- all reads
- two frontends (ms-fe1, ms-fe2)
- three backends (ms1, ms2, ms3)
ben@hume:~/swift$ ./geturls.bak -t 20 filelists/wikinews-filelist-urls.txt
About to start calling all these URLs. I'll print status every 10% or 100k lines
Press <return> at any time for current status.
Ok, starting 0% of the way through the file.
status report: progress: 10%, 2699 URLs tried, 0 URLs failed, execution time: 0:00:06.163779
status report: progress: 11%, 2700 URLs tried, 0 URLs failed, execution time: 0:00:06.164364
status report: progress: 20%, 5398 URLs tried, 0 URLs failed, execution time: 0:00:12.362599
status report: progress: 20%, 5399 URLs tried, 0 URLs failed, execution time: 0:00:12.362758
^C
Final summary: total: 6843, failed: 0, execution time: 0:00:15.700910
Query duration report:
dur: number of queries that took within <dur> range (in milliseconds)
  <dur>: successes | failures | exceptions
  8-10: 4 ( 0%) | 0 ( 0%) | 0 ( 0%)
  11-13: 2 ( 0%) | 0 ( 0%) | 0 ( 0%)
  14-17: 3 ( 0%) | 0 ( 0%) | 0 ( 0%)
  18-22: 6 ( 0%) | 0 ( 0%) | 0 ( 0%)
  23-28: 4 ( 0%) | 0 ( 0%) | 0 ( 0%)
  29-36: 139 ( 2%) | 0 ( 0%) | 0 ( 0%)
  37-46: 4679 (68%) | 0 ( 0%) | 0 ( 0%)
  47-58: 1927 (28%) | 0 ( 0%) | 0 ( 0%)
  59-73: 79 ( 1%) | 0 ( 0%) | 0 ( 0%)
  74-92: 4 ( 0%) | 0 ( 0%) | 0 ( 0%)
  93-116: 2 ( 0%) | 0 ( 0%) | 0 ( 0%)
  2739841-3424801: 1 ( 0%) | 0 ( 0%) | 0 ( 0%)
Throughput report:
qps: number of seconds during which performance was in that qps range
  20-22: 1 ( 6%)
  432-475: 15 (93%)
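For context, geturls appears to be a small home-grown driver rather than a standard benchmark. The sketch below is only an illustration of its general shape (Python, with made-up names and details, not the actual script): a pool of worker threads fetches URLs from a file list, each request's duration is recorded, and durations are bucketed into geometrically growing ranges, which is why the bucket edges in the reports on this page grow by roughly 25% per step.

 #!/usr/bin/env python3
 # Illustrative sketch of a geturls-style tester; not the actual script.
 # Reads a file of URLs, fetches them with a pool of worker threads, and
 # prints a duration histogram whose buckets grow geometrically.
 import math
 import sys
 import time
 import urllib.request
 from collections import Counter
 from concurrent.futures import ThreadPoolExecutor
 
 RATIO = 1.25  # assumed bucket growth factor; the real script's may differ
 
 def bucket(ms):
     # map a duration in milliseconds to a geometric bucket index
     return 0 if ms < 1 else int(math.log(ms, RATIO))
 
 def fetch(url):
     start = time.monotonic()
     try:
         urllib.request.urlopen(url, timeout=60).read()
         ok = True
     except Exception:
         ok = False
     return ok, (time.monotonic() - start) * 1000.0
 
 def main(filelist, threads=20):
     urls = [line.strip() for line in open(filelist) if line.strip()]
     successes, failures = Counter(), Counter()
     t0 = time.monotonic()
     with ThreadPoolExecutor(max_workers=threads) as pool:
         for ok, ms in pool.map(fetch, urls):
             (successes if ok else failures)[bucket(ms)] += 1
     failed = sum(failures.values())
     print("Final summary: total: %d, failed: %d, execution time: %.1fs"
           % (len(urls), failed, time.monotonic() - t0))
     print("Query duration report (ms): successes | failures")
     for b in sorted(set(successes) | set(failures)):
         lo, hi = int(RATIO ** b), int(RATIO ** (b + 1))
         print("  %d-%d: %d | %d" % (lo, hi, successes[b], failures[b]))
 
 if __name__ == "__main__":
     main(sys.argv[1], threads=20)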
test 2 (mixed, 5 nodes)
- mixed reads and writes (6434 reads, 1926 writes)
- two frontends (ms-fe{1,2}), three backends (ms{1,2,3})
^C
Final summary: total: 8360, failed: 0, execution time: 0:00:40.099530
Query duration report:
dur: number of queries that took within <dur> range (in milliseconds)
  <dur>: successes | failures | exceptions
  14-17: 3 ( 0%) | 0 ( 0%) | 0 ( 0%)
  18-22: 2 ( 0%) | 0 ( 0%) | 0 ( 0%)
  23-28: 4 ( 0%) | 0 ( 0%) | 0 ( 0%)
  29-36: 208 ( 2%) | 0 ( 0%) | 0 ( 0%)
  37-46: 4436 (52%) | 0 ( 0%) | 0 ( 0%)
  47-58: 1836 (21%) | 0 ( 0%) | 0 ( 0%)
  59-73: 302 ( 3%) | 0 ( 0%) | 0 ( 0%)
  74-92: 263 ( 3%) | 0 ( 0%) | 0 ( 0%)
  93-116: 318 ( 3%) | 0 ( 0%) | 0 ( 0%)
  117-146: 283 ( 3%) | 0 ( 0%) | 0 ( 0%)
  147-183: 193 ( 2%) | 0 ( 0%) | 0 ( 0%)
  184-230: 79 ( 0%) | 0 ( 0%) | 0 ( 0%)
  231-288: 25 ( 0%) | 0 ( 0%) | 0 ( 0%)
  289-361: 23 ( 0%) | 0 ( 0%) | 0 ( 0%)
  362-452: 28 ( 0%) | 0 ( 0%) | 0 ( 0%)
  453-566: 140 ( 1%) | 0 ( 0%) | 0 ( 0%)
  567-708: 105 ( 1%) | 0 ( 0%) | 0 ( 0%)
  709-886: 11 ( 0%) | 0 ( 0%) | 0 ( 0%)
  887-1108: 5 ( 0%) | 0 ( 0%) | 0 ( 0%)
  897789-1122236: 98 ( 1%) | 0 ( 0%) | 0 ( 0%)
  1753497-2191871: 6 ( 0%) | 0 ( 0%) | 0 ( 0%)
  2739841-3424801: 12 ( 0%) | 0 ( 0%) | 0 ( 0%)
Throughput report:
qps: number of seconds during which performance was in that qps range
  13-14: 1 ( 2%)
  17-18: 1 ( 2%)
  18-19: 1 ( 2%)
  20-22: 2 ( 4%)
  23-25: 2 ( 4%)
  56-61: 2 ( 4%)
  61-67: 6 (14%)
  74-81: 1 ( 2%)
  99-108: 3 ( 7%)
  108-118: 1 ( 2%)
  135-148: 6 (14%)
  309-339: 1 ( 2%)
  430-473: 14 (34%)
test 3 (writes, 5 nodes)
- all writes
- two frontends (ms-fe{1,2}), three backends (ms{1-3})
^C
Final summary: total: 6434, failed: 0, execution time: 0:00:57.390167
Query duration report:
dur: number of queries that took within <dur> range (in milliseconds)
  <dur>: successes | failures | exceptions
  29-36: 107 ( 1%) | 0 ( 0%) | 0 ( 0%)
  37-46: 308 ( 4%) | 0 ( 0%) | 0 ( 0%)
  47-58: 498 ( 7%) | 0 ( 0%) | 0 ( 0%)
  59-73: 627 ( 9%) | 0 ( 0%) | 0 ( 0%)
  74-92: 996 (15%) | 0 ( 0%) | 0 ( 0%)
  93-116: 1206 (18%) | 0 ( 0%) | 0 ( 0%)
  117-146: 1150 (17%) | 0 ( 0%) | 0 ( 0%)
  147-183: 669 (10%) | 0 ( 0%) | 0 ( 0%)
  184-230: 270 ( 4%) | 0 ( 0%) | 0 ( 0%)
  231-288: 120 ( 1%) | 0 ( 0%) | 0 ( 0%)
  289-361: 129 ( 2%) | 0 ( 0%) | 0 ( 0%)
  362-452: 118 ( 1%) | 0 ( 0%) | 0 ( 0%)
  453-566: 44 ( 0%) | 0 ( 0%) | 0 ( 0%)
  567-708: 25 ( 0%) | 0 ( 0%) | 0 ( 0%)
  709-886: 13 ( 0%) | 0 ( 0%) | 0 ( 0%)
  887-1108: 10 ( 0%) | 0 ( 0%) | 0 ( 0%)
  897789-1122236: 49 ( 0%) | 0 ( 0%) | 0 ( 0%)
  1753497-2191871: 18 ( 0%) | 0 ( 0%) | 0 ( 0%)
  2739841-3424801: 77 ( 1%) | 0 ( 0%) | 0 ( 0%)
Throughput report:
qps: number of seconds during which performance was in that qps range
  20-22: 2 ( 3%)
  39-42: 2 ( 3%)
  51-56: 2 ( 3%)
  58-63: 5 ( 8%)
  66-72: 3 ( 5%)
  76-83: 2 ( 3%)
  83-91: 4 ( 6%)
  99-108: 3 ( 5%)
  108-118: 3 ( 5%)
  131-144: 17 (29%)
  145-159: 14 (24%)
  159-174: 1 ( 1%)
test 4
Annoyingly, it seems that testing from fenari gets me much better speed than testing from hume, so I can't really use the tests above - the bottleneck there was hume, not swift.
- reads, 6 nodes (2 front 4 back)
Final summary: total: 8498, failed: 0, execution time: 0:00:07.396502
Query duration report:
dur: number of queries that took within <dur> range (in milliseconds)
  <dur>: successes | failures | exceptions
  6-7: 187 ( 2%) | 0 ( 0%) | 0 ( 0%)
  8-10: 2094 (24%) | 0 ( 0%) | 0 ( 0%)
  11-13: 2241 (26%) | 0 ( 0%) | 0 ( 0%)
  14-17: 1612 (18%) | 0 ( 0%) | 0 ( 0%)
  18-22: 979 (11%) | 0 ( 0%) | 0 ( 0%)
  23-28: 706 ( 8%) | 0 ( 0%) | 0 ( 0%)
  29-36: 411 ( 4%) | 0 ( 0%) | 0 ( 0%)
  37-46: 142 ( 1%) | 0 ( 0%) | 0 ( 0%)
  47-58: 65 ( 0%) | 0 ( 0%) | 0 ( 0%)
  59-73: 27 ( 0%) | 0 ( 0%) | 0 ( 0%)
  74-92: 4 ( 0%) | 0 ( 0%) | 0 ( 0%)
  93-116: 6 ( 0%) | 0 ( 0%) | 0 ( 0%)
  117-146: 12 ( 0%) | 0 ( 0%) | 0 ( 0%)
  147-183: 4 ( 0%) | 0 ( 0%) | 0 ( 0%)
  184-230: 6 ( 0%) | 0 ( 0%) | 0 ( 0%)
  289-361: 2 ( 0%) | 0 ( 0%) | 0 ( 0%)
Throughput report:
qps: number of seconds during which performance was in that qps range
  22-24: 1 (12%)
  987-1085: 1 (12%)
  1187-1305: 6 (75%)
test 5
- mixed reads and writes
- 6 nodes (2 front 4 back)
- note that the final rows are merely bad data - they claim those queries took longer than the script's total run time (see the filtering sketch after the report below)
- Throughput report is missing because of a bug in the script
Final summary: total: 13698, failed: 0, execution time: 0:01:01.957862
Query duration report:
dur: number of queries that took within <dur> range (in milliseconds)
  <dur>: successes | failures | exceptions
  6-7: 184 ( 1%) | 0 ( 0%) | 0 ( 0%)
  8-10: 2330 (17%) | 0 ( 0%) | 0 ( 0%)
  11-13: 2264 (16%) | 0 ( 0%) | 0 ( 0%)
  14-17: 1625 (11%) | 0 ( 0%) | 0 ( 0%)
  18-22: 829 ( 6%) | 0 ( 0%) | 0 ( 0%)
  23-28: 592 ( 4%) | 0 ( 0%) | 0 ( 0%)
  29-36: 384 ( 2%) | 0 ( 0%) | 0 ( 0%)
  37-46: 268 ( 1%) | 0 ( 0%) | 0 ( 0%)
  47-58: 332 ( 2%) | 0 ( 0%) | 0 ( 0%)
  59-73: 457 ( 3%) | 0 ( 0%) | 0 ( 0%)
  74-92: 767 ( 5%) | 0 ( 0%) | 0 ( 0%)
  93-116: 959 ( 6%) | 0 ( 0%) | 0 ( 0%)
  117-146: 965 ( 7%) | 0 ( 0%) | 0 ( 0%)
  147-183: 641 ( 4%) | 0 ( 0%) | 0 ( 0%)
  184-230: 408 ( 2%) | 0 ( 0%) | 0 ( 0%)
  231-288: 210 ( 1%) | 0 ( 0%) | 0 ( 0%)
  289-361: 113 ( 0%) | 0 ( 0%) | 0 ( 0%)
  362-452: 80 ( 0%) | 0 ( 0%) | 0 ( 0%)
  453-566: 77 ( 0%) | 0 ( 0%) | 0 ( 0%)
  567-708: 44 ( 0%) | 0 ( 0%) | 0 ( 0%)
  709-886: 39 ( 0%) | 0 ( 0%) | 0 ( 0%)
  887-1108: 13 ( 0%) | 0 ( 0%) | 0 ( 0%)
  897789-1122236: 57 ( 0%) | 0 ( 0%) | 0 ( 0%)
  1753497-2191871: 17 ( 0%) | 0 ( 0%) | 0 ( 0%)
  2739841-3424801: 26 ( 0%) | 0 ( 0%) | 0 ( 0%)
  4281003-5351253: 11 ( 0%) | 0 ( 0%) | 0 ( 0%)
  5351254-6689067: 6 ( 0%) | 0 ( 0%) | 0 ( 0%)
  6689068-8361335: 2 ( 0%) | 0 ( 0%) | 0 ( 0%)
  8361336-10451670: 1 ( 0%) | 0 ( 0%) | 0 ( 0%)
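Given the note above about the bogus final rows, here's one trivial way to filter them out when post-processing (a sketch only; it assumes the raw per-query durations are available in milliseconds): drop anything that claims to have taken longer than the run's wall-clock time.

 # Sketch: discard duration samples longer than the run itself - they indicate
 # the measurement bug noted above, not real query times.
 def plausible_durations(durations_ms, run_seconds):
     limit_ms = run_seconds * 1000.0
     return [d for d in durations_ms if d <= limit_ms]
 
 # for test 5 (run time ~62s), anything over ~62000 ms would be dropped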
Here are the same numbers with the reads mostly removed:
Final summary: total: 13698, failed: 0, execution time: 0:01:01.957862
Query duration report:
dur: number of queries that took within <dur> range (in milliseconds)
  <dur>: successes | failures | exceptions
  6-7: 0 ( 0%) | 0 ( 0%) | 0 ( 0%)
  8-10: 236 ( 4%) | 0 ( 0%) | 0 ( 0%)
  11-13: 23 ( 0%) | 0 ( 0%) | 0 ( 0%)
  14-17: 13 ( 0%) | 0 ( 0%) | 0 ( 0%)
  18-22: 0 ( 0%) | 0 ( 0%) | 0 ( 0%)
  23-28: 0 ( 0%) | 0 ( 0%) | 0 ( 0%)
  29-36: 0 ( 0%) | 0 ( 0%) | 0 ( 0%)
  37-46: 126 ( 2%) | 0 ( 0%) | 0 ( 0%)
  47-58: 430 ( 8%) | 0 ( 0%) | 0 ( 0%)
  59-73: 457 ( 8%) | 0 ( 0%) | 0 ( 0%)
  74-92: 763 (14%) | 0 ( 0%) | 0 ( 0%)
  93-116: 953 (17%) | 0 ( 0%) | 0 ( 0%)
  117-146: 953 (17%) | 0 ( 0%) | 0 ( 0%)
  147-183: 637 (11%) | 0 ( 0%) | 0 ( 0%)
  184-230: 402 ( 7%) | 0 ( 0%) | 0 ( 0%)
  231-288: 210 ( 4%) | 0 ( 0%) | 0 ( 0%)
  289-361: 111 ( 2%) | 0 ( 0%) | 0 ( 0%)
  362-452: 80 ( 1%) | 0 ( 0%) | 0 ( 0%)
  453-566: 77 ( 1%) | 0 ( 0%) | 0 ( 0%)
  567-708: 44 ( 1%) | 0 ( 0%) | 0 ( 0%)
  709-886: 39 ( 1%) | 0 ( 0%) | 0 ( 0%)
  887-1108: 13 ( 0%) | 0 ( 0%) | 0 ( 0%)
  897789-1122236: 57 ( 1%) | 0 ( 0%) | 0 ( 0%)
  1753497-2191871: 17 ( 0%) | 0 ( 0%) | 0 ( 0%)
  2739841-3424801: 26 ( 0%) | 0 ( 0%) | 0 ( 0%)
  4281003-5351253: 11 ( 0%) | 0 ( 0%) | 0 ( 0%)
  5351254-6689067: 6 ( 0%) | 0 ( 0%) | 0 ( 0%)
  6689068-8361335: 2 ( 0%) | 0 ( 0%) | 0 ( 0%)
  8361336-10451670: 1 ( 0%) | 0 ( 0%) | 0 ( 0%)
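The writes-only view above looks like the mixed run's histogram with the read-only run's counts subtracted bucket by bucket and clamped at zero (e.g. 2330 - 2094 = 236 in the 8-10ms bucket). A minimal sketch of that subtraction, assuming both reports have already been parsed into dicts keyed by bucket label:

 # Sketch: subtract a read-only duration histogram from a mixed-run histogram,
 # bucket by bucket, clamping at zero. Both inputs map bucket label -> count.
 def remove_reads(mixed, reads_only):
     return {label: max(count - reads_only.get(label, 0), 0)
             for label, count in mixed.items()}
 
 # remove_reads({"8-10": 2330}, {"8-10": 2094}) == {"8-10": 236}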
test 6
- all writes
- 6 nodes (2 front 4 back)
Final summary: total: 2284, failed: 0, execution time: 0:00:21.048471
Query duration report:
dur: number of queries that took within <dur> range (in milliseconds)
  <dur>: successes | failures | exceptions
  29-36: 4 ( 0%) | 0 ( 0%) | 0 ( 0%)
  37-46: 45 ( 1%) | 0 ( 0%) | 0 ( 0%)
  47-58: 108 ( 4%) | 0 ( 0%) | 0 ( 0%)
  59-73: 173 ( 7%) | 0 ( 0%) | 0 ( 0%)
  74-92: 301 (13%) | 0 ( 0%) | 0 ( 0%)
  93-116: 359 (15%) | 0 ( 0%) | 0 ( 0%)
  117-146: 399 (17%) | 0 ( 0%) | 0 ( 0%)
  147-183: 367 (16%) | 0 ( 0%) | 0 ( 0%)
  184-230: 234 (10%) | 0 ( 0%) | 0 ( 0%)
  231-288: 86 ( 3%) | 0 ( 0%) | 0 ( 0%)
  289-361: 48 ( 2%) | 0 ( 0%) | 0 ( 0%)
  362-452: 54 ( 2%) | 0 ( 0%) | 0 ( 0%)
  453-566: 38 ( 1%) | 0 ( 0%) | 0 ( 0%)
  567-708: 23 ( 1%) | 0 ( 0%) | 0 ( 0%)
  709-886: 3 ( 0%) | 0 ( 0%) | 0 ( 0%)
  887-1108: 5 ( 0%) | 0 ( 0%) | 0 ( 0%)
  897789-1122236: 21 ( 0%) | 0 ( 0%) | 0 ( 0%)
  1753497-2191871: 9 ( 0%) | 0 ( 0%) | 0 ( 0%)
  2739841-3424801: 7 ( 0%) | 0 ( 0%) | 0 ( 0%)
Throughput report:
qps: number of seconds during which performance was in that qps range
  20-22: 1 ( 4%)
  85-93: 4 (18%)
  93-102: 4 (18%)
  106-116: 6 (27%)
  116-127: 4 (18%)
  128-140: 2 ( 9%)
  144-158: 1 ( 4%)
Tests post-deploy
production load + read performance test
I was curious how swift would fare for reads while it's serving production traffic - I wanted to know whether, with the regular load going on, my previous performance test results were still valid. I ran the test on fenari with 20 threads (i.e. 20 concurrent requests). My sample was 26996 image URLs from wikinews. 5 of the images didn't exist (you'll see them called out as failures below) but the rest were all there.
With the cache cold:
Final summary: total: 26996, failed: 5, execution time: 0:01:33.878265
Query duration report:
dur: number of queries that took within <dur> range (in milliseconds)
  <dur>: successes | failures | exceptions
  6-7: 357 ( 1%) | 0 ( 0%) | 0 ( 0%)
  8-10: 4439 (16%) | 0 ( 0%) | 0 ( 0%)
  11-13: 1330 ( 4%) | 0 ( 0%) | 0 ( 0%)
  14-17: 1347 ( 4%) | 0 ( 0%) | 0 ( 0%)
  18-22: 1968 ( 7%) | 0 ( 0%) | 0 ( 0%)
  23-28: 2439 ( 9%) | 0 ( 0%) | 0 ( 0%)
  29-36: 2271 ( 8%) | 0 ( 0%) | 0 ( 0%)
  37-46: 2078 ( 7%) | 0 ( 0%) | 0 ( 0%)
  47-58: 1756 ( 6%) | 0 ( 0%) | 0 ( 0%)
  59-73: 1624 ( 6%) | 0 ( 0%) | 0 ( 0%)
  74-92: 1435 ( 5%) | 0 ( 0%) | 0 ( 0%)
  93-116: 1412 ( 5%) | 0 ( 0%) | 0 ( 0%)
  117-146: 1251 ( 4%) | 2 (40%) | 0 ( 0%)
  147-183: 999 ( 3%) | 1 (20%) | 0 ( 0%)
  184-230: 811 ( 3%) | 1 (20%) | 0 ( 0%)
  231-288: 635 ( 2%) | 0 ( 0%) | 0 ( 0%)
  289-361: 367 ( 1%) | 0 ( 0%) | 0 ( 0%)
  362-452: 230 ( 0%) | 0 ( 0%) | 0 ( 0%)
  453-566: 133 ( 0%) | 0 ( 0%) | 0 ( 0%)
  567-708: 76 ( 0%) | 1 (20%) | 0 ( 0%)
  709-886: 12 ( 0%) | 0 ( 0%) | 0 ( 0%)
  1753497-2191871: 7 ( 0%) | 0 ( 0%) | 0 ( 0%)
  2739841-3424801: 14 ( 0%) | 0 ( 0%) | 0 ( 0%)
Throughput report:
qps: number of seconds during which performance was in that qps range
  20-22: 1 ( 1%)
  129-141: 2 ( 2%)
  142-156: 7 ( 7%)
  162-178: 5 ( 5%)
  178-195: 5 ( 5%)
  200-220: 4 ( 4%)
  220-242: 10 (10%)
  243-267: 6 ( 6%)
  270-297: 13 (13%)
  300-330: 14 (14%)
  333-366: 10 (10%)
  378-415: 6 ( 6%)
  415-456: 4 ( 4%)
  472-519: 3 ( 3%)
  523-575: 4 ( 4%)
With the cache warm (after the files had been requested ~6 times):
Final summary: total: 26993, failed: 5, execution time: 0:00:29.049621
Query duration report:
dur: number of queries that took within <dur> range (in milliseconds)
  <dur>: successes | failures | exceptions
  6-7: 2278 ( 8%) | 0 ( 0%) | 0 ( 0%)
  8-10: 11082 (41%) | 0 ( 0%) | 0 ( 0%)
  11-13: 3833 (14%) | 0 ( 0%) | 0 ( 0%)
  14-17: 2165 ( 8%) | 0 ( 0%) | 0 ( 0%)
  18-22: 1690 ( 6%) | 0 ( 0%) | 0 ( 0%)
  23-28: 1467 ( 5%) | 0 ( 0%) | 0 ( 0%)
  29-36: 1319 ( 4%) | 0 ( 0%) | 0 ( 0%)
  37-46: 997 ( 3%) | 0 ( 0%) | 0 ( 0%)
  47-58: 646 ( 2%) | 0 ( 0%) | 0 ( 0%)
  59-73: 537 ( 1%) | 0 ( 0%) | 0 ( 0%)
  74-92: 367 ( 1%) | 0 ( 0%) | 0 ( 0%)
  93-116: 258 ( 0%) | 1 (20%) | 0 ( 0%)
  117-146: 135 ( 0%) | 2 (40%) | 0 ( 0%)
  147-183: 91 ( 0%) | 0 ( 0%) | 0 ( 0%)
  184-230: 46 ( 0%) | 2 (40%) | 0 ( 0%)
  231-288: 24 ( 0%) | 0 ( 0%) | 0 ( 0%)
  289-361: 14 ( 0%) | 0 ( 0%) | 0 ( 0%)
  362-452: 34 ( 0%) | 0 ( 0%) | 0 ( 0%)
  567-708: 1 ( 0%) | 0 ( 0%) | 0 ( 0%)
  2739841-3424801: 9 ( 0%) | 0 ( 0%) | 0 ( 0%)
Throughput report:
qps: number of seconds during which performance was in that qps range
  20-22: 1 ( 3%)
  462-508: 2 ( 6%)
  554-609: 1 ( 3%)
  690-759: 2 ( 6%)
  762-838: 6 (20%)
  848-932: 4 (13%)
  936-1029: 3 (10%)
  1051-1156: 5 (16%)
  1158-1273: 4 (13%)
  1281-1409: 2 ( 6%)
Though peak qps measured on a per-second basis reached somewhere between 1281 and 1409, when averaged over a minute the peak was just over 600 qps. In the graph, the 404s at the top remained constant - that's the production traffic - and you can see the red line peaking at just over 600.
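The gap between those two figures is just the averaging window: a one-second burst of 1300+ requests doesn't move a one-minute average much. A quick sketch of the sustained figure, assuming a list of per-second request counts is available:

 # Sketch: peak one-minute average qps from per-second request counts.
 def peak_minute_average(per_second_counts, window=60):
     if len(per_second_counts) <= window:
         return sum(per_second_counts) / float(len(per_second_counts))
     return max(sum(per_second_counts[i:i + window]) / float(window)
                for i in range(len(per_second_counts) - window + 1))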
The effect of the sustained read load on latency was to make reads more efficient (as you can also see by comparing the cold and warm raw numbers above), while PUTs and DELETEs slowed down.
The next test I ran was to do the same thing (20 concurrent requests across the wikinews file list) from all three owa servers simultaneously, thereby tripling the incoming query rate and making sure the resources of the calling host weren't slowing down the test.
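The exact invocation isn't recorded here, but the simultaneous run amounts to kicking off the same command on each owa host over ssh and waiting for all three; a rough sketch (host names and the remote command are assumptions):

 # Sketch: start the same read test on several hosts at once and wait for all.
 # Host names and the remote command are illustrative, not the actual setup.
 import subprocess
 
 hosts = ["owa1", "owa2", "owa3"]
 cmd = "cd ~/swift && ./geturls -t 20 filelists/wikinews-filelist-urls.txt"
 procs = [subprocess.Popen(["ssh", host, cmd]) for host in hosts]
 for proc in procs:
     proc.wait()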
Under these conditions, the distribution of latency was pretty similar. Though the qps rate per server dropped, each number in the throughput chart below should be multiplied by 3, so the top bucket (664-730) actually corresponds to 1992-2190 qps.
Final summary: total: 26993, failed: 0, execution time: 0:00:59.097536
Query duration report:
dur: number of queries that took within <dur> range (in milliseconds)
  <dur>: successes | failures | exceptions
  4-5: 2 ( 0%) | 0 ( 0%) | 0 ( 0%)
  6-7: 1212 ( 4%) | 0 ( 0%) | 0 ( 0%)
  8-10: 4690 (17%) | 0 ( 0%) | 0 ( 0%)
  11-13: 2401 ( 8%) | 0 ( 0%) | 0 ( 0%)
  14-17: 2466 ( 9%) | 0 ( 0%) | 0 ( 0%)
  18-22: 2925 (10%) | 0 ( 0%) | 0 ( 0%)
  23-28: 4353 (16%) | 0 ( 0%) | 0 ( 0%)
  29-36: 3002 (11%) | 0 ( 0%) | 0 ( 0%)
  37-46: 1242 ( 4%) | 0 ( 0%) | 0 ( 0%)
  47-58: 895 ( 3%) | 0 ( 0%) | 0 ( 0%)
  59-73: 782 ( 2%) | 0 ( 0%) | 0 ( 0%)
  74-92: 757 ( 2%) | 0 ( 0%) | 0 ( 0%)
  93-116: 646 ( 2%) | 0 ( 0%) | 0 ( 0%)
  117-146: 441 ( 1%) | 0 ( 0%) | 0 ( 0%)
  147-183: 393 ( 1%) | 0 ( 0%) | 0 ( 0%)
  184-230: 240 ( 0%) | 0 ( 0%) | 0 ( 0%)
  231-288: 185 ( 0%) | 0 ( 0%) | 0 ( 0%)
  289-361: 116 ( 0%) | 0 ( 0%) | 0 ( 0%)
  362-452: 105 ( 0%) | 0 ( 0%) | 0 ( 0%)
  453-566: 61 ( 0%) | 0 ( 0%) | 0 ( 0%)
  567-708: 39 ( 0%) | 0 ( 0%) | 0 ( 0%)
  709-886: 12 ( 0%) | 0 ( 0%) | 0 ( 0%)
  1753497-2191871: 1 ( 0%) | 0 ( 0%) | 0 ( 0%)
  2739841-3424801: 30 ( 0%) | 0 ( 0%) | 0 ( 0%)
  4281003-5351253: 8 ( 0%) | 0 ( 0%) | 0 ( 0%)
Throughput report:
qps: number of seconds during which performance was in that qps range
  20-22: 2 ( 3%)
  66-72: 1 ( 1%)
  77-84: 1 ( 1%)
  104-114: 1 ( 1%)
  157-172: 1 ( 1%)
  192-211: 2 ( 3%)
  238-261: 3 ( 5%)
  261-287: 2 ( 3%)
  297-326: 5 ( 8%)
  328-360: 2 ( 3%)
  360-396: 4 ( 6%)
  398-437: 3 ( 5%)
  465-511: 9 (15%)
  543-597: 3 ( 5%)
  597-656: 6 (10%)
  664-730: 15 (25%)
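Since the aggregate rate is just the per-host numbers times the number of client hosts, the scaling mentioned above is trivial to apply to any bucket in the chart:

 # Sketch: scale a per-host qps bucket to the aggregate rate for n client hosts.
 def scale_qps_bucket(low, high, n_hosts=3):
     return (low * n_hosts, high * n_hosts)
 
 # scale_qps_bucket(664, 730) == (1992, 2190), as noted above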
The graphs confirm both the similar effects on latency and the much higher qps rate.