Implement the TPS centered executor #8

Open
4 of 7 tasks
hamax97 opened this issue Jul 13, 2023 · 4 comments

Comments


hamax97 commented Jul 13, 2023

Tasks:

  • Find an HTTP client. Options: see "Find an http client that allows to get performance metrics from each request" (#1).
  • Find an example web service, create a login flow and a transaction: Restful-booker.
  • Deploy the web service locally in a Docker container. Or where else could this be deployed cheaply?
  • Make the acceptance spec hit this Docker container.
  • Try to achieve as high a TPS as possible. How about 2000 TPS?
  • Add benchmarks for memory usage.
  • Instead of a real web server, create a stub web server that introduces a parametrized delay in each request and returns responses with text to extract, since it will be hard to deploy a real web server that can handle 2000 TPS. (A minimal sketch follows this list.)
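
The project's stack isn't shown in this thread, so here is a minimal sketch of such a stub server in Python with aiohttp (the language, framework, delay variable, and routes are assumptions, not the project's actual code): every request sleeps for a configurable delay and returns a small body with text to extract.

```python
# Hypothetical stub server: parametrized per-request delay + extractable response body.
import asyncio
import os

from aiohttp import web

DELAY_SECONDS = float(os.getenv("STUB_DELAY_SECONDS", "1.0"))  # parametrized delay

async def handle(request: web.Request) -> web.Response:
    await asyncio.sleep(DELAY_SECONDS)                 # simulate server-side latency
    return web.json_response({"token": "abc123"})      # something to extract text from

app = web.Application()
app.add_routes([web.get("/", handle), web.post("/login", handle)])

if __name__ == "__main__":
    web.run_app(app, port=8080)
```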

hamax97 commented Jul 14, 2023

With a delay of less than 1 second per iteration it goes beyond 3000 TPS.

With a delay of 1 second or more per iteration the system is not able to reach the desired TPS, even if it is only 10. When trying for 2000 TPS it reaches only about 1000 TPS. The next step is to find a way to use more CPU resources so that the desired TPS is reachable.
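
For reference (my addition, not from the original comment): with closed-loop iterations that each take about 1 second, the number of in-flight iterations needed is roughly target TPS × iteration duration (Little's law), so 2000 TPS needs on the order of 2000 concurrent iterations.

```python
# Back-of-the-envelope check (assumes ~1 s per iteration, as observed above).
target_tps = 2000
iteration_seconds = 1.0
needed_concurrency = target_tps * iteration_seconds
print(needed_concurrency)  # 2000.0 concurrent iterations to sustain 2000 TPS
```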

How about executing the test plan more than once per second, based on some heuristic?
How about async-container?


hamax97 commented Jul 27, 2023

Looking at how JMeter implements this in the Throughput Shaping Timer feedback function, I found that if the current RPS (current_rps) is less than the desired RPS (desired_rps), it spawns the following number of threads:

needed_threads = threads * rate_of_missing_threads
rate_of_missing_threads = 2 - current_rps / desired_rps

  • current_rps / desired_rps will always be less than 1, since current_rps < desired_rps.
  • It uses 2 instead of 1 so that threads is multiplied by a value between 1 and 2, which makes needed_threads increase instead of decrease.

Code for this: https://github.com/undera/jmeter-plugins/blob/master/plugins/tst/src/main/java/kg/apc/jmeter/timers/functions/TSTFeedback.java#L76C38-L76C60
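
A small sketch of that feedback rule as described above (this is my Python rendering of the formula, not the actual JMeter code; see the linked TSTFeedback.java for the real implementation):

```python
import math

def needed_threads(active_threads: int, current_rps: float, desired_rps: float) -> int:
    """When current_rps < desired_rps, scale the active thread count
    by rate_of_missing_threads = 2 - current_rps / desired_rps."""
    if current_rps >= desired_rps:
        return active_threads                                # already at or above target
    rate_of_missing_threads = 2 - current_rps / desired_rps  # always between 1 and 2
    return math.ceil(active_threads * rate_of_missing_threads)

# Example: 500 active threads, 1000 RPS measured, 2000 RPS desired
# -> factor = 2 - 0.5 = 1.5 -> 750 threads for the next adjustment.
print(needed_threads(500, 1000.0, 2000.0))  # 750
```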

The k6 source code for the constant_arrival_rate executor is barely readable by someone not familiar with the project; it's very hard to understand. Here it is: https://github.com/grafana/k6/blob/master/lib/executor/constant_arrival_rate.go

hamax97 changed the title from "Create an acceptance spec that hits a real web server" to "Implement the TPS centered executor" on Jul 27, 2023

hamax97 commented Jul 27, 2023

I decided to go with the following approach:

  • Launch desired_tps - current_tps tasks each second (see the sketch after this list).
  • Refactor the acceptance spec so that it verifies the TPS reached on a per-second basis, instead of verifying the total number of transactions executed:
    • Create an event handler that calls the registered callbacks each second.
    • Refactor the TPSCenteredOrchestrator.
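
A simplified sketch of this approach with Python's asyncio (the names run_iteration, the callback list, and the one-second tick are illustrative assumptions, not the project's actual TPSCenteredOrchestrator API):

```python
import asyncio

async def run_iteration() -> None:
    # Placeholder for one execution of the test plan (e.g. login + transaction).
    await asyncio.sleep(1.0)

async def tps_centered_executor(desired_tps: int, duration_seconds: int) -> None:
    completed = 0
    tasks: list[asyncio.Task] = []
    # Each-second event handler: callbacks registered here are called once per second.
    on_second_callbacks = [lambda second, tps: print(f"second {second}: {tps} TPS")]

    def count_completion(_task: asyncio.Task) -> None:
        nonlocal completed
        completed += 1

    previous = 0
    for second in range(duration_seconds):
        current_tps = completed - previous      # transactions finished during the last second
        previous = completed

        # Launch desired_tps - current_tps tasks this second.
        for _ in range(max(desired_tps - current_tps, 0)):
            task = asyncio.ensure_future(run_iteration())
            task.add_done_callback(count_completion)
            tasks.append(task)

        for callback in on_second_callbacks:
            callback(second, current_tps)

        await asyncio.sleep(1)

    await asyncio.gather(*tasks)                # drain in-flight iterations

asyncio.run(tps_centered_executor(desired_tps=10, duration_seconds=5))
```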


hamax97 commented Jul 27, 2023

How about looking at Tsung's model for adaptive user rates? http://tsung.erlang-projects.org/user_manual/conf-load.html
