Add metric for GitHub API Rate Limit #5654
base: main
Conversation
Force-pushed from 270d544 to 85c4442
This captures the rate limit values from GitHub for the ALI account user/process. We can use this to graph and track how much of the API rate limit is used by the CI infrastructure and flag when we are getting too close to the overall limit.

Closes: pytorch/ci-infra#273

Signed-off-by: Thanh Ha <[email protected]>
Force-pushed from 85c4442 to 7af7287
export async function getGitHubRateLimit(installationId: number, metrics: Metrics): Promise<void> {
  const ghAuth = await createGithubAuth(installationId, 'installation', Config.Instance.ghesUrlApi, metrics);
  const githubInstallationClient = await createOctoClient(ghAuth, Config.Instance.ghesUrl);
  const rateLimit = await githubInstallationClient.rateLimit.get();
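For context, here is a minimal sketch of how the values returned by that call might be turned into gauges for graphing. The field names mirror the shape of Octokit's `rateLimit.get()` response (`data.resources.core`); the `rateLimitGauges` helper and the gauge names are illustrative assumptions, not the PR's actual metric names:

```typescript
// Shape of one resource entry in Octokit's rate limit response
// (data.resources.core). Field names follow GitHub's REST API.
interface RateLimitResource {
  limit: number;     // total requests allowed per window
  remaining: number; // requests left in the current window
  used: number;      // requests consumed in the current window
  reset: number;     // epoch seconds when the window resets
}

// Hypothetical helper: flatten the resource into named gauge values
// so they can be emitted to a metrics backend and graphed over time.
function rateLimitGauges(core: RateLimitResource): Record<string, number> {
  return {
    'gh.ratelimit.limit': core.limit,
    'gh.ratelimit.remaining': core.remaining,
    'gh.ratelimit.used': core.used,
    'gh.ratelimit.reset': core.reset,
  };
}
```

Emitting `remaining` alongside `limit` is what makes the "flag when we are getting too close" alerting possible: an alert can fire when `remaining / limit` drops below a threshold.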
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Calling the RateLimit API does not count towards the rate limit.
It would be more efficient to gather this information from an API call that is already happening rather than making a separate call to the rate limit API. However, I couldn't find the best place in the code for that, so I decided to put it in scale-up.
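That piggybacking approach is feasible because GitHub's REST API reports the current quota in `x-ratelimit-*` response headers on every rate-limited call. A minimal sketch of pulling the values out of a response's headers map (the header names are GitHub's documented ones; the `rateLimitFromHeaders` helper is hypothetical):

```typescript
// Parsed quota snapshot taken from a single API response.
interface HeaderRateLimit {
  limit: number;
  remaining: number;
  reset: number; // epoch seconds when the window resets
}

// Hypothetical helper: read GitHub's x-ratelimit-* headers from any
// REST response the code is already making, avoiding an extra call.
function rateLimitFromHeaders(
  headers: Record<string, string | undefined>,
): HeaderRateLimit | undefined {
  const limit = headers['x-ratelimit-limit'];
  const remaining = headers['x-ratelimit-remaining'];
  const reset = headers['x-ratelimit-reset'];
  if (limit === undefined || remaining === undefined || reset === undefined) {
    // Endpoint not rate-limited, or headers stripped by a proxy.
    return undefined;
  }
  return {
    limit: Number(limit),
    remaining: Number(remaining),
    reset: Number(reset),
  };
}
```

The trade-off is finding a single choke point where all Octokit responses pass through (e.g. a response hook), which is the "where in the code" question the comment raises.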
…imit the request frequency to 1 every 10s
This captures the rate limit values from GitHub for the ALI account user/process. We can use this to graph and track how much of the API rate limit is used by the CI infrastructure and flag when we are getting too close to the overall limit.
Closes: pytorch/ci-infra#273
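The commit title above mentions limiting the request frequency to one call every 10 seconds. A minimal time-gate sketch of that throttle, with assumed names (`makeThrottle`, `shouldQuery`) since the PR's actual implementation is not shown here:

```typescript
// Hypothetical 10-second gate: the returned function answers "is it
// time to query the rate limit API again?" and returns true at most
// once per intervalMs. The injectable clock makes it testable.
function makeThrottle(intervalMs: number, now: () => number = Date.now) {
  let last = -Infinity; // timestamp of the last allowed call
  return function shouldQuery(): boolean {
    const t = now();
    if (t - last < intervalMs) {
      return false; // still inside the quiet window; skip this call
    }
    last = t;
    return true;
  };
}
```

In the scale-up path this would wrap the `getGitHubRateLimit` call, so bursty scale-up activity does not translate into a burst of rate limit probes.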