Is there a configuration option to set how long the controller waits for a runner to become available? It appears to default to about 30 seconds, or a fixed number of retries. We use Spot VMs, where pods can take longer to come up fully, and the controller ends up scaling down and deleting the ephemeral runners before they are ready. Jobs do get assigned when idle runners exist, but I see the same issue whenever there are more jobs than idle runners.
2024-06-21T13:11:28Z INFO AutoscalingRunnerSet Find existing ephemeral runner set {"autoscalingrunnerset": {"name":"gha-runner","namespace":"default"}, "name": "gha-runner-mhwpw", "specHash": "67b5d49757"}
2024-06-21T13:11:28Z INFO actions-clients getting Actions tenant URL and JWT {"registrationURL": "https://api.github.com/actions/runner-registration"}
2024-06-21T13:11:28Z INFO EphemeralRunner Created ephemeral runner JIT config {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}, "runnerId": 391}
2024-06-21T13:11:28Z INFO EphemeralRunner Updating ephemeral runner status with runnerId and runnerJITConfig {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}}
2024-06-21T13:11:29Z INFO EphemeralRunner Updated ephemeral runner status with runnerId and runnerJITConfig {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}}
2024-06-21T13:11:29Z INFO EphemeralRunnerSet Ephemeral runner counts {"ephemeralrunnerset": {"name":"gha-runner-mhwpw","namespace":"default"}, "pending": 1, "running": 0, "finished": 0, "failed": 0, "deleting": 0}
2024-06-21T13:11:29Z INFO EphemeralRunner Creating new ephemeral runner secret for jitconfig. {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}}
2024-06-21T13:11:29Z INFO EphemeralRunner Creating new secret for ephemeral runner {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}}
2024-06-21T13:11:29Z INFO EphemeralRunner Created new secret spec for ephemeral runner {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}}
2024-06-21T13:11:29Z INFO EphemeralRunner Created ephemeral runner secret {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}, "secretName": "gha-runner-mhwpw-runner-whfk8"}
2024-06-21T13:11:29Z INFO EphemeralRunner Creating new EphemeralRunner pod. {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}}
2024-06-21T13:11:29Z INFO EphemeralRunner Creating new pod for ephemeral runner {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}}
2024-06-21T13:11:29Z INFO EphemeralRunner Created new pod spec for ephemeral runner {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}}
2024-06-21T13:11:59Z INFO EphemeralRunner Created ephemeral runner pod {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}, "runnerScaleSetId": 3, "runnerName": "gha-runner-mhwpw-runner-whfk8", "r
2024-06-21T13:11:59Z INFO EphemeralRunner Waiting for runner container status to be available {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}}
2024-06-21T13:11:59Z INFO EphemeralRunner Waiting for runner container status to be available {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}}
2024-06-21T13:12:00Z INFO EphemeralRunner Waiting for runner container status to be available {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}}
2024-06-21T13:12:07Z INFO EphemeralRunner Waiting for runner container status to be available {"ephemeralrunner": {"name":"gha-runner-mhwpw-runner-whfk8","namespace":"default"}}
2024-06-21T13:12:18Z INFO EphemeralRunnerSet Ephemeral runner counts {"ephemeralrunnerset": {"name":"gha-runner-mhwpw","namespace":"default"}, "pending": 1, "running": 0, "finished": 0, "failed": 0, "deleting": 0}
2024-06-21T13:12:18Z INFO EphemeralRunnerSet Scaling comparison {"ephemeralrunnerset": {"name":"gha-runner-mhwpw","namespace":"default"}, "current": 1, "desired": 0}
2024-06-21T13:12:18Z INFO EphemeralRunnerSet Deleting ephemeral runners (scale down) {"ephemeralrunnerset": {"name":"gha-runner-mhwpw","namespace":"default"}, "count": 1}
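For context, the workaround we are currently considering is keeping a floor of warm runners so that a scale-down is less likely to delete pods that are still starting. This is only a sketch of the Helm values we assume the gha-runner-scale-set chart accepts, not a verified fix:

```yaml
# values.yaml sketch for the gha-runner-scale-set Helm chart (assumed layout).
# minRunners keeps a baseline of idle runners so that slow-starting Spot
# pods are less likely to be deleted before they register with GitHub.
minRunners: 2
maxRunners: 10
```

This only masks the problem, though. A controller-side setting for how long to wait before scaling down a pending runner would still be preferable.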