Compare commits

57 Commits

Author SHA1 Message Date
Yusuke Kuoka
c0b8f9d483 Merge pull request #380 from summerwind/ns-flag
Use --watch-namespace flag to restrict the namespace to watch
2021-03-09 15:03:32 +09:00
Yusuke Kuoka
ced1c2321a Fix chart-testing failing due to conflict between authSecret and dummySecret 2021-03-09 14:54:55 +09:00
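For reference, the two colliding settings live in the chart's CI values. A minimal sketch (key names taken from the chart diff further below; the flattened diff loses indentation, so nesting is a guess) — only one of the two secret mechanisms should be active at a time:

```yaml
# ci values (sketch)
authSecret:
  create: false        # don't render the real auth secret in CI
# Creates a dummy secret so the manager pod can start; only useful in CI,
# and it must not conflict with an authSecret-managed secret of the same name
createDummySecret: true
```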
Yusuke Kuoka
1b8a656051 Use --watch-namespace flag to restrict the namespace to watch
Ref https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-793172995
2021-03-09 09:46:21 +09:00
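With the chart changes in this compare, restricting the controller to a single namespace looks roughly like this (a sketch assuming the `scope.*` values introduced in the deployment template diff further below):

```yaml
# values.yaml (sketch)
scope:
  singleNamespace: true        # passes --watch-namespace to the manager
  watchNamespace: my-runners   # falls back to the release namespace when unset
```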
Rob Whitby
1753fa3530 handle GET requests in webhook hra (#378) 2021-03-09 08:46:27 +09:00
Johannes Nicolai
8c0f3dfc79 Set runner group for runners with enterprise scope (#376)
* so far, the runner group parameter was only set for runners with org scope
* now the group is set for enterprise runners as well
* removed the null check for org scope, as either org or enterprise will always be set
2021-03-08 09:18:23 +09:00
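A minimal manifest exercising this fix might look as follows; this is a sketch, with `my-enterprise` and `my-runner-group` as placeholder values for the existing `enterprise` and `group` spec fields:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: enterprise-runners
spec:
  replicas: 2
  template:
    spec:
      enterprise: my-enterprise  # enterprise-scoped runner
      group: my-runner-group     # now honored for enterprise scope too
```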
Rob Whitby
dbda292f54 fix typo in examples (#373) 2021-03-08 09:18:10 +09:00
callum-tait-pbx
550a864198 chore: bumping helm chart (#372)
PR 355 made changes to the CRDs but didn't bump the version
2021-03-05 20:27:52 +09:00
Yusuke Kuoka
4fa5315311 Fix possible flapping autoscale on runner update (#371)
Addresses https://github.com/summerwind/actions-runner-controller/pull/355#discussion_r587199428
2021-03-05 10:21:20 +09:00
Hiroshi Muraoka
11e58fcc41 Manage runner with label (#355)
* Update RunnerDeploymentSpec to have Selector field

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Update RunnerReplicaSetSpec to have Selector field

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Add CloneSelectorAndAddLabel to add Selector field

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Fix tests

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Use label to find RunnerReplicaSet/Runner

Signed-off-by: binoue <banji-inoue@cybozu.co.jp>

* Update controller-gen versions in CRD

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Update autoscaler to list Pods with labels

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Add debug log

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Modify RunnerDeployment tests

Signed-off-by: binoue <banji-inoue@cybozu.co.jp>

* Modify RunnerReplicaSet test

Signed-off-by: binoue <banji-inoue@cybozu.co.jp>

* Modify integration test

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Use RunnerDeployment Template Labels as the default selector for backward compatibility

* Fix labeling

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Update func in Eventually to return (int, error)

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Update RunnerDeployment controller not to use label selector

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Fix potential replicaset controller breakage on replicaset created before v0.17.0

* Fix errors on existing runner replica sets

* Ensure RunnerReplicaSet Spec Selector addition does not break controller

* Ensure RunnerDeployment Template.Spec.Labels change does result in template hash change

* Fix comment

Co-authored-by: binoue <banji-inoue@cybozu.co.jp>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-03-05 10:15:39 +09:00
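After this change a RunnerDeployment can declare its selector explicitly; when omitted, the template labels are used as the default selector for backward compatibility. A sketch, with field shapes taken from the CRD diff further below:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy
spec:
  selector:
    matchLabels:
      app: example-runner   # must match the template labels below
  template:
    metadata:
      labels:
        app: example-runner
    spec:
      repository: summerwind/actions-runner-controller
```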
Mike Perry
f220fefe92 Update README.md (#370) 2021-03-05 09:17:32 +09:00
Hidetake Iwata
56b4598d1d Fix helm template error when webhook server is enabled (#365)
* Fix include block in githubwebhook.deployment.yaml

* Fix include block in githubwebhook.secrets.yaml
2021-03-03 09:21:58 +09:00
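For context, the templates being fixed are only rendered when the webhook server is enabled in the chart. A sketch of the relevant values (host and path are placeholders):

```yaml
githubWebhookServer:
  enabled: true
  ingress:
    enabled: true
    hosts:
    - host: webhook.example.com
      paths: ["/"]
```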
Taehyun Kim
8f977dbe48 Fix various bugs in helm chart (#364)
* Fix wrong trim

* add missing MutatingWebhookConfiguration.webhooks[*].sideEffects

* fix missing admissionReviewVersions

* admissionregistration.k8s.io/v1 for kustomization manifests

* revert webhook config
2021-03-03 09:21:20 +09:00
Yusuke Kuoka
9ae3551744 Remove unnecessary GitHub API calls (#363)
The controller made two extra, redundant calls to the List Workflow Runs API.

Ref #362
2021-03-02 10:55:30 +09:00
Rolf Ahrenberg
05ad3f5469 Set default python (#361) 2021-03-01 09:45:13 +09:00
callum-tait-pbx
9c7372a8e0 docs: styling fixes (#359)
* docs: styling fixes

* docs: grammar fixes
2021-03-01 09:44:35 +09:00
Yusuke Kuoka
584590e97c Use patch instead of update to alleviate HRA conflict on webhook (#358)
We sometimes see the integration test fail because runner replicas do not reach the expected number in a timely manner. After some investigation, this turned out to be caused by conflicting HRA updates between the webhook-based autoscaler and the HRA controller. This changes the controllers to use Patch instead of Update to make such conflicts less likely.

I have also updated the HRA controller to use Patch when updating the RunnerDeployment, too.

Overall, these changes should make webhook-based autoscaling more reliable thanks to fewer conflicts.
2021-02-26 10:17:09 +09:00
Yusuke Kuoka
d18884a0b9 Fix HRA expired cache entries not cleaned up (#357)
Fixes #356
2021-02-26 09:54:24 +09:00
callum-tait-pbx
f987571b64 Improve docs (#303) 2021-02-26 09:32:18 +09:00
Taehyun Kim
450e384c4c Update helm chart (#343)
* add replicaCount

* Add authSecret.existingSecret

* set image.tag null by default

* implement ingress for githubwebhook server

* fix deprecated and secretName template

* backward compat .authSecret.enabled

* existingSecret for github webhook secret

* use secretName template

* set default secret names

* do not use app version based image tag

* create and name variable for secrets
2021-02-26 09:26:51 +09:00
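Put together, the new chart knobs from this PR look roughly like this; a sketch with placeholder secret names, following the commit bullets and the templates in the diff further below:

```yaml
replicaCount: 1
image:
  tag: null              # no longer derived from the app version by default
authSecret:
  existingSecret: my-controller-secret   # reuse a pre-created secret
githubWebhookServer:
  secret:
    name: my-webhook-secret
  ingress:
    enabled: true
```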
Yusuke Kuoka
e9eef04993 Fix old HRA capacity reservations not cleaned up (#354)
Similar to #348 for #346, but for HRA.Spec.CapacityReservations, which is usually modified by the webhook-based autoscaler controller.
This patch fixes that by making the webhook-based autoscaler controller omit expired reservations when updating the HRA spec.
2021-02-25 11:08:00 +09:00
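Each webhook-triggered scale-up records a capacity reservation that expires after the trigger's duration, which is why stale entries accumulate unless pruned. A sketch of a reservation-producing trigger (values illustrative):

```yaml
spec:
  scaleUpTriggers:
  - githubEvent:
      checkRun:
        types: ["created"]
    amount: 1        # each matching event reserves one extra runner
    duration: "5m"   # the reservation expires after five minutes
```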
Yusuke Kuoka
598dd1d9fe Fix incorrect DESIRED on `kubectl get hra` (#353)
`kubectl get horizontalrunnerautoscalers.actions.summerwind.dev` shows HRA.status.desiredReplicas as the DESIRED count. However, that value had not been taking capacityReservations into account, which resulted in an incorrect count when you used the webhook-based autoscaler or the capacityReservations API directly. This fixes that.
2021-02-25 10:32:09 +09:00
Yusuke Kuoka
9890a90e69 Improve webhook-based autoscaler log (#352)
The controller had been writing confusing messages like the one below when the scale target was missing:

```
Found too many scale targets: It must be exactly one to avoid ambiguity. Either set WatchNamespace for the webhook-based autoscaler to let it only find HRAs in the namespace, or update Repository or Organization fields in your RunnerDeployment resources to fix the ambiguity.{"scaleTargets": ""}
```

This fixes that, and also improves many of the messages written during reconciliation so that errors are more actionable.
2021-02-25 10:07:41 +09:00
Yusuke Kuoka
9da123ae5e Fix integration test flakiness (#351)
Ref https://github.com/summerwind/actions-runner-controller/pull/345#issuecomment-785015406
2021-02-25 09:30:32 +09:00
Johannes Nicolai
4d4137aa28 Avoid zombie runners that missed token expiration by a bit (#345)
* if a new runner pod was scheduled to start right before a registration token expired, it would not get a new registration token and would go into an infinite update loop (until #341 kicks in)
* if registration tokens are refreshed a little before they actually expire, pods that are just starting up are far more likely to get a working token
2021-02-25 09:07:49 +09:00
Yusuke Kuoka
022007078e Compact excessive error message on runnerreplicaset status update conflict (#350)
We occasionally see logs like the below:

```
2021-02-24T02:48:26.769Z ERROR Failed to update runner status {"runnerreplicaset": "testns-244ol/example-runnerdeploy-j5wzf", "error": "Operation cannot be fulfilled on runnerreplicasets.actions.summerwind.dev \"example-runnerdeploy-j5wzf\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/go-logr/zapr.(*zapLogger).Error
/home/runner/go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128
github.com/summerwind/actions-runner-controller/controllers.(*RunnerReplicaSetReconciler).Reconcile
/home/runner/work/actions-runner-controller/actions-runner-controller/controllers/runnerreplicaset_controller.go:207
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:256
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:88
2021-02-24T02:48:26.769Z ERROR controller-runtime.controller Reconciler error {"controller": "testns-244olrunnerreplicaset", "request": "testns-244ol/example-runnerdeploy-j5wzf", "error": "Operation cannot be fulfilled on runnerreplicasets.actions.summerwind.dev \"example-runnerdeploy-j5wzf\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/go-logr/zapr.(*zapLogger).Error
/home/runner/go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:258
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:88
```

which can be compacted into a one-liner, without the useless stack trace and without double-logging the same error from both the logger and the controller.
2021-02-25 09:01:02 +09:00
Johannes Nicolai
31e5e61155 Log correct runner that was deleted (#349) 2021-02-25 08:38:55 +09:00
Aditya Purandare
1d1453c5f2 Fix user used for dind runner group permissions (#337) 2021-02-24 19:06:52 +09:00
Yusuke Kuoka
e44e53b88e Fix failure while saving HRA status after running controller for a while (#348)
Fixes #346
2021-02-24 11:20:21 +09:00
Yusuke Kuoka
398791241e Fix runner release workflow to do docker-push (#347)
Apparently I mistakenly removed the `push` option from the workflow in #323, which resulted in new runner builds not being pushed. This fixes that.
2021-02-24 11:08:33 +09:00
Yusuke Kuoka
991535e567 Fix panic on webhook for user-owned repository (#344)
* Fix panic on webhook for user-owned repository

Follow-up for #282 and #334
2021-02-23 08:05:25 +09:00
Johannes Nicolai
2d7fbbfb68 Handle offline runners gracefully (#341)
* if a runner pod starts up with an invalid token, it goes into an infinite retry loop while appearing as RUNNING from the outside
* normally this error situation is detected because no corresponding runner object exists in GitHub, and the pod is removed after the registration timeout
* if the GitHub runner object already existed before - e.g. because a finalizer was not properly run during a partial Kubernetes crash - the runner stays in running mode forever, and even updating the registration token will not kill the problematic pod
* introduces a RunnerOffline error that can be handled in the runner controller and the replicaset controller
* as runners are offline when a pod has completed and is marked for restart, additional restart checks are only performed if no restart was already decided, making the code a bit cleaner and saving GitHub API calls after each job completion
2021-02-22 10:08:04 +09:00
Yusuke Kuoka
dd0b9f3e95 Merge pull request #340 from int128/integration-test-check-run
Fix index key to find HRA in GitHub webhook handler
2021-02-22 09:49:54 +09:00
Yusuke Kuoka
7cb2bc84c8 Merge pull request #334 from summerwind/integration-test-check-run
Add integration test for autoscaling on check_run webhook event
2021-02-22 09:38:07 +09:00
Hidetake Iwata
b0e74bebab Fix index key to find HRA in GitHub webhook handler 2021-02-20 21:25:23 +09:00
Hidetake Iwata
dfbe53dcca Fix webhook payload in integration test 2021-02-20 21:08:23 +09:00
Yusuke Kuoka
ebc3970b84 Add integration test for autoscaling on check_run webhook event 2021-02-19 10:33:04 +09:00
Hidetake Iwata
1ddcf6946a Fix nil pointer error on received check_run event (#331)
* Reproduce nil pointer error on received check_run event

* Fix nil pointer error on received check_run event
2021-02-18 20:22:36 +09:00
Yusuke Kuoka
cfbaad38c8 Merge pull request #328 from int128/fix-port-name-length
Changes:

1. Fix length of github-webhook-server port name
2. Add a cluster role binding for github-webhook-server
3. Remove --enable-leader-election from github-webhook-server
2021-02-18 20:20:39 +09:00
Yusuke Kuoka
67f6de010b feat: Common runner labels configurable per controller (#327)
* feat: Common runner labels configurable per controller

Ref #321
2021-02-18 20:19:08 +09:00
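Assuming the flag follows the feature name (the exact spelling here is an assumption, not confirmed by this compare), configuring controller-wide labels would look roughly like this in the manager deployment:

```yaml
# manager container args (sketch; flag spelling assumed from the PR title)
args:
- "--sync-period=10m"
- "--common-runner-labels=cluster-a,linux-x64"  # appended to every runner this controller manages
```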
Hidetake Iwata
2db608879a Remove --enable-leader-election from github-webhook-server 2021-02-18 16:51:47 +09:00
Hidetake Iwata
2c4a6ca90b Add cluster role binding for github-webhook-server 2021-02-18 16:49:24 +09:00
Hidetake Iwata
829bf20449 Fix length of github-webhook-server port name 2021-02-18 16:42:15 +09:00
Reinier Timmer
be13322816 Update runner to 2.277.1 (#322)
* Update runner to 2.277.1

* Update build-and-release-runners.yml

* integration test condition

Don't run integration tests when only updating the runner image

* fixup! integration test condition

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-02-18 09:29:53 +09:00
Johannes Nicolai
7f4a76a39b Also log into DockerHub for release event (#326)
* so far, only push events would trigger the DockerHub login step
* hence, attempts to release would fail with a permission problem (tested locally)
* adds an OR condition to also log in when a release is published
2021-02-18 08:54:44 +09:00
callum-tait-pbx
0fce761686 fix: add truncate to ensure service kinds have valid names (#325)
* fix: adding truncate for service kinds

* chore: bumping chart version
2021-02-18 08:43:48 +09:00
Yusuke Kuoka
c88ff44518 Fix wip.yml workflow for building controller canary tags (#323)
In #306 we seem to have accidentally updated the wrong workflow, which was the one for runner builds. This updates the one for the controller.

Resolves #302
2021-02-18 08:42:24 +09:00
Yusuke Kuoka
2fdf35ac9d Refactor integration test to use helpers (#320)
This should make the test code a bit more DRY and readable.
2021-02-17 10:23:35 +09:00
Johannes Nicolai
6cce3fefc5 Add project to awesome-runners list (#319) 2021-02-17 09:14:42 +09:00
Yusuke Kuoka
eb2eaf8130 Fix TotalNumberOfQueuedAndInProgressWorkflowRuns to work with a lot of remaining completed jobs (#316)
I have heard from a user that they have hundreds of thousands of `status=completed` workflow runs in their repository, which effectively blocked TotalNumberOfQueuedAndInProgressWorkflowRuns from working: the excessive paginated requests ran into the GitHub API rate limit.

This fixes that by splitting the list-workflow-runs call in two - one for `queued` and one for `in_progress` - which raises the minimum number of API calls from 1 to 2, but lets it work regardless of the number of remaining `completed` workflow runs.
2021-02-16 18:55:55 +09:00
callum-tait-pbx
7bf712d0d4 fix: duplicate name attribute (#318) 2021-02-16 18:52:08 +09:00
Yusuke Kuoka
7d024a6c05 Fix "duplicate metrics collector registration attempted" errors at startup (#317)
I have seen this error a lot in our integration test. It turned out to be caused by https://github.com/kubernetes-sigs/controller-runtime/issues/484 and is fixed by this change.
2021-02-16 18:51:33 +09:00
Yusuke Kuoka
434823bcb3 scale{Up,Down}Adjustment to add/remove constant number of replicas on scaling (#315)
* `scale{Up,Down}Adjustment` to add/remove constant number of replicas on scaling

Ref #305

* Bump chart version
2021-02-16 17:16:26 +09:00
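The new fields are constant-count counterparts to the multiplicative `scale{Up,Down}Factor` settings, and per the CRD comments you specify either a factor or an adjustment, not both. A sketch mirroring the CRD additions in the diff further below (note the fields are camelCased integers):

```yaml
metrics:
- type: PercentageRunnersBusy
  scaleUpThreshold: '0.75'
  scaleDownThreshold: '0.3'
  scaleUpAdjustment: 2     # add two runners on scale-up
  scaleDownAdjustment: 1   # remove one runner on scale-down
```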
Yusuke Kuoka
35d047db01 Fix enterprise runners misusing cached token (#314)
Follow-up for #290
2021-02-16 12:56:52 +09:00
Yusuke Kuoka
f1db6af1c5 Add repository runners support for PercentageRunnersBusy-based autoscaling (#313)
Resolves #258
2021-02-16 12:44:51 +09:00
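Previously PercentageRunnersBusy could only count organization runners; with this change an HRA can also target a repository-scoped RunnerDeployment. A sketch:

```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: repo-runner-autoscaler
spec:
  scaleTargetRef:
    name: repo-runner-deployment  # a RunnerDeployment whose template sets `repository:`
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: PercentageRunnersBusy
    scaleUpThreshold: '0.75'
    scaleDownThreshold: '0.3'
```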
Hidetake Iwata
4f3f2fb60d Add metrics for GitHub API rate limit (#312) 2021-02-16 09:58:09 +09:00
Johannes Nicolai
2623140c9a Make log message less scary (#311)
* the reconciliation loop is often much faster than runner startup, so this changes runner-not-found messages to debug level and mentions the possibility that the runner just needs more time
2021-02-16 09:55:55 +09:00
Johannes Nicolai
1db9d9d574 Use ARM64 compatible kube-rbac-proxy from upstream (#310)
* as pointed out in #281, the currently used kube-rbac-proxy image - gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1 - does not have an ARM64 build
* hence, trying to use the standard deployment manifest / helm chart fails on ARM64 systems
* replaced the image with quay.io/brancz/kube-rbac-proxy:v0.8.0, the latest version from the upstream maintainer (https://github.com/brancz/kube-rbac-proxy/blob/master/Makefile#L13)
* successfully tested on both AMD64 and ARM64 clusters
* fixes #281
2021-02-16 09:55:03 +09:00
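The swap itself is a one-line image change on the proxy sidecar; in the kustomize manifests it amounts to roughly the following (sketch):

```yaml
# config/default/manager_auth_proxy_patch.yaml (sketch)
spec:
  template:
    spec:
      containers:
      - name: kube-rbac-proxy
        # was: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1 (no ARM64 build)
        image: quay.io/brancz/kube-rbac-proxy:v0.8.0  # multi-arch upstream image
```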
53 changed files with 3232 additions and 762 deletions


@@ -6,7 +6,7 @@ on:
- '**' - '**'
paths: paths:
- 'runner/**' - 'runner/**'
- .github/workflows/build-runner.yml - .github/workflows/build-and-release-runners.yml
push: push:
branches: branches:
- master - master
@@ -15,10 +15,8 @@ on:
- runner/Dockerfile - runner/Dockerfile
- runner/dindrunner.Dockerfile - runner/dindrunner.Dockerfile
- runner/entrypoint.sh - runner/entrypoint.sh
- .github/workflows/build-runner.yml - .github/workflows/build-and-release-runners.yml
release:
types: [published]
name: Runner
jobs: jobs:
build: build:
runs-on: ubuntu-latest runs-on: ubuntu-latest
@@ -31,7 +29,7 @@ jobs:
- name: actions-runner-dind - name: actions-runner-dind
dockerfile: dindrunner.Dockerfile dockerfile: dindrunner.Dockerfile
env: env:
RUNNER_VERSION: 2.276.1 RUNNER_VERSION: 2.277.1
DOCKER_VERSION: 19.03.12 DOCKER_VERSION: 19.03.12
DOCKERHUB_USERNAME: ${{ github.repository_owner }} DOCKERHUB_USERNAME: ${{ github.repository_owner }}
steps: steps:
@@ -52,36 +50,18 @@ jobs:
- name: Login to DockerHub - name: Login to DockerHub
uses: docker/login-action@v1 uses: docker/login-action@v1
if: ${{ github.event_name == 'push' }} if: ${{ github.event_name == 'push' || github.event_name == 'release' }}
with: with:
username: ${{ github.repository_owner }} username: ${{ github.repository_owner }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }} password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
# Considered unstable builds - name: Build and Push
# Mutable (no sha) and immutable (include sha) tags are created, see Issue 285 and PR 286 for why
- name: Build and push canary builds
uses: docker/build-push-action@v2 uses: docker/build-push-action@v2
with: with:
context: ./runner context: ./runner
file: ./runner/${{ matrix.dockerfile }} file: ./runner/${{ matrix.dockerfile }}
platforms: linux/amd64,linux/arm64 platforms: linux/amd64,linux/arm64
push: ${{ github.event_name != 'pull_request' && github.event_name != 'release' }} push: ${{ github.event_name != 'pull_request' }}
build-args: |
RUNNER_VERSION=${{ env.RUNNER_VERSION }}
DOCKER_VERSION=${{ env.DOCKER_VERSION }}
tags: |
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-canary
${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-canary-${{ steps.vars.outputs.sha_short }}
# Considered stable builds
# Mutable (no sha) and immutable (include sha) tags are created, see Issue 285 and PR 286 for why
- name: Build and push release builds
uses: docker/build-push-action@v2
with:
context: ./runner
file: ./runner/${{ matrix.dockerfile }}
platforms: linux/amd64,linux/arm64
push: ${{ github.event_name == 'release' }}
build-args: | build-args: |
RUNNER_VERSION=${{ env.RUNNER_VERSION }} RUNNER_VERSION=${{ env.RUNNER_VERSION }}
DOCKER_VERSION=${{ env.DOCKER_VERSION }} DOCKER_VERSION=${{ env.DOCKER_VERSION }}


@@ -57,6 +57,7 @@ jobs:
platforms: linux/amd64,linux/arm64 platforms: linux/amd64,linux/arm64
push: true push: true
tags: | tags: |
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:latest
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }} ${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}-${{ steps.vars.outputs.sha_short }} ${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}-${{ steps.vars.outputs.sha_short }}


@@ -6,6 +6,7 @@ on:
- master - master
paths-ignore: paths-ignore:
- 'runner/**' - 'runner/**'
- .github/workflows/build-and-release-runners.yml
jobs: jobs:
test: test:


@@ -30,11 +30,13 @@ jobs:
username: ${{ github.repository_owner }} username: ${{ github.repository_owner }}
password: ${{ secrets.DOCKER_ACCESS_TOKEN }} password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
# Considered unstable builds
# See Issue #285, PR #286, and PR #323 for more information
- name: Build and Push - name: Build and Push
uses: docker/build-push-action@v2 uses: docker/build-push-action@v2
with: with:
file: Dockerfile file: Dockerfile
platforms: linux/amd64,linux/arm64 platforms: linux/amd64,linux/arm64
push: true push: true
tags: ${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:latest tags: |
${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:canary

README.md

@@ -1,5 +1,7 @@
# actions-runner-controller # actions-runner-controller
[![awesome-runners](https://img.shields.io/badge/listed%20on-awesome--runners-blue.svg)](https://github.com/jonico/awesome-runners)
This controller operates self-hosted runners for GitHub Actions on your Kubernetes cluster. This controller operates self-hosted runners for GitHub Actions on your Kubernetes cluster.
ToC: ToC:
@@ -8,8 +10,8 @@ ToC:
- [Installation](#installation) - [Installation](#installation)
- [GitHub Enterprise support](#github-enterprise-support) - [GitHub Enterprise support](#github-enterprise-support)
- [Setting up authentication with GitHub API](#setting-up-authentication-with-github-api) - [Setting up authentication with GitHub API](#setting-up-authentication-with-github-api)
- [Using GitHub App](#using-github-app) - [Deploying using GitHub App Authentication](#deploying-using-github-app-authentication)
- [Using Personal AccessToken ](#using-personal-access-token) - [Deploying using PAT Authentication](#deploying-using-pat-authentication)
- [Usage](#usage) - [Usage](#usage)
- [Repository Runners](#repository-runners) - [Repository Runners](#repository-runners)
- [Organization Runners](#organization-runners) - [Organization Runners](#organization-runners)
@@ -19,7 +21,7 @@ ToC:
- [Runner with DinD](#runner-with-dind) - [Runner with DinD](#runner-with-dind)
- [Additional tweaks](#additional-tweaks) - [Additional tweaks](#additional-tweaks)
- [Runner labels](#runner-labels) - [Runner labels](#runner-labels)
- [Runer groups](#runner-groups) - [Runner groups](#runner-groups)
- [Using EKS IAM role for service accounts](#using-eks-iam-role-for-service-accounts) - [Using EKS IAM role for service accounts](#using-eks-iam-role-for-service-accounts)
- [Software installed in the runner image](#software-installed-in-the-runner-image) - [Software installed in the runner image](#software-installed-in-the-runner-image)
- [Common errors](#common-errors) - [Common errors](#common-errors)
@@ -42,14 +44,14 @@ Install the custom resource and actions-runner-controller with `kubectl` or `hel
`kubectl`: `kubectl`:
``` ```shell
# REPLACE "v0.16.1" with the latest release # REPLACE "v0.17.0" with the version you wish to deploy
kubectl apply -f https://github.com/summerwind/actions-runner-controller/releases/download/v0.16.1/actions-runner-controller.yaml kubectl apply -f https://github.com/summerwind/actions-runner-controller/releases/download/v0.17.0/actions-runner-controller.yaml
``` ```
`helm`: `helm`:
``` ```shell
helm repo add actions-runner-controller https://summerwind.github.io/actions-runner-controller helm repo add actions-runner-controller https://summerwind.github.io/actions-runner-controller
helm upgrade --install -n actions-runner-system actions-runner-controller/actions-runner-controller helm upgrade --install -n actions-runner-system actions-runner-controller/actions-runner-controller
``` ```
@@ -59,8 +61,7 @@ helm upgrade --install -n actions-runner-system actions-runner-controller/action
If you use either Github Enterprise Cloud or Server, you can use **actions-runner-controller** with those, too. If you use either Github Enterprise Cloud or Server, you can use **actions-runner-controller** with those, too.
Authentication works same way as with public Github (repo and organization level). Authentication works same way as with public Github (repo and organization level).
The minimum version of Github Enterprise Server is 3.0.0 (or rc1/rc2). The minimum version of Github Enterprise Server is 3.0.0 (or rc1/rc2).
In most cases maintainers do not have environment where to test changes and are reliant on the community for testing. __**NOTE : The maintainers do not have an Enterprise environment to be able to test changes and so are reliant on the community for testing, support is a best endeavors basis only and is community driven**__
```shell ```shell
kubectl set env deploy controller-manager -c manager GITHUB_ENTERPRISE_URL=<GHEC/S URL> --namespace actions-runner-system kubectl set env deploy controller-manager -c manager GITHUB_ENTERPRISE_URL=<GHEC/S URL> --namespace actions-runner-system
@@ -106,20 +107,14 @@ spec:
## Setting up authentication with GitHub API ## Setting up authentication with GitHub API
There are two ways for actions-runner-controller to authenticate with the GitHub API: There are two ways for actions-runner-controller to authenticate with the GitHub API (only 1 can be configured at a time however):
1. Using GitHub App. 1. Using GitHub App.
2. Using Personal Access Token. 2. Using Personal Access Token.
Regardless of which authentication method you use, the same permissions are required, those permissions are: Functionality wise there isn't a difference between the 2 authentication methods. There are however some benefits to using a GitHub App for authentication over a PAT such as an [increased API quota](https://docs.github.com/en/developers/apps/rate-limits-for-github-apps), if you run into rate limiting consider deploying this solution using GitHub App authentication instead.
- Repository: Administration (read/write)
- Repository: Actions (read)
- Organization: Self-hosted runners (read/write)
### Deploying using GitHub App Authentication
**NOTE: It is extremely important to only follow one of the sections below and not both.**
### Using GitHub App
You can create a GitHub App for either your account or any organization. If you want to create a GitHub App for your account, open the following link to the creation page, enter any unique name in the "GitHub App name" field, and hit the "Create GitHub App" button at the bottom of the page. You can create a GitHub App for either your account or any organization. If you want to create a GitHub App for your account, open the following link to the creation page, enter any unique name in the "GitHub App name" field, and hit the "Create GitHub App" button at the bottom of the page.
@@ -156,19 +151,29 @@ $ kubectl create secret generic controller-manager \
--from-file=github_app_private_key=${PRIVATE_KEY_FILE_PATH} --from-file=github_app_private_key=${PRIVATE_KEY_FILE_PATH}
``` ```
### Using Personal Access Token ### Deploying using PAT Authentication
From an account that has `admin` privileges for the repository, create a [personal access token](https://github.com/settings/tokens) with `repo` scope. This token is used to register a self-hosted runner by *actions-runner-controller*. Personal Acess Token can be used to register a self-hosted runner by *actions-runner-controller*.
Self-hosted runners in GitHub can either be connected to a single repository, or to a GitHub organization (so they are available to all repositories in the organization). This token is used to register a self-hosted runner by *actions-runner-controller*. Self-hosted runners in GitHub can either be connected to a single repository, or to a GitHub organization (so they are available to all repositories in the organization). How you plan on using the runner will affect what scopes are needed for the token.
For adding a runner to a repository, the token should have `repo` scope. If the runner should be added to an organization, the token should have `admin:org` scope. Note that to use a Personal Access Token, you must issue the token with an account that has `admin` privileges (on the repository and/or the organization). Log-in to a GitHub account that has `admin` privileges for the repository, and [create a personal access token](https://github.com/settings/tokens/new) with the appropriate scopes listed below:
Open the Create Token page from the following link, grant the `repo` and/or `admin:org` scope, and press the "Generate Token" button at the bottom of the page to create the token. **Scopes for a Repository Runner**
- [Create personal access token](https://github.com/settings/tokens/new) * repo (Full control)
Register the created token (`GITHUB_TOKEN`) as a Kubernetes secret. **Scopes for a Organisation Runner**
* repo (Full control)
* admin:org (Full control)
* admin:public_key - read:public_key
* admin:repo_hook - read:repo_hook
* admin:org_hook
* notifications
* workflow
Once you have created the appropriate token, deploy it as a secret to your kubernetes cluster that you are going to deploy the solution on:
```shell ```shell
kubectl create secret generic controller-manager \ kubectl create secret generic controller-manager \
@@ -183,7 +188,7 @@ There are two ways to use this controller:
- Manage runners one by one with `Runner`. - Manage runners one by one with `Runner`.
- Manage a set of runners with `RunnerDeployment`. - Manage a set of runners with `RunnerDeployment`.
### Repository runners ### Repository Runners
To launch a single self-hosted runner, you need to create a manifest file includes *Runner* resource as follows. This example launches a self-hosted runner with name *example-runner* for the *summerwind/actions-runner-controller* repository. To launch a single self-hosted runner, you need to create a manifest file includes *Runner* resource as follows. This example launches a self-hosted runner with name *example-runner* for the *summerwind/actions-runner-controller* repository.
@@ -281,9 +286,33 @@ example-runnerdeploy2475ht2qbr mumoshu/actions-runner-controller-ci Running
#### Autoscaling #### Autoscaling
`RunnerDeployment` can scale the number of runners between `minReplicas` and `maxReplicas` fields, depending on pending workflow runs. A `RunnerDeployment` can scale the number of runners between `minReplicas` and `maxReplicas` fields based the chosen scaling metric as defined in the `metrics` attribute
**Scaling Metrics**
**TotalNumberOfQueuedAndInProgressWorkflowRuns**
In the below example, `actions-runner` will pole GitHub for all pending workflows with the pole period defined by the sync period configuration. It will then scale to e.g. 3 if there're 3 pending jobs at sync time.
With this scaling metric we are required to define a list of repositories within our metric.
The scale out performance is controlled via the manager containers startup `--sync-period` argument. The default value is set to 10 minutes to prevent default deployments rate limiting themselves from the GitHub API.
**Kustomize Config :** The period can be customised in the `config/default/manager_auth_proxy_patch.yaml` patch<br />
**Helm Config :** `syncPeriod`
**Benefits of this metric**
1. Supports named repositories allowing you to restrict the runner to a specified set of repositories server side.
2. Scales quickly (within the bounds of the syncPeriod) as it will spin up the number of runners based on the depth of the workflow queue
3. Like all scaling metrics, you can manage workflow allocation to the RunnerDeployment through the use of [Github labels](#runner-labels).
**Drawbacks of this metric**
1. Repositories must be named within the scaling metric, maintaining a list of repositories may not be viable in larger environments or self-serve environments.
2. May not scale quick enough for some users needs
3. Relatively large amounts of API requests required to maintain this metric, you may run in API rate limiting issues depending on the size of your environment and how aggressive your sync period configuration is
Example `RunnerDeployment` backed by a `HorizontalRunnerAutoscaler`
In the below example, `actions-runner` checks for pending workflow runs for each sync period, and scale to e.g. 3 if there're 3 pending jobs at sync time.
```yaml ```yaml
apiVersion: actions.summerwind.dev/v1alpha1 apiVersion: actions.summerwind.dev/v1alpha1
@@ -310,38 +339,34 @@ spec:
- summerwind/actions-runner-controller - summerwind/actions-runner-controller
``` ```
The scale out performance is controlled via the manager containers startup `--sync-period` argument. The default value is 10 minutes to prevent unconfigured deployments rate limiting themselves from the GitHub API. The period can be customised in the `config/default/manager_auth_proxy_patch.yaml` patch for those that are building the solution via the kustomize setup. Additionally, the `HorizontalRunnerAutoscaler` also has an anti-flapping option that prevents periodic loop of scaling up and down.
By default, it doesn't scale down until the grace period of 10 minutes passes after a scale up. The grace period can be configured however by adding the setting `scaleDownDelaySecondsAfterScaleOut` in the `HorizontalRunnerAutoscaler` `spec`:
Additionally, the autoscaling feature has an anti-flapping option that prevents periodic loop of scaling up and down.
By default, it doesn't scale down until the grace period of 10 minutes passes after a scale up. The grace period can be configured by setting `scaleDownDelaySecondsAfterScaleUp`:
```yaml ```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runner-deployment
spec: spec:
template:
spec:
repository: summerwind/actions-runner-controller
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name: example-runner-deployment-autoscaler
spec:
scaleTargetRef:
name: example-runner-deployment
minReplicas: 1
maxReplicas: 3
scaleDownDelaySecondsAfterScaleOut: 60 scaleDownDelaySecondsAfterScaleOut: 60
metrics:
- type: TotalNumberOfQueuedAndInProgressWorkflowRuns
repositoryNames:
- summerwind/actions-runner-controller
``` ```
If you do not want to manage an explicit list of repositories to scale, an alternate autoscaling scheme that can be applied is the PercentageRunnersBusy scheme. The number of desired pods are evaulated by checking how many runners are currently busy and applying a scaleup or scale down factor if certain thresholds are met. By setting the metric type to PercentageRunnersBusy, the HorizontalRunnerAutoscaler will query github for the number of busy runners which live in the RunnerDeployment namespace. Scaleup and scaledown thresholds are the percentage of busy runners at which the number of desired runners are re-evaluated. Scaleup and scaledown factors are the multiplicative factor applied to the current number of runners used to calculate the number of desired runners. This scheme is also especially useful if you want multiple controllers in various clusters, each responsible for scaling their own runner pods per namespace. **PercentageRunnersBusy**
The `HorizontalRunnerAutoscaler` will pole GitHub based on the configuration sync period for the number of busy runners which live in the RunnerDeployment's namespace and scale based on the settings
**Kustomize Config :** The period can be customised in the `config/default/manager_auth_proxy_patch.yaml` patch<br />
**Helm Config :** `syncPeriod`
**Benefits of this metric**
1. Allows for multiple controllers to be deployed as each controller deployed is responsible for scaling their own runner pods on a per namespace basis.
2. Supports named repositories server side the same as the `TotalNumberOfQueuedAndInProgressWorkflowRuns` metric [#313](https://github.com/summerwind/actions-runner-controller/pull/313)
3. Supports github organisation wide scaling without maintaining an explicit list of repositories, this is especially useful for those that are working at a larger scale. [#223](https://github.com/summerwind/actions-runner-controller/pull/223)
4. Like all scaling metrics, you can manage workflow allocation to the RunnerDeployment through the use of [Github labels](#runner-labels)
5. Supports scaling runner count on both a percentage increase / descrease basis as well as on a fixed runner count basis [#223](https://github.com/summerwind/actions-runner-controller/pull/223) [#315](https://github.com/summerwind/actions-runner-controller/pull/315)
**Drawbacks of this metric**
1. May not scale quick enough for some users needs as we are scaling up and down based on indicative information rather than a direct count of the workflow queue depth
Examples of each scaling type implemented with a `RunnerDeployment` backed by a `HorizontalRunnerAutoscaler`:
```yaml ```yaml
--- ---
@@ -354,13 +379,38 @@ spec:
name: example-runner-deployment name: example-runner-deployment
minReplicas: 1 minReplicas: 1
maxReplicas: 3 maxReplicas: 3
scaleDownDelaySecondsAfterScaleOut: 60
metrics: metrics:
- type: PercentageRunnersBusy - type: PercentageRunnersBusy
scaleUpThreshold: '0.75' scaleUpThreshold: '0.75' # The percentage of busy runners at which the number of desired runners are re-evaluated to scale up
scaleDownThreshold: '0.3' scaleDownThreshold: '0.3' # The percentage of busy runners at which the number of desired runners are re-evaluated to scale down
scaleUpFactor: '1.4' scaleUpFactor: '1.4' # The scale up multiplier factor applied to desired count
scaleDownFactor: '0.7' scaleDownFactor: '0.7' # The scale down multiplier factor applied to desired count
```
```yaml
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name: example-runner-deployment-autoscaler
spec:
scaleTargetRef:
name: example-runner-deployment
minReplicas: 1
maxReplicas: 3
metrics:
- type: PercentageRunnersBusy
scaleUpThreshold: '0.75' # The percentage of busy runners at which the number of desired runners are re-evaluated to scale up
scaleDownThreshold: '0.3' # The percentage of busy runners at which the number of desired runners are re-evaluated to scale down
ScaleUpAdjustment: '2' # The scale up runner count added to desired count
ScaleDownAdjustment: '1' # The scale down runner count subtracted from the desired count
```
Like the previous metric, the scale down factor respects the anti-flapping configuration is applied to the `HorizontalRunnerAutoscaler` as mentioned previously:
```yaml
spec:
scaleDownDelaySecondsAfterScaleOut: 60
``` ```
#### Faster Autoscaling with GitHub Webhook #### Faster Autoscaling with GitHub Webhook
@@ -383,7 +433,7 @@ kind: HorizontalRunnerAutoscaler
spec: spec:
scaleTargetRef: scaleTargetRef:
name: myrunners name: myrunners
scaleUpTrigggers: scaleUpTriggers:
- githubEvent: - githubEvent:
checkRun: checkRun:
types: ["created"] types: ["created"]
@@ -442,7 +492,7 @@ kind: HorizontalRunnerAutoscaler
spec: spec:
scaleTargetRef: scaleTargetRef:
name: myrunners name: myrunners
scaleUpTrigggers: scaleUpTriggers:
- githubEvent: - githubEvent:
checkRun: checkRun:
types: ["created"] types: ["created"]
@@ -464,7 +514,7 @@ kind: HorizontalRunnerAutoscaler
spec: spec:
scaleTargetRef: scaleTargetRef:
name: myrunners name: myrunners
scaleUpTrigggers: scaleUpTriggers:
- githubEvent: - githubEvent:
pullRequest: pullRequest:
types: ["synchronize"] types: ["synchronize"]
@@ -529,20 +579,20 @@ spec:
requests: requests:
cpu: "2.0" cpu: "2.0"
memory: "4Gi" memory: "4Gi"
# Timeout after a node crashed or became unreachable to evict your pods somewhere else (default 5mins) # Timeout after a node crashed or became unreachable to evict your pods somewhere else (default 5mins)
tolerations: tolerations:
- key: "node.kubernetes.io/unreachable" - key: "node.kubernetes.io/unreachable"
operator: "Exists" operator: "Exists"
effect: "NoExecute" effect: "NoExecute"
tolerationSeconds: 10 tolerationSeconds: 10
# true (default) = A privileged docker sidecar container is included in the runner pod.
# false = A docker sidecar container is not included in the runner pod and you can't use docker.
# If set to false, there are no privileged container and you cannot use docker. # If set to false, there are no privileged container and you cannot use docker.
dockerEnabled: false dockerEnabled: false
# If set to true, runner pod container only 1 container that's expected to be able to run docker, too. # false (default) = Docker support is provided by a sidecar container deployed in the runner pod.
# image summerwind/actions-runner-dind or custom one should be used with true -value # true = No docker sidecar container is deployed in the runner pod but docker can be used within teh runner container instead. The image summerwind/actions-runner-dind is used by default.
dockerdWithinRunnerContainer: false dockerdWithinRunnerContainer: true
# Valid if dockerdWithinRunnerContainer is not true # Docker sidecar container image tweaks examples below, only applicable if dockerdWithinRunnerContainer = false
dockerdContainerResources: dockerdContainerResources:
limits: limits:
cpu: "4.0" cpu: "4.0"
@@ -550,6 +600,7 @@ spec:
requests: requests:
cpu: "2.0" cpu: "2.0"
memory: "4Gi" memory: "4Gi"
# Additional N number of sidecar containers
sidecarContainers: sidecarContainers:
- name: mysql - name: mysql
image: mysql:5.7 image: mysql:5.7
@@ -558,8 +609,8 @@ spec:
value: abcd1234 value: abcd1234
securityContext: securityContext:
runAsUser: 0 runAsUser: 0
# if workDir is not specified, the default working directory is /runner/_work # workDir if not specified (default = /runner/_work)
# this setting allows you to customize the working directory location # You can customise this setting allowing you to change the default working directory location
# for example, the below setting is the same as on the ubuntu-18.04 image # for example, the below setting is the same as on the ubuntu-18.04 image
workDir: /home/runner/work workDir: /home/runner/work
``` ```
@@ -737,9 +788,11 @@ NAME=$DOCKER_USER/actions-runner-controller \
The following is a list of alternative solutions that may better fit you depending on your use-case: The following is a list of alternative solutions that may better fit you depending on your use-case:
- <https://github.com/evryfs/github-actions-runner-operator/> - <https://github.com/evryfs/github-actions-runner-operator/>
- <https://github.com/philips-labs/terraform-aws-github-runner/>
Although the situation can change over time, as of writing this sentence, the benefits of using `actions-runner-controller` over the alternatives are: Although the situation can change over time, as of writing this sentence, the benefits of using `actions-runner-controller` over the alternatives are:
- `actions-runner-controller` has the ability to autoscale runners based on number of pending/progressing jobs (#99) - `actions-runner-controller` has the ability to autoscale runners based on number of pending/progressing jobs (#99)
- `actions-runner-controller` is able to gracefully stop runners (#103) - `actions-runner-controller` is able to gracefully stop runners (#103)
- `actions-runner-controller` has ARM support - `actions-runner-controller` has ARM support
- `actions-runner-controller` has GitHub Enterprise support (see [GitHub Enterprise support](#github-enterprise-support) section for caveats)


@@ -126,6 +126,16 @@ type MetricSpec struct {
// to determine how many pods should be removed. // to determine how many pods should be removed.
// +optional // +optional
ScaleDownFactor string `json:"scaleDownFactor,omitempty"` ScaleDownFactor string `json:"scaleDownFactor,omitempty"`
// ScaleUpAdjustment is the number of runners added on scale-up.
// You can only specify either ScaleUpFactor or ScaleUpAdjustment.
// +optional
ScaleUpAdjustment int `json:"scaleUpAdjustment,omitempty"`
// ScaleDownAdjustment is the number of runners removed on scale-down.
// You can only specify either ScaleDownFactor or ScaleDownAdjustment.
// +optional
ScaleDownAdjustment int `json:"scaleDownAdjustment,omitempty"`
} }
type HorizontalRunnerAutoscalerStatus struct { type HorizontalRunnerAutoscalerStatus struct {


@@ -25,13 +25,16 @@ const (
AutoscalingMetricTypePercentageRunnersBusy = "PercentageRunnersBusy" AutoscalingMetricTypePercentageRunnersBusy = "PercentageRunnersBusy"
) )
// RunnerReplicaSetSpec defines the desired state of RunnerDeployment // RunnerDeploymentSpec defines the desired state of RunnerDeployment
type RunnerDeploymentSpec struct { type RunnerDeploymentSpec struct {
// +optional // +optional
// +nullable // +nullable
Replicas *int `json:"replicas,omitempty"` Replicas *int `json:"replicas,omitempty"`
Template RunnerTemplate `json:"template"` // +optional
// +nullable
Selector *metav1.LabelSelector `json:"selector"`
Template RunnerTemplate `json:"template"`
} }
type RunnerDeploymentStatus struct { type RunnerDeploymentStatus struct {


@@ -26,7 +26,10 @@ type RunnerReplicaSetSpec struct {
// +nullable // +nullable
Replicas *int `json:"replicas,omitempty"` Replicas *int `json:"replicas,omitempty"`
Template RunnerTemplate `json:"template"` // +optional
// +nullable
Selector *metav1.LabelSelector `json:"selector"`
Template RunnerTemplate `json:"template"`
} }
type RunnerReplicaSetStatus struct { type RunnerReplicaSetStatus struct {


@@ -22,6 +22,7 @@ package v1alpha1
import ( import (
"k8s.io/api/core/v1" "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime"
) )
@@ -403,6 +404,11 @@ func (in *RunnerDeploymentSpec) DeepCopyInto(out *RunnerDeploymentSpec) {
*out = new(int) *out = new(int)
**out = **in **out = **in
} }
if in.Selector != nil {
in, out := &in.Selector, &out.Selector
*out = new(metav1.LabelSelector)
(*in).DeepCopyInto(*out)
}
in.Template.DeepCopyInto(&out.Template) in.Template.DeepCopyInto(&out.Template)
} }
@@ -535,6 +541,11 @@ func (in *RunnerReplicaSetSpec) DeepCopyInto(out *RunnerReplicaSetSpec) {
*out = new(int) *out = new(int)
**out = **in **out = **in
} }
if in.Selector != nil {
in, out := &in.Selector, &out.Selector
*out = new(metav1.LabelSelector)
(*in).DeepCopyInto(*out)
}
in.Template.DeepCopyInto(&out.Template) in.Template.DeepCopyInto(&out.Template)
} }


@@ -15,17 +15,17 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes # This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version. # to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/) # Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.4.0 version: 0.8.0
home: https://github.com/summerwind/actions-runner-controller home: https://github.com/summerwind/actions-runner-controller
sources: sources:
- https://github.com/summerwind/actions-runner-controller - https://github.com/summerwind/actions-runner-controller
maintainers: maintainers:
- name: summerwind - name: summerwind
email: contact@summerwind.jp email: contact@summerwind.jp
url: https://github.com/summerwind url: https://github.com/summerwind
- name: funkypenguin - name: funkypenguin
email: davidy@funkypenguin.co.nz email: davidy@funkypenguin.co.nz
url: https://www.funkypenguin.co.nz url: https://www.funkypenguin.co.nz


@@ -22,6 +22,9 @@ resources:
cpu: 100m cpu: 100m
memory: 128Mi memory: 128Mi
authSecret:
create: false
# Set the following to true to create a dummy secret, allowing the manager pod to start # Set the following to true to create a dummy secret, allowing the manager pod to start
# This is only useful in CI # This is only useful in CI
createDummySecret: true createDummySecret: true


@@ -78,6 +78,11 @@ spec:
items: items:
type: string type: string
type: array type: array
scaleDownAdjustment:
description: ScaleDownAdjustment is the number of runners removed
on scale-down. You can only specify either ScaleDownFactor or
ScaleDownAdjustment.
type: integer
scaleDownFactor: scaleDownFactor:
description: ScaleDownFactor is the multiplicative factor applied description: ScaleDownFactor is the multiplicative factor applied
to the current number of runners used to determine how many to the current number of runners used to determine how many
@@ -87,6 +92,10 @@ spec:
description: ScaleDownThreshold is the percentage of busy runners description: ScaleDownThreshold is the percentage of busy runners
less than which will trigger the hpa to scale the runners down. less than which will trigger the hpa to scale the runners down.
type: string type: string
scaleUpAdjustment:
description: ScaleUpAdjustment is the number of runners added
on scale-up. You can only specify either ScaleUpFactor or ScaleUpAdjustment.
type: integer
scaleUpFactor: scaleUpFactor:
description: ScaleUpFactor is the multiplicative factor applied description: ScaleUpFactor is the multiplicative factor applied
to the current number of runners used to determine how many to the current number of runners used to determine how many


@@ -38,11 +38,42 @@ spec:
metadata: metadata:
type: object type: object
spec: spec:
description: RunnerReplicaSetSpec defines the desired state of RunnerDeployment description: RunnerDeploymentSpec defines the desired state of RunnerDeployment
properties: properties:
replicas: replicas:
nullable: true nullable: true
type: integer type: integer
selector:
description: A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
nullable: true
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
template: template:
properties: properties:
metadata: metadata:


@@ -43,6 +43,37 @@ spec:
replicas: replicas:
nullable: true nullable: true
type: integer type: integer
selector:
description: A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
nullable: true
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
template: template:
properties: properties:
metadata: metadata:


@@ -1,8 +1,8 @@
1. Get the application URL by running these commands: 1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }} {{- if .Values.githubWebhookServer.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }} {{- range $host := .Values.githubWebhookServer.ingress.hosts }}
{{- range .paths }} {{- range .paths }}
http{{ if $.Values.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }} http{{ if $.Values.githubWebhookServer.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }} {{- end }}
{{- end }} {{- end }}
{{- else if contains "NodePort" .Values.service.type }} {{- else if contains "NodePort" .Values.service.type }}


@@ -47,6 +47,10 @@ Create the name of the service account to use
{{- end }} {{- end }}
{{- end }} {{- end }}
{{- define "actions-runner-controller-github-webhook-server.secretName" -}}
{{- default (include "actions-runner-controller-github-webhook-server.fullname" .) .Values.githubWebhookServer.secret.name }}
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.roleName" -}} {{- define "actions-runner-controller-github-webhook-server.roleName" -}}
{{- include "actions-runner-controller-github-webhook-server.fullname" . }} {{- include "actions-runner-controller-github-webhook-server.fullname" . }}
{{- end }} {{- end }}


@@ -64,6 +64,10 @@ Create the name of the service account to use
{{- end }} {{- end }}
{{- end }} {{- end }}
{{- define "actions-runner-controller.secretName" -}}
{{- default (include "actions-runner-controller.fullname" .) .Values.authSecret.name -}}
{{- end }}
{{- define "actions-runner-controller.leaderElectionRoleName" -}} {{- define "actions-runner-controller.leaderElectionRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-leader-election {{- include "actions-runner-controller.fullname" . }}-leader-election
{{- end }} {{- end }}
@@ -85,11 +89,11 @@ Create the name of the service account to use
{{- end }} {{- end }}
{{- define "actions-runner-controller.webhookServiceName" -}} {{- define "actions-runner-controller.webhookServiceName" -}}
{{- include "actions-runner-controller.fullname" . }}-webhook {{- include "actions-runner-controller.fullname" . | trunc 55 }}-webhook
{{- end }} {{- end }}
{{- define "actions-runner-controller.authProxyServiceName" -}} {{- define "actions-runner-controller.authProxyServiceName" -}}
{{- include "actions-runner-controller.fullname" . }}-metrics-service {{- include "actions-runner-controller.fullname" . | trunc 47 }}-metrics-service
{{- end }} {{- end }}
{{- define "actions-runner-controller.selfsignedIssuerName" -}} {{- define "actions-runner-controller.selfsignedIssuerName" -}}

View File

@@ -6,6 +6,7 @@ metadata:
labels: labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }} {{- include "actions-runner-controller.labels" . | nindent 4 }}
spec: spec:
replicas: {{ .Values.replicaCount }}
selector: selector:
matchLabels: matchLabels:
{{- include "actions-runner-controller.selectorLabels" . | nindent 6 }} {{- include "actions-runner-controller.selectorLabels" . | nindent 6 }}
@@ -34,6 +35,9 @@ spec:
- "--enable-leader-election" - "--enable-leader-election"
- "--sync-period={{ .Values.syncPeriod }}" - "--sync-period={{ .Values.syncPeriod }}"
- "--docker-image={{ .Values.image.dindSidecarRepositoryAndTag }}" - "--docker-image={{ .Values.image.dindSidecarRepositoryAndTag }}"
{{- if .Values.scope.singleNamespace }}
- "--watch-namespace={{ default .Release.Namespace .Values.scope.watchNamespace }}"
{{- end }}
command: command:
- "/manager" - "/manager"
env: env:
@@ -41,19 +45,19 @@ spec:
valueFrom: valueFrom:
secretKeyRef: secretKeyRef:
key: github_token key: github_token
name: controller-manager name: {{ include "actions-runner-controller.secretName" . }}
optional: true optional: true
- name: GITHUB_APP_ID - name: GITHUB_APP_ID
valueFrom: valueFrom:
secretKeyRef: secretKeyRef:
key: github_app_id key: github_app_id
name: controller-manager name: {{ include "actions-runner-controller.secretName" . }}
optional: true optional: true
- name: GITHUB_APP_INSTALLATION_ID - name: GITHUB_APP_INSTALLATION_ID
valueFrom: valueFrom:
secretKeyRef: secretKeyRef:
key: github_app_installation_id key: github_app_installation_id
name: controller-manager name: {{ include "actions-runner-controller.secretName" . }}
optional: true optional: true
- name: GITHUB_APP_PRIVATE_KEY - name: GITHUB_APP_PRIVATE_KEY
value: /etc/actions-runner-controller/github_app_private_key value: /etc/actions-runner-controller/github_app_private_key
@@ -61,7 +65,7 @@ spec:
- name: {{ $key }} - name: {{ $key }}
value: {{ $val | quote }} value: {{ $val | quote }}
{{- end }} {{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}" image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
name: manager name: manager
imagePullPolicy: {{ .Values.image.pullPolicy }} imagePullPolicy: {{ .Values.image.pullPolicy }}
ports: ports:
@@ -74,7 +78,7 @@ spec:
{{- toYaml .Values.securityContext | nindent 12 }} {{- toYaml .Values.securityContext | nindent 12 }}
volumeMounts: volumeMounts:
- mountPath: "/etc/actions-runner-controller" - mountPath: "/etc/actions-runner-controller"
name: controller-manager name: secret
readOnly: true readOnly: true
- mountPath: /tmp - mountPath: /tmp
name: tmp name: tmp
@@ -98,9 +102,9 @@ spec:
{{- toYaml .Values.securityContext | nindent 12 }} {{- toYaml .Values.securityContext | nindent 12 }}
terminationGracePeriodSeconds: 10 terminationGracePeriodSeconds: 10
volumes: volumes:
- name: controller-manager - name: secret
secret: secret:
secretName: controller-manager secretName: {{ include "actions-runner-controller.secretName" . }}
- name: cert - name: cert
secret: secret:
defaultMode: 420 defaultMode: 420

View File

@@ -7,6 +7,7 @@ metadata:
labels: labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }} {{- include "actions-runner-controller.labels" . | nindent 4 }}
spec: spec:
replicas: {{ .Values.githubWebhookServer.replicaCount }}
selector: selector:
matchLabels: matchLabels:
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 6 }} {{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 6 }}
@@ -32,7 +33,6 @@ spec:
containers: containers:
- args: - args:
- "--metrics-addr=127.0.0.1:8080" - "--metrics-addr=127.0.0.1:8080"
- "--enable-leader-election"
- "--sync-period={{ .Values.githubWebhookServer.syncPeriod }}" - "--sync-period={{ .Values.githubWebhookServer.syncPeriod }}"
command: command:
- "/github-webhook-server" - "/github-webhook-server"
@@ -41,18 +41,18 @@ spec:
valueFrom: valueFrom:
secretKeyRef: secretKeyRef:
key: github_webhook_secret_token key: github_webhook_secret_token
name: github-webhook-server name: {{ include "actions-runner-controller-github-webhook-server.secretName" . }}
optional: true optional: true
{{- range $key, $val := .Values.githubWebhookServer.env }} {{- range $key, $val := .Values.githubWebhookServer.env }}
- name: {{ $key }} - name: {{ $key }}
value: {{ $val | quote }} value: {{ $val | quote }}
{{- end }} {{- end }}
image: "{{ .Values.githubWebhookServer.image.repository }}:{{ .Values.githubWebhookServer.image.tag | default (cat "v" .Chart.AppVersion | replace " " "") }}" image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
name: github-webhook-server name: github-webhook-server
imagePullPolicy: {{ .Values.image.pullPolicy }} imagePullPolicy: {{ .Values.image.pullPolicy }}
ports: ports:
- containerPort: 8000 - containerPort: 8000
name: github-webhook-server name: http
protocol: TCP protocol: TCP
resources: resources:
{{- toYaml .Values.githubWebhookServer.resources | nindent 12 }} {{- toYaml .Values.githubWebhookServer.resources | nindent 12 }}
@@ -74,10 +74,6 @@ spec:
securityContext: securityContext:
{{- toYaml .Values.securityContext | nindent 12 }} {{- toYaml .Values.securityContext | nindent 12 }}
terminationGracePeriodSeconds: 10 terminationGracePeriodSeconds: 10
volumes:
- name: github-webhook-server
secret:
secretName: github-webhook-server
{{- with .Values.githubWebhookServer.nodeSelector }} {{- with .Values.githubWebhookServer.nodeSelector }}
nodeSelector: nodeSelector:
{{- toYaml . | nindent 8 }} {{- toYaml . | nindent 8 }}

View File

@@ -0,0 +1,41 @@
{{- if .Values.githubWebhookServer.ingress.enabled -}}
{{- $fullName := include "actions-runner-controller-github-webhook-server.fullname" . -}}
{{- $svcPort := .Values.githubWebhookServer.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.githubWebhookServer.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.githubWebhookServer.ingress.tls }}
tls:
{{- range .Values.githubWebhookServer.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.githubWebhookServer.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}

View File

@@ -0,0 +1,14 @@
{{- if .Values.githubWebhookServer.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "actions-runner-controller-github-webhook-server.roleName" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "actions-runner-controller-github-webhook-server.roleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller-github-webhook-server.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}

View File

@@ -1,16 +1,16 @@
{{- if .Values.githubWebhookServer.enabled }} {{- if .Values.githubWebhookServer.enabled }}
{{- if .Values.githubWebhookServer.secret.enabled }} {{- if .Values.githubWebhookServer.secret.create }}
apiVersion: v1 apiVersion: v1
kind: Secret kind: Secret
metadata: metadata:
name: github-webhook-server name: {{ include "actions-runner-controller-github-webhook-server.secretName" . }}
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
labels: labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }} {{- include "actions-runner-controller.labels" . | nindent 4 }}
type: Opaque type: Opaque
data: data:
{{- range $k, $v := .Values.githubWebhookServer.secret }} {{- if .Values.githubWebhookServer.secret.github_webhook_secret_token }}
{{ $k }}: {{ $v | toString | b64enc }} github_webhook_secret_token: {{ .Values.githubWebhookServer.secret.github_webhook_secret_token | toString | b64enc }}
{{- end }} {{- end }}
{{- end }} {{- end }}
{{- end }} {{- end }}

View File

@@ -1,14 +1,23 @@
{{- if or .Values.authSecret.enabled }} {{- if .Values.authSecret.create }}
apiVersion: v1 apiVersion: v1
kind: Secret kind: Secret
metadata: metadata:
name: controller-manager name: {{ include "actions-runner-controller.secretName" . }}
namespace: {{ .Release.Namespace }} namespace: {{ .Release.Namespace }}
labels: labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }} {{- include "actions-runner-controller.labels" . | nindent 4 }}
type: Opaque type: Opaque
data: data:
{{- range $k, $v := .Values.authSecret }} {{- if .Values.authSecret.github_app_id }}
{{ $k }}: {{ $v | toString | b64enc }} github_app_id: {{ .Values.authSecret.github_app_id | toString | b64enc }}
{{- end }}
{{- if .Values.authSecret.github_app_installation_id }}
github_app_installation_id: {{ .Values.authSecret.github_app_installation_id | toString | b64enc }}
{{- end }}
{{- if .Values.authSecret.github_app_private_key }}
github_app_private_key: {{ .Values.authSecret.github_app_private_key | toString | b64enc }}
{{- end }}
{{- if .Values.authSecret.github_token }}
github_token: {{ .Values.authSecret.github_token | toString | b64enc }}
{{- end }} {{- end }}
{{- end }} {{- end }}

View File

@@ -11,7 +11,8 @@ syncPeriod: 10m
# Only 1 authentication method can be deployed at a time # Only 1 authentication method can be deployed at a time
# Uncomment the configuration you are applying and fill in the details # Uncomment the configuration you are applying and fill in the details
authSecret: authSecret:
enabled: false create: true
name: "controller-manager"
### GitHub Apps Configuration ### GitHub Apps Configuration
#github_app_id: "" #github_app_id: ""
#github_app_installation_id: "" #github_app_installation_id: ""
@@ -21,15 +22,14 @@ authSecret:
image: image:
repository: summerwind/actions-runner-controller repository: summerwind/actions-runner-controller
# Overrides the manager image tag whose default is the chart appVersion if the tag key is commented out tag: "v0.17.0"
tag: "latest"
dindSidecarRepositoryAndTag: "docker:dind" dindSidecarRepositoryAndTag: "docker:dind"
pullPolicy: IfNotPresent pullPolicy: IfNotPresent
kube_rbac_proxy: kube_rbac_proxy:
image: image:
repository: gcr.io/kubebuilder/kube-rbac-proxy repository: quay.io/brancz/kube-rbac-proxy
tag: v0.4.1 tag: v0.8.0
imagePullSecrets: [] imagePullSecrets: []
nameOverride: "" nameOverride: ""
@@ -46,10 +46,12 @@ serviceAccount:
podAnnotations: {} podAnnotations: {}
podSecurityContext: {} podSecurityContext:
{}
# fsGroup: 2000 # fsGroup: 2000
securityContext: {} securityContext:
{}
# capabilities: # capabilities:
# drop: # drop:
# - ALL # - ALL
@@ -61,20 +63,8 @@ service:
type: ClusterIP type: ClusterIP
port: 443 port: 443
ingress: resources:
enabled: false {}
annotations: {}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
resources: {}
# We usually recommend not to specify default resources and to leave this as a conscious # We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases chances charts run on environments with little # choice for the user. This also increases chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following # resources, such as Minikube. If you do want to specify resources, uncomment the following
@@ -104,25 +94,29 @@ affinity: {}
# PriorityClass: system-cluster-critical # PriorityClass: system-cluster-critical
priorityClassName: "" priorityClassName: ""
env: {} env:
{}
# http_proxy: "proxy.com:8080" # http_proxy: "proxy.com:8080"
# https_proxy: "proxy.com:8080" # https_proxy: "proxy.com:8080"
# no_proxy: "" # no_proxy: ""
scope:
# If true, the controller will only watch custom resources in a single namespace
singleNamespace: false
# If `scope.singleNamespace=true`, the controller will only watch custom resources in this namespace
# The default value is "", which means the namespace of the controller
watchNamespace: ""
githubWebhookServer: githubWebhookServer:
enabled: false enabled: false
labels: {} labels: {}
replicaCount: 1 replicaCount: 1
syncPeriod: 10m syncPeriod: 10m
secret: secret:
enabled: false create: true
name: "github-webhook-server"
### GitHub Webhook Configuration ### GitHub Webhook Configuration
#github_webhook_secret_token: "" #github_webhook_secret_token: ""
image:
repository: summerwind/actions-runner-controller
# Overrides the manager image tag whose default is the chart appVersion if the tag key is commented out
tag: "latest"
pullPolicy: IfNotPresent
imagePullSecrets: [] imagePullSecrets: []
nameOverride: "" nameOverride: ""
fullnameOverride: "" fullnameOverride: ""
@@ -144,10 +138,23 @@ githubWebhookServer:
affinity: {} affinity: {}
priorityClassName: "" priorityClassName: ""
service: service:
type: NodePort type: ClusterIP
ports: ports:
- port: 80 - port: 80
targetPort: 8000 targetPort: http
protocol: TCP protocol: TCP
name: http name: http
#nodePort: someFixedPortForUseWithTerraformCdkCfnEtc #nodePort: someFixedPortForUseWithTerraformCdkCfnEtc
ingress:
enabled: false
annotations:
{}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local

View File

@@ -110,7 +110,7 @@ func main() {
Recorder: nil, Recorder: nil,
Scheme: mgr.GetScheme(), Scheme: mgr.GetScheme(),
SecretKeyBytes: []byte(webhookSecretToken), SecretKeyBytes: []byte(webhookSecretToken),
WatchNamespace: watchNamespace, Namespace: watchNamespace,
} }
if err = hraGitHubWebhook.SetupWithManager(mgr); err != nil { if err = hraGitHubWebhook.SetupWithManager(mgr); err != nil {

View File

@@ -78,6 +78,11 @@ spec:
items: items:
type: string type: string
type: array type: array
scaleDownAdjustment:
description: ScaleDownAdjustment is the number of runners removed
on scale-down. You can only specify either ScaleDownFactor or
ScaleDownAdjustment.
type: integer
scaleDownFactor: scaleDownFactor:
description: ScaleDownFactor is the multiplicative factor applied description: ScaleDownFactor is the multiplicative factor applied
to the current number of runners used to determine how many to the current number of runners used to determine how many
@@ -87,6 +92,10 @@ spec:
description: ScaleDownThreshold is the percentage of busy runners description: ScaleDownThreshold is the percentage of busy runners
less than which will trigger the hpa to scale the runners down. less than which will trigger the hpa to scale the runners down.
type: string type: string
scaleUpAdjustment:
description: ScaleUpAdjustment is the number of runners added
on scale-up. You can only specify either ScaleUpFactor or ScaleUpAdjustment.
type: integer
scaleUpFactor: scaleUpFactor:
description: ScaleUpFactor is the multiplicative factor applied description: ScaleUpFactor is the multiplicative factor applied
to the current number of runners used to determine how many to the current number of runners used to determine how many

View File

@@ -38,11 +38,42 @@ spec:
metadata: metadata:
type: object type: object
spec: spec:
description: RunnerReplicaSetSpec defines the desired state of RunnerDeployment description: RunnerDeploymentSpec defines the desired state of RunnerDeployment
properties: properties:
replicas: replicas:
nullable: true nullable: true
type: integer type: integer
selector:
description: A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
nullable: true
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
template: template:
properties: properties:
metadata: metadata:

View File

@@ -43,6 +43,37 @@ spec:
replicas: replicas:
nullable: true nullable: true
type: integer type: integer
selector:
description: A label selector is a label query over a set of resources. The result of matchLabels and matchExpressions are ANDed. An empty label selector matches all objects. A null label selector matches no objects.
nullable: true
properties:
matchExpressions:
description: matchExpressions is a list of label selector requirements. The requirements are ANDed.
items:
description: A label selector requirement is a selector that contains values, a key, and an operator that relates the key and values.
properties:
key:
description: key is the label key that the selector applies to.
type: string
operator:
description: operator represents a key's relationship to a set of values. Valid operators are In, NotIn, Exists and DoesNotExist.
type: string
values:
description: values is an array of string values. If the operator is In or NotIn, the values array must be non-empty. If the operator is Exists or DoesNotExist, the values array must be empty. This array is replaced during a strategic merge patch.
items:
type: string
type: array
required:
- key
- operator
type: object
type: array
matchLabels:
additionalProperties:
type: string
description: matchLabels is a map of {key,value} pairs. A single {key,value} in the matchLabels map is equivalent to an element of matchExpressions, whose key field is "key", the operator is "In", and the values array contains only "value". The requirements are ANDed.
type: object
type: object
template: template:
properties: properties:
metadata: metadata:

View File

@@ -10,7 +10,7 @@ spec:
spec: spec:
containers: containers:
- name: kube-rbac-proxy - name: kube-rbac-proxy
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1 image: quay.io/brancz/kube-rbac-proxy:v0.8.0
args: args:
- "--secure-listen-address=0.0.0.0:8443" - "--secure-listen-address=0.0.0.0:8443"
- "--upstream=http://127.0.0.1:8080/" - "--upstream=http://127.0.0.1:8080/"

View File

@@ -10,6 +10,8 @@ import (
"time" "time"
"github.com/summerwind/actions-runner-controller/api/v1alpha1" "github.com/summerwind/actions-runner-controller/api/v1alpha1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client" "sigs.k8s.io/controller-runtime/pkg/client"
) )
@@ -189,6 +191,9 @@ func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByQueuedAndInPro
"workflow_runs_in_progress", inProgress, "workflow_runs_in_progress", inProgress,
"workflow_runs_queued", queued, "workflow_runs_queued", queued,
"workflow_runs_unknown", unknown, "workflow_runs_unknown", unknown,
"namespace", hra.Namespace,
"runner_deployment", rd.Name,
"horizontal_runner_autoscaler", hra.Name,
) )
return &replicas, nil return &replicas, nil
@@ -196,7 +201,6 @@ func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByQueuedAndInPro
func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByPercentageRunnersBusy(rd v1alpha1.RunnerDeployment, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) { func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByPercentageRunnersBusy(rd v1alpha1.RunnerDeployment, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) {
ctx := context.Background() ctx := context.Background()
orgName := rd.Spec.Template.Spec.Organization
minReplicas := *hra.Spec.MinReplicas minReplicas := *hra.Spec.MinReplicas
maxReplicas := *hra.Spec.MaxReplicas maxReplicas := *hra.Spec.MaxReplicas
metrics := hra.Spec.Metrics[0] metrics := hra.Spec.Metrics[0]
@@ -220,14 +224,34 @@ func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByPercentageRunn
scaleDownThreshold = sdt scaleDownThreshold = sdt
} }
if metrics.ScaleUpFactor != "" {
scaleUpAdjustment := metrics.ScaleUpAdjustment
if scaleUpAdjustment != 0 {
if metrics.ScaleUpAdjustment < 0 {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleUpAdjustment cannot be lower than 0")
}
if metrics.ScaleUpFactor != "" {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[]: scaleUpAdjustment and scaleUpFactor cannot be specified together")
}
} else if metrics.ScaleUpFactor != "" {
suf, err := strconv.ParseFloat(metrics.ScaleUpFactor, 64) suf, err := strconv.ParseFloat(metrics.ScaleUpFactor, 64)
if err != nil { if err != nil {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleUpFactor cannot be parsed into a float64") return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleUpFactor cannot be parsed into a float64")
} }
scaleUpFactor = suf scaleUpFactor = suf
} }
if metrics.ScaleDownFactor != "" {
scaleDownAdjustment := metrics.ScaleDownAdjustment
if scaleDownAdjustment != 0 {
if metrics.ScaleDownAdjustment < 0 {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleDownAdjustment cannot be lower than 0")
}
if metrics.ScaleDownFactor != "" {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[]: scaleDownAdjustment and scaleDownFactor cannot be specified together")
}
} else if metrics.ScaleDownFactor != "" {
sdf, err := strconv.ParseFloat(metrics.ScaleDownFactor, 64) sdf, err := strconv.ParseFloat(metrics.ScaleDownFactor, 64)
if err != nil { if err != nil {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleDownFactor cannot be parsed into a float64") return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleDownFactor cannot be parsed into a float64")
@@ -235,35 +259,84 @@ func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByPercentageRunn
scaleDownFactor = sdf scaleDownFactor = sdf
} }
// return the list of runners in namespace. Horizontal Runner Autoscaler should only be responsible for scaling resources in its own ns. selector, err := metav1.LabelSelectorAsSelector(rd.Spec.Selector)
var runnerList v1alpha1.RunnerList if err != nil {
if err := r.List(ctx, &runnerList, client.InNamespace(rd.Namespace)); err != nil {
return nil, err return nil, err
} }
// return the list of runners in namespace. Horizontal Runner Autoscaler should only be responsible for scaling resources in its own ns.
var runnerList v1alpha1.RunnerList
if err := r.List(
ctx,
&runnerList,
client.InNamespace(rd.Namespace),
client.MatchingLabelsSelector{Selector: selector},
); err != nil {
if !kerrors.IsNotFound(err) {
return nil, err
}
}
runnerMap := make(map[string]struct{}) runnerMap := make(map[string]struct{})
for _, items := range runnerList.Items { for _, items := range runnerList.Items {
runnerMap[items.Name] = struct{}{} runnerMap[items.Name] = struct{}{}
} }
var (
enterprise = rd.Spec.Template.Spec.Enterprise
organization = rd.Spec.Template.Spec.Organization
repository = rd.Spec.Template.Spec.Repository
)
// ListRunners will return all runners managed by GitHub - not restricted to ns // ListRunners will return all runners managed by GitHub - not restricted to ns
runners, err := r.GitHubClient.ListRunners(ctx, "", orgName, "") runners, err := r.GitHubClient.ListRunners(
ctx,
enterprise,
organization,
repository)
if err != nil { if err != nil {
return nil, err return nil, err
} }
numRunners := len(runnerList.Items)
numRunnersBusy := 0 var desiredReplicasBefore int
if v := rd.Spec.Replicas; v == nil {
desiredReplicasBefore = 1
} else {
desiredReplicasBefore = *v
}
var (
numRunners int
numRunnersRegistered int
numRunnersBusy int
)
numRunners = len(runnerList.Items)
for _, runner := range runners { for _, runner := range runners {
if _, ok := runnerMap[*runner.Name]; ok && runner.GetBusy() { if _, ok := runnerMap[*runner.Name]; ok {
numRunnersBusy++ numRunnersRegistered++
if runner.GetBusy() {
numRunnersBusy++
}
} }
} }
var desiredReplicas int var desiredReplicas int
fractionBusy := float64(numRunnersBusy) / float64(numRunners) fractionBusy := float64(numRunnersBusy) / float64(desiredReplicasBefore)
if fractionBusy >= scaleUpThreshold { if fractionBusy >= scaleUpThreshold {
desiredReplicas = int(math.Ceil(float64(numRunners) * scaleUpFactor)) if scaleUpAdjustment > 0 {
desiredReplicas = desiredReplicasBefore + scaleUpAdjustment
} else {
desiredReplicas = int(math.Ceil(float64(desiredReplicasBefore) * scaleUpFactor))
}
} else if fractionBusy < scaleDownThreshold { } else if fractionBusy < scaleDownThreshold {
desiredReplicas = int(float64(numRunners) * scaleDownFactor) if scaleDownAdjustment > 0 {
desiredReplicas = desiredReplicasBefore - scaleDownAdjustment
} else {
desiredReplicas = int(float64(desiredReplicasBefore) * scaleDownFactor)
}
} else { } else {
desiredReplicas = *rd.Spec.Replicas desiredReplicas = *rd.Spec.Replicas
} }
@@ -274,14 +347,26 @@ func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByPercentageRunn
desiredReplicas = maxReplicas desiredReplicas = maxReplicas
} }
// NOTES for operators:
//
// - num_runners can be as twice as large as replicas_desired_before while
// the runnerdeployment controller is replacing RunnerReplicaSet for runner update.
r.Log.V(1).Info( r.Log.V(1).Info(
"Calculated desired replicas", "Calculated desired replicas",
"computed_replicas_desired", desiredReplicas, "replicas_min", minReplicas,
"spec_replicas_min", minReplicas, "replicas_max", maxReplicas,
"spec_replicas_max", maxReplicas, "replicas_desired_before", desiredReplicasBefore,
"current_replicas", rd.Spec.Replicas, "replicas_desired", desiredReplicas,
"num_runners", numRunners, "num_runners", numRunners,
"num_runners_registered", numRunnersRegistered,
"num_runners_busy", numRunnersBusy, "num_runners_busy", numRunnersBusy,
"namespace", hra.Namespace,
"runner_deployment", rd.Name,
"horizontal_runner_autoscaler", hra.Name,
"enterprise", enterprise,
"organization", organization,
"repository", repository,
) )
rd.Status.Replicas = &desiredReplicas rd.Status.Replicas = &desiredReplicas

View File

@@ -40,14 +40,18 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
metav1Now := metav1.Now() metav1Now := metav1.Now()
testcases := []struct { testcases := []struct {
repo string repo string
org string org string
fixed *int fixed *int
max *int max *int
min *int min *int
sReplicas *int sReplicas *int
sTime *metav1.Time sTime *metav1.Time
workflowRuns string
workflowRuns string
workflowRuns_queued string
workflowRuns_in_progress string
workflowJobs map[int]string workflowJobs map[int]string
want int want int
err string err string
@@ -55,87 +59,107 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
// Legacy functionality // Legacy functionality
// 3 demanded, max at 3 // 3 demanded, max at 3
{ {
repo: "test/valid", repo: "test/valid",
min: intPtr(2), min: intPtr(2),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3, workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 3,
}, },
// 2 demanded, max at 3, currently 3, delay scaling down due to grace period // 2 demanded, max at 3, currently 3, delay scaling down due to grace period
{ {
repo: "test/valid", repo: "test/valid",
min: intPtr(2), min: intPtr(2),
max: intPtr(3), max: intPtr(3),
sReplicas: intPtr(3), sReplicas: intPtr(3),
sTime: &metav1Now, sTime: &metav1Now,
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 3, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3, workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 3,
}, },
// 3 demanded, max at 2 // 3 demanded, max at 2
{ {
repo: "test/valid", repo: "test/valid",
min: intPtr(2), min: intPtr(2),
max: intPtr(2), max: intPtr(2),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2, workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 2,
}, },
// 2 demanded, min at 2 // 2 demanded, min at 2
{ {
repo: "test/valid", repo: "test/valid",
min: intPtr(2), min: intPtr(2),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 3, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 3, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2, workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 2,
}, },
// 1 demanded, min at 2 // 1 demanded, min at 2
{ {
repo: "test/valid", repo: "test/valid",
min: intPtr(2), min: intPtr(2),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
want: 2, workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 0, "workflow_runs":[]}"`,
want: 2,
}, },
// 1 demanded, min at 2 // 1 demanded, min at 2
{ {
repo: "test/valid", repo: "test/valid",
min: intPtr(2), min: intPtr(2),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2, workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 2,
}, },
// 1 demanded, min at 1 // 1 demanded, min at 1
{ {
repo: "test/valid", repo: "test/valid",
min: intPtr(1), min: intPtr(1),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
want: 1, workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 0, "workflow_runs":[]}"`,
want: 1,
}, },
// 1 demanded, min at 1 // 1 demanded, min at 1
{ {
repo: "test/valid", repo: "test/valid",
min: intPtr(1), min: intPtr(1),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
want: 1, workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 1,
}, },
// fixed at 3 // fixed at 3
{ {
repo: "test/valid", repo: "test/valid",
min: intPtr(1), min: intPtr(1),
max: intPtr(3), max: intPtr(3),
fixed: intPtr(3), fixed: intPtr(3),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3, workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 3, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 3,
}, },
// Job-level autoscaling // Job-level autoscaling
// 5 requested from 3 workflows // 5 requested from 3 workflows
{ {
repo: "test/valid", repo: "test/valid",
min: intPtr(2), min: intPtr(2),
max: intPtr(10), max: intPtr(10),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"id": 1, "status":"queued"}, {"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 4, "workflow_runs":[{"id": 1, "status":"queued"}, {"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"id": 1, "status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}]}"`,
workflowJobs: map[int]string{ workflowJobs: map[int]string{
1: `{"jobs": [{"status":"queued"}, {"status":"queued"}]}`, 1: `{"jobs": [{"status":"queued"}, {"status":"queued"}]}`,
2: `{"jobs": [{"status": "in_progress"}, {"status":"completed"}]}`, 2: `{"jobs": [{"status": "in_progress"}, {"status":"completed"}]}`,
@@ -158,7 +182,7 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
t.Run(fmt.Sprintf("case %d", i), func(t *testing.T) { t.Run(fmt.Sprintf("case %d", i), func(t *testing.T) {
server := fake.NewServer( server := fake.NewServer(
fake.WithListRepositoryWorkflowRunsResponse(200, tc.workflowRuns), fake.WithListRepositoryWorkflowRunsResponse(200, tc.workflowRuns, tc.workflowRuns_queued, tc.workflowRuns_in_progress),
fake.WithListWorkflowJobsResponse(200, tc.workflowJobs), fake.WithListWorkflowJobsResponse(200, tc.workflowJobs),
fake.WithListRunnersResponse(200, fake.RunnersListBody), fake.WithListRunnersResponse(200, fake.RunnersListBody),
) )
@@ -228,129 +252,157 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
metav1Now := metav1.Now() metav1Now := metav1.Now()
testcases := []struct { testcases := []struct {
repos []string repos []string
org string org string
fixed *int fixed *int
max *int max *int
min *int min *int
sReplicas *int sReplicas *int
sTime *metav1.Time sTime *metav1.Time
workflowRuns string
workflowRuns string
workflowRuns_queued string
workflowRuns_in_progress string
workflowJobs map[int]string workflowJobs map[int]string
want int want int
err string err string
}{ }{
// 3 demanded, max at 3 // 3 demanded, max at 3
{ {
org: "test", org: "test",
repos: []string{"valid"}, repos: []string{"valid"},
min: intPtr(2), min: intPtr(2),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3, workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 3,
}, },
// 2 demanded, max at 3, currently 3, delay scaling down due to grace period // 2 demanded, max at 3, currently 3, delay scaling down due to grace period
{ {
org: "test", org: "test",
repos: []string{"valid"}, repos: []string{"valid"},
min: intPtr(2), min: intPtr(2),
max: intPtr(3), max: intPtr(3),
sReplicas: intPtr(3), sReplicas: intPtr(3),
sTime: &metav1Now, sTime: &metav1Now,
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3, workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 3,
}, },
// 3 demanded, max at 2 // 3 demanded, max at 2
{ {
org: "test", org: "test",
repos: []string{"valid"}, repos: []string{"valid"},
min: intPtr(2), min: intPtr(2),
max: intPtr(2), max: intPtr(2),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2, workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 2,
}, },
// 2 demanded, min at 2 // 2 demanded, min at 2
{ {
org: "test", org: "test",
repos: []string{"valid"}, repos: []string{"valid"},
min: intPtr(2), min: intPtr(2),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 3, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 3, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2, workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 2,
}, },
// 1 demanded, min at 2 // 1 demanded, min at 2
{ {
org: "test", org: "test",
repos: []string{"valid"}, repos: []string{"valid"},
min: intPtr(2), min: intPtr(2),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
want: 2, workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 0, "workflow_runs":[]}"`,
want: 2,
}, },
// 1 demanded, min at 2 // 1 demanded, min at 2
{ {
org: "test", org: "test",
repos: []string{"valid"}, repos: []string{"valid"},
min: intPtr(2), min: intPtr(2),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2, workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 2,
}, },
// 1 demanded, min at 1 // 1 demanded, min at 1
{ {
org: "test", org: "test",
repos: []string{"valid"}, repos: []string{"valid"},
min: intPtr(1), min: intPtr(1),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
want: 1, workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 0, "workflow_runs":[]}"`,
want: 1,
}, },
// 1 demanded, min at 1 // 1 demanded, min at 1
{ {
org: "test", org: "test",
repos: []string{"valid"}, repos: []string{"valid"},
min: intPtr(1), min: intPtr(1),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
want: 1, workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 1,
}, },
// fixed at 3 // fixed at 3
{ {
org: "test", org: "test",
repos: []string{"valid"}, repos: []string{"valid"},
fixed: intPtr(1), fixed: intPtr(1),
min: intPtr(1), min: intPtr(1),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3, workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 3, "workflow_runs":[{"status":"in_progress"},{"status":"in_progress"},{"status":"in_progress"}]}"`,
want: 3,
}, },
// org runner, fixed at 3 // org runner, fixed at 3
{ {
org: "test", org: "test",
repos: []string{"valid"}, repos: []string{"valid"},
fixed: intPtr(1), fixed: intPtr(1),
min: intPtr(1), min: intPtr(1),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3, workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 3, "workflow_runs":[{"status":"in_progress"},{"status":"in_progress"},{"status":"in_progress"}]}"`,
want: 3,
}, },
// org runner, 1 demanded, min at 1, no repos // org runner, 1 demanded, min at 1, no repos
{ {
org: "test", org: "test",
min: intPtr(1), min: intPtr(1),
max: intPtr(3), max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
err: "validating autoscaling metrics: spec.autoscaling.metrics[].repositoryNames is required and must have one more more entries for organizational runner deployment", workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
err: "validating autoscaling metrics: spec.autoscaling.metrics[].repositoryNames is required and must have one more more entries for organizational runner deployment",
}, },
// Job-level autoscaling // Job-level autoscaling
// 5 requested from 3 workflows // 5 requested from 3 workflows
{ {
org: "test", org: "test",
repos: []string{"valid"}, repos: []string{"valid"},
min: intPtr(2), min: intPtr(2),
max: intPtr(10), max: intPtr(10),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"id": 1, "status":"queued"}, {"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}, {"status":"completed"}]}"`, workflowRuns: `{"total_count": 4, "workflow_runs":[{"id": 1, "status":"queued"}, {"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"id": 1, "status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}, {"status":"completed"}]}"`,
workflowJobs: map[int]string{ workflowJobs: map[int]string{
1: `{"jobs": [{"status":"queued"}, {"status":"queued"}]}`, 1: `{"jobs": [{"status":"queued"}, {"status":"queued"}]}`,
2: `{"jobs": [{"status": "in_progress"}, {"status":"completed"}]}`, 2: `{"jobs": [{"status": "in_progress"}, {"status":"completed"}]}`,
@@ -373,7 +425,7 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
t.Run(fmt.Sprintf("case %d", i), func(t *testing.T) { t.Run(fmt.Sprintf("case %d", i), func(t *testing.T) {
server := fake.NewServer( server := fake.NewServer(
fake.WithListRepositoryWorkflowRunsResponse(200, tc.workflowRuns), fake.WithListRepositoryWorkflowRunsResponse(200, tc.workflowRuns, tc.workflowRuns_queued, tc.workflowRuns_in_progress),
fake.WithListWorkflowJobsResponse(200, tc.workflowJobs), fake.WithListWorkflowJobsResponse(200, tc.workflowJobs),
fake.WithListRunnersResponse(200, fake.RunnersListBody), fake.WithListRunnersResponse(200, fake.RunnersListBody),
) )
@@ -391,7 +443,17 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
Name: "testrd", Name: "testrd",
}, },
Spec: v1alpha1.RunnerDeploymentSpec{ Spec: v1alpha1.RunnerDeploymentSpec{
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
Template: v1alpha1.RunnerTemplate{ Template: v1alpha1.RunnerTemplate{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"foo": "bar",
},
},
Spec: v1alpha1.RunnerSpec{ Spec: v1alpha1.RunnerSpec{
Organization: tc.org, Organization: tc.org,
}, },

View File

@@ -24,6 +24,7 @@ import (
"k8s.io/apimachinery/pkg/types" "k8s.io/apimachinery/pkg/types"
"net/http" "net/http"
"sigs.k8s.io/controller-runtime/pkg/reconcile" "sigs.k8s.io/controller-runtime/pkg/reconcile"
"strings"
"time" "time"
"github.com/go-logr/logr" "github.com/go-logr/logr"
@@ -52,10 +53,11 @@ type HorizontalRunnerAutoscalerGitHubWebhook struct {
// the administrator is generated and specified in GitHub Web UI. // the administrator is generated and specified in GitHub Web UI.
SecretKeyBytes []byte SecretKeyBytes []byte
// WatchNamespace is the namespace to watch for HorizontalRunnerAutoscaler's to be // Namespace is the namespace to watch for HorizontalRunnerAutoscaler's to be
// scaled on Webhook. // scaled on Webhook.
// Set to empty for letting it watch for all namespaces. // Set to empty for letting it watch for all namespaces.
WatchNamespace string Namespace string
Name string
} }
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Reconcile(request reconcile.Request) (reconcile.Result, error) { func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Reconcile(request reconcile.Request) (reconcile.Result, error) {
@@ -93,6 +95,12 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
} }
}() }()
// respond ok to GET / e.g. for health check
if r.Method == http.MethodGet {
fmt.Fprintln(w, "webhook server is running")
return
}
var payload []byte var payload []byte
if len(autoscaler.SecretKeyBytes) > 0 { if len(autoscaler.SecretKeyBytes) > 0 {
@@ -126,28 +134,38 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
var target *ScaleTarget var target *ScaleTarget
autoscaler.Log.Info("processing webhook event", "eventType", webhookType) log := autoscaler.Log.WithValues(
"event", webhookType,
"hookID", r.Header.Get("X-GitHub-Hook-ID"),
"delivery", r.Header.Get("X-GitHub-Delivery"),
)
switch e := event.(type) { switch e := event.(type) {
case *gogithub.PushEvent: case *gogithub.PushEvent:
target, err = autoscaler.getScaleUpTarget( target, err = autoscaler.getScaleUpTarget(
context.TODO(), context.TODO(),
*e.Repo.Name, log,
*e.Repo.Organization, e.Repo.GetName(),
e.Repo.Owner.GetLogin(),
e.Repo.Owner.GetType(),
autoscaler.MatchPushEvent(e), autoscaler.MatchPushEvent(e),
) )
case *gogithub.PullRequestEvent: case *gogithub.PullRequestEvent:
target, err = autoscaler.getScaleUpTarget( target, err = autoscaler.getScaleUpTarget(
context.TODO(), context.TODO(),
*e.Repo.Name, log,
*e.Repo.Organization.Name, e.Repo.GetName(),
e.Repo.Owner.GetLogin(),
e.Repo.Owner.GetType(),
autoscaler.MatchPullRequestEvent(e), autoscaler.MatchPullRequestEvent(e),
) )
case *gogithub.CheckRunEvent: case *gogithub.CheckRunEvent:
target, err = autoscaler.getScaleUpTarget( target, err = autoscaler.getScaleUpTarget(
context.TODO(), context.TODO(),
*e.Repo.Name, log,
*e.Org.Name, e.Repo.GetName(),
e.Repo.Owner.GetLogin(),
e.Repo.Owner.GetType(),
autoscaler.MatchCheckRunEvent(e), autoscaler.MatchCheckRunEvent(e),
) )
case *gogithub.PingEvent: case *gogithub.PingEvent:
@@ -158,20 +176,20 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
msg := "pong" msg := "pong"
if written, err := w.Write([]byte(msg)); err != nil { if written, err := w.Write([]byte(msg)); err != nil {
autoscaler.Log.Error(err, "failed writing http response", "msg", msg, "written", written) log.Error(err, "failed writing http response", "msg", msg, "written", written)
} }
autoscaler.Log.Info("received ping event") log.Info("received ping event")
return return
default: default:
autoscaler.Log.Info("unknown event type", "eventType", webhookType) log.Info("unknown event type", "eventType", webhookType)
return return
} }
if err != nil { if err != nil {
autoscaler.Log.Error(err, "handling check_run event") log.Error(err, "handling check_run event")
return return
} }
@@ -179,21 +197,21 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
if target == nil { if target == nil {
msg := "no horizontalrunnerautoscaler to scale for this github event" msg := "no horizontalrunnerautoscaler to scale for this github event"
autoscaler.Log.Info(msg, "eventType", webhookType) log.Info(msg, "eventType", webhookType)
ok = true ok = true
w.WriteHeader(http.StatusOK) w.WriteHeader(http.StatusOK)
if written, err := w.Write([]byte(msg)); err != nil { if written, err := w.Write([]byte(msg)); err != nil {
autoscaler.Log.Error(err, "failed writing http response", "msg", msg, "written", written) log.Error(err, "failed writing http response", "msg", msg, "written", written)
} }
return return
} }
if err := autoscaler.tryScaleUp(context.TODO(), target); err != nil { if err := autoscaler.tryScaleUp(context.TODO(), target); err != nil {
autoscaler.Log.Error(err, "could not scale up") log.Error(err, "could not scale up")
return return
} }
@@ -207,12 +225,12 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.Respons
autoscaler.Log.Info(msg) autoscaler.Log.Info(msg)
if written, err := w.Write([]byte(msg)); err != nil { if written, err := w.Write([]byte(msg)); err != nil {
autoscaler.Log.Error(err, "failed writing http response", "msg", msg, "written", written) log.Error(err, "failed writing http response", "msg", msg, "written", written)
} }
} }
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) findHRAsByKey(ctx context.Context, value string) ([]v1alpha1.HorizontalRunnerAutoscaler, error) { func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) findHRAsByKey(ctx context.Context, value string) ([]v1alpha1.HorizontalRunnerAutoscaler, error) {
ns := autoscaler.WatchNamespace ns := autoscaler.Namespace
var defaultListOpts []client.ListOption var defaultListOpts []client.ListOption
@@ -226,6 +244,10 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) findHRAsByKey(ctx con
opts := append([]client.ListOption{}, defaultListOpts...) opts := append([]client.ListOption{}, defaultListOpts...)
opts = append(opts, client.MatchingFields{scaleTargetKey: value}) opts = append(opts, client.MatchingFields{scaleTargetKey: value})
if autoscaler.Namespace != "" {
opts = append(opts, client.InNamespace(autoscaler.Namespace))
}
var hraList v1alpha1.HorizontalRunnerAutoscalerList var hraList v1alpha1.HorizontalRunnerAutoscalerList
if err := autoscaler.List(ctx, &hraList, opts...); err != nil { if err := autoscaler.List(ctx, &hraList, opts...); err != nil {
@@ -294,28 +316,59 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) getScaleTarget(ctx co
targets := autoscaler.searchScaleTargets(hras, f) targets := autoscaler.searchScaleTargets(hras, f)
if len(targets) != 1 { n := len(targets)
if n == 0 {
return nil, nil
}
if n > 1 {
var scaleTargetIDs []string
for _, t := range targets {
scaleTargetIDs = append(scaleTargetIDs, t.HorizontalRunnerAutoscaler.Name)
}
autoscaler.Log.Info(
"Found too many scale targets: "+
"It must be exactly one to avoid ambiguity. "+
"Either set Namespace for the webhook-based autoscaler to let it only find HRAs in the namespace, "+
"or update Repository or Organization fields in your RunnerDeployment resources to fix the ambiguity.",
"scaleTargets", strings.Join(scaleTargetIDs, ","))
return nil, nil return nil, nil
} }
return &targets[0], nil return &targets[0], nil
} }
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) getScaleUpTarget(ctx context.Context, repoNameFromWebhook, orgNameFromWebhook string, f func(v1alpha1.ScaleUpTrigger) bool) (*ScaleTarget, error) { func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) getScaleUpTarget(ctx context.Context, log logr.Logger, repo, owner, ownerType string, f func(v1alpha1.ScaleUpTrigger) bool) (*ScaleTarget, error) {
if target, err := autoscaler.getScaleTarget(ctx, repoNameFromWebhook, f); err != nil { repositoryRunnerKey := owner + "/" + repo
if target, err := autoscaler.getScaleTarget(ctx, repositoryRunnerKey, f); err != nil {
autoscaler.Log.Info("finding repository-wide runner", "repository", repositoryRunnerKey)
return nil, err return nil, err
} else if target != nil { } else if target != nil {
autoscaler.Log.Info("scale up target is repository-wide runners", "repository", repoNameFromWebhook) autoscaler.Log.Info("scale up target is repository-wide runners", "repository", repo)
return target, nil return target, nil
} }
if target, err := autoscaler.getScaleTarget(ctx, orgNameFromWebhook, f); err != nil { if ownerType == "User" {
return nil, nil
}
if target, err := autoscaler.getScaleTarget(ctx, owner, f); err != nil {
log.Info("finding organizational runner", "organization", owner)
return nil, err return nil, err
} else if target != nil { } else if target != nil {
autoscaler.Log.Info("scale up target is organizational runners", "repository", orgNameFromWebhook) log.Info("scale up target is organizational runners", "organization", owner)
return target, nil return target, nil
} }
log.Info(
"Scale target not found. If this is unexpected, ensure that there is exactly one repository-wide or organizational runner deployment that matches this webhook event",
)
return nil, nil return nil, nil
} }
@@ -324,8 +377,6 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) tryScaleUp(ctx contex
return nil return nil
} }
log := autoscaler.Log.WithValues("horizontalrunnerautoscaler", target.HorizontalRunnerAutoscaler.Name)
copy := target.HorizontalRunnerAutoscaler.DeepCopy() copy := target.HorizontalRunnerAutoscaler.DeepCopy()
amount := 1 amount := 1
@@ -334,22 +385,41 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) tryScaleUp(ctx contex
amount = target.ScaleUpTrigger.Amount
}
capacityReservations := getValidCapacityReservations(copy)
copy.Spec.CapacityReservations = append(capacityReservations, v1alpha1.CapacityReservation{
ExpirationTime: metav1.Time{Time: time.Now().Add(target.ScaleUpTrigger.Duration.Duration)},
Replicas: amount,
})
if err := autoscaler.Client.Patch(ctx, copy, client.MergeFrom(&target.HorizontalRunnerAutoscaler)); err != nil {
return fmt.Errorf("patching horizontalrunnerautoscaler to add capacity reservation: %w", err)
}
return nil
}
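The Update-to-Patch change above is worth dwelling on: client.MergeFrom computes a merge patch against the object as it was read, so only the appended capacity reservation is sent to the API server and the write does not fail on a stale resourceVersion the way a full Update can. A minimal sketch of the same pattern, with an illustrative ConfigMap and helper name rather than code from this PR:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// addEntry appends a key to a ConfigMap using a merge patch computed
// against the object as it was read, mirroring how tryScaleUp patches
// the HorizontalRunnerAutoscaler above. Illustrative helper only.
func addEntry(ctx context.Context, c client.Client, cm *corev1.ConfigMap, k, v string) error {
    base := cm.DeepCopy() // snapshot to diff against
    if cm.Data == nil {
        cm.Data = map[string]string{}
    }
    cm.Data[k] = v
    // Only the Data delta is sent to the API server.
    return c.Patch(ctx, cm, client.MergeFrom(base))
}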
func getValidCapacityReservations(autoscaler *v1alpha1.HorizontalRunnerAutoscaler) []v1alpha1.CapacityReservation {
var capacityReservations []v1alpha1.CapacityReservation
now := time.Now()
for _, reservation := range autoscaler.Spec.CapacityReservations {
if reservation.ExpirationTime.Time.After(now) {
capacityReservations = append(capacityReservations, reservation)
}
}
return capacityReservations
}
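getValidCapacityReservations prunes expired reservations every time a new one is appended, so webhook-driven bursts decay on their own once their Duration elapses. Summing what remains gives the extra capacity the triggers currently contribute; the helper below is hypothetical, shown only to make that relationship concrete:

package controllers

import (
    "github.com/summerwind/actions-runner-controller/api/v1alpha1"
)

// totalReserved is a hypothetical helper, not part of this PR: it sums the
// replicas still reserved by unexpired webhook scale-up triggers, i.e. what
// they contribute on top of the metric-based desired replica count.
func totalReserved(hra *v1alpha1.HorizontalRunnerAutoscaler) int {
    n := 0
    for _, r := range getValidCapacityReservations(hra) {
        n += r.Replicas
    }
    return n
}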
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) SetupWithManager(mgr ctrl.Manager) error {
name := "webhookbasedautoscaler"
if autoscaler.Name != "" {
name = autoscaler.Name
}
autoscaler.Recorder = mgr.GetEventRecorderFor(name)
if err := mgr.GetFieldIndexer().IndexField(&v1alpha1.HorizontalRunnerAutoscaler{}, scaleTargetKey, func(rawObj runtime.Object) []string {
hra := rawObj.(*v1alpha1.HorizontalRunnerAutoscaler)
@@ -371,5 +441,6 @@ func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) SetupWithManager(mgr
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.HorizontalRunnerAutoscaler{}).
Named(name).
Complete(autoscaler)
}
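Deriving the recorder and controller name from an overridable Name field lets several instances of the same reconciler type register against a single manager, which controller-runtime otherwise rejects with a duplicate-name error; the integration tests below rely on this by prefixing names with the test namespace. A rough sketch of that wiring, with made-up names and namespaces (and note that Start's signature varies across controller-runtime versions):

package main

import (
    ctrl "sigs.k8s.io/controller-runtime"

    "github.com/summerwind/actions-runner-controller/controllers"
)

func main() {
    mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
    if err != nil {
        panic(err)
    }
    // Two webhook autoscalers watching different namespaces; distinct
    // Names avoid the "controller with name ... already exists" error
    // and keep emitted Events attributable per instance.
    for _, c := range []*controllers.HorizontalRunnerAutoscalerGitHubWebhook{
        {Name: "webhookbasedautoscaler-team-a", Namespace: "team-a"},
        {Name: "webhookbasedautoscaler-team-b", Namespace: "team-b"},
    } {
        if err := c.SetupWithManager(mgr); err != nil {
            panic(err)
        }
    }
    if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
        panic(err)
    }
}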


@@ -9,13 +9,16 @@ import (
actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1" actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1"
"io" "io"
"io/ioutil" "io/ioutil"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme" clientgoscheme "k8s.io/client-go/kubernetes/scheme"
"net/http" "net/http"
"net/http/httptest" "net/http/httptest"
"net/url" "net/url"
"os"
"sigs.k8s.io/controller-runtime/pkg/client/fake" "sigs.k8s.io/controller-runtime/pkg/client/fake"
"testing" "testing"
"time"
) )
var (
@@ -27,21 +30,37 @@ func init() {
_ = actionsv1alpha1.AddToScheme(sc)
}
func TestOrgWebhookCheckRun(t *testing.T) {
f, err := os.Open("testdata/org_webhook_check_run_payload.json")
if err != nil {
t.Fatalf("could not open the fixture: %s", err)
}
defer f.Close()
var e github.CheckRunEvent
if err := json.NewDecoder(f).Decode(&e); err != nil {
t.Fatalf("invalid json: %s", err)
}
testServer(t,
"check_run",
&e,
200,
"no horizontalrunnerautoscaler to scale for this github event",
)
}

func TestRepoWebhookCheckRun(t *testing.T) {
f, err := os.Open("testdata/repo_webhook_check_run_payload.json")
if err != nil {
t.Fatalf("could not open the fixture: %s", err)
}
defer f.Close()
var e github.CheckRunEvent
if err := json.NewDecoder(f).Decode(&e); err != nil {
t.Fatalf("invalid json: %s", err)
}
testServer(t,
"check_run",
&e,
200,
"no horizontalrunnerautoscaler to scale for this github event",
)
@@ -94,6 +113,56 @@ func TestWebhookPing(t *testing.T) {
)
}
func TestGetRequest(t *testing.T) {
hra := HorizontalRunnerAutoscalerGitHubWebhook{}
request, _ := http.NewRequest(http.MethodGet, "/", nil)
recorder := httptest.ResponseRecorder{}
hra.Handle(&recorder, request)
response := recorder.Result()
if response.StatusCode != http.StatusOK {
t.Errorf("want %d, got %d", http.StatusOK, response.StatusCode)
}
}
func TestGetValidCapacityReservations(t *testing.T) {
now := time.Now()
hra := &actionsv1alpha1.HorizontalRunnerAutoscaler{
Spec: actionsv1alpha1.HorizontalRunnerAutoscalerSpec{
CapacityReservations: []actionsv1alpha1.CapacityReservation{
{
ExpirationTime: metav1.Time{Time: now.Add(-time.Second)},
Replicas: 1,
},
{
ExpirationTime: metav1.Time{Time: now},
Replicas: 2,
},
{
ExpirationTime: metav1.Time{Time: now.Add(time.Second)},
Replicas: 3,
},
},
},
}
revs := getValidCapacityReservations(hra)
var count int
for _, r := range revs {
count += r.Replicas
}
want := 3
if count != want {
t.Errorf("want %d, got %d", want, count)
}
}
func installTestLogger(webhook *HorizontalRunnerAutoscalerGitHubWebhook) *bytes.Buffer {
logs := &bytes.Buffer{}


@@ -18,6 +18,7 @@ package controllers
import (
"context"
"fmt"
"time"
"github.com/summerwind/actions-runner-controller/github"
@@ -48,6 +49,7 @@ type HorizontalRunnerAutoscalerReconciler struct {
Scheme *runtime.Scheme
CacheDuration time.Duration
Name string
}
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerdeployments,verbs=get;list;watch;update;patch
@@ -122,25 +124,23 @@ func (r *HorizontalRunnerAutoscalerReconciler) Reconcile(req ctrl.Request) (ctrl
copy := rd.DeepCopy()
copy.Spec.Replicas = &newDesiredReplicas
if err := r.Client.Patch(ctx, copy, client.MergeFrom(&rd)); err != nil {
return ctrl.Result{}, fmt.Errorf("patching runnerdeployment to have %d replicas: %w", newDesiredReplicas, err)
}
}
var updated *v1alpha1.HorizontalRunnerAutoscaler
if hra.Status.DesiredReplicas == nil || *hra.Status.DesiredReplicas != newDesiredReplicas {
updated = hra.DeepCopy()
if (hra.Status.DesiredReplicas == nil && newDesiredReplicas > 1) ||
(hra.Status.DesiredReplicas != nil && newDesiredReplicas > *hra.Status.DesiredReplicas) {
updated.Status.LastSuccessfulScaleOutTime = &metav1.Time{Time: time.Now()}
}
updated.Status.DesiredReplicas = &newDesiredReplicas
}
if replicasFromCache == nil {
@@ -148,13 +148,7 @@ func (r *HorizontalRunnerAutoscalerReconciler) Reconcile(req ctrl.Request) (ctrl
updated = hra.DeepCopy()
}
cacheEntries := getValidCacheEntries(updated, now)
var cacheDuration time.Duration
@@ -164,7 +158,7 @@ func (r *HorizontalRunnerAutoscalerReconciler) Reconcile(req ctrl.Request) (ctrl
cacheDuration = 10 * time.Minute
}
updated.Status.CacheEntries = append(cacheEntries, v1alpha1.CacheEntry{
Key: v1alpha1.CacheEntryKeyDesiredReplicas,
Value: *replicas,
ExpirationTime: metav1.Time{Time: time.Now().Add(cacheDuration)},
@@ -172,21 +166,37 @@ func (r *HorizontalRunnerAutoscalerReconciler) Reconcile(req ctrl.Request) (ctrl
}
if updated != nil {
if err := r.Status().Patch(ctx, updated, client.MergeFrom(&hra)); err != nil {
return ctrl.Result{}, fmt.Errorf("patching horizontalrunnerautoscaler status to add cache entry: %w", err)
}
}
return ctrl.Result{}, nil
}
func getValidCacheEntries(hra *v1alpha1.HorizontalRunnerAutoscaler, now time.Time) []v1alpha1.CacheEntry {
var cacheEntries []v1alpha1.CacheEntry
for _, ent := range hra.Status.CacheEntries {
if ent.ExpirationTime.After(now) {
cacheEntries = append(cacheEntries, ent)
}
}
return cacheEntries
}
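getValidCacheEntries keeps only entries whose expiration is still in the future, mirroring the capacity-reservation pruning above. A hypothetical lookup built on it, showing how a cached desired-replicas value would be consumed:

package controllers

import (
    "time"

    "github.com/summerwind/actions-runner-controller/api/v1alpha1"
)

// cachedDesiredReplicas is illustrative only: it returns the first unexpired
// cached desired-replicas value, which is what lets Reconcile skip GitHub
// API calls for up to CacheDuration.
func cachedDesiredReplicas(hra *v1alpha1.HorizontalRunnerAutoscaler, now time.Time) (int, bool) {
    for _, e := range getValidCacheEntries(hra, now) {
        if e.Key == v1alpha1.CacheEntryKeyDesiredReplicas {
            return e.Value, true
        }
    }
    return 0, false
}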
func (r *HorizontalRunnerAutoscalerReconciler) SetupWithManager(mgr ctrl.Manager) error {
name := "horizontalrunnerautoscaler-controller"
if r.Name != "" {
name = r.Name
}
r.Recorder = mgr.GetEventRecorderFor(name)
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.HorizontalRunnerAutoscaler{}).
Named(name).
Complete(r)
}


@@ -0,0 +1,49 @@
package controllers
import (
"github.com/google/go-cmp/cmp"
actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
"time"
)
func TestGetValidCacheEntries(t *testing.T) {
now := time.Now()
hra := &actionsv1alpha1.HorizontalRunnerAutoscaler{
Status: actionsv1alpha1.HorizontalRunnerAutoscalerStatus{
CacheEntries: []actionsv1alpha1.CacheEntry{
{
Key: "foo",
Value: 1,
ExpirationTime: metav1.Time{Time: now.Add(-time.Second)},
},
{
Key: "foo",
Value: 2,
ExpirationTime: metav1.Time{Time: now},
},
{
Key: "foo",
Value: 3,
ExpirationTime: metav1.Time{Time: now.Add(time.Second)},
},
},
},
}
revs := getValidCacheEntries(hra, now)
counts := map[string]int{}
for _, r := range revs {
counts[r.Key] += r.Value
}
want := map[string]int{"foo": 3}
if d := cmp.Diff(want, counts); d != "" {
t.Errorf("%s", d)
}
}


@@ -2,13 +2,15 @@ package controllers
import (
"context"
"fmt"
github3 "github.com/google/go-github/v33/github"
github2 "github.com/summerwind/actions-runner-controller/github"
"net/http"
"net/http/httptest"
"time"

"github.com/google/go-github/v33/github"
"k8s.io/apimachinery/pkg/runtime"

"github.com/summerwind/actions-runner-controller/github/fake"
corev1 "k8s.io/api/core/v1"
@@ -28,19 +30,22 @@ import (
type testEnvironment struct {
Namespace *corev1.Namespace
Responses *fake.FixedResponses
webhookServer *httptest.Server
ghClient *github2.Client
fakeRunnerList *fake.RunnersList
fakeGithubServer *httptest.Server
}
var (
workflowRunsFor3Replicas = `{"total_count": 5, "workflow_runs":[{"status":"queued"}, {"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`
workflowRunsFor3Replicas_queued = `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"queued"}]}"`
workflowRunsFor3Replicas_in_progress = `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`
workflowRunsFor1Replicas = `{"total_count": 6, "workflow_runs":[{"status":"queued"}, {"status":"completed"}, {"status":"completed"}, {"status":"completed"}, {"status":"completed"}]}"`
workflowRunsFor1Replicas_queued = `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`
workflowRunsFor1Replicas_in_progress = `{"total_count": 0, "workflow_runs":[]}"`
)
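The new _queued and _in_progress fixtures model GitHub's list-workflow-runs API when filtered by status, which the default TotalNumberOfQueuedAndInProgressWorkflowRuns metric queries separately. The arithmetic they encode, two queued plus one in-progress run yielding three desired runners, is sketched below; this is a simplification for illustration, not the controller's implementation:

package main

import (
    "encoding/json"
    "fmt"
)

type workflowRun struct {
    Status string `json:"status"`
}

type workflowRuns struct {
    TotalCount int           `json:"total_count"`
    Runs       []workflowRun `json:"workflow_runs"`
}

// desiredReplicas mimics the TotalNumberOfQueuedAndInProgressWorkflowRuns
// idea: one runner per queued or in-progress run. Simplified sketch only.
func desiredReplicas(payloads ...string) (int, error) {
    n := 0
    for _, p := range payloads {
        var wr workflowRuns
        if err := json.Unmarshal([]byte(p), &wr); err != nil {
            return 0, err
        }
        for _, r := range wr.Runs {
            if r.Status == "queued" || r.Status == "in_progress" {
                n++
            }
        }
    }
    return n, nil
}

func main() {
    queued := `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"queued"}]}`
    inProgress := `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}`
    n, _ := desiredReplicas(queued, inProgress)
    fmt.Println(n) // 3
}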
// SetupIntegrationTest will set up a testing environment.
// This includes:
// * creating a Namespace to be used during the test
@@ -51,15 +56,11 @@ func SetupIntegrationTest(ctx context.Context) *testEnvironment {
var stopCh chan struct{}
ns := &corev1.Namespace{}
env := &testEnvironment{
Namespace: ns,
webhookServer: nil,
ghClient: nil,
}
BeforeEach(func() {
stopCh = make(chan struct{})
@@ -73,14 +74,36 @@ func SetupIntegrationTest(ctx context.Context) *testEnvironment {
mgr, err := ctrl.NewManager(cfg, ctrl.Options{})
Expect(err).NotTo(HaveOccurred(), "failed to create manager")
responses := &fake.FixedResponses{}
responses.ListRunners = fake.DefaultListRunnersHandler()
responses.ListRepositoryWorkflowRuns = &fake.Handler{
Status: 200,
Body: workflowRunsFor3Replicas,
Statuses: map[string]string{
"queued": workflowRunsFor3Replicas_queued,
"in_progress": workflowRunsFor3Replicas_in_progress,
},
}
fakeRunnerList := fake.NewRunnersList()
responses.ListRunners = fakeRunnerList.HandleList()
fakeGithubServer := fake.NewServer(fake.WithFixedResponses(responses))
env.Responses = responses
env.fakeRunnerList = fakeRunnerList
env.fakeGithubServer = fakeGithubServer
env.ghClient = newGithubClient(fakeGithubServer)
controllerName := func(name string) string {
return fmt.Sprintf("%s%s", ns.Name, name)
}
replicasetController := &RunnerReplicaSetReconciler{
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
GitHubClient: env.ghClient,
Name: controllerName("runnerreplicaset"),
}
err = replicasetController.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
@@ -90,28 +113,30 @@ func SetupIntegrationTest(ctx context.Context) *testEnvironment {
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerdeployment-controller"),
Name: controllerName("runnnerdeployment"),
}
err = deploymentsController.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
autoscalerController := &HorizontalRunnerAutoscalerReconciler{
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
GitHubClient: env.ghClient,
Recorder: mgr.GetEventRecorderFor("horizontalrunnerautoscaler-controller"),
CacheDuration: 1 * time.Second,
Name: controllerName("horizontalrunnerautoscaler"),
}
err = autoscalerController.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
autoscalerWebhook := &HorizontalRunnerAutoscalerGitHubWebhook{
Client: mgr.GetClient(),
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("horizontalrunnerautoscaler-controller"),
Name: controllerName("horizontalrunnerautoscalergithubwebhook"),
Namespace: ns.Name,
}
err = autoscalerWebhook.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup autoscaler webhook")
@@ -119,7 +144,7 @@ func SetupIntegrationTest(ctx context.Context) *testEnvironment {
mux := http.NewServeMux()
mux.HandleFunc("/", autoscalerWebhook.Handle)
env.webhookServer = httptest.NewServer(mux)
go func() {
defer GinkgoRecover()
@@ -132,36 +157,45 @@ func SetupIntegrationTest(ctx context.Context) *testEnvironment {
AfterEach(func() {
close(stopCh)
env.fakeGithubServer.Close()
env.webhookServer.Close()
err := k8sClient.Delete(ctx, ns)
Expect(err).NotTo(HaveOccurred(), "failed to delete test namespace")
})
return env
}
var _ = Context("INTEGRATION: Inside of a new namespace", func() { var _ = Context("INTEGRATION: Inside of a new namespace", func() {
ctx := context.TODO() ctx := context.TODO()
env := SetupIntegrationTest(ctx) env := SetupIntegrationTest(ctx)
ns := env.Namespace ns := env.Namespace
responses := env.Responses
Describe("when no existing resources exist", func() { Describe("when no existing resources exist", func() {
It("should create and scale runners", func() { It("should create and scale organization's repository runners on pull_request event", func() {
name := "example-runnerdeploy" name := "example-runnerdeploy"
{ {
rd := &actionsv1alpha1.RunnerDeployment{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.RunnerDeploymentSpec{
Replicas: intPtr(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
Template: actionsv1alpha1.RunnerTemplate{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"foo": "bar",
},
},
Spec: actionsv1alpha1.RunnerSpec{
Repository: "test/valid",
Image: "bar",
@@ -174,80 +208,17 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
},
}
ExpectCreate(ctx, rd, "test RunnerDeployment")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1)
}
{
ExpectRunnerDeploymentEventuallyUpdates(ctx, ns.Name, name, func(rd *actionsv1alpha1.RunnerDeployment) {
rd.Spec.Replicas = intPtr(2)
})
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 2)
}
// Scale-up to 3 replicas
@@ -280,68 +251,24 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
},
}
ExpectCreate(ctx, hra, "test HorizontalRunnerAutoscaler")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 3)
ExpectHRAStatusCacheEntryLengthEventuallyEquals(ctx, ns.Name, name, 1)
}
{
env.ExpectRegisteredNumberCountEventuallyEquals(3, "count of fake runners after HRA creation")
}
// Scale-down to 1 replica
{
time.Sleep(time.Second)
env.Responses.ListRepositoryWorkflowRuns.Body = workflowRunsFor1Replicas
env.Responses.ListRepositoryWorkflowRuns.Statuses["queued"] = workflowRunsFor1Replicas_queued
env.Responses.ListRepositoryWorkflowRuns.Statuses["in_progress"] = workflowRunsFor1Replicas_in_progress
var hra actionsv1alpha1.HorizontalRunnerAutoscaler
@@ -357,77 +284,568 @@ var _ = Context("INTEGRATION: Inside of a new namespace", func() {
Expect(err).NotTo(HaveOccurred(), "failed to get test HorizontalRunnerAutoscaler resource")
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1, "runners after HRA force update for scale-down")
}
// Scale-up to 2 replicas on first pull_request create webhook event
{
env.SendOrgPullRequestEvent("test", "valid", "main", "created")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1, "runner sets after webhook")
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 2, "runners after first webhook event")
}
// Scale-up to 3 replicas on second pull_request create webhook event
{
env.SendOrgPullRequestEvent("test", "valid", "main", "created")
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 3, "runners after second webhook event")
}
})
It("should create and scale organization's repository runners on check_run event", func() {
name := "example-runnerdeploy"
{
rd := &actionsv1alpha1.RunnerDeployment{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.RunnerDeploymentSpec{
Replicas: intPtr(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
Template: actionsv1alpha1.RunnerTemplate{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"foo": "bar",
},
},
Spec: actionsv1alpha1.RunnerSpec{
Repository: "test/valid",
Image: "bar",
Group: "baz",
Env: []corev1.EnvVar{
{Name: "FOO", Value: "FOOVALUE"},
},
},
},
},
}
ExpectCreate(ctx, rd, "test RunnerDeployment")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1)
}
{
env.ExpectRegisteredNumberCountEventuallyEquals(1, "count of fake list runners")
}
// Scale-up to 3 replicas by the default TotalNumberOfQueuedAndInProgressWorkflowRuns-based scaling
// See workflowRunsFor3Replicas_queued and workflowRunsFor3Replicas_in_progress for the GitHub list-workflow-runs API responses
// used while testing.
{
hra := &actionsv1alpha1.HorizontalRunnerAutoscaler{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.HorizontalRunnerAutoscalerSpec{
ScaleTargetRef: actionsv1alpha1.ScaleTargetRef{
Name: name,
},
MinReplicas: intPtr(1),
MaxReplicas: intPtr(5),
ScaleDownDelaySecondsAfterScaleUp: intPtr(1),
Metrics: nil,
ScaleUpTriggers: []actionsv1alpha1.ScaleUpTrigger{
{
GitHubEvent: &actionsv1alpha1.GitHubEventScaleUpTriggerSpec{
CheckRun: &actionsv1alpha1.CheckRunSpec{
Types: []string{"created"},
Status: "pending",
},
},
Amount: 1,
Duration: metav1.Duration{Duration: time.Minute},
},
},
},
}
ExpectCreate(ctx, hra, "test HorizontalRunnerAutoscaler")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 3)
}
{
env.ExpectRegisteredNumberCountEventuallyEquals(3, "count of fake list runners")
}
// Scale-up to 4 replicas on first check_run create webhook event
{
env.SendOrgCheckRunEvent("test", "valid", "pending", "created")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1, "runner sets after webhook")
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 4, "runners after first webhook event")
}
{
env.ExpectRegisteredNumberCountEventuallyEquals(4, "count of fake list runners")
}
// Scale-up to 5 replicas on second check_run create webhook event
{
env.SendOrgCheckRunEvent("test", "valid", "pending", "created")
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 5, "runners after second webhook event")
}
env.ExpectRegisteredNumberCountEventuallyEquals(5, "count of fake list runners")
})
It("should create and scale user's repository runners on pull_request event", func() {
name := "example-runnerdeploy"
{
rd := &actionsv1alpha1.RunnerDeployment{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.RunnerDeploymentSpec{
Replicas: intPtr(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
Template: actionsv1alpha1.RunnerTemplate{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"foo": "bar",
},
},
Spec: actionsv1alpha1.RunnerSpec{
Repository: "test/valid",
Image: "bar",
Group: "baz",
Env: []corev1.EnvVar{
{Name: "FOO", Value: "FOOVALUE"},
},
},
},
},
}
ExpectCreate(ctx, rd, "test RunnerDeployment")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1)
}
{
ExpectRunnerDeploymentEventuallyUpdates(ctx, ns.Name, name, func(rd *actionsv1alpha1.RunnerDeployment) {
rd.Spec.Replicas = intPtr(2)
})
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 2)
}
// Scale-up to 3 replicas
{
hra := &actionsv1alpha1.HorizontalRunnerAutoscaler{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.HorizontalRunnerAutoscalerSpec{
ScaleTargetRef: actionsv1alpha1.ScaleTargetRef{
Name: name,
},
MinReplicas: intPtr(1),
MaxReplicas: intPtr(3),
ScaleDownDelaySecondsAfterScaleUp: intPtr(1),
Metrics: nil,
ScaleUpTriggers: []actionsv1alpha1.ScaleUpTrigger{
{
GitHubEvent: &actionsv1alpha1.GitHubEventScaleUpTriggerSpec{
PullRequest: &actionsv1alpha1.PullRequestSpec{
Types: []string{"created"},
Branches: []string{"main"},
},
},
Amount: 1,
Duration: metav1.Duration{Duration: time.Minute},
},
},
},
}
ExpectCreate(ctx, hra, "test HorizontalRunnerAutoscaler")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 3)
}
{
env.ExpectRegisteredNumberCountEventuallyEquals(3, "count of fake runners after HRA creation")
}
// Scale-down to 1 replica
{
time.Sleep(time.Second)
env.Responses.ListRepositoryWorkflowRuns.Body = workflowRunsFor1Replicas
env.Responses.ListRepositoryWorkflowRuns.Statuses["queued"] = workflowRunsFor1Replicas_queued
env.Responses.ListRepositoryWorkflowRuns.Statuses["in_progress"] = workflowRunsFor1Replicas_in_progress
var hra actionsv1alpha1.HorizontalRunnerAutoscaler
err := k8sClient.Get(ctx, types.NamespacedName{Namespace: ns.Name, Name: name}, &hra)
Expect(err).NotTo(HaveOccurred(), "failed to get test HorizontalRunnerAutoscaler resource")
hra.Annotations = map[string]string{
"force-update": "1",
}
err = k8sClient.Update(ctx, &hra)
Expect(err).NotTo(HaveOccurred(), "failed to get test HorizontalRunnerAutoscaler resource")
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1, "runners after HRA force update for scale-down")
ExpectHRADesiredReplicasEquals(ctx, ns.Name, name, 1, "runner deployment desired replicas")
}
// Scale-up to 2 replicas on first pull_request create webhook event
{
env.SendUserPullRequestEvent("test", "valid", "main", "created")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1, "runner sets after webhook")
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 2, "runners after first webhook event")
ExpectHRADesiredReplicasEquals(ctx, ns.Name, name, 2, "runner deployment desired replicas")
}
// Scale-up to 3 replicas on second pull_request create webhook event
{
env.SendUserPullRequestEvent("test", "valid", "main", "created")
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 3, "runners after second webhook event")
ExpectHRADesiredReplicasEquals(ctx, ns.Name, name, 3, "runner deployment desired replicas")
}
})
It("should create and scale user's repository runners on check_run event", func() {
name := "example-runnerdeploy"
{
rd := &actionsv1alpha1.RunnerDeployment{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.RunnerDeploymentSpec{
Replicas: intPtr(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
Template: actionsv1alpha1.RunnerTemplate{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"foo": "bar",
},
},
Spec: actionsv1alpha1.RunnerSpec{
Repository: "test/valid",
Image: "bar",
Group: "baz",
Env: []corev1.EnvVar{
{Name: "FOO", Value: "FOOVALUE"},
},
},
},
},
}
ExpectCreate(ctx, rd, "test RunnerDeployment")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 1)
}
{
env.ExpectRegisteredNumberCountEventuallyEquals(1, "count of fake list runners")
}
// Scale-up to 3 replicas by the default TotalNumberOfQueuedAndInProgressWorkflowRuns-based scaling
// See workflowRunsFor3Replicas_queued and workflowRunsFor3Replicas_in_progress for the GitHub list-workflow-runs API responses
// used while testing.
{
hra := &actionsv1alpha1.HorizontalRunnerAutoscaler{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.HorizontalRunnerAutoscalerSpec{
ScaleTargetRef: actionsv1alpha1.ScaleTargetRef{
Name: name,
},
MinReplicas: intPtr(1),
MaxReplicas: intPtr(5),
ScaleDownDelaySecondsAfterScaleUp: intPtr(1),
Metrics: nil,
ScaleUpTriggers: []actionsv1alpha1.ScaleUpTrigger{
{
GitHubEvent: &actionsv1alpha1.GitHubEventScaleUpTriggerSpec{
CheckRun: &actionsv1alpha1.CheckRunSpec{
Types: []string{"created"},
Status: "pending",
},
},
Amount: 1,
Duration: metav1.Duration{Duration: time.Minute},
},
},
},
}
ExpectCreate(ctx, hra, "test HorizontalRunnerAutoscaler")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1)
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 3)
}
{
env.ExpectRegisteredNumberCountEventuallyEquals(3, "count of fake list runners")
}
// Scale-up to 4 replicas on first check_run create webhook event
{
env.SendUserCheckRunEvent("test", "valid", "pending", "created")
ExpectRunnerSetsCountEventuallyEquals(ctx, ns.Name, 1, "runner sets after webhook")
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 4, "runners after first webhook event")
}
{
env.ExpectRegisteredNumberCountEventuallyEquals(4, "count of fake list runners")
}
// Scale-up to 5 replicas on second check_run create webhook event
{
env.SendUserCheckRunEvent("test", "valid", "pending", "created")
ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx, ns.Name, 5, "runners after second webhook event")
}
env.ExpectRegisteredNumberCountEventuallyEquals(5, "count of fake list runners")
})
})
})
func ExpectHRAStatusCacheEntryLengthEventuallyEquals(ctx context.Context, ns string, name string, value int, optionalDescriptions ...interface{}) {
EventuallyWithOffset(
1,
func() int {
var hra actionsv1alpha1.HorizontalRunnerAutoscaler
err := k8sClient.Get(ctx, types.NamespacedName{Namespace: ns, Name: name}, &hra)
ExpectWithOffset(1, err).NotTo(HaveOccurred(), "failed to get test HRA resource")
return len(hra.Status.CacheEntries)
},
time.Second*5, time.Millisecond*500).Should(Equal(value), optionalDescriptions...)
}
func ExpectHRADesiredReplicasEquals(ctx context.Context, ns, name string, desired int, optionalDescriptions ...interface{}) {
var rd actionsv1alpha1.HorizontalRunnerAutoscaler
err := k8sClient.Get(ctx, types.NamespacedName{Namespace: ns, Name: name}, &rd)
ExpectWithOffset(1, err).NotTo(HaveOccurred(), "failed to get test HRA resource")
replicas := rd.Status.DesiredReplicas
ExpectWithOffset(1, *replicas).To(Equal(desired), optionalDescriptions...)
}
func (env *testEnvironment) ExpectRegisteredNumberCountEventuallyEquals(want int, optionalDescriptions ...interface{}) {
EventuallyWithOffset(
1,
func() int {
env.SyncRunnerRegistrations()
rs, err := env.ghClient.ListRunners(context.Background(), "", "", "test/valid")
Expect(err).NotTo(HaveOccurred(), "verifying list fake runners response")
return len(rs)
},
time.Second*5, time.Millisecond*500).Should(Equal(want), optionalDescriptions...)
}
func (env *testEnvironment) SendOrgPullRequestEvent(org, repo, branch, action string) {
resp, err := sendWebhook(env.webhookServer, "pull_request", &github.PullRequestEvent{
PullRequest: &github.PullRequest{
Base: &github.PullRequestBranch{
Ref: github.String(branch),
},
},
Repo: &github.Repository{
Name: github.String(repo),
Owner: &github.User{
Login: github.String(org),
Type: github.String("Organization"),
},
},
Action: github.String(action),
})
ExpectWithOffset(1, err).NotTo(HaveOccurred(), "failed to send pull_request event")
ExpectWithOffset(1, resp.StatusCode).To(Equal(200))
}
func (env *testEnvironment) SendOrgCheckRunEvent(org, repo, status, action string) {
resp, err := sendWebhook(env.webhookServer, "check_run", &github.CheckRunEvent{
CheckRun: &github.CheckRun{
Status: github.String(status),
},
Org: &github.Organization{
Login: github.String(org),
},
Repo: &github.Repository{
Name: github.String(repo),
Owner: &github.User{
Login: github.String(org),
Type: github.String("Organization"),
},
},
Action: github.String(action),
})
ExpectWithOffset(1, err).NotTo(HaveOccurred(), "failed to send check_run event")
ExpectWithOffset(1, resp.StatusCode).To(Equal(200))
}
func (env *testEnvironment) SendUserPullRequestEvent(owner, repo, branch, action string) {
resp, err := sendWebhook(env.webhookServer, "pull_request", &github.PullRequestEvent{
PullRequest: &github.PullRequest{
Base: &github.PullRequestBranch{
Ref: github.String(branch),
},
},
Repo: &github.Repository{
Name: github.String(repo),
Owner: &github.User{
Login: github.String(owner),
Type: github.String("User"),
},
},
Action: github.String(action),
})
ExpectWithOffset(1, err).NotTo(HaveOccurred(), "failed to send pull_request event")
ExpectWithOffset(1, resp.StatusCode).To(Equal(200))
}
func (env *testEnvironment) SendUserCheckRunEvent(owner, repo, status, action string) {
resp, err := sendWebhook(env.webhookServer, "check_run", &github.CheckRunEvent{
CheckRun: &github.CheckRun{
Status: github.String(status),
},
Repo: &github.Repository{
Name: github.String(repo),
Owner: &github.User{
Login: github.String(owner),
Type: github.String("User"),
},
},
Action: github.String(action),
})
ExpectWithOffset(1, err).NotTo(HaveOccurred(), "failed to send check_run event")
ExpectWithOffset(1, resp.StatusCode).To(Equal(200))
}
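The four sender helpers differ only in the owner fields of the payload: Owner.Type is the discriminator getScaleUpTarget routes on, skipping the organizational-runner lookup when it reads "User". A hypothetical extraction of that field, using go-github's nil-safe accessors:

package controllers

import (
    "github.com/google/go-github/v33/github"
)

// ownerTypeOf is a hypothetical helper, shown only to highlight the
// routing discriminator: "Organization" for org-owned repositories,
// "User" for personal ones. The GetX accessors are nil-safe in go-github.
func ownerTypeOf(e *github.PullRequestEvent) string {
    return e.GetRepo().GetOwner().GetType()
}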
func (env *testEnvironment) SyncRunnerRegistrations() {
var runnerList actionsv1alpha1.RunnerList
err := k8sClient.List(context.TODO(), &runnerList, client.InNamespace(env.Namespace.Name))
if err != nil {
logf.Log.Error(err, "list runners")
}
env.fakeRunnerList.Sync(runnerList.Items)
}
func ExpectCreate(ctx context.Context, rd runtime.Object, s string) {
err := k8sClient.Create(ctx, rd)
ExpectWithOffset(1, err).NotTo(HaveOccurred(), fmt.Sprintf("failed to create %s resource", s))
}
func ExpectRunnerDeploymentEventuallyUpdates(ctx context.Context, ns string, name string, f func(rd *actionsv1alpha1.RunnerDeployment)) {
// We wrap the update in the Eventually block to avoid the below error that occurs due to concurrent modification
// made by the controller to update .Status.AvailableReplicas and .Status.ReadyReplicas
// Operation cannot be fulfilled on runnersets.actions.summerwind.dev "example-runnerset": the object has been modified; please apply your changes to the latest version and try again
EventuallyWithOffset(
1,
func() error {
var rd actionsv1alpha1.RunnerDeployment
err := k8sClient.Get(ctx, types.NamespacedName{Namespace: ns, Name: name}, &rd)
Expect(err).NotTo(HaveOccurred(), "failed to get test RunnerDeployment resource")
f(&rd)
return k8sClient.Update(ctx, &rd)
},
time.Second*1, time.Millisecond*500).Should(BeNil())
}
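ExpectRunnerDeploymentEventuallyUpdates wraps the get-modify-update cycle in Eventually to ride out optimistic-concurrency conflicts. Outside Ginkgo the same is commonly done with client-go's retry helper; a sketch under the assumption of the package's k8sClient test client, not code from this PR:

package controllers

import (
    "context"

    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/util/retry"

    actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1"
)

// scaleTo bumps Replicas with client-go's conflict-retry helper; an
// alternative to the Eventually wrapper above, shown for illustration.
func scaleTo(ctx context.Context, key types.NamespacedName, replicas int) error {
    return retry.RetryOnConflict(retry.DefaultRetry, func() error {
        var rd actionsv1alpha1.RunnerDeployment
        if err := k8sClient.Get(ctx, key, &rd); err != nil {
            return err
        }
        rd.Spec.Replicas = &replicas
        return k8sClient.Update(ctx, &rd)
    })
}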
func ExpectRunnerSetsCountEventuallyEquals(ctx context.Context, ns string, count int, optionalDescription ...interface{}) {
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
EventuallyWithOffset(
1,
func() int {
err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns))
if err != nil {
logf.Log.Error(err, "list runner sets")
}
return len(runnerSets.Items)
},
time.Second*10, time.Millisecond*500).Should(BeEquivalentTo(count), optionalDescription...)
}
func ExpectRunnerSetsManagedReplicasCountEventuallyEquals(ctx context.Context, ns string, count int, optionalDescription ...interface{}) {
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
EventuallyWithOffset(
1,
func() int {
err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns))
if err != nil {
logf.Log.Error(err, "list runner sets")
}
if len(runnerSets.Items) == 0 {
logf.Log.Info("No runnerreplicasets exist yet")
return -1
}
if len(runnerSets.Items) != 1 {
logf.Log.Info("Too many runnerreplicasets exist", "runnerSets", runnerSets)
return -1
}
return *runnerSets.Items[0].Spec.Replicas
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(count), optionalDescription...)
}


@@ -244,60 +244,85 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
return ctrl.Result{}, err
}
// all checks done below only decide whether a restart is needed
// if a restart was already decided before, there is no need for the checks
// saving API calls and scary log messages
if !restart {
notRegistered := false
offline := false
runnerBusy, err := r.GitHubClient.IsRunnerBusy(ctx, runner.Spec.Enterprise, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
if err != nil {
var notFoundException *github.RunnerNotFound
var offlineException *github.RunnerOffline
if errors.As(err, &notFoundException) {
log.V(1).Info("Failed to check if runner is busy. Either this runner has never been successfully registered to GitHub or it still needs more time.", "runnerName", runner.Name)
notRegistered = true
} else if errors.As(err, &offlineException) {
log.V(1).Info("GitHub runner appears to be offline, waiting for runner to get online ...", "runnerName", runner.Name)
offline = true
} else {
var e *gogithub.RateLimitError
if errors.As(err, &e) {
// We log the underlying error when we failed calling GitHub API to list or unregisters,
// or the runner is still busy.
log.Error(
err,
fmt.Sprintf(
"Failed to check if runner is busy due to Github API rate limit. Retrying in %s to avoid excessive GitHub API calls",
retryDelayOnGitHubAPIRateLimitError,
),
)
return ctrl.Result{RequeueAfter: retryDelayOnGitHubAPIRateLimitError}, err
}
return ctrl.Result{}, err
}
}
// See the `newPod` function called above for more information
// about when this hash changes.
curHash := pod.Labels[LabelKeyPodTemplateHash]
newHash := newPod.Labels[LabelKeyPodTemplateHash]
if !runnerBusy && curHash != newHash {
restart = true
}
registrationTimeout := 10 * time.Minute
currentTime := time.Now()
registrationDidTimeout := currentTime.Sub(pod.CreationTimestamp.Add(registrationTimeout)) > 0
if notRegistered && registrationDidTimeout {
log.Info(
"Runner failed to register itself to GitHub in timely manner. "+
"Recreating the pod to see if it resolves the issue. "+
"CAUTION: If you see this a lot, you should investigate the root cause. "+
"See https://github.com/summerwind/actions-runner-controller/issues/288",
"podCreationTimestamp", pod.CreationTimestamp,
"currentTime", currentTime,
"configuredRegistrationTimeout", registrationTimeout,
)
restart = true
}
if offline && registrationDidTimeout {
log.Info(
"Already existing GitHub runner still appears offline . "+
"Recreating the pod to see if it resolves the issue. "+
"CAUTION: If you see this a lot, you should investigate the root cause. ",
"podCreationTimestamp", pod.CreationTimestamp,
"currentTime", currentTime,
"configuredRegistrationTimeout", registrationTimeout,
)
restart = true
}
}
// Don't do anything if there's no need to restart the runner
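The restructured check distinguishes three failure modes with typed errors and errors.As: never registered, registered but offline, and rate-limited. The standalone sketch below illustrates the pattern with stand-in error types; the real RunnerNotFound and RunnerOffline types live in the project's github package:

package main

import (
    "errors"
    "fmt"
)

// Stand-in error types for illustration only.
type RunnerNotFound struct{ Name string }

func (e *RunnerNotFound) Error() string { return fmt.Sprintf("runner %q not found", e.Name) }

type RunnerOffline struct{ Name string }

func (e *RunnerOffline) Error() string { return fmt.Sprintf("runner %q offline", e.Name) }

// classify mirrors the reconciler's branching: errors.As unwraps the
// chain, so wrapped errors still match their concrete type.
func classify(err error) string {
    var notFound *RunnerNotFound
    var offline *RunnerOffline
    switch {
    case errors.As(err, &notFound):
        return "not registered yet"
    case errors.As(err, &offline):
        return "registered but offline"
    default:
        return "unexpected error"
    }
}

func main() {
    err := fmt.Errorf("checking busy state: %w", &RunnerOffline{Name: "runner-1"})
    fmt.Println(classify(err)) // registered but offline
}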
@@ -655,11 +680,14 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
}
func (r *RunnerReconciler) SetupWithManager(mgr ctrl.Manager) error {
name := "runner-controller"
r.Recorder = mgr.GetEventRecorderFor(name)
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.Runner{}).
Owns(&corev1.Pod{}).
Named(name).
Complete(r)
}


@@ -20,6 +20,7 @@ import (
"context" "context"
"fmt" "fmt"
"hash/fnv" "hash/fnv"
"reflect"
"sort" "sort"
"time" "time"
@@ -48,9 +49,11 @@ const (
// RunnerDeploymentReconciler reconciles a Runner object
type RunnerDeploymentReconciler struct {
client.Client
Log logr.Logger
Recorder record.EventRecorder
Scheme *runtime.Scheme
CommonRunnerLabels []string
Name string
}
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerdeployments,verbs=get;list;watch;create;update;patch;delete
@@ -141,6 +144,28 @@ func (r *RunnerDeploymentReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
return ctrl.Result{RequeueAfter: 5 * time.Second}, nil
}
if !reflect.DeepEqual(newestSet.Spec.Selector, desiredRS.Spec.Selector) {
updateSet := newestSet.DeepCopy()
updateSet.Spec = *desiredRS.Spec.DeepCopy()
// A selector update change doesn't trigger replicaset replacement,
// but we still need to update the existing replicaset with it.
// Otherwise selector-based runner query will never work on replicasets created before the controller v0.17.0
// See https://github.com/summerwind/actions-runner-controller/pull/355#discussion_r585379259
if err := r.Client.Update(ctx, updateSet); err != nil {
log.Error(err, "Failed to update runnerreplicaset resource")
return ctrl.Result{}, err
}
// At this point, we are already sure that there's no need to create a new replicaset
// as the runner template hash is not changed.
//
// But we still need to requeue for the (possibly rare) cases that there are still old replicasets that needs
// to be cleaned up.
return ctrl.Result{RequeueAfter: 5 * time.Second}, nil
}
const defaultReplicas = 1
currentDesiredReplicas := getIntOrDefault(newestSet.Spec.Replicas, defaultReplicas)
@@ -256,14 +281,70 @@ func CloneAndAddLabel(labels map[string]string, labelKey, labelValue string) map
return newLabels
}
// Clones the given selector and returns a new selector with the given key and value added.
// Returns the given selector, if labelKey is empty.
//
// Proudly copied from k8s.io/kubernetes/pkg/util/labels.CloneSelectorAndAddLabel
func CloneSelectorAndAddLabel(selector *metav1.LabelSelector, labelKey, labelValue string) *metav1.LabelSelector {
if labelKey == "" {
// Don't need to add a label.
return selector
}
// Clone.
newSelector := new(metav1.LabelSelector)
newSelector.MatchLabels = make(map[string]string)
if selector.MatchLabels != nil {
for key, val := range selector.MatchLabels {
newSelector.MatchLabels[key] = val
}
}
newSelector.MatchLabels[labelKey] = labelValue
if selector.MatchExpressions != nil {
newMExps := make([]metav1.LabelSelectorRequirement, len(selector.MatchExpressions))
for i, me := range selector.MatchExpressions {
newMExps[i].Key = me.Key
newMExps[i].Operator = me.Operator
if me.Values != nil {
newMExps[i].Values = make([]string, len(me.Values))
copy(newMExps[i].Values, me.Values)
} else {
newMExps[i].Values = nil
}
}
newSelector.MatchExpressions = newMExps
} else {
newSelector.MatchExpressions = nil
}
return newSelector
}
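A quick illustration of what CloneSelectorAndAddLabel produces; the hash value here is made up:

package controllers

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleSelector is illustrative only. The helper returns a deep copy
// with the extra label stamped on; the input selector is not mutated,
// which is why the reconciler can safely derive per-hash selectors.
func exampleSelector() *metav1.LabelSelector {
    sel := &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}}
    return CloneSelectorAndAddLabel(sel, LabelKeyRunnerTemplateHash, "6b9f4c8d")
}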
func (r *RunnerDeploymentReconciler) newRunnerReplicaSet(rd v1alpha1.RunnerDeployment) (*v1alpha1.RunnerReplicaSet, error) {
return newRunnerReplicaSet(&rd, r.CommonRunnerLabels, r.Scheme)
}
func newRunnerReplicaSet(rd *v1alpha1.RunnerDeployment, commonRunnerLabels []string, scheme *runtime.Scheme) (*v1alpha1.RunnerReplicaSet, error) {
newRSTemplate := *rd.Spec.Template.DeepCopy()
templateHash := ComputeHash(&newRSTemplate)
// Add template hash label to selector.
labels := CloneAndAddLabel(rd.Spec.Template.Labels, LabelKeyRunnerTemplateHash, templateHash)
for _, l := range commonRunnerLabels {
newRSTemplate.Spec.Labels = append(newRSTemplate.Spec.Labels, l)
}
newRSTemplate.Labels = labels
selector := rd.Spec.Selector
if selector == nil {
selector = &metav1.LabelSelector{MatchLabels: labels}
}
newRSSelector := CloneSelectorAndAddLabel(selector, LabelKeyRunnerTemplateHash, templateHash)
rs := v1alpha1.RunnerReplicaSet{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
@@ -273,11 +354,12 @@ func (r *RunnerDeploymentReconciler) newRunnerReplicaSet(rd v1alpha1.RunnerDeplo
},
Spec: v1alpha1.RunnerReplicaSetSpec{
Replicas: rd.Spec.Replicas,
Selector: newRSSelector,
Template: newRSTemplate,
},
}
if err := ctrl.SetControllerReference(rd, &rs, scheme); err != nil {
return &rs, err
}
@@ -285,7 +367,12 @@ func (r *RunnerDeploymentReconciler) newRunnerReplicaSet(rd v1alpha1.RunnerDeplo
}
func (r *RunnerDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
name := "runnerdeployment-controller"
if r.Name != "" {
name = r.Name
}
r.Recorder = mgr.GetEventRecorderFor(name)
if err := mgr.GetFieldIndexer().IndexField(&v1alpha1.RunnerReplicaSet{}, runnerSetOwnerKey, func(rawObj runtime.Object) []string {
runnerSet := rawObj.(*v1alpha1.RunnerReplicaSet)
@@ -306,5 +393,6 @@ func (r *RunnerDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.RunnerDeployment{}).
Owns(&v1alpha1.RunnerReplicaSet{}).
Named(name).
Complete(r)
}


@@ -2,8 +2,13 @@ package controllers
import (
"context"
"fmt"
"testing"
"time"

"github.com/google/go-cmp/cmp"
"k8s.io/apimachinery/pkg/runtime"

corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/kubernetes/scheme"
@@ -18,6 +23,103 @@ import (
actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1" actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1"
) )
func TestNewRunnerReplicaSet(t *testing.T) {
scheme := runtime.NewScheme()
if err := actionsv1alpha1.AddToScheme(scheme); err != nil {
t.Fatalf("%v", err)
}
r := &RunnerDeploymentReconciler{
CommonRunnerLabels: []string{"dev"},
Scheme: scheme,
}
rd := actionsv1alpha1.RunnerDeployment{
ObjectMeta: metav1.ObjectMeta{
Name: "example",
},
Spec: actionsv1alpha1.RunnerDeploymentSpec{
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
Template: actionsv1alpha1.RunnerTemplate{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"foo": "bar",
},
},
Spec: actionsv1alpha1.RunnerSpec{
Labels: []string{"project1"},
},
},
},
}
rs, err := r.newRunnerReplicaSet(rd)
if err != nil {
t.Fatalf("%v", err)
}
if val, ok := rs.Labels["foo"]; ok {
if val != "bar" {
t.Errorf("foo label does not have bar but %v", val)
}
} else {
t.Errorf("foo label does not exist")
}
hash1, ok := rs.Labels[LabelKeyRunnerTemplateHash]
if !ok {
t.Errorf("missing runner-template-hash label")
}
runnerLabel := []string{"project1", "dev"}
if d := cmp.Diff(runnerLabel, rs.Spec.Template.Spec.Labels); d != "" {
t.Errorf("%s", d)
}
rd2 := rd.DeepCopy()
rd2.Spec.Template.Spec.Labels = []string{"project2"}
rs2, err := r.newRunnerReplicaSet(*rd2)
if err != nil {
t.Fatalf("%v", err)
}
hash2, ok := rs2.Labels[LabelKeyRunnerTemplateHash]
if !ok {
t.Errorf("missing runner-template-hash label")
}
if hash1 == hash2 {
t.Errorf(
"runner replica sets from runner deployments with varying labels must have different template hash, but got %s and %s",
hash1, hash2,
)
}
rd3 := rd.DeepCopy()
rd3.Spec.Template.Labels["foo"] = "baz"
rs3, err := r.newRunnerReplicaSet(*rd3)
if err != nil {
t.Fatalf("%v", err)
}
hash3, ok := rs3.Labels[LabelKeyRunnerTemplateHash]
if !ok {
t.Errorf("missing runner-template-hash label")
}
if hash1 == hash3 {
t.Errorf(
"runner replica sets from runner deployments with varying meta labels must have different template hash, but got %s and %s",
hash1, hash3,
)
}
}
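This test depends on LabelKeyRunnerTemplateHash changing whenever the template changes, including its metadata labels. One common way to derive such a hash, shown purely as an illustrative sketch rather than this controller's exact implementation, mirrors how Kubernetes computes pod-template-hash:

// Illustrative only: hash a runner template the way Kubernetes derives
// pod-template-hash, by feeding a deterministic dump of the value into FNV-1a.
import (
	"fmt"
	"hash/fnv"

	"github.com/davecgh/go-spew/spew"
)

func templateHash(template interface{}) string {
	h := fnv.New32a()
	// spew with sorted keys yields a stable, canonical dump of the struct.
	printer := spew.ConfigState{Indent: " ", SortKeys: true, DisableMethods: true, SpewKeys: true}
	printer.Fprintf(h, "%#v", template)
	return fmt.Sprint(h.Sum32())
}

Any change to the template, labels included, perturbs the dump and therefore the hash, which is exactly what the hash1/hash2/hash3 assertions above verify.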
// SetupDeploymentTest will set up a testing environment.
// This includes:
// * creating a Namespace to be used during the test
@@ -45,6 +147,7 @@ func SetupDeploymentTest(ctx context.Context) *corev1.Namespace {
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
Name: "runnerdeployment-" + ns.Name,
}
err = controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
@@ -74,7 +177,113 @@ var _ = Context("Inside of a new namespace", func() {
Describe("when no existing resources exist", func() {
It("should create a new RunnerReplicaSet resource from the specified template, add another RunnerReplicaSet on template modification, and eventually remove old runnerreplicasets", func() {
name := "example-runnerdeploy-1"
{
rs := &actionsv1alpha1.RunnerDeployment{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.RunnerDeploymentSpec{
Replicas: intPtr(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
Template: actionsv1alpha1.RunnerTemplate{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"foo": "bar",
},
},
Spec: actionsv1alpha1.RunnerSpec{
Repository: "foo/bar",
Image: "bar",
Env: []corev1.EnvVar{
{Name: "FOO", Value: "FOOVALUE"},
},
},
},
},
}
err := k8sClient.Create(ctx, rs)
Expect(err).NotTo(HaveOccurred(), "failed to create test RunnerReplicaSet resource")
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
Eventually(
func() (int, error) {
selector, err := metav1.LabelSelectorAsSelector(rs.Spec.Selector)
if err != nil {
return 0, err
}
err = k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
client.MatchingLabelsSelector{Selector: selector},
)
if err != nil {
return 0, err
}
if len(runnerSets.Items) != 1 {
return 0, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
}
return *runnerSets.Items[0].Spec.Replicas, nil
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
}
{
// We wrap the update in the Eventually block to avoid the below error that occurs due to concurrent modification
// made by the controller to update .Status.AvailableReplicas and .Status.ReadyReplicas
// Operation cannot be fulfilled on runnersets.actions.summerwind.dev "example-runnerset": the object has been modified; please apply your changes to the latest version and try again
var rd actionsv1alpha1.RunnerDeployment
Eventually(func() error {
err := k8sClient.Get(ctx, types.NamespacedName{Namespace: ns.Name, Name: name}, &rd)
if err != nil {
return fmt.Errorf("failed to get test RunnerReplicaSet resource: %v\n", err)
}
rd.Spec.Replicas = intPtr(2)
return k8sClient.Update(ctx, &rd)
},
time.Second*1, time.Millisecond*500).Should(BeNil())
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
Eventually(
func() (int, error) {
selector, err := metav1.LabelSelectorAsSelector(rd.Spec.Selector)
if err != nil {
return 0, err
}
err = k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
client.MatchingLabelsSelector{Selector: selector},
)
if err != nil {
return 0, err
}
if len(runnerSets.Items) != 1 {
return 0, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
}
return *runnerSets.Items[0].Spec.Replicas, nil
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(2))
}
})
It("should create a new RunnerReplicaSet resource from the specified template without labels and selector, add a another RunnerReplicaSet on template modification, and eventually removes old runnerreplicasets", func() {
name := "example-runnerdeploy-2"
{
rs := &actionsv1alpha1.RunnerDeployment{
@@ -103,29 +312,25 @@ var _ = Context("Inside of a new namespace", func() {
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
Eventually(
func() (int, error) {
selector, err := metav1.LabelSelectorAsSelector(rs.Spec.Selector)
if err != nil {
return 0, err
}
err = k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
client.MatchingLabelsSelector{Selector: selector},
)
if err != nil {
return 0, err
}
if len(runnerSets.Items) != 1 {
return 0, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
}
return *runnerSets.Items[0].Spec.Replicas, nil
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
}
@@ -134,13 +339,12 @@ var _ = Context("Inside of a new namespace", func() {
// We wrap the update in the Eventually block to avoid the below error that occurs due to concurrent modification
// made by the controller to update .Status.AvailableReplicas and .Status.ReadyReplicas
// Operation cannot be fulfilled on runnersets.actions.summerwind.dev "example-runnerset": the object has been modified; please apply your changes to the latest version and try again
var rd actionsv1alpha1.RunnerDeployment
Eventually(func() error {
err := k8sClient.Get(ctx, types.NamespacedName{Namespace: ns.Name, Name: name}, &rd)
if err != nil {
return fmt.Errorf("failed to get test RunnerDeployment resource: %v", err)
}
rd.Spec.Replicas = intPtr(2)
return k8sClient.Update(ctx, &rd)
@@ -150,27 +354,126 @@ var _ = Context("Inside of a new namespace", func() {
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
Eventually(
func() (int, error) {
selector, err := metav1.LabelSelectorAsSelector(rd.Spec.Selector)
if err != nil {
return 0, err
}
err = k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
client.MatchingLabelsSelector{Selector: selector},
)
if err != nil {
return 0, err
}
if len(runnerSets.Items) != 1 {
return 0, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
}
return *runnerSets.Items[0].Spec.Replicas, nil
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(2))
}
})
It("should adopt RunnerReplicaSet created before 0.18.0 to have Spec.Selector", func() {
name := "example-runnerdeploy-2"
{
rd := &actionsv1alpha1.RunnerDeployment{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.RunnerDeploymentSpec{
Replicas: intPtr(1),
Template: actionsv1alpha1.RunnerTemplate{
Spec: actionsv1alpha1.RunnerSpec{
Repository: "foo/bar",
Image: "bar",
Env: []corev1.EnvVar{
{Name: "FOO", Value: "FOOVALUE"},
},
},
},
},
}
createRDErr := k8sClient.Create(ctx, rd)
Expect(createRDErr).NotTo(HaveOccurred(), "failed to create test RunnerReplicaSet resource")
Eventually(
func() (int, error) {
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
err := k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
)
if err != nil {
return 0, err
}
return len(runnerSets.Items), nil
},
time.Second*1, time.Millisecond*500).Should(BeEquivalentTo(1))
var rs17 *actionsv1alpha1.RunnerReplicaSet
Consistently(
func() (*metav1.LabelSelector, error) {
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
err := k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
)
if err != nil {
return nil, err
}
if len(runnerSets.Items) != 1 {
return nil, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
}
rs17 = &runnerSets.Items[0]
return runnerSets.Items[0].Spec.Selector, nil
},
time.Second*1, time.Millisecond*500).Should(Not(BeNil()))
// We simulate an old, pre-0.18.0 RunnerReplicaSet by updating the existing one.
// Using controllerutil.Set{Owner,Controller}Reference with k8sClient.Create(rs17) does not work here,
// because the RD's UID is generated on the K8s API server only at k8sClient.Create(rd) time.
rs17.Spec.Selector = nil
updateRSErr := k8sClient.Update(ctx, rs17)
Expect(updateRSErr).NotTo(HaveOccurred())
Eventually(
func() (*metav1.LabelSelector, error) {
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
err := k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
)
if err != nil {
return nil, err
}
if len(runnerSets.Items) != 1 {
return nil, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
}
return runnerSets.Items[0].Spec.Selector, nil
},
time.Second*1, time.Millisecond*500).Should(Not(BeNil()))
}
})
})
})


@@ -20,9 +20,10 @@ import (
"context" "context"
"errors" "errors"
"fmt" "fmt"
gogithub "github.com/google/go-github/v33/github"
"time" "time"
gogithub "github.com/google/go-github/v33/github"
"github.com/go-logr/logr" "github.com/go-logr/logr"
kerrors "k8s.io/apimachinery/pkg/api/errors" kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/runtime"
@@ -44,6 +45,7 @@ type RunnerReplicaSetReconciler struct {
Recorder record.EventRecorder
Scheme *runtime.Scheme
GitHubClient *github.Client
Name string
}
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerreplicasets,verbs=get;list;watch;create;update;patch;delete
@@ -66,8 +68,18 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
return ctrl.Result{}, nil
}
selector, err := metav1.LabelSelectorAsSelector(rs.Spec.Selector)
if err != nil {
return ctrl.Result{}, err
}
// Get the Runners managed by the target RunnerReplicaSet
var allRunners v1alpha1.RunnerList
if err := r.List(
ctx,
&allRunners,
client.InNamespace(req.Namespace),
client.MatchingLabelsSelector{Selector: selector},
); err != nil {
if !kerrors.IsNotFound(err) {
return ctrl.Result{}, err
}
@@ -75,9 +87,14 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
var myRunners []v1alpha1.Runner
var (
available int
ready int
)
for _, r := range allRunners.Items {
// This guard is required so that a RunnerReplicaSet created by controller v0.17.0 or earlier
// does not treat all the runners in the namespace as its children.
if metav1.IsControlledBy(&r, &rs) {
myRunners = append(myRunners, r)
@@ -104,15 +121,19 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
// get runners that are currently not busy
var notBusy []v1alpha1.Runner
for _, runner := range allRunners.Items {
busy, err := r.GitHubClient.IsRunnerBusy(ctx, runner.Spec.Enterprise, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
if err != nil {
notRegistered := false
offline := false
var notFoundException *github.RunnerNotFound
var offlineException *github.RunnerOffline
if errors.As(err, &notFoundException) {
log.V(1).Info("Failed to check if runner is busy. Either this runner has never been successfully registered to GitHub or it still needs more time.", "runnerName", runner.Name)
notRegistered = true
} else if errors.As(err, &offlineException) {
offline = true
} else {
var e *gogithub.RateLimitError
if errors.As(err, &e) {
@@ -139,7 +160,7 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
if notRegistered && registrationDidTimeout {
log.Info(
"Runner failed to register itself to GitHub in a timely manner. "+
"Marking the runner for scale down. "+
"CAUTION: If you see this a lot, you should investigate the root cause. "+
"See https://github.com/summerwind/actions-runner-controller/issues/288",
"runnerCreationTimestamp", runner.CreationTimestamp,
@@ -149,6 +170,11 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
notBusy = append(notBusy, runner)
}
// offline runners should always be a great target for scale down
if offline {
notBusy = append(notBusy, runner)
}
} else if !busy {
notBusy = append(notBusy, runner)
}
@@ -165,7 +191,7 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
return ctrl.Result{}, err
}
r.Recorder.Event(&rs, corev1.EventTypeNormal, "RunnerDeleted", fmt.Sprintf("Deleted runner '%s'", notBusy[i].Name))
log.Info("Deleted runner", "runnerreplicaset", rs.ObjectMeta.Name)
}
} else if desired > available {
@@ -193,8 +219,10 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
updated.Status.ReadyReplicas = ready
if err := r.Status().Update(ctx, updated); err != nil {
log.Info("Failed to update status. Retrying immediately", "error", err.Error())
return ctrl.Result{
Requeue: true,
}, nil
}
}
@@ -221,10 +249,16 @@ func (r *RunnerReplicaSetReconciler) newRunner(rs v1alpha1.RunnerReplicaSet) (v1
}
func (r *RunnerReplicaSetReconciler) SetupWithManager(mgr ctrl.Manager) error {
name := "runnerreplicaset-controller"
if r.Name != "" {
name = r.Name
}
r.Recorder = mgr.GetEventRecorderFor(name)
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.RunnerReplicaSet{}).
Owns(&v1alpha1.Runner{}).
Named(name).
Complete(r)
}


@@ -60,6 +60,7 @@ func SetupTest(ctx context.Context) *corev1.Namespace {
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
GitHubClient: ghClient,
Name: "runnerreplicaset-" + ns.Name,
}
err = controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
@@ -114,7 +115,17 @@ var _ = Context("Inside of a new namespace", func() {
},
Spec: actionsv1alpha1.RunnerReplicaSetSpec{
Replicas: intPtr(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
Template: actionsv1alpha1.RunnerTemplate{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"foo": "bar",
},
},
Spec: actionsv1alpha1.RunnerSpec{
Repository: "foo/bar",
Image: "bar",
@@ -134,9 +145,26 @@ var _ = Context("Inside of a new namespace", func() {
Eventually(
func() int {
selector, err := metav1.LabelSelectorAsSelector(
&metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
)
if err != nil {
logf.Log.Error(err, "failed to create labelselector")
return -1
}
err = k8sClient.List(
ctx,
&runners,
client.InNamespace(ns.Name),
client.MatchingLabelsSelector{Selector: selector},
)
if err != nil {
logf.Log.Error(err, "list runners")
return -1
}
for i, runner := range runners.Items {
@@ -175,7 +203,23 @@ var _ = Context("Inside of a new namespace", func() {
Eventually(
func() int {
selector, err := metav1.LabelSelectorAsSelector(
&metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
)
if err != nil {
logf.Log.Error(err, "failed to create labelselector")
return -1
}
err = k8sClient.List(
ctx,
&runners,
client.InNamespace(ns.Name),
client.MatchingLabelsSelector{Selector: selector},
)
if err != nil {
logf.Log.Error(err, "list runners")
}
@@ -219,6 +263,7 @@ var _ = Context("Inside of a new namespace", func() {
err := k8sClient.List(ctx, &runners, client.InNamespace(ns.Name))
if err != nil {
logf.Log.Error(err, "list runners")
return -1
}
for i, runner := range runners.Items {


@@ -0,0 +1,373 @@
{
"action": "created",
"check_run": {
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"head_sha": "1234567890123456789012345678901234567890",
"external_id": "92058b04-f16a-5035-546c-cae3ad5e2f5f",
"url": "https://api.github.com/repos/MYORG/MYREPO/check-runs/123467890",
"html_url": "https://github.com/MYORG/MYREPO/runs/123467890",
"details_url": "https://github.com/MYORG/MYREPO/runs/123467890",
"status": "queued",
"conclusion": null,
"started_at": "2021-02-18T06:16:31Z",
"completed_at": null,
"output": {
"title": null,
"summary": null,
"text": null,
"annotations_count": 0,
"annotations_url": "https://api.github.com/repos/MYORG/MYREPO/check-runs/123467890/annotations"
},
"name": "validate",
"check_suite": {
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"head_branch": "MYNAME/actions-runner-controller-webhook",
"head_sha": "1234567890123456789012345678901234567890",
"status": "queued",
"conclusion": null,
"url": "https://api.github.com/repos/MYORG/MYREPO/check-suites/1234567890",
"before": "1234567890123456789012345678901234567890",
"after": "1234567890123456789012345678901234567890",
"pull_requests": [
{
"url": "https://api.github.com/repos/MYORG/MYREPO/pulls/2033",
"id": 1234567890,
"number": 1234567890,
"head": {
"ref": "feature",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
},
"base": {
"ref": "master",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
}
}
],
"app": {
"id": 1234567890,
"slug": "github-actions",
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"owner": {
"login": "github",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/123467890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github",
"html_url": "https://github.com/github",
"followers_url": "https://api.github.com/users/github/followers",
"following_url": "https://api.github.com/users/github/following{/other_user}",
"gists_url": "https://api.github.com/users/github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github/subscriptions",
"organizations_url": "https://api.github.com/users/github/orgs",
"repos_url": "https://api.github.com/users/github/repos",
"events_url": "https://api.github.com/users/github/events{/privacy}",
"received_events_url": "https://api.github.com/users/github/received_events",
"type": "Organization",
"site_admin": false
},
"name": "GitHub Actions",
"description": "Automate your workflow from idea to production",
"external_url": "https://help.github.com/en/actions",
"html_url": "https://github.com/apps/github-actions",
"created_at": "2018-07-30T09:30:17Z",
"updated_at": "2019-12-10T19:04:12Z",
"permissions": {
"actions": "write",
"checks": "write",
"contents": "write",
"deployments": "write",
"issues": "write",
"metadata": "read",
"organization_packages": "write",
"packages": "write",
"pages": "write",
"pull_requests": "write",
"repository_hooks": "write",
"repository_projects": "write",
"security_events": "write",
"statuses": "write",
"vulnerability_alerts": "read"
},
"events": [
"check_run",
"check_suite",
"create",
"delete",
"deployment",
"deployment_status",
"fork",
"gollum",
"issues",
"issue_comment",
"label",
"milestone",
"page_build",
"project",
"project_card",
"project_column",
"public",
"pull_request",
"pull_request_review",
"pull_request_review_comment",
"push",
"registry_package",
"release",
"repository",
"repository_dispatch",
"status",
"watch",
"workflow_dispatch",
"workflow_run"
]
},
"created_at": "2021-02-18T06:15:32Z",
"updated_at": "2021-02-18T06:16:31Z"
},
"app": {
"id": 1234567890,
"slug": "github-actions",
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"owner": {
"login": "github",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github",
"html_url": "https://github.com/github",
"followers_url": "https://api.github.com/users/github/followers",
"following_url": "https://api.github.com/users/github/following{/other_user}",
"gists_url": "https://api.github.com/users/github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github/subscriptions",
"organizations_url": "https://api.github.com/users/github/orgs",
"repos_url": "https://api.github.com/users/github/repos",
"events_url": "https://api.github.com/users/github/events{/privacy}",
"received_events_url": "https://api.github.com/users/github/received_events",
"type": "Organization",
"site_admin": false
},
"name": "GitHub Actions",
"description": "Automate your workflow from idea to production",
"external_url": "https://help.github.com/en/actions",
"html_url": "https://github.com/apps/github-actions",
"created_at": "2018-07-30T09:30:17Z",
"updated_at": "2019-12-10T19:04:12Z",
"permissions": {
"actions": "write",
"checks": "write",
"contents": "write",
"deployments": "write",
"issues": "write",
"metadata": "read",
"organization_packages": "write",
"packages": "write",
"pages": "write",
"pull_requests": "write",
"repository_hooks": "write",
"repository_projects": "write",
"security_events": "write",
"statuses": "write",
"vulnerability_alerts": "read"
},
"events": [
"check_run",
"check_suite",
"create",
"delete",
"deployment",
"deployment_status",
"fork",
"gollum",
"issues",
"issue_comment",
"label",
"milestone",
"page_build",
"project",
"project_card",
"project_column",
"public",
"pull_request",
"pull_request_review",
"pull_request_review_comment",
"push",
"registry_package",
"release",
"repository",
"repository_dispatch",
"status",
"watch",
"workflow_dispatch",
"workflow_run"
]
},
"pull_requests": [
{
"url": "https://api.github.com/repos/MYORG/MYREPO/pulls/1234567890",
"id": 1234567890,
"number": 1234567890,
"head": {
"ref": "feature",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
},
"base": {
"ref": "master",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
}
}
]
},
"repository": {
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"name": "MYREPO",
"full_name": "MYORG/MYREPO",
"private": true,
"owner": {
"login": "MYORG",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MYORG",
"html_url": "https://github.com/MYORG",
"followers_url": "https://api.github.com/users/MYORG/followers",
"following_url": "https://api.github.com/users/MYORG/following{/other_user}",
"gists_url": "https://api.github.com/users/MYORG/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MYORG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MYORG/subscriptions",
"organizations_url": "https://api.github.com/users/MYORG/orgs",
"repos_url": "https://api.github.com/users/MYORG/repos",
"events_url": "https://api.github.com/users/MYORG/events{/privacy}",
"received_events_url": "https://api.github.com/users/MYORG/received_events",
"type": "Organization",
"site_admin": false
},
"html_url": "https://github.com/MYORG/MYREPO",
"description": "MYREPO",
"fork": false,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"forks_url": "https://api.github.com/repos/MYORG/MYREPO/forks",
"keys_url": "https://api.github.com/repos/MYORG/MYREPO/keys{/key_id}",
"collaborators_url": "https://api.github.com/repos/MYORG/MYREPO/collaborators{/collaborator}",
"teams_url": "https://api.github.com/repos/MYORG/MYREPO/teams",
"hooks_url": "https://api.github.com/repos/MYORG/MYREPO/hooks",
"issue_events_url": "https://api.github.com/repos/MYORG/MYREPO/issues/events{/number}",
"events_url": "https://api.github.com/repos/MYORG/MYREPO/events",
"assignees_url": "https://api.github.com/repos/MYORG/MYREPO/assignees{/user}",
"branches_url": "https://api.github.com/repos/MYORG/MYREPO/branches{/branch}",
"tags_url": "https://api.github.com/repos/MYORG/MYREPO/tags",
"blobs_url": "https://api.github.com/repos/MYORG/MYREPO/git/blobs{/sha}",
"git_tags_url": "https://api.github.com/repos/MYORG/MYREPO/git/tags{/sha}",
"git_refs_url": "https://api.github.com/repos/MYORG/MYREPO/git/refs{/sha}",
"trees_url": "https://api.github.com/repos/MYORG/MYREPO/git/trees{/sha}",
"statuses_url": "https://api.github.com/repos/MYORG/MYREPO/statuses/{sha}",
"languages_url": "https://api.github.com/repos/MYORG/MYREPO/languages",
"stargazers_url": "https://api.github.com/repos/MYORG/MYREPO/stargazers",
"contributors_url": "https://api.github.com/repos/MYORG/MYREPO/contributors",
"subscribers_url": "https://api.github.com/repos/MYORG/MYREPO/subscribers",
"subscription_url": "https://api.github.com/repos/MYORG/MYREPO/subscription",
"commits_url": "https://api.github.com/repos/MYORG/MYREPO/commits{/sha}",
"git_commits_url": "https://api.github.com/repos/MYORG/MYREPO/git/commits{/sha}",
"comments_url": "https://api.github.com/repos/MYORG/MYREPO/comments{/number}",
"issue_comment_url": "https://api.github.com/repos/MYORG/MYREPO/issues/comments{/number}",
"contents_url": "https://api.github.com/repos/MYORG/MYREPO/contents/{+path}",
"compare_url": "https://api.github.com/repos/MYORG/MYREPO/compare/{base}...{head}",
"merges_url": "https://api.github.com/repos/MYORG/MYREPO/merges",
"archive_url": "https://api.github.com/repos/MYORG/MYREPO/{archive_format}{/ref}",
"downloads_url": "https://api.github.com/repos/MYORG/MYREPO/downloads",
"issues_url": "https://api.github.com/repos/MYORG/MYREPO/issues{/number}",
"pulls_url": "https://api.github.com/repos/MYORG/MYREPO/pulls{/number}",
"milestones_url": "https://api.github.com/repos/MYORG/MYREPO/milestones{/number}",
"notifications_url": "https://api.github.com/repos/MYORG/MYREPO/notifications{?since,all,participating}",
"labels_url": "https://api.github.com/repos/MYORG/MYREPO/labels{/name}",
"releases_url": "https://api.github.com/repos/MYORG/MYREPO/releases{/id}",
"deployments_url": "https://api.github.com/repos/MYORG/MYREPO/deployments",
"created_at": "2017-08-10T02:21:10Z",
"updated_at": "2021-02-18T04:40:55Z",
"pushed_at": "2021-02-18T06:15:30Z",
"git_url": "git://github.com/MYORG/MYREPO.git",
"ssh_url": "git@github.com:MYORG/MYREPO.git",
"clone_url": "https://github.com/MYORG/MYREPO.git",
"svn_url": "https://github.com/MYORG/MYREPO",
"homepage": null,
"size": 30782,
"stargazers_count": 2,
"watchers_count": 2,
"language": "Shell",
"has_issues": false,
"has_projects": true,
"has_downloads": true,
"has_wiki": false,
"has_pages": false,
"forks_count": 0,
"mirror_url": null,
"archived": false,
"disabled": false,
"open_issues_count": 6,
"license": null,
"forks": 0,
"open_issues": 6,
"watchers": 2,
"default_branch": "master"
},
"organization": {
"login": "MYORG",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"url": "https://api.github.com/orgs/MYORG",
"repos_url": "https://api.github.com/orgs/MYORG/repos",
"events_url": "https://api.github.com/orgs/MYORG/events",
"hooks_url": "https://api.github.com/orgs/MYORG/hooks",
"issues_url": "https://api.github.com/orgs/MYORG/issues",
"members_url": "https://api.github.com/orgs/MYORG/members{/member}",
"public_members_url": "https://api.github.com/orgs/MYORG/public_members{/member}",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"description": ""
},
"sender": {
"login": "MYNAME",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MYNAME",
"html_url": "https://github.com/MYNAME",
"followers_url": "https://api.github.com/users/MYNAME/followers",
"following_url": "https://api.github.com/users/MYNAME/following{/other_user}",
"gists_url": "https://api.github.com/users/MYNAME/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MYNAME/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MYNAME/subscriptions",
"organizations_url": "https://api.github.com/users/MYNAME/orgs",
"repos_url": "https://api.github.com/users/MYNAME/repos",
"events_url": "https://api.github.com/users/MYNAME/events{/privacy}",
"received_events_url": "https://api.github.com/users/MYNAME/received_events",
"type": "User",
"site_admin": false
}
}


@@ -0,0 +1,360 @@
{
"action": "completed",
"check_run": {
"id": 1949438388,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"head_sha": "1234567890123456789012345678901234567890",
"external_id": "ca395085-040a-526b-2ce8-bdc85f692774",
"url": "https://api.github.com/repos/MYORG/MYREPO/check-runs/123467890",
"html_url": "https://github.com/MYORG/MYREPO/runs/123467890",
"details_url": "https://github.com/MYORG/MYREPO/runs/123467890",
"status": "queued",
"conclusion": null,
"started_at": "2021-02-18T06:16:31Z",
"completed_at": null,
"output": {
"title": null,
"summary": null,
"text": null,
"annotations_count": 0,
"annotations_url": "https://api.github.com/repos/MYORG/MYREPO/check-runs/123467890/annotations"
},
"name": "build",
"name": "validate",
"check_suite": {
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"head_branch": "MYNAME/actions-runner-controller-webhook",
"head_sha": "1234567890123456789012345678901234567890",
"status": "queued",
"conclusion": null,
"url": "https://api.github.com/repos/MYORG/MYREPO/check-suites/1234567890",
"before": "1234567890123456789012345678901234567890",
"after": "1234567890123456789012345678901234567890",
"pull_requests": [
{
"url": "https://api.github.com/repos/MYORG/MYREPO/pulls/2033",
"id": 1234567890,
"number": 1234567890,
"head": {
"ref": "feature",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
},
"base": {
"ref": "master",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
}
}
],
"app": {
"id": 1234567890,
"slug": "github-actions",
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"owner": {
"login": "github",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/123467890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github",
"html_url": "https://github.com/github",
"followers_url": "https://api.github.com/users/github/followers",
"following_url": "https://api.github.com/users/github/following{/other_user}",
"gists_url": "https://api.github.com/users/github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github/subscriptions",
"organizations_url": "https://api.github.com/users/github/orgs",
"repos_url": "https://api.github.com/users/github/repos",
"events_url": "https://api.github.com/users/github/events{/privacy}",
"received_events_url": "https://api.github.com/users/github/received_events",
"type": "Organization",
"site_admin": false
},
"name": "GitHub Actions",
"description": "Automate your workflow from idea to production",
"external_url": "https://help.github.com/en/actions",
"html_url": "https://github.com/apps/github-actions",
"created_at": "2018-07-30T09:30:17Z",
"updated_at": "2019-12-10T19:04:12Z",
"permissions": {
"actions": "write",
"checks": "write",
"contents": "write",
"deployments": "write",
"issues": "write",
"metadata": "read",
"organization_packages": "write",
"packages": "write",
"pages": "write",
"pull_requests": "write",
"repository_hooks": "write",
"repository_projects": "write",
"security_events": "write",
"statuses": "write",
"vulnerability_alerts": "read"
},
"events": [
"check_run",
"check_suite",
"create",
"delete",
"deployment",
"deployment_status",
"fork",
"gollum",
"issues",
"issue_comment",
"label",
"milestone",
"page_build",
"project",
"project_card",
"project_column",
"public",
"pull_request",
"pull_request_review",
"pull_request_review_comment",
"push",
"registry_package",
"release",
"repository",
"repository_dispatch",
"status",
"watch",
"workflow_dispatch",
"workflow_run"
]
},
"created_at": "2021-02-18T06:15:32Z",
"updated_at": "2021-02-18T06:16:31Z"
},
"app": {
"id": 1234567890,
"slug": "github-actions",
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"owner": {
"login": "github",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github",
"html_url": "https://github.com/github",
"followers_url": "https://api.github.com/users/github/followers",
"following_url": "https://api.github.com/users/github/following{/other_user}",
"gists_url": "https://api.github.com/users/github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github/subscriptions",
"organizations_url": "https://api.github.com/users/github/orgs",
"repos_url": "https://api.github.com/users/github/repos",
"events_url": "https://api.github.com/users/github/events{/privacy}",
"received_events_url": "https://api.github.com/users/github/received_events",
"type": "Organization",
"site_admin": false
},
"name": "GitHub Actions",
"description": "Automate your workflow from idea to production",
"external_url": "https://help.github.com/en/actions",
"html_url": "https://github.com/apps/github-actions",
"created_at": "2018-07-30T09:30:17Z",
"updated_at": "2019-12-10T19:04:12Z",
"permissions": {
"actions": "write",
"checks": "write",
"contents": "write",
"deployments": "write",
"issues": "write",
"metadata": "read",
"organization_packages": "write",
"packages": "write",
"pages": "write",
"pull_requests": "write",
"repository_hooks": "write",
"repository_projects": "write",
"security_events": "write",
"statuses": "write",
"vulnerability_alerts": "read"
},
"events": [
"check_run",
"check_suite",
"create",
"delete",
"deployment",
"deployment_status",
"fork",
"gollum",
"issues",
"issue_comment",
"label",
"milestone",
"page_build",
"project",
"project_card",
"project_column",
"public",
"pull_request",
"pull_request_review",
"pull_request_review_comment",
"push",
"registry_package",
"release",
"repository",
"repository_dispatch",
"status",
"watch",
"workflow_dispatch",
"workflow_run"
]
},
"pull_requests": [
{
"url": "https://api.github.com/repos/MYORG/MYREPO/pulls/1234567890",
"id": 1234567890,
"number": 1234567890,
"head": {
"ref": "feature",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
},
"base": {
"ref": "master",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
}
}
]
},
"repository": {
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"name": "MYREPO",
"full_name": "MYORG/MYREPO",
"private": true,
"owner": {
"login": "MYUSER",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MYUSER",
"html_url": "https://github.com/MYUSER",
"followers_url": "https://api.github.com/users/MYUSER/followers",
"following_url": "https://api.github.com/users/MYUSER/following{/other_user}",
"gists_url": "https://api.github.com/users/MYUSER/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MYUSER/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MYUSER/subscriptions",
"organizations_url": "https://api.github.com/users/MYUSER/orgs",
"repos_url": "https://api.github.com/users/MYUSER/repos",
"events_url": "https://api.github.com/users/MYUSER/events{/privacy}",
"received_events_url": "https://api.github.com/users/MYUSER/received_events",
"type": "User",
"site_admin": false
},
"html_url": "https://github.com/MYUSER/MYREPO",
"description": null,
"fork": false,
"url": "https://api.github.com/repos/MYUSER/MYREPO",
"forks_url": "https://api.github.com/repos/MYUSER/MYREPO/forks",
"keys_url": "https://api.github.com/repos/MYUSER/MYREPO/keys{/key_id}",
"collaborators_url": "https://api.github.com/repos/MYUSER/MYREPO/collaborators{/collaborator}",
"teams_url": "https://api.github.com/repos/MYUSER/MYREPO/teams",
"hooks_url": "https://api.github.com/repos/MYUSER/MYREPO/hooks",
"issue_events_url": "https://api.github.com/repos/MYUSER/MYREPO/issues/events{/number}",
"events_url": "https://api.github.com/repos/MYUSER/MYREPO/events",
"assignees_url": "https://api.github.com/repos/MYUSER/MYREPO/assignees{/user}",
"branches_url": "https://api.github.com/repos/MYUSER/MYREPO/branches{/branch}",
"tags_url": "https://api.github.com/repos/MYUSER/MYREPO/tags",
"blobs_url": "https://api.github.com/repos/MYUSER/MYREPO/git/blobs{/sha}",
"git_tags_url": "https://api.github.com/repos/MYUSER/MYREPO/git/tags{/sha}",
"git_refs_url": "https://api.github.com/repos/MYUSER/MYREPO/git/refs{/sha}",
"trees_url": "https://api.github.com/repos/MYUSER/MYREPO/git/trees{/sha}",
"statuses_url": "https://api.github.com/repos/MYUSER/MYREPO/statuses/{sha}",
"languages_url": "https://api.github.com/repos/MYUSER/MYREPO/languages",
"stargazers_url": "https://api.github.com/repos/MYUSER/MYREPO/stargazers",
"contributors_url": "https://api.github.com/repos/MYUSER/MYREPO/contributors",
"subscribers_url": "https://api.github.com/repos/MYUSER/MYREPO/subscribers",
"subscription_url": "https://api.github.com/repos/MYUSER/MYREPO/subscription",
"commits_url": "https://api.github.com/repos/MYUSER/MYREPO/commits{/sha}",
"git_commits_url": "https://api.github.com/repos/MYUSER/MYREPO/git/commits{/sha}",
"comments_url": "https://api.github.com/repos/MYUSER/MYREPO/comments{/number}",
"issue_comment_url": "https://api.github.com/repos/MYUSER/MYREPO/issues/comments{/number}",
"contents_url": "https://api.github.com/repos/MYUSER/MYREPO/contents/{+path}",
"compare_url": "https://api.github.com/repos/MYUSER/MYREPO/compare/{base}...{head}",
"merges_url": "https://api.github.com/repos/MYUSER/MYREPO/merges",
"archive_url": "https://api.github.com/repos/MYUSER/MYREPO/{archive_format}{/ref}",
"downloads_url": "https://api.github.com/repos/MYUSER/MYREPO/downloads",
"issues_url": "https://api.github.com/repos/MYUSER/MYREPO/issues{/number}",
"pulls_url": "https://api.github.com/repos/MYUSER/MYREPO/pulls{/number}",
"milestones_url": "https://api.github.com/repos/MYUSER/MYREPO/milestones{/number}",
"notifications_url": "https://api.github.com/repos/MYUSER/MYREPO/notifications{?since,all,participating}",
"labels_url": "https://api.github.com/repos/MYUSER/MYREPO/labels{/name}",
"releases_url": "https://api.github.com/repos/MYUSER/MYREPO/releases{/id}",
"deployments_url": "https://api.github.com/repos/MYUSER/MYREPO/deployments",
"created_at": "2021-02-18T06:16:31Z",
"updated_at": "2021-02-18T06:16:31Z",
"pushed_at": "2021-02-18T06:16:31Z",
"git_url": "git://github.com/MYUSER/MYREPO.git",
"ssh_url": "git@github.com:MYUSER/MYREPO.git",
"clone_url": "https://github.com/MYUSER/MYREPO.git",
"svn_url": "https://github.com/MYUSER/MYREPO",
"homepage": null,
"size": 4,
"stargazers_count": 0,
"watchers_count": 0,
"language": null,
"has_issues": true,
"has_projects": true,
"has_downloads": true,
"has_wiki": true,
"has_pages": false,
"forks_count": 0,
"mirror_url": null,
"archived": false,
"disabled": false,
"open_issues_count": 0,
"license": null,
"forks": 0,
"open_issues": 0,
"watchers": 0,
"default_branch": "main"
},
"sender": {
"login": "MYUSER",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MYUSER",
"html_url": "https://github.com/MYUSER",
"followers_url": "https://api.github.com/users/MYUSER/followers",
"following_url": "https://api.github.com/users/MYUSER/following{/other_user}",
"gists_url": "https://api.github.com/users/MYUSER/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MYUSER/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MYUSER/subscriptions",
"organizations_url": "https://api.github.com/users/MYUSER/orgs",
"repos_url": "https://api.github.com/users/MYUSER/repos",
"events_url": "https://api.github.com/users/MYUSER/events{/privacy}",
"received_events_url": "https://api.github.com/users/MYUSER/received_events",
"type": "User",
"site_admin": false
}
}


@@ -37,10 +37,21 @@ func (h *ListRunnersHandler) ServeHTTP(w http.ResponseWriter, req *http.Request)
type Handler struct {
Status int
Body string
Statuses map[string]string
}
func (h *Handler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
w.WriteHeader(h.Status)
status := req.URL.Query().Get("status")
if h.Statuses != nil {
if body, ok := h.Statuses[status]; ok {
fmt.Fprint(w, body)
return
}
}
fmt.Fprint(w, h.Body)
}
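A short sketch of how the new Statuses map behaves (handler values are placeholders): a request carrying ?status=queued receives the mapped body, while a request with no mapped status falls back to Body.

// Sketch: the fake handler picks a response body by the "status" query parameter.
h := &Handler{
	Status: http.StatusOK,
	Body:   "{}",
	Statuses: map[string]string{
		"queued":      `{"total_count": 1}`,
		"in_progress": `{"total_count": 2}`,
	},
}
srv := httptest.NewServer(h)
defer srv.Close()
resp, err := http.Get(srv.URL + "/?status=queued")
if err == nil {
	resp.Body.Close() // the body served was {"total_count": 1}
}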
@@ -102,6 +113,18 @@ func NewServer(opts ...Option) *httptest.Server {
Status: http.StatusBadRequest,
Body: "",
},
"/enterprises/test/actions/runners/registration-token": &Handler{
Status: http.StatusCreated,
Body: fmt.Sprintf("{\"token\": \"%s\", \"expires_at\": \"%s\"}", RegistrationToken, time.Now().Add(time.Hour*1).Format(time.RFC3339)),
},
"/enterprises/invalid/actions/runners/registration-token": &Handler{
Status: http.StatusOK,
Body: fmt.Sprintf("{\"token\": \"%s\", \"expires_at\": \"%s\"}", RegistrationToken, time.Now().Add(time.Hour*1).Format(time.RFC3339)),
},
"/enterprises/error/actions/runners/registration-token": &Handler{
Status: http.StatusBadRequest,
Body: "",
},
// For ListRunners
"/repos/test/valid/actions/runners": config.FixedResponses.ListRunners,
@@ -125,6 +148,18 @@ func NewServer(opts ...Option) *httptest.Server {
Status: http.StatusBadRequest,
Body: "",
},
"/enterprises/test/actions/runners": &Handler{
Status: http.StatusOK,
Body: RunnersListBody,
},
"/enterprises/invalid/actions/runners": &Handler{
Status: http.StatusNoContent,
Body: "",
},
"/enterprises/error/actions/runners": &Handler{
Status: http.StatusBadRequest,
Body: "",
},
// For RemoveRunner
"/repos/test/valid/actions/runners/1": &Handler{
@@ -151,6 +186,18 @@ func NewServer(opts ...Option) *httptest.Server {
Status: http.StatusBadRequest,
Body: "",
},
"/enterprises/test/actions/runners/1": &Handler{
Status: http.StatusNoContent,
Body: "",
},
"/enterprises/invalid/actions/runners/1": &Handler{
Status: http.StatusOK,
Body: "",
},
"/enterprises/error/actions/runners/1": &Handler{
Status: http.StatusBadRequest,
Body: "",
},
// For auto-scaling based on the number of queued(pending) workflow runs
"/repos/test/valid/actions/runs": config.FixedResponses.ListRepositoryWorkflowRuns,


@@ -10,11 +10,15 @@ type FixedResponses struct {
type Option func(*ServerConfig)
func WithListRepositoryWorkflowRunsResponse(status int, body, queued, inProgress string) Option {
return func(c *ServerConfig) {
c.FixedResponses.ListRepositoryWorkflowRuns = &Handler{
Status: status,
Body: body,
Statuses: map[string]string{
"queued": queued,
"in_progress": inProgress,
},
}
}
}
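Configuring the fake server with the widened option might then look as follows in a test; the response bodies here are placeholders:

// Sketch: give the fake server distinct bodies for the queued and
// in_progress workflow-run listings.
server := fake.NewServer(
	fake.WithListRepositoryWorkflowRunsResponse(
		http.StatusOK,
		`{"total_count": 0, "workflow_runs": []}`,
		`{"total_count": 1, "workflow_runs": [{"status": "queued"}]}`,
		`{"total_count": 1, "workflow_runs": [{"status": "in_progress"}]}`,
	),
)
defer server.Close()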


@@ -2,6 +2,7 @@ package fake
import (
"encoding/json"
"github.com/summerwind/actions-runner-controller/api/v1alpha1"
"net/http"
"net/http/httptest"
"strconv"
@@ -64,6 +65,20 @@ func (r *RunnersList) handleRemove() http.HandlerFunc {
}
}
func (r *RunnersList) Sync(runners []v1alpha1.Runner) {
r.runners = nil
for i, want := range runners {
r.Add(&github.Runner{
ID: github.Int64(int64(i)),
Name: github.String(want.Name),
OS: github.String("linux"),
Status: github.String("online"),
Busy: github.Bool(false),
})
}
}
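A usage sketch for the new Sync helper; the constructor name is an assumption, since only Add and Sync appear in this diff:

// Sketch: keep the fake GitHub runner registry in step with the Runner
// resources a test has created. NewRunnersList is assumed to exist.
runnersList := NewRunnersList()
runnersList.Sync(runners.Items) // the fake now reports one online, non-busy runner per resource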
func exists(runners []*github.Runner, runner *github.Runner) bool {
for _, r := range runners {
if *r.Name == *runner.Name {


@@ -11,6 +11,7 @@ import (
"github.com/bradleyfalzon/ghinstallation" "github.com/bradleyfalzon/ghinstallation"
"github.com/google/go-github/v33/github" "github.com/google/go-github/v33/github"
"github.com/summerwind/actions-runner-controller/github/metrics"
"golang.org/x/oauth2" "golang.org/x/oauth2"
) )
@@ -34,15 +35,9 @@ type Client struct {
// NewClient creates a GitHub client.
func (c *Config) NewClient() (*Client, error) {
var transport http.RoundTripper
if len(c.Token) > 0 {
transport = oauth2.NewClient(context.Background(), oauth2.StaticTokenSource(&oauth2.Token{AccessToken: c.Token})).Transport
} else {
tr, err := ghinstallation.NewKeyFromFile(http.DefaultTransport, c.AppID, c.AppInstallationID, c.AppPrivateKey)
if err != nil {
@@ -55,9 +50,13 @@ func (c *Config) NewClient() (*Client, error) {
}
tr.BaseURL = githubAPIURL
}
transport = tr
}
transport = metrics.Transport{Transport: transport}
httpClient := &http.Client{Transport: transport}
var client *github.Client
var githubBaseURL string
if len(c.EnterpriseURL) > 0 {
var err error
client, err = github.NewEnterpriseClient(c.EnterpriseURL, c.EnterpriseURL, httpClient)
@@ -67,6 +66,7 @@ func (c *Config) NewClient() (*Client, error) {
githubBaseURL = fmt.Sprintf("%s://%s%s", client.BaseURL.Scheme, client.BaseURL.Host, strings.TrimSuffix(client.BaseURL.Path, "api/v3/"))
} else {
client = github.NewClient(httpClient)
githubBaseURL = "https://github.com/"
}
return &Client{
@@ -82,10 +82,13 @@ func (c *Client) GetRegistrationToken(ctx context.Context, enterprise, org, repo
c.mu.Lock()
defer c.mu.Unlock()
key := getRegistrationKey(org, repo, enterprise)
rt, ok := c.regTokens[key]
// Give a grace window to runners that are just starting up and might otherwise miss the expiration date by a bit
runnerStartupTimeout := 3 * time.Minute
if ok && rt.GetExpiresAt().After(time.Now().Add(runnerStartupTimeout)) {
return rt, nil
}
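The grace window above means a cached registration token is reused only while it remains valid beyond the next few minutes. The rule as a standalone predicate (helper name hypothetical):

// isTokenFresh reports whether a cached registration token is still safe
// to hand out: it must outlive the runner-startup grace window so a pod
// that is still booting does not receive an almost-expired token.
func isTokenFresh(expiresAt, now time.Time) bool {
	const runnerStartupTimeout = 3 * time.Minute
	return expiresAt.After(now.Add(runnerStartupTimeout))
}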
@@ -208,14 +211,32 @@ func (c *Client) listRunners(ctx context.Context, enterprise, org, repo string,
}
func (c *Client) ListRepositoryWorkflowRuns(ctx context.Context, user string, repoName string) ([]*github.WorkflowRun, error) {
queued, err := c.listRepositoryWorkflowRuns(ctx, user, repoName, "queued")
if err != nil {
return nil, fmt.Errorf("listing queued workflow runs: %w", err)
}
inProgress, err := c.listRepositoryWorkflowRuns(ctx, user, repoName, "in_progress")
if err != nil {
return nil, fmt.Errorf("listing in_progress workflow runs: %w", err)
}
var workflowRuns []*github.WorkflowRun
workflowRuns = append(workflowRuns, queued...)
workflowRuns = append(workflowRuns, inProgress...)
return workflowRuns, nil
}
func (c *Client) listRepositoryWorkflowRuns(ctx context.Context, user string, repoName, status string) ([]*github.WorkflowRun, error) {
var workflowRuns []*github.WorkflowRun
opts := github.ListWorkflowRunsOptions{
ListOptions: github.ListOptions{
PerPage: 100,
},
Status: status,
}
for {
@@ -250,11 +271,8 @@ func getEnterpriseOrganisationAndRepo(enterprise, org, repo string) (string, str
return "", "", "", fmt.Errorf("enterprise, organization and repository are all empty") return "", "", "", fmt.Errorf("enterprise, organization and repository are all empty")
} }
func getRegistrationKey(org, repo, enterprise string) string {
return fmt.Sprintf("org=%s,repo=%s,enterprise=%s", org, repo, enterprise)
}
func splitOwnerAndRepo(repo string) (string, string, error) {
@@ -291,6 +309,14 @@ func (e *RunnerNotFound) Error() string {
return fmt.Sprintf("runner %q not found", e.runnerName) return fmt.Sprintf("runner %q not found", e.runnerName)
} }
type RunnerOffline struct {
runnerName string
}
func (e *RunnerOffline) Error() string {
return fmt.Sprintf("runner %q offline", e.runnerName)
}
func (r *Client) IsRunnerBusy(ctx context.Context, enterprise, org, repo, name string) (bool, error) {
runners, err := r.ListRunners(ctx, enterprise, org, repo)
if err != nil {
@@ -299,6 +325,9 @@ func (r *Client) IsRunnerBusy(ctx context.Context, enterprise, org, repo, name s
for _, runner := range runners {
if runner.GetName() == name {
if runner.GetStatus() == "offline" {
return false, &RunnerOffline{runnerName: name}
}
return runner.GetBusy(), nil
}
}


@@ -0,0 +1,63 @@
// Package metrics provides monitoring of GitHub-related metrics.
//
// This depends on the metrics exporter of kubebuilder.
// See https://book.kubebuilder.io/reference/metrics.html for details.
package metrics
import (
"net/http"
"strconv"
"github.com/prometheus/client_golang/prometheus"
"sigs.k8s.io/controller-runtime/pkg/metrics"
)
func init() {
metrics.Registry.MustRegister(metricRateLimit, metricRateLimitRemaining)
}
var (
// https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting
metricRateLimit = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: "github_rate_limit",
Help: "The maximum number of requests you're permitted to make per hour",
},
)
metricRateLimitRemaining = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: "github_rate_limit_remaining",
Help: "The number of requests remaining in the current rate limit window",
},
)
)
const (
// https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting
headerRateLimit = "X-RateLimit-Limit"
headerRateLimitRemaining = "X-RateLimit-Remaining"
)
// Transport wraps a transport with metrics monitoring
type Transport struct {
Transport http.RoundTripper
}
func (t Transport) RoundTrip(req *http.Request) (*http.Response, error) {
resp, err := t.Transport.RoundTrip(req)
if resp != nil {
parseResponse(resp)
}
return resp, err
}
func parseResponse(resp *http.Response) {
rateLimit, err := strconv.Atoi(resp.Header.Get(headerRateLimit))
if err == nil {
metricRateLimit.Set(float64(rateLimit))
}
rateLimitRemaining, err := strconv.Atoi(resp.Header.Get(headerRateLimitRemaining))
if err == nil {
metricRateLimitRemaining.Set(float64(rateLimitRemaining))
}
}
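Because the gauges are registered against controller-runtime's metrics.Registry, they ride along on the manager's existing /metrics endpoint; the only wiring needed is to put Transport into the HTTP client that talks to GitHub. A minimal sketch of that wiring (the package path and client construction are illustrative, since this view does not show the file's location or its call sites):

package main

import (
	"net/http"

	gogithub "github.com/google/go-github/v33/github"

	// assumed location of the new metrics package
	ghmetrics "github.com/summerwind/actions-runner-controller/github/metrics"
)

func main() {
	// Wrap the default transport; every GitHub API response now updates
	// the github_rate_limit and github_rate_limit_remaining gauges.
	hc := &http.Client{
		Transport: ghmetrics.Transport{Transport: http.DefaultTransport},
	}

	_ = gogithub.NewClient(hc)
}

With that in place, alerting on github_rate_limit_remaining approaching zero gives warning before API calls start failing with 403s.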

go.mod

@@ -6,11 +6,13 @@ require (
 	github.com/bradleyfalzon/ghinstallation v1.1.1
 	github.com/davecgh/go-spew v1.1.1
 	github.com/go-logr/logr v0.1.0
+	github.com/google/go-cmp v0.3.1
 	github.com/google/go-github/v33 v33.0.1-0.20210204004227-319dcffb518a
 	github.com/gorilla/mux v1.8.0
 	github.com/kelseyhightower/envconfig v1.4.0
 	github.com/onsi/ginkgo v1.8.0
 	github.com/onsi/gomega v1.5.0
+	github.com/prometheus/client_golang v0.9.2
 	github.com/stretchr/testify v1.4.0 // indirect
 	golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
 	k8s.io/api v0.0.0-20190918155943-95b840bb6a1f

main.go

@@ -20,6 +20,7 @@ import (
 	"flag"
 	"fmt"
 	"os"
+	"strings"
 	"time"

 	"github.com/kelseyhightower/envconfig"
@@ -62,6 +63,9 @@ func main() {
 		runnerImage string
 		dockerImage string
+
+		namespace string
+		commonRunnerLabels commaSeparatedStringSlice
 	)

 	var c github.Config
@@ -80,6 +84,8 @@ func main() {
 	flag.Int64Var(&c.AppInstallationID, "github-app-installation-id", c.AppInstallationID, "The installation ID of GitHub App.")
 	flag.StringVar(&c.AppPrivateKey, "github-app-private-key", c.AppPrivateKey, "The path of a private key file to authenticate as a GitHub App")
 	flag.DurationVar(&syncPeriod, "sync-period", 10*time.Minute, "Determines the minimum frequency at which K8s resources managed by this controller are reconciled. When you use autoscaling, set to a lower value like 10 minute, because this corresponds to the minimum time to react on demand change")
+	flag.Var(&commonRunnerLabels, "common-runner-labels", "Runner labels in the K1=V1,K2=V2,... format that are inherited all the runners created by the controller. See https://github.com/summerwind/actions-runner-controller/issues/321 for more information")
+	flag.StringVar(&namespace, "watch-namespace", "", "The namespace to watch for custom resources. Set to empty for letting it watch for all namespaces.")
 	flag.Parse()

 	logger := zap.New(func(o *zap.Options) {
@@ -100,6 +106,7 @@ func main() {
 		LeaderElection: enableLeaderElection,
 		Port:           9443,
 		SyncPeriod:     &syncPeriod,
+		Namespace:      namespace,
 	})
 	if err != nil {
 		setupLog.Error(err, "unable to start manager")
@@ -133,9 +140,10 @@ func main() {
 	}

 	runnerDeploymentReconciler := &controllers.RunnerDeploymentReconciler{
 		Client: mgr.GetClient(),
 		Log:    ctrl.Log.WithName("controllers").WithName("RunnerDeployment"),
 		Scheme: mgr.GetScheme(),
+		CommonRunnerLabels: commonRunnerLabels,
 	}

 	if err = runnerDeploymentReconciler.SetupWithManager(mgr); err != nil {
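CommonRunnerLabels hands the RunnerDeployment reconciler a set of labels to stamp onto every runner it creates. This diff does not show how the reconciler consumes the field; one plausible shape is a de-duplicating merge of the template's own labels with the common ones (a hypothetical helper, not the controller's code):

// mergeLabels sketches how per-template labels and the controller-wide
// common labels could be combined: order-preserving, duplicates dropped.
func mergeLabels(templateLabels, commonLabels []string) []string {
	seen := make(map[string]bool, len(templateLabels)+len(commonLabels))
	var out []string
	for _, l := range append(append([]string{}, templateLabels...), commonLabels...) {
		if l == "" || seen[l] {
			continue
		}
		seen[l] = true
		out = append(out, l)
	}
	return out
}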
@@ -176,3 +184,20 @@ func main() {
 		os.Exit(1)
 	}
 }
+
+type commaSeparatedStringSlice []string
+
+func (s *commaSeparatedStringSlice) String() string {
+	return fmt.Sprintf("%v", *s)
+}
+
+func (s *commaSeparatedStringSlice) Set(value string) error {
+	for _, v := range strings.Split(value, ",") {
+		if v == "" {
+			continue
+		}
+		*s = append(*s, v)
+	}
+	return nil
+}
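commaSeparatedStringSlice implements flag.Value, so flag.Var can bind it directly; Set skips empty segments rather than erroring, and repeating the flag appends. A small self-contained check of that behavior (the flag values are hypothetical):

package main

import (
	"flag"
	"fmt"
	"strings"
)

// commaSeparatedStringSlice as added to main.go above.
type commaSeparatedStringSlice []string

func (s *commaSeparatedStringSlice) String() string { return fmt.Sprintf("%v", *s) }

func (s *commaSeparatedStringSlice) Set(value string) error {
	for _, v := range strings.Split(value, ",") {
		if v == "" {
			continue
		}
		*s = append(*s, v)
	}
	return nil
}

func main() {
	var labels commaSeparatedStringSlice

	fs := flag.NewFlagSet("example", flag.ContinueOnError)
	fs.Var(&labels, "common-runner-labels", "comma-separated runner labels")

	// Empty segments are skipped, and repeating the flag appends.
	_ = fs.Parse([]string{
		"--common-runner-labels", "gpu,,linux",
		"--common-runner-labels", "x64",
	})

	fmt.Println(labels) // [gpu linux x64]
}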


@@ -37,6 +37,7 @@ RUN apt update -y \
 	wget \
 	zip \
 	zstd \
+	&& cd /usr/bin && ln -sf python3 python \
 	&& rm -rf /var/lib/apt/lists/*

 RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
@@ -79,7 +80,7 @@ RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
 RUN echo AGENT_TOOLSDIRECTORY=/opt/hostedtoolcache > .env \
 	&& mkdir /opt/hostedtoolcache \
-	&& chgrp runner /opt/hostedtoolcache \
+	&& chgrp docker /opt/hostedtoolcache \
 	&& chmod g+rwx /opt/hostedtoolcache

 COPY entrypoint.sh /


@@ -21,6 +21,7 @@ RUN apt update \
 	netcat \
 	openssh-client \
 	parallel \
+	python-is-python3 \
 	rsync \
 	shellcheck \
 	sudo \
@@ -88,7 +89,7 @@ RUN export ARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) \
 RUN echo AGENT_TOOLSDIRECTORY=/opt/hostedtoolcache > /runner.env \
 	&& mkdir /opt/hostedtoolcache \
-	&& chgrp runner /opt/hostedtoolcache \
+	&& chgrp docker /opt/hostedtoolcache \
 	&& chmod g+rwx /opt/hostedtoolcache

 COPY modprobe startup.sh /usr/local/bin/


@@ -42,7 +42,7 @@ if [ -z "${RUNNER_TOKEN}" ]; then
 	exit 1
 fi

-if [ -z "${RUNNER_REPO}" ] && [ -n "${RUNNER_ORG}" ] && [ -n "${RUNNER_GROUP}" ];then
+if [ -z "${RUNNER_REPO}" ] && [ -n "${RUNNER_GROUP}" ];then
 	RUNNER_GROUP_ARG="--runnergroup ${RUNNER_GROUP}"
 fi