Compare commits

...

160 Commits

Author SHA1 Message Date
Yusuke Kuoka
8d3a83b07a Add CheckRun.Names scale-up trigger configuration (#390)
This allows you to trigger autoscaling based on check_run names (i.e. Actions job names). If you want to use a different scale amount for a specific job, or to scale only on a specific job, try this.
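For reference, a minimal sketch of such a trigger on a HorizontalRunnerAutoscaler (the deployment name and the `build` job name are placeholders):

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-hra
spec:
  scaleTargetRef:
    name: example-runner-deployment
  scaleUpTriggers:
  - githubEvent:
      checkRun:
        types: ["created"]
        status: "queued"
        # Only scale when the check_run (i.e. job) name matches
        names: ["build"]
    amount: 1
    duration: "5m"
```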
2021-03-14 10:21:42 +09:00
callum-tait-pbx
a6270b44d5 docs: fix typos and add PR link (#379)
* docs: fix typos and add PR link

* docs: changes based on feedback

* docs: fixing numbers in list

* docs: grammar

* docs: better wording
2021-03-12 08:52:34 +09:00
Brandon Kimbrough
2273b198a1 Add ability to set the MTU size of the docker in docker container (#385)
* adding ability to set docker in docker MTU size

* safeguards to only set MTU env var if it is set
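A sketch of how this might be used on a runner spec, assuming the field is exposed as `dockerMTU` (repository name and value are placeholders):

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy
spec:
  template:
    spec:
      repository: example/myrepo
      # MTU for the docker-in-docker sidecar's network
      dockerMTU: 1400
```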
2021-03-12 08:44:49 +09:00
Yusuke Kuoka
3d62e73f8c Fix PercentageRunnersBusy scaling not working (#386)
PercentageRunnersBusy seems to have regressed since #355 because RunnerDeployment.Spec.Selector is empty by default and the HRA controller was using that empty selector to query runners, which somehow returned 0 runners. This fixes that by using the newly added automatic `runner-deployment-name` label as the default runner label and selector, which avoids querying with an empty selector.

Ref https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-795200205
2021-03-11 20:16:36 +09:00
Yusuke Kuoka
f5c639ae28 Make webhook-based autoscaler github event logs more operator-friendly (#384)
Adds fields like `pullRequest.base.ref` and `checkRun.status` that are useful for verifying the autoscaling behaviour without browsing GitHub.
Ref https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-794175312
2021-03-10 09:40:44 +09:00
Yusuke Kuoka
81016154c0 GITHUB_APP_PRIVATE_KEY can now be the content of the key (#383)
Resolves #382
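A sketch of what this enables on the controller deployment: the variable can now hold the key material itself rather than a file path (the IDs, secret name, and key below are placeholders):

```
env:
- name: GITHUB_APP_ID
  value: "12345"
- name: GITHUB_APP_INSTALLATION_ID
  value: "67890"
- name: GITHUB_APP_PRIVATE_KEY
  # The key content itself, not only a file path, can now be supplied
  valueFrom:
    secretKeyRef:
      name: controller-manager
      key: github_app_private_key
```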
2021-03-10 09:37:15 +09:00
Yusuke Kuoka
728829be7b Fix panic on scaling organizational runners (#381)
Ref https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-793287133
2021-03-09 15:03:47 +09:00
Yusuke Kuoka
c0b8f9d483 Merge pull request #380 from summerwind/ns-flag
Use --watch-namespace flag to restrict the namespace to watch
2021-03-09 15:03:32 +09:00
Yusuke Kuoka
ced1c2321a Fix chart-testing failing due to conflict between authSecret and dummySecret 2021-03-09 14:54:55 +09:00
Yusuke Kuoka
1b8a656051 Use --watch-namespace flag to restrict the namespace to watch
Ref https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-793172995
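A sketch of how the flag might be passed to the manager container (the namespace value is a placeholder):

```
containers:
- name: manager
  args:
  # Restricts the controller to resources in this namespace only
  - "--watch-namespace=actions-runner-system"
```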
2021-03-09 09:46:21 +09:00
Rob Whitby
1753fa3530 handle GET requests in webhook hra (#378) 2021-03-09 08:46:27 +09:00
Johannes Nicolai
8c0f3dfc79 Set runner group for runners with enterprise scope (#376)
* so far, runner group parameter is only set for runners with org scope
* now set group for enterprise runners as well
* removed null check for org scope as either org or enterprise will be set
2021-03-08 09:18:23 +09:00
Rob Whitby
dbda292f54 fix typo in examples (#373) 2021-03-08 09:18:10 +09:00
callum-tait-pbx
550a864198 chore: bumping helm chart (#372)
PR 355 made changes to the CRDs but didn't bump the version
2021-03-05 20:27:52 +09:00
Yusuke Kuoka
4fa5315311 Fix possible flapping autoscale on runner update (#371)
Addresses https://github.com/summerwind/actions-runner-controller/pull/355#discussion_r587199428
2021-03-05 10:21:20 +09:00
Hiroshi Muraoka
11e58fcc41 Manage runner with label (#355)
* Update RunnerDeploymentSpec to have Selector field

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Update RunnerReplicaSetSpec to have Selector field

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Add CloneSelectorAndAddLabel to add Selector field

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Fix tests

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Use label to find RunnerReplicaSet/Runner

Signed-off-by: binoue <banji-inoue@cybozu.co.jp>

* Update controller-gen versions in CRD

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Update autoscaler to list Pods with labels

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Add debug log

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Modify RunnerDeployment tests

Signed-off-by: binoue <banji-inoue@cybozu.co.jp>

* Modify RunnerReplicaset test

Signed-off-by: binoue <banji-inoue@cybozu.co.jp>

* Modify integration test

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Use RunnerDeployment Template Labels as the default selector for backward compatibility

* Fix labeling

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Update func in Eventually to return (int, error)

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Update RunnerDeployment controller not to use label selector

Signed-off-by: Hiroshi Muraoka <h.muraoka714@gmail.com>

* Fix potential replicaset controller breakage on replicaset created before v0.17.0

* Fix errors on existing runner replica sets

* Ensure RunnerReplicaSet Spec Selector addition does not break controller

* Ensure RunnerDeployment Template.Spec.Labels change does result in template hash change

* Fix comment

Co-authored-by: binoue <banji-inoue@cybozu.co.jp>
Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-03-05 10:15:39 +09:00
Mike Perry
f220fefe92 Update README.md (#370) 2021-03-05 09:17:32 +09:00
Hidetake Iwata
56b4598d1d Fix helm template error when webhook server is enabled (#365)
* Fix include block in githubwebhook.deployment.yaml

* Fix include block in githubwebhook.secrets.yaml
2021-03-03 09:21:58 +09:00
Taehyun Kim
8f977dbe48 Fix various bugs in helm chart (#364)
* Fix wrong trim

* add missing MutatingWeghookConfiguration.webhooks[*].sideEffects

* fix missing admissionReviewVersions

* admissionregistration.k8s.io/v1 for kustomization manifests

* revert webhook config
2021-03-03 09:21:20 +09:00
Yusuke Kuoka
9ae3551744 Remove unnecessary GitHub API calls (#363)
The controller was making two extra, redundant calls to the List Workflow Runs API.

Ref #362
2021-03-02 10:55:30 +09:00
Rolf Ahrenberg
05ad3f5469 Set default python (#361) 2021-03-01 09:45:13 +09:00
callum-tait-pbx
9c7372a8e0 docs: styling fixes (#359)
* docs: styling fixes

* docs: grammar fixes
2021-03-01 09:44:35 +09:00
Yusuke Kuoka
584590e97c Use patch instead of update to alleviate HRA conflict on webhook (#358)
We sometimes see the integration test fail due to runner replicas not meeting the expected number in a timely manner. After investigating a bit, this turned out to be because HRA updates from the webhook-based autoscaler and the HRA controller were conflicting. This changes the controllers to use Patch instead of Update to make conflicts less likely.

I have also updated the HRA controller to use Patch when updating RunnerDeployment.

Overall, these changes should make webhook-based autoscaling more reliable thanks to fewer conflicts.
2021-02-26 10:17:09 +09:00
Yusuke Kuoka
d18884a0b9 Fix HRA expired cache entries not cleaned up (#357)
Fixes #356
2021-02-26 09:54:24 +09:00
callum-tait-pbx
f987571b64 Improve docs (#303) 2021-02-26 09:32:18 +09:00
Taehyun Kim
450e384c4c Update helm chart (#343)
* add replicaCount

* Add authSecret.existingSecret

* set image.tag null by default

* implement ingress for githubwebhook server

* fix deprecated and secretName template

* backward compat .authSecret.enabled

* existingSecret for github webhook secret

* use secretName template

* set default secret names

* do not use app version based image tag

* create and name variable for secrets
2021-02-26 09:26:51 +09:00
Yusuke Kuoka
e9eef04993 Fix old HRA capacity reservations not cleaned up (#354)
Similar to #348 for #346, but for HRA.Spec.CapacityReservations, which is usually modified by the webhook-based autoscaler controller.
This patch fixes that by improving the webhook-based autoscaler controller to omit expired reservations when updating the HRA spec.
2021-02-25 11:08:00 +09:00
Yusuke Kuoka
598dd1d9fe Fix incorrect DESIRED on `kubectl get hra` (#353)
`kubectl get horizontalrunnerautoscalers.actions.summerwind.dev` shows HRA.status.desiredReplicas as the DESIRED count. However, the value did not take capacityReservations into account, which resulted in an incorrect count when you used the webhook-based autoscaler or the capacityReservations API directly. This fixes that.
2021-02-25 10:32:09 +09:00
Yusuke Kuoka
9890a90e69 Improve webhook-based autoscaler log (#352)
The controller had been writing confusing messages like the below on missing scale target:

```
Found too many scale targets: It must be exactly one to avoid ambiguity. Either set WatchNamespace for the webhook-based autoscaler to let it only find HRAs in the namespace, or update Repository or Organization fields in your RunnerDeployment resources to fix the ambiguity.{"scaleTargets": ""}
```

This fixes that, while also improving many of the messages written during reconciliation, so that the error messages are more actionable.
2021-02-25 10:07:41 +09:00
Yusuke Kuoka
9da123ae5e Fix integration test flakiness (#351)
Ref https://github.com/summerwind/actions-runner-controller/pull/345#issuecomment-785015406
2021-02-25 09:30:32 +09:00
Johannes Nicolai
4d4137aa28 Avoid zombie runners that missed token expiration by a bit (#345)
* if a new runner pod was just scheduled to start up right before a registration expired, it would not get a new registration token and would go into an infinite update loop until #341 kicks in
* if registration tokens are updated a little before they actually expire, pods that are just starting up are much more likely to get a working token
2021-02-25 09:07:49 +09:00
Yusuke Kuoka
022007078e Compact excessive error message on runnerreplicaset status update conflict (#350)
We occasionally see logs like the below:

```
2021-02-24T02:48:26.769Z	ERROR	Failed to update runner status	{"runnerreplicaset": "testns-244ol/example-runnerdeploy-j5wzf", "error": "Operation cannot be fulfilled on runnerreplicasets.actions.summerwind.dev \"example-runnerdeploy-j5wzf\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/go-logr/zapr.(*zapLogger).Error
/home/runner/go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128
github.com/summerwind/actions-runner-controller/controllers.(*RunnerReplicaSetReconciler).Reconcile
/home/runner/work/actions-runner-controller/actions-runner-controller/controllers/runnerreplicaset_controller.go:207
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:256
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:88
2021-02-24T02:48:26.769Z	ERROR	controller-runtime.controller	Reconciler error	{"controller": "testns-244olrunnerreplicaset", "request": "testns-244ol/example-runnerdeploy-j5wzf", "error": "Operation cannot be fulfilled on runnerreplicasets.actions.summerwind.dev \"example-runnerdeploy-j5wzf\": the object has been modified; please apply your changes to the latest version and try again"}
github.com/go-logr/zapr.(*zapLogger).Error
/home/runner/go/pkg/mod/github.com/go-logr/zapr@v0.1.0/zapr.go:128
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:258
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker
/home/runner/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:152
k8s.io/apimachinery/pkg/util/wait.JitterUntil
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:153
k8s.io/apimachinery/pkg/util/wait.Until
/home/runner/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20190913080033-27d36303b655/pkg/util/wait/wait.go:88
```

which can be compacted into a one-liner, without the useless stack trace and without double-logging the same error from both the logger and the controller.
2021-02-25 09:01:02 +09:00
Johannes Nicolai
31e5e61155 Log correct runner that was deleted (#349) 2021-02-25 08:38:55 +09:00
Aditya Purandare
1d1453c5f2 Fix user used for dind runner group permissions (#337) 2021-02-24 19:06:52 +09:00
Yusuke Kuoka
e44e53b88e Fix failure while saving HRA status after running controller for a while (#348)
Fixes #346
2021-02-24 11:20:21 +09:00
Yusuke Kuoka
398791241e Fix runner release workflow to do docker-push (#347)
Apparently I mistakenly removed the `push` option from the workflow in #323, which resulted in new runner build #323 not being pushed. This fixes that.
2021-02-24 11:08:33 +09:00
Yusuke Kuoka
991535e567 Fix panic on webhook for user-owned repository (#344)
* Fix panic on webhook for user-owned repository

Follow-up for #282 and #334
2021-02-23 08:05:25 +09:00
Johannes Nicolai
2d7fbbfb68 Handle offline runners gracefully (#341)
* if a runner pod starts up with an invalid token, it will go into an infinite retry loop, appearing as RUNNING from the outside
* normally, this error situation is detected because no corresponding runner object exists in GitHub, and the pod will get removed after the registration timeout
* if the GitHub runner object already existed before - e.g. because a finalizer was not properly run as part of a partial Kubernetes crash - the runner will always stay in running mode; even updating the registration token will not kill the problematic pod
* introducing a RunnerOffline error that can be handled in the runner controller and replicaset controller
* as runners are offline when a pod is completed and marked for restart, only do additional restart checks if no restart was already decided, making the code a bit cleaner and saving GitHub API calls after each job completion
2021-02-22 10:08:04 +09:00
Yusuke Kuoka
dd0b9f3e95 Merge pull request #340 from int128/integration-test-check-run
Fix index key to find HRA in GitHub webhook handler
2021-02-22 09:49:54 +09:00
Yusuke Kuoka
7cb2bc84c8 Merge pull request #334 from summerwind/integration-test-check-run
Add integration test for autoscaling on check_run webhook event
2021-02-22 09:38:07 +09:00
Hidetake Iwata
b0e74bebab Fix index key to find HRA in GitHub webhook handler 2021-02-20 21:25:23 +09:00
Hidetake Iwata
dfbe53dcca Fix webhook payload in integration test 2021-02-20 21:08:23 +09:00
Yusuke Kuoka
ebc3970b84 Add integration test for autoscaling on check_run webhook event 2021-02-19 10:33:04 +09:00
Hidetake Iwata
1ddcf6946a Fix nil pointer error on received check_run event (#331)
* Reproduce nil pointer error on received check_run event

* Fix nil pointer error on received check_run event
2021-02-18 20:22:36 +09:00
Yusuke Kuoka
cfbaad38c8 Merge pull request #328 from int128/fix-port-name-length
Changes:

1. Fix length of github-webhook-server port name
2. Add a cluster role binding for github-webhook-server
3. Remove --enable-leader-election from github-webhook-server
2021-02-18 20:20:39 +09:00
Yusuke Kuoka
67f6de010b feat: Common runner labels configurable per controller (#327)
* feat: Common runner labels configurable per controller

Ref #321
2021-02-18 20:19:08 +09:00
Hidetake Iwata
2db608879a Remove --enable-leader-election from github-webhook-server 2021-02-18 16:51:47 +09:00
Hidetake Iwata
2c4a6ca90b Add cluster role binding for github-webhook-server 2021-02-18 16:49:24 +09:00
Hidetake Iwata
829bf20449 Fix length of github-webhook-server port name 2021-02-18 16:42:15 +09:00
Reinier Timmer
be13322816 Update runner to 2.277.1 (#322)
* Update runner to 2.277.1

* Update build-and-release-runners.yml

* integration test condition

Don't run integration tests when only updating the runner image

* fixup! integration test condition

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-02-18 09:29:53 +09:00
Johannes Nicolai
7f4a76a39b Also log into DockerHub for release event (#326)
* so far, only push events would trigger the DockerHub login step
* hence, attempts to release would fail because of a permission problem (tested locally)
* adding OR condition to also login in case a release got published
2021-02-18 08:54:44 +09:00
callum-tait-pbx
0fce761686 fix: add truncate to ensure service kinds have valid names (#325)
* fix: adding truncate for service kinds

* chore : bumping chart version
2021-02-18 08:43:48 +09:00
Yusuke Kuoka
c88ff44518 Fix wip.yml workflow for building controller canary tags (#323)
In #306 we seem to have accidentally updated the wrong workflow, which was for runner builds. This updates the one for the controller.

Resolves #302
2021-02-18 08:42:24 +09:00
Yusuke Kuoka
2fdf35ac9d Refactor integration test to use helpers (#320)
This should make the test code a bit more DRY and readable.
2021-02-17 10:23:35 +09:00
Johannes Nicolai
6cce3fefc5 Add project to awesome-runners list (#319) 2021-02-17 09:14:42 +09:00
Yusuke Kuoka
eb2eaf8130 Fix TotalNumberOfQueuedAndInProgressWorkflowRuns to work with a lot of remaining completed jobs (#316)
I have heard from a user that they have hundreds of thousands of `status=completed` workflow runs in their repository, which effectively blocked TotalNumberOfQueuedAndInProgressWorkflowRuns from working because the excessive paginated requests hit the GitHub API rate limit.

This fixes that by splitting the list-workflow-runs call into two - one for `queued` and one for `in_progress` - which raises the minimum number of API calls from 1 to 2, but allows it to work regardless of the number of remaining `completed` workflow runs.
2021-02-16 18:55:55 +09:00
callum-tait-pbx
7bf712d0d4 fix: duplicate name attribute (#318) 2021-02-16 18:52:08 +09:00
Yusuke Kuoka
7d024a6c05 Fix "duplicate metrics collector registration attempted" errors at startup (#317)
I have seen this error a lot in our integration test. It turned out to be due to https://github.com/kubernetes-sigs/controller-runtime/issues/484 and is fixed by this change.
2021-02-16 18:51:33 +09:00
Yusuke Kuoka
434823bcb3 scale{Up,Down}Adjustment to add/remove constant number of replicas on scaling (#315)
* `scale{Up,Down}Adjustment` to add/remove constant number of replicas on scaling

Ref #305

* Bump chart version
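A sketch of the new fields on a PercentageRunnersBusy metric (the threshold and adjustment values below are placeholders):

```
metrics:
- type: PercentageRunnersBusy
  scaleUpThreshold: '0.75'
  scaleDownThreshold: '0.25'
  # Add/remove a fixed number of replicas instead of scaling by a factor
  scaleUpAdjustment: 2
  scaleDownAdjustment: 1
```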
2021-02-16 17:16:26 +09:00
Yusuke Kuoka
35d047db01 Fix enterprise runners misusing cached token (#314)
Follow-up for #290
2021-02-16 12:56:52 +09:00
Yusuke Kuoka
f1db6af1c5 Add repository runners support for PercentageRunnersBusy-based autoscaling (#313)
Resolves #258
2021-02-16 12:44:51 +09:00
Hidetake Iwata
4f3f2fb60d Add metrics for GitHub API rate limit (#312) 2021-02-16 09:58:09 +09:00
Johannes Nicolai
2623140c9a Make log message less scary (#311)
* the reconciliation loop is often much faster than runner startup, so runner-not-found messages are changed to debug level and now also mention the possibility that the runner just needs more time
2021-02-16 09:55:55 +09:00
Johannes Nicolai
1db9d9d574 Use ARM64 compatible kube-rbac-proxy from upstream (#310)
* as pointed out in #281, the currently used image for the kube-rbac-proxy - gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1 - does not have an ARM64 image
* hence, trying to use the standard deployment manifest / helm chart will fail on ARM64 systems
* replaced the image with quay.io/brancz/kube-rbac-proxy:v0.8.0, which is the latest version from the upstream maintainer (https://github.com/brancz/kube-rbac-proxy/blob/master/Makefile#L13)
* successfully tested on both AMD64 and ARM64 clusters
* fixes #281
2021-02-16 09:55:03 +09:00
callum-tait-pbx
d046350240 chore: bumping helm chart semantically (#296)
* chore: bumping helm chart semantically

* chore: removing the app version config
2021-02-16 09:45:56 +09:00
callum-tait-pbx
cca4d249e9 feat: create workflow for runner releases (#306) 2021-02-16 09:42:28 +09:00
Johannes Nicolai
bc8bc70f69 Fix rate limit and runner registration logic (#309)
* errors.Is compares all members of a struct to return true, which never happened here
* switched to a type check instead of an exact value check
* notRegistered used double negation in an if statement, which led to unregistering runners after the registration timeout
2021-02-15 09:36:49 +09:00
Johannes Nicolai
34c6c3d9cd Pod eviction policy examples (crashed nodes) (#308)
* ... otherwise it will take 40 seconds (until a node is detected as unreachable) + 5 minutes (until pods are evicted from unreachable/crashed nodes)
* pods stuck in "Terminating" status on unreachable nodes will only be freed once #307 gets merged
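For illustration, such an eviction policy is typically expressed with standard Kubernetes tolerations on the pod spec; a sketch with placeholder timeouts:

```
tolerations:
- key: node.kubernetes.io/unreachable
  operator: Exists
  effect: NoExecute
  # Evict much sooner than the ~5-minute default
  tolerationSeconds: 10
- key: node.kubernetes.io/not-ready
  operator: Exists
  effect: NoExecute
  tolerationSeconds: 10
```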
2021-02-15 09:33:01 +09:00
Johannes Nicolai
9c8d7305f1 Introduce pod deletion timeout and forcefully delete stuck pods (#307)
* if a k8s node becomes unresponsive, the kube controller will soft
delete all pods after the eviction time (default 5 mins)
* as long as the node stays unresponsive, the pod will never leave the
last status and hence the runner controller will assume that everything
is fine with the pod and will not try to create new pods
* this can result in a situation where a horizontal autoscaler thinks
that none of its runners are currently busy and will not schedule any
further runners / pods, resulting in a broken runner deployment until the
runnerreplicaset is deleted or the node comes back online
* introducing a pod deletion timeout (1 minute) after which the runner
controller will try to reboot the runner and create a pod on a working
node
* use forceful deletion and requeue for pods that have been stuck for
more than one minute in terminating state
* gracefully handling race conditions if pod gets finally forcefully deleted within
2021-02-15 09:32:28 +09:00
Yusuke Kuoka
addcbfa7ee Fix runner registration timeout (#301)
Fixes #300
2021-02-12 10:00:20 +09:00
Yusuke Kuoka
bbb036e732 feat: Prevent blocking on transient runner registration failure (#297)
This enhances the controller to recreate the runner pod if the corresponding runner has failed to register itself to GitHub within 10 minutes (currently hard-coded).

It should alleviate #288 in case the root cause is some kind of transient failure (network unreliability, GitHub being down, temporary compute resource shortage, etc.).

Formerly you had to manually detect and delete such pods, or even force-delete the corresponding runners, to unblock the controller.

With this enhancement, the controller deletes the pod automatically 10 minutes after pod creation, which results in the controller creating another pod that might work.

Ref #288
2021-02-09 10:17:52 +09:00
Yusuke Kuoka
9301409aec fix: Paginate ListRepositoryWorkflowRuns (#295)
When we used `QueuedAndInProgressWorkflowRuns`-based autoscaling, it fetched and considered only the first 30 workflow runs at reconciliation time. This may have resulted in unreliable scaling behaviour, like scale-in/out not happening when expected.
2021-02-09 10:13:53 +09:00
Yusuke Kuoka
ab1c39de57 feat: HorizontalRunnerAutoscaler Webhook server (#282)
* feat: HorizontalRunnerAutoscaler Webhook server

This introduces a Webhook server that responds to GitHub `check_run`, `pull_request`, and `push` events by scaling up the matched HorizontalRunnerAutoscaler by 1 replica. This allows you to immediately add "resource slack" for future GitHub Actions job runs, without waiting for the next sync period to add the missing runners.

This feature is highly inspired by https://github.com/philips-labs/terraform-aws-github-runner. terraform-aws-github-runner can manage one set of runners per deployment, whereas actions-runner-controller with this feature can manage as many sets of runners as you declare with HorizontalRunnerAutoscaler and RunnerDeployment pairs.

On each GitHub event received, the webhook server queries repository-wide and organizational runners from the cluster and searches for the single target to scale up. It tries to match HorizontalRunnerAutoscaler.Spec.ScaleUpTriggers[].GitHubEvent.[CheckRun|Push|PullRequest] against the event, and if it finds exactly one HRA, that is the scale target. If no target or more than one target is found among repository-wide runners, it does the same on organizational runners.
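A sketch of a matching HRA spec for `pull_request` events (`checkRun` and `push` triggers take the same shape; names and values are placeholders):

```
spec:
  scaleTargetRef:
    name: example-runner-deployment
  scaleUpTriggers:
  - githubEvent:
      pullRequest:
        types: ["synchronize"]
        branches: ["main"]
    # Add one replica and keep that capacity reserved for 5 minutes
    amount: 1
    duration: "5m"
```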

Changes:

* Fix integration test
* Update manifests
* chart: Add support for github webhook server
* dockerfile: Include github-webhook-server binary
* Do not import unversioned go-github
* Update README
2021-02-07 17:37:27 +09:00
alex-mozejko
a4350d0fc2 bug-fix: patched dir owned by runner (#284)
* bug-fix: patched dir owned by runner

* always build with latest runner version

* Revert "always build with latest runner version"

This reverts commit e719724ae9fe92a12d4a087185cf2a2ff543a0dd.

* Also patch dindrunner.Dockerfile

Co-authored-by: Yusuke Kuoka <ykuoka@gmail.com>
2021-02-07 17:21:10 +09:00
callum-tait-pbx
2146c62c9e chore: bumping Helm chart version (#294)
* chore: adding mac rubbish to gitignore

* chore: bumping chart version
2021-02-07 16:46:19 +09:00
Jesse Haka
28e80a2d28 Add support for enterprise runners (#290)
* Add support for enterprise runners

* update docs
2021-02-05 09:31:06 +09:00
Tom Bamford
831db9ee2a Added github.sha to DockerHub push (#286)
* Added GITHUB.RUN_NUMBER to DockerHub push

* switch run_number to sha on docker tag

* re-add mutable tags for backwards compatibility

* truncate to short SHA (7 chars)

* behaviour workaround

* use ENV to define sha_short

* use ::set-output to define sha_short

* bump action
2021-02-04 09:29:32 +09:00
Donovan Muller
4d69e0806e Update GitHub runner version (#280) 2021-02-02 14:06:08 +09:00
Donovan Muller
d37cd69e9b feat/helm: Bump appVersion to 0.6.1 release (#272)
* feat/helm: Bump appVersion to 0.6.1 release

* Also bump chart version to trigger a new chart release

Co-authored-by: Yusuke Kuoka <c-ykuoka@zlab.co.jp>
2021-01-29 09:29:43 +09:00
Yusuke Kuoka
a2690aa5cb Update README.md
Follow-up for #275
2021-01-29 09:29:26 +09:00
Clément
da020df0fd docs: fix installation method (#275) 2021-01-29 09:28:34 +09:00
Jonas Lergell
6c64ae6a01 Actually use 'dockerdContainerResources' to set resources on the dind container (#273) 2021-01-29 09:18:28 +09:00
Yusuke Kuoka
42c7d0489d chart: Bump to 0.2.0 2021-01-25 09:14:49 +09:00
Donovan Muller
b3bef6404c Add support for additional environment variables (#271) 2021-01-25 09:00:03 +09:00
David Young
1127c447c4 Add GitHub Actions to publish helm chart (#257)
* Add chart workflows (#1)

* Add chart workflows

* Fix publishing step in CI

Signed-off-by: David Young <davidy@funkypenguin.co.nz>

* Update CI on push-to-master (#3)

* Put helm installation step in the correct CI job

Signed-off-by: David Young <davidy@funkypenguin.co.nz>

* Put helm installation step in the correct CI job (#4)

* Update on-push-master-publish-chart.yml

* Remove references to certmanager dependency

Signed-off-by: David Young <davidy@funkypenguin.co.nz>

* Add ability to customize kube-rbac-proxy image

Signed-off-by: David Young <davidy@funkypenguin.co.nz>

* Only install cert-manager if we're going to spin up KinD

Signed-off-by: David Young <davidy@funkypenguin.co.nz>
2021-01-24 15:37:01 +09:00
Yusuke Kuoka
ace95d72ab Fix self-update failuers due to /runner/externals mount (#253)
* Fix self-update failuers due to /runner/externals mount

Fixes #252

* Tested Self-update Fixes (#269)

Adding fixes to #253 as confirmed and tested in https://github.com/summerwind/actions-runner-controller/issues/264#issuecomment-764549833 by @jolestar, @achedeuzot and @hfuss 🙇 🍻

Co-authored-by: Hayden Fuss <wifu1234@gmail.com>
2021-01-24 10:58:35 +09:00
Johannes Nicolai
42493d5e01 Adding --name-space parameter in example (#259)
* when setting a GitHub Enterprise server URL without a namespace, an error occurs: "error: the server doesn't have a resource type "controller-manager"
* setting default namespace "actions-runner-system" makes the example work out of the box
2021-01-22 10:12:04 +09:00
Johannes Nicolai
94e8c6ffbf minReplicas <= desiredReplicas <= maxReplicas (#267)
* ensure that minReplicas <= desiredReplicas <= maxReplicas no matter what
* before this change, if the number of runners was much larger than the max number, the applied scale-down factor might still result in a desired value > maxReplicas
* if, due to resource constraints in the cluster, runners were permanently restarted, the number of runners could grow beyond the reverse scale-down factor until the next reconciliation round, resulting in a situation where the number of runners climbs even though it should actually go down
* by ensuring that desiredReplicas is always <= maxReplicas, infinite scale-up loops can be prevented
2021-01-22 10:11:21 +09:00
callum-tait-pbx
563c79c1b9 feat/helm: add manager secret to Helm chart (#254)
* feat: adding manager secret to Helm

* fix: correcting secret data format

* feat: adding in common labels

* fix: updating default values to have config

The auth config needs to be commented out by default as we don't want to deploy both configs empty. This may break stuff, so we want the user to actively uncomment the auth method they want instead

* chore: updating default format of cert

* chore: wording
2021-01-22 10:03:25 +09:00
Johannes Nicolai
cbb41cbd18 Updating custom container example (#260)
* use latest instead of outdated version
* use sudo for package install (required)
* use sudo for package meta data removal (required)
2021-01-22 09:57:42 +09:00
Johannes Nicolai
64a1a58acf GitHub runner groups have to be created first (#261)
* in contrast to runner labels, GitHub runner groups are not automatically created
2021-01-22 09:52:35 +09:00
Reinier Timmer
524cf1b379 Update runner to v2.275.1 (#239) 2020-12-18 08:38:39 +09:00
ZacharyBenamram
0dadddfc7d Update README for "PercentageRunnersBusy" HRA metric type (#237)
* adding readme for new hpa scheme

* callum's comments

Co-authored-by: Zachary Benamram <zacharybenamram@blend.com>
2020-12-17 10:21:27 +09:00
ZacharyBenamram
48923fec56 Autoscaling: Percentage runners busy - remove magic number used for round up (#235)
* remove magic number for autoscaling

Co-authored-by: Zachary Benamram <zacharybenamram@blend.com>
2020-12-15 14:38:01 +09:00
ZacharyBenamram
466b30728d Add "PercentageRunnersBusy" horizontal runner autoscaler metric type (#223)
* hpa scheme based off busy runners

* running make manifests

Co-authored-by: Zachary Benamram <zacharybenamram@blend.com>
2020-12-13 08:48:19 +09:00
callum-tait-pbx
c13704d7e2 feat: custom labels (#231)
Co-authored-by: Callum Tait <callum.tait@PBXUK-HH-05772.photobox.priv>
2020-12-13 08:33:04 +09:00
callum-tait-pbx
fb49bbda75 feat: adding helm config for dind sidecar (#232)
Co-authored-by: Callum Tait <callum.tait@PBXUK-HH-05772.photobox.priv>
2020-12-13 08:31:24 +09:00
Reinier Timmer
8d6f77e07c Remove beta GitHub client implementations (#228) 2020-12-10 09:08:51 +09:00
Yusuke Kuoka
dfffd3fb62 feat: EKS IAM Roles for Service Accounts for Runner Pods (#226)
One of the pod recreation conditions has been modified to use a hash of the runner spec, so that the controller does not keep restarting pods mutated by admission webhooks. This naturally allows us, for example, to use IRSA for EKS, which requires its admission webhook to mutate the runner pod with additional IRSA-related volumes, volume mounts, and env vars.

Resolves #200
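A sketch of the IRSA-style setup this unlocks (role ARN, names, and repository are placeholders):

```
apiVersion: v1
kind: ServiceAccount
metadata:
  name: runner
  annotations:
    # The EKS webhook injects AWS credentials based on this annotation
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/runner-role
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy
spec:
  template:
    spec:
      repository: example/myrepo
      serviceAccountName: runner
```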
2020-12-08 17:56:06 +09:00
Juho Saarinen
f710a54110 Don't compare runner connection token in restart-needed check (#227)
Fixes #143
2020-12-08 08:48:35 +09:00
Yusuke Kuoka
85c29a95f5 runner: Add support for ruby/setup-ruby (#224)
It turned out previous versions of the runner images were unable to run actions that require `AGENT_TOOLSDIRECTORY` or `libyaml` to exist in the runner environment. One notable example of such an action is [`ruby/setup-ruby`](https://github.com/ruby/setup-ruby).

This change adds support for those actions by setting up AGENT_TOOLSDIRECTORY and installing libyaml-dev within the runner images.
2020-12-06 11:53:38 +09:00
Erik Nobel
a2b335ad6a Github pkg: Bump github package to version 33 (#222) 2020-12-06 10:01:47 +09:00
Tom Bamford
56c57cbf71 ci: Replace deprecated crazy-max buildx action to use alternative docker actions (#197)
Deprecated action `crazy-max/setup-buildx-action@v1` has been replaced with:
  `docker/setup-qemu-action@v1`
  `docker/setup-buildx-action@v1`
  `docker/login-action@v1`
  `docker/build-push-action@v2`

See: https://github.com/crazy-max/ghaction-docker-buildx
2020-12-06 10:00:10 +09:00
Ahmad Hamade
837563c976 Adding priorityClassName to helm chart (#215)
* Adding priorityClassName to helm chart and README file

* removed README and revert chart version
2020-11-30 09:04:25 +09:00
ZacharyBenamram
df99f394b4 Remove 10 minute buffer to token expiration (#214)
Co-authored-by: Zachary Benamram <zacharybenamram@blend.com>
2020-11-30 09:03:27 +09:00
Shinnosuke Sawada
be25715e1e Use TLS for secure docker connection (#192) 2020-11-30 08:57:33 +09:00
Yusuke Kuoka
4ca825eef0 Publish runner images for v2.274.2
Ref #212
2020-11-27 08:49:58 +09:00
Yusuke Kuoka
e5101554b3 Fix release workflow to not use add-path
Fixes #208
2020-11-26 08:39:03 +09:00
Reinier Timmer
ee8fb5a388 parametrized working directory (#185)
* parametrized working directory

* manifests v3.0
2020-11-25 08:55:26 +09:00
Erik Nobel
4e93879b8f [BUG?]: Create mountpoint for /externals/ (#203)
* runner/controller: Add externals directory mount point

* Runner: Create hack for moving content of /runner/externals/ dir

* Externals dir Mount: mount examples for '__e/node12/bin/node' not found error
2020-11-25 08:53:47 +09:00
Shinnosuke Sawada
6ce6737f61 add dockerEnabled document (#193)
Follow-up for #191
2020-11-17 09:31:34 +09:00
Shinnosuke Sawada
4371de9733 add dockerEnabled option (#191)
Adds a dockerEnabled option for users who do not need Docker and do not want to run a privileged container.
If `dockerEnabled == false`, the dind container is not run and there is no privileged container.

Does the same as the closed #96
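A sketch of the option on a runner spec (repository is a placeholder):

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy
spec:
  template:
    spec:
      repository: example/myrepo
      # No dind sidecar, no privileged container
      dockerEnabled: false
```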
2020-11-16 09:41:12 +09:00
Yusuke Kuoka
1fd752fca2 Use tcp DOCKER_HOST instead of sharing docker.sock (#177)
The docker:dind container creates `/var/run/docker.sock` owned by the root user and group, so the docker command in the runner container needs root privileges to use docker.sock, and Docker actions fail due to lack of permission.

Using a TCP connection between the runner and docker containers means the runner container doesn't need root privileges to run docker and can run Docker actions.

Fixes #174
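Conceptually, the runner pod then looks roughly like this sketch (the port and image tag are assumptions):

```
containers:
- name: runner
  env:
  # Talk to the dind sidecar over TCP instead of a shared socket
  - name: DOCKER_HOST
    value: tcp://localhost:2375
- name: docker
  image: docker:dind
  securityContext:
    privileged: true
```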
2020-11-16 09:32:29 +09:00
Yusuke Kuoka
ccd86dce69 chart: Update CRDs to make Runner{Deployment,ReplicaSet} replicas optional (#189)
Ref #186
2020-11-14 22:09:06 +09:00
Yusuke Kuoka
3d531ffcdd refactor/sync-period (#188)
* refactor: adding explicit sync-period flag

* docs: adding more detail for sync period config

* docs: spelling 🙃

Co-authored-by: Callum Tait <callum.tait@PBXUK-HH-05772.photobox.priv>
2020-11-14 22:07:22 +09:00
Yusuke Kuoka
1658f51fcb Make Runner{Deployment,ReplicaSet} replicas actually optional (#186)
If omitted, it properly defaults to 1.

Fixes #64
2020-11-14 22:06:33 +09:00
Yusuke Kuoka
9a22bb5086 Initial version of Helm chart (#187)
Acceptance tests are passing with the chart. In addition to standard chart values, syncPeriod is supported.
Please use it as a foundation for further collaboration.

Ref #184
Inspired by #91
Related #61
2020-11-14 22:06:14 +09:00
Yusuke Kuoka
71436f0466 Final fixes before merging 2020-11-14 22:00:41 +09:00
Yusuke Kuoka
b63879f59f Ensure the chart is passing acceptance tests 2020-11-14 21:58:16 +09:00
Callum Tait
c623ce0765 docs: spelling 🙃 2020-11-14 12:32:14 +00:00
Yusuke Kuoka
01117041b8 chart: controller-manager should not be autoscaled 2020-11-14 20:37:40 +09:00
Yusuke Kuoka
42a272051d chart: Use the correct image 2020-11-14 20:37:22 +09:00
Yusuke Kuoka
6a4c29d30e Set ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm to run acceptance tests with chart 2020-11-14 20:31:37 +09:00
Yusuke Kuoka
ce48dc58e6 Add crds to the chart 2020-11-14 20:31:21 +09:00
Callum Tait
04dde518f0 docs: adding more detail for sync period config 2020-11-14 11:22:09 +00:00
Callum Tait
1b98c8f811 refactor: adding explicit sync-period flag 2020-11-14 11:19:16 +00:00
Yusuke Kuoka
14b34efa77 Start collaborating to develop a Helm chart
Ref #184
2020-11-14 20:15:42 +09:00
Yusuke Kuoka
bbfe03f02b Add acceptance test (#168)
To make it easier to verify that the controller works before submitting/merging PRs and releasing a new version of the controller.
2020-11-14 20:07:14 +09:00
Yusuke Kuoka
8ccf64080c Merge pull request #182 from callum-tait-pbx/docs/improvements
docs/improvements
2020-11-14 12:02:24 +09:00
Callum Tait
492ea4b583 docs: adding a common errors section 2020-11-13 08:38:25 +00:00
Callum Tait
3446d4d761 docs: reordering to be more logical 2020-11-13 08:27:18 +00:00
Javier de Pedro López
1f82032fe3 Improvements in the documentation for permissions and replication (#175) 2020-11-13 09:36:22 +09:00
Elliot Maincourt
5b2272d80a Add read-only permissions to actions for being able to autoscale (#180) 2020-11-13 09:27:01 +09:00
Reinier Timmer
0870250f9b Enable matrix build for runner image (#179) 2020-11-13 09:24:12 +09:00
Shinnosuke Sawada
a4061d0625 gofmt'ed 2020-11-12 09:20:06 +09:00
Juho Saarinen
7846f26199 Increase memory limit (#173)
We saw the controller OOMing quite often, so as a first measure we increase the limit a bit.
2020-11-12 09:14:33 +09:00
Reinier Timmer
8217ebcaac Updated Runner to 2.274.1 (#176)
* Updated Runner to 2.274.1

* Update image tag in workflow
2020-11-12 09:14:06 +09:00
Shinnosuke Sawada
83857ba7e0 use tcp DOCKER_HOST instead of sharing docker.sock 2020-11-12 08:07:52 +09:00
Yusuke Kuoka
e613219a89 Fix token registration broken since v0.11.0 (#167)
Fixes #166
2020-11-11 09:38:05 +09:00
Reinier Timmer
bc35bdfa85 fixed label argument in entrypoint (#162)
* fixed label argument in entrypoint

* Removed quotes from RUNNER_GROUP_ARG
2020-11-11 08:44:51 +09:00
Yusuke Kuoka
ece8fd8fe4 Bump Go to 1.15 (#160)
Closes #104
2020-11-10 17:16:04 +09:00
Dan Webb
dcf8524b5c Adds RUNNER_GROUP argument to the runner registration (#157)
* Adds RUNNER_GROUP argument to the runner registration

Adds the ability to register a runner to a predefined runner_group

Resolves #137

* Update README with runner group example

- Updates the README with instructions of how to add the runner to a
  group
- Fix code fencing for shell and yaml blocks in the README
- Use consistent bullet points (dash not asterisk)
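For reference, the registration surfaces as a `group` field on the runner spec; a sketch with placeholder organization and group names:

```
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
metadata:
  name: example-runner
spec:
  organization: example-org
  # The runner group must already exist in the GitHub organization settings
  group: my-runner-group
```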
2020-11-10 17:15:54 +09:00
Yusuke Kuoka
4eb45d3c7f Fix build error 2020-11-10 17:09:16 +09:00
Juho Saarinen
1c30bdf35b Add GHE URL to transport (#152)
Fixes #149
2020-11-10 17:05:09 +09:00
Yusuke Kuoka
3f335ca628 Fix panic on startup when misconfigured (#154)
Fixes #153
2020-11-10 17:03:33 +09:00
Juho Saarinen
f2a2ab7ede Check token validity only when creating new pod (#159)
Fixes #143
2020-11-10 17:02:30 +09:00
Juho Saarinen
40c5050978 Added support for other than public GitHub URL (#146)
Refactoring a bit
2020-10-28 22:15:53 +09:00
Juho Saarinen
99a53a6e79 Releasing latest controller from master push (#147)
Fixes #135
2020-10-28 22:13:35 +09:00
Yusuke Kuoka
6d78fb07b3 Fix permission error with the default setup since v0.9.4 (#142)
Fixes #138
2020-10-25 11:25:48 +09:00
Yusuke Kuoka
faaca10fba Rename Runner.Spec.dockerWithinRunnerContainer to docker"d"WithinRunnerContainer (#134)
* Rename Runner.Spec.dockerWithinRunnerContainer to dockerdWithinRunnerContainer

Ref https://github.com/summerwind/actions-runner-controller/pull/126#issuecomment-712501790
2020-10-21 21:32:40 +09:00
Juho Saarinen
d16dfac0f8 Restart if pod ends up succeeded (#136)
Fixes #132
2020-10-21 21:32:26 +09:00
Juho Saarinen
af483d83da Possibility to define resources for DIND container (#125)
Ref #119
2020-10-21 10:26:19 +09:00
Juho Saarinen
92920926fe Configurable "runner and DinD in a single container" (#126) 2020-10-20 08:48:28 +09:00
Brendan Galloway
7d0bfb77e3 Inject Env Vars into Runner defined Container Spec (#127)
The runner token is now injected into the `runner` container defined within Runner.Spec.Containers[]
2020-10-20 08:43:53 +09:00
Brendan Galloway
c4074130e8 Patch additional protocol instances during manifest generation (#129)
Fixes #128
2020-10-19 09:57:53 +09:00
Yusuke Kuoka
be2e61f209 Add "alternatives" section to README (#124)
Grow together :)
2020-10-15 08:40:02 +09:00
Juho Saarinen
da818a898a Manifests valid for K8s 1.18 and 1.19 (#123)
Fixes #113
Fixes #116
2020-10-15 08:39:46 +09:00
Jesse Newland
2d250d5e06 Fix release build (#122) 2020-10-14 08:43:14 +09:00
Jesse Newland
231cde1531 Update build-runner workflow to be compatible with forks, fix image push (#117)
Partly revert and enhances #115

This is a follow-up to #115 that replaces the hardcoded `summerwind` portion of the image name with `${{ github.repository_owner }}` to enable contributors to test the image pushing behavior and fixes image building by conditionally passing `--push` to the build step based on the event that triggered the workflow.

After setting the `DOCKER_ACCESS_TOKEN` Secret on my fork of this repository, I was able to use this updated workflow to [build and push](https://github.com/urcomputeringpal/actions-runner-controller/runs/1242793758?check_suite_focus=true) a [set of images](https://hub.docker.com/r/urcomputeringpal/actions-runner/tags) and confirm their functionality. I imagine this will be useful to future contributors who wish to help with the chore of keeping up with https://github.com/actions/runner/releases.
2020-10-13 21:14:36 +09:00
Hayden Fuss
c986c5553d Fixing Docker Build and Push for Runner Image (#115)
A new image tag for the runner stopped being published on master merges due to changes in #86.

This fixes that in the following ways:
- Tests the GH workflow on PRs w/o pushing the images
- Runner and Docker version are moved from Makefile to GH Actions workflow and are passed in as build args
- GHA workflow runs on PRs, and if the workflow file itself is changed (i.e. version bump) or the runner Docker source changes (excluding the Makefile since that's just for local dev)
- Images are pushed on push (i.e. a merge)
2020-10-09 09:16:24 +09:00
109 changed files with 14607 additions and 17790 deletions


@@ -0,0 +1,71 @@
name: Build and Release Runners
on:
  pull_request:
    branches:
      - '**'
    paths:
      - 'runner/**'
      - .github/workflows/build-and-release-runners.yml
  push:
    branches:
      - master
    paths:
      - runner/patched/*
      - runner/Dockerfile
      - runner/dindrunner.Dockerfile
      - runner/entrypoint.sh
      - .github/workflows/build-and-release-runners.yml
jobs:
  build:
    runs-on: ubuntu-latest
    name: Build ${{ matrix.name }}
    strategy:
      matrix:
        include:
          - name: actions-runner
            dockerfile: Dockerfile
          - name: actions-runner-dind
            dockerfile: dindrunner.Dockerfile
    env:
      RUNNER_VERSION: 2.277.1
      DOCKER_VERSION: 19.03.12
      DOCKERHUB_USERNAME: ${{ github.repository_owner }}
    steps:
      - name: Set outputs
        id: vars
        run: echo ::set-output name=sha_short::${GITHUB_SHA::7}
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
        with:
          version: latest
      - name: Login to DockerHub
        uses: docker/login-action@v1
        if: ${{ github.event_name == 'push' || github.event_name == 'release' }}
        with:
          username: ${{ github.repository_owner }}
          password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
      - name: Build and Push
        uses: docker/build-push-action@v2
        with:
          context: ./runner
          file: ./runner/${{ matrix.dockerfile }}
          platforms: linux/amd64,linux/arm64
          push: ${{ github.event_name != 'pull_request' }}
          build-args: |
            RUNNER_VERSION=${{ env.RUNNER_VERSION }}
            DOCKER_VERSION=${{ env.DOCKER_VERSION }}
          tags: |
            ${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}
            ${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:v${{ env.RUNNER_VERSION }}-${{ steps.vars.outputs.sha_short }}
            ${{ env.DOCKERHUB_USERNAME }}/${{ matrix.name }}:latest


@@ -1,34 +0,0 @@
on:
  push:
    branches:
      - master
    paths:
      - 'runner/**'
jobs:
  build:
    runs-on: ubuntu-latest
    name: Build runner
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up Docker Buildx
        id: buildx
        uses: crazy-max/ghaction-docker-buildx@v1
        with:
          buildx-version: latest
      - name: Login to GitHub Docker Registry
        run: echo "${DOCKERHUB_PASSWORD}" | docker login -u "${DOCKERHUB_USERNAME}" --password-stdin
        env:
          DOCKERHUB_USERNAME: summerwind
          DOCKERHUB_PASSWORD: ${{ secrets.DOCKER_ACCESS_TOKEN }}
      - name: Build Container Image
        working-directory: runner
        run: |
          docker buildx build \
            --platform linux/amd64,linux/arm64 \
            --tag summerwind/actions-runner:latest \
            -f Dockerfile . --push


@@ -1,45 +0,0 @@
name: Build
on:
  push:
    branches:
      - master
    paths-ignore:
      - 'runner/**'
      - '.github/**'
jobs:
  build:
    runs-on: ubuntu-latest
    name: Build
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Install kubebuilder
        run: |
          curl -L -O https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.2.0/kubebuilder_2.2.0_linux_amd64.tar.gz
          tar zxvf kubebuilder_2.2.0_linux_amd64.tar.gz
          sudo mv kubebuilder_2.2.0_linux_amd64 /usr/local/kubebuilder
      - name: Run tests
        run: make test
      - name: Set up Docker Buildx
        id: buildx
        uses: crazy-max/ghaction-docker-buildx@v1
        with:
          buildx-version: latest
      - name: Login to GitHub Docker Registry
        run: echo "${DOCKERHUB_PASSWORD}" | docker login -u "${DOCKERHUB_USERNAME}" --password-stdin
        env:
          DOCKERHUB_USERNAME: summerwind
          DOCKERHUB_PASSWORD: ${{ secrets.DOCKER_ACCESS_TOKEN }}
      - name: Build Container Image
        run: |
          docker buildx build \
            --platform linux/amd64,linux/arm64 \
            --tag summerwind/actions-runner-controller:latest \
            -f Dockerfile . --push


@@ -0,0 +1,75 @@
name: Lint and Test Charts
on:
  push:
    paths:
      - 'charts/**'
      - '.github/**'
  workflow_dispatch:
env:
  KUBE_SCORE_VERSION: 1.10.0
  HELM_VERSION: v3.4.1
jobs:
  lint-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Set up Helm
        uses: azure/setup-helm@v1
        with:
          version: ${{ env.HELM_VERSION }}
      - name: Set up kube-score
        run: |
          wget https://github.com/zegl/kube-score/releases/download/v${{ env.KUBE_SCORE_VERSION }}/kube-score_${{ env.KUBE_SCORE_VERSION }}_linux_amd64 -O kube-score
          chmod 755 kube-score
      - name: Kube-score generated manifests
        run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score -
          --ignore-test pod-networkpolicy
          --ignore-test deployment-has-poddisruptionbudget
          --ignore-test deployment-has-host-podantiaffinity
          --ignore-test container-security-context
          --ignore-test pod-probes
          --ignore-test container-image-tag
          --enable-optional-test container-security-context-privileged
          --enable-optional-test container-security-context-readonlyrootfilesystem
      # python is a requirement for the chart-testing action below (supports yamllint among other tests)
      - uses: actions/setup-python@v2
        with:
          python-version: 3.7
      - name: Set up chart-testing
        uses: helm/chart-testing-action@v2.0.1
      - name: Run chart-testing (list-changed)
        id: list-changed
        run: |
          changed=$(ct list-changed --config charts/.ci/ct-config.yaml)
          if [[ -n "$changed" ]]; then
            echo "::set-output name=changed::true"
          fi
      - name: Run chart-testing (lint)
        run: ct lint --config charts/.ci/ct-config.yaml
      - name: Create kind cluster
        uses: helm/kind-action@v1.0.0
        if: steps.list-changed.outputs.changed == 'true'
      # We need cert-manager already installed in the cluster because we assume the CRDs exist
      - name: Install cert-manager
        run: |
          helm repo add jetstack https://charts.jetstack.io --force-update
          helm install cert-manager jetstack/cert-manager --set installCRDs=true --wait
        if: steps.list-changed.outputs.changed == 'true'
      - name: Run chart-testing (install)
        run: ct install --config charts/.ci/ct-config.yaml


@@ -0,0 +1,101 @@
name: Publish helm chart
on:
  push:
    branches:
      - master
      - main # assume that the branch name may change in future
    paths:
      - 'charts/**'
      - '.github/**'
  workflow_dispatch:
env:
  KUBE_SCORE_VERSION: 1.10.0
  HELM_VERSION: v3.4.1
jobs:
  lint-chart:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Set up Helm
        uses: azure/setup-helm@v1
        with:
          version: ${{ env.HELM_VERSION }}
      - name: Set up kube-score
        run: |
          wget https://github.com/zegl/kube-score/releases/download/v${{ env.KUBE_SCORE_VERSION }}/kube-score_${{ env.KUBE_SCORE_VERSION }}_linux_amd64 -O kube-score
          chmod 755 kube-score
      - name: Kube-score generated manifests
        run: helm template --values charts/.ci/values-kube-score.yaml charts/* | ./kube-score score -
          --ignore-test pod-networkpolicy
          --ignore-test deployment-has-poddisruptionbudget
          --ignore-test deployment-has-host-podantiaffinity
          --ignore-test container-security-context
          --ignore-test pod-probes
          --ignore-test container-image-tag
          --enable-optional-test container-security-context-privileged
          --enable-optional-test container-security-context-readonlyrootfilesystem
      # python is a requirement for the chart-testing action below (supports yamllint among other tests)
      - uses: actions/setup-python@v2
        with:
          python-version: 3.7
      - name: Set up chart-testing
        uses: helm/chart-testing-action@v2.0.1
      - name: Run chart-testing (list-changed)
        id: list-changed
        run: |
          changed=$(ct list-changed --config charts/.ci/ct-config.yaml)
          if [[ -n "$changed" ]]; then
            echo "::set-output name=changed::true"
          fi
      - name: Run chart-testing (lint)
        run: ct lint --config charts/.ci/ct-config.yaml
      - name: Create kind cluster
        uses: helm/kind-action@v1.0.0
        if: steps.list-changed.outputs.changed == 'true'
      # We need cert-manager already installed in the cluster because we assume the CRDs exist
      - name: Install cert-manager
        run: |
          helm repo add jetstack https://charts.jetstack.io --force-update
          helm install cert-manager jetstack/cert-manager --set installCRDs=true --wait
        if: steps.list-changed.outputs.changed == 'true'
      - name: Run chart-testing (install)
        run: ct install --config charts/.ci/ct-config.yaml
        if: steps.list-changed.outputs.changed == 'true'
  publish-chart:
    runs-on: ubuntu-latest
    needs: lint-chart
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0
      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "$GITHUB_ACTOR@users.noreply.github.com"
      - name: Run chart-releaser
        uses: helm/chart-releaser-action@v1.1.0
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"


@@ -6,7 +6,13 @@ jobs:
  build:
    runs-on: ubuntu-latest
    name: Release
    env:
      DOCKERHUB_USERNAME: ${{ github.repository_owner }}
    steps:
      - name: Set outputs
        id: vars
        run: echo ::set-output name=sha_short::${GITHUB_SHA::7}
      - name: Checkout
        uses: actions/checkout@v2
@@ -22,28 +28,36 @@ jobs:
          sudo mv ghr_v0.13.0_linux_amd64/ghr /usr/local/bin
      - name: Set version
        run: echo "::set-env name=VERSION::$(cat ${GITHUB_EVENT_PATH} | jq -r '.release.tag_name')"
        run: echo "VERSION=$(cat ${GITHUB_EVENT_PATH} | jq -r '.release.tag_name')" >> $GITHUB_ENV
      - name: Upload artifacts
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: make github-release
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        id: buildx
        uses: crazy-max/ghaction-docker-buildx@v1
        uses: docker/setup-buildx-action@v1
        with:
          buildx-version: latest
          version: latest
      - name: Login to GitHub Docker Registry
        run: echo "${DOCKERHUB_PASSWORD}" | docker login -u "${DOCKERHUB_USERNAME}" --password-stdin
        env:
          DOCKERHUB_USERNAME: summerwind
          DOCKERHUB_PASSWORD: ${{ secrets.DOCKER_ACCESS_TOKEN }}
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ github.repository_owner }}
          password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
      - name: Build and Push
        uses: docker/build-push-action@v2
        with:
          file: Dockerfile
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            ${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:latest
            ${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}
            ${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:${{ env.VERSION }}-${{ steps.vars.outputs.sha_short }}
      - name: Build Container Image
        run: |
          docker buildx build \
            --platform linux/amd64,linux/arm64 \
            --tag summerwind/actions-runner-controller:${{ env.VERSION }} \
            -f Dockerfile . --push


@@ -6,6 +6,7 @@ on:
      - master
    paths-ignore:
      - 'runner/**'
      - .github/workflows/build-and-release-runners.yml
jobs:
  test:

.github/workflows/wip.yml (new file)

@@ -0,0 +1,42 @@
on:
  push:
    branches:
      - master
    paths-ignore:
      - "runner/**"
jobs:
  build:
    runs-on: ubuntu-latest
    name: release-latest
    env:
      DOCKERHUB_USERNAME: ${{ github.repository_owner }}
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
        with:
          version: latest
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ github.repository_owner }}
          password: ${{ secrets.DOCKER_ACCESS_TOKEN }}
      # Considered unstable builds
      # See Issue #285, PR #286, and PR #323 for more information
      - name: Build and Push
        uses: docker/build-push-action@v2
        with:
          file: Dockerfile
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            ${{ env.DOCKERHUB_USERNAME }}/actions-runner-controller:canary

.gitignore

@@ -23,3 +23,9 @@ bin
*.swp
*.swo
*~
.envrc
*.pem
# OS
.DS_STORE


@@ -1,5 +1,5 @@
# Build the manager binary
FROM golang:1.13 as builder
FROM golang:1.15 as builder
ARG TARGETPLATFORM
@@ -22,7 +22,8 @@ COPY . .
RUN export GOOS=$(echo ${TARGETPLATFORM} | cut -d / -f1) && \
    export GOARCH=$(echo ${TARGETPLATFORM} | cut -d / -f2) && \
    GOARM=$(echo ${TARGETPLATFORM} | cut -d / -f3 | cut -c2-) && \
    go build -a -o manager main.go
    go build -a -o manager main.go && \
    go build -a -o github-webhook-server ./cmd/githubwebhookserver
# Use distroless as minimal base image to package the manager binary
# Refer to https://github.com/GoogleContainerTools/distroless for more details
@@ -31,6 +32,7 @@ FROM gcr.io/distroless/static:nonroot
WORKDIR /
COPY --from=builder /workspace/manager .
COPY --from=builder /workspace/github-webhook-server .
USER nonroot:nonroot


@@ -1,5 +1,8 @@
NAME ?= summerwind/actions-runner-controller
VERSION ?= latest
# From https://github.com/VictoriaMetrics/operator/pull/44
YAML_DROP=$(YQ) delete --inplace
YAML_DROP_PREFIX=spec.validation.openAPIV3Schema.properties.spec.properties
# Produce CRDs that work back to Kubernetes 1.11 (no version conversion)
CRD_OPTIONS ?= "crd:trivialVersions=true"
@@ -56,9 +59,14 @@ deploy: manifests
kustomize build config/default | kubectl apply -f -
# Generate manifests e.g. CRD, RBAC etc.
manifests: controller-gen
manifests: manifests-118 fix118 chart-crds
manifests-118: controller-gen
$(CONTROLLER_GEN) $(CRD_OPTIONS) rbac:roleName=manager-role webhook paths="./..." output:crd:artifacts:config=config/crd/bases
chart-crds:
cp config/crd/bases/*.yaml charts/actions-runner-controller/crds/
# Run go fmt against code
fmt:
go fmt ./...
@@ -67,6 +75,22 @@ fmt:
vet:
go vet ./...
# workaround for CRD issue with k8s 1.18 & controller-gen
# ref: https://github.com/kubernetes/kubernetes/issues/91395
fix118: yq
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.containers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.initContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.sidecarContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerreplicasets.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.ephemeralContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.containers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.initContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.sidecarContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runnerdeployments.yaml $(YAML_DROP_PREFIX).template.properties.spec.properties.ephemeralContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).containers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).initContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).sidecarContainers.items.properties
$(YAML_DROP) config/crd/bases/actions.summerwind.dev_runners.yaml $(YAML_DROP_PREFIX).ephemeralContainers.items.properties
# Generate code
generate: controller-gen
$(CONTROLLER_GEN) object:headerFile=./hack/boilerplate.go.txt paths="./..."
@@ -97,6 +121,37 @@ release: manifests
mkdir -p release
kustomize build config/default > release/actions-runner-controller.yaml
.PHONY: release/clean
release/clean:
rm -rf release
.PHONY: acceptance
acceptance: release/clean docker-build docker-push release
ACCEPTANCE_TEST_SECRET_TYPE=token make acceptance/kind acceptance/setup acceptance/tests acceptance/teardown
ACCEPTANCE_TEST_SECRET_TYPE=app make acceptance/kind acceptance/setup acceptance/tests acceptance/teardown
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=token make acceptance/kind acceptance/setup acceptance/tests acceptance/teardown
ACCEPTANCE_TEST_DEPLOYMENT_TOOL=helm ACCEPTANCE_TEST_SECRET_TYPE=app make acceptance/kind acceptance/setup acceptance/tests acceptance/teardown
acceptance/kind:
kind create cluster --name acceptance
kubectl cluster-info --context kind-acceptance
acceptance/setup:
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v1.0.4/cert-manager.yaml #kubectl create namespace actions-runner-system
kubectl -n cert-manager wait deploy/cert-manager-cainjector --for condition=available --timeout 60s
kubectl -n cert-manager wait deploy/cert-manager-webhook --for condition=available --timeout 60s
kubectl -n cert-manager wait deploy/cert-manager --for condition=available --timeout 60s
kubectl create namespace actions-runner-system || true
# Ad-hoc wait for some time until cert-manager's admission webhook gets ready
sleep 5
acceptance/teardown:
kind delete cluster --name acceptance
acceptance/tests:
acceptance/deploy.sh
acceptance/checks.sh
# Upload release file to GitHub.
github-release: release
ghr ${VERSION} release/
@@ -105,6 +160,7 @@ github-release: release
# download controller-gen if necessary
controller-gen:
ifeq (, $(wildcard $(GOBIN)/controller-gen))
@{ \
set -e ;\
CONTROLLER_GEN_TMP_DIR=$$(mktemp -d) ;\
@@ -113,7 +169,25 @@ ifeq (, $(shell which controller-gen))
go get sigs.k8s.io/controller-tools/cmd/controller-gen@v0.3.0 ;\
rm -rf $$CONTROLLER_GEN_TMP_DIR ;\
}
endif
CONTROLLER_GEN=$(GOBIN)/controller-gen
# find or download yq
# download yq if necessary
# Always use the go version to get consistent line wraps etc.
yq:
ifeq (, $(wildcard $(GOBIN)/yq))
echo "Downloading yq"
@{ \
set -e ;\
YQ_TMP_DIR=$$(mktemp -d) ;\
cd $$YQ_TMP_DIR ;\
go mod init tmp ;\
go get github.com/mikefarah/yq/v3@3.4.0 ;\
rm -rf $$YQ_TMP_DIR ;\
}
endif
YQ=$(GOBIN)/yq

564
README.md

@@ -1,7 +1,33 @@
# actions-runner-controller
[![awesome-runners](https://img.shields.io/badge/listed%20on-awesome--runners-blue.svg)](https://github.com/jonico/awesome-runners)
This controller operates self-hosted runners for GitHub Actions on your Kubernetes cluster.
ToC:
- [Motivation](#motivation)
- [Installation](#installation)
- [GitHub Enterprise support](#github-enterprise-support)
- [Setting up authentication with GitHub API](#setting-up-authentication-with-github-api)
- [Deploying using GitHub App Authentication](#deploying-using-github-app-authentication)
- [Deploying using PAT Authentication](#deploying-using-pat-authentication)
- [Usage](#usage)
- [Repository Runners](#repository-runners)
- [Organization Runners](#organization-runners)
- [Runner Deployments](#runnerdeployments)
- [Autoscaling](#autoscaling)
- [Faster Autoscaling with GitHub Webhook](#faster-autoscaling-with-github-webhook)
- [Runner with DinD](#runner-with-dind)
- [Additional tweaks](#additional-tweaks)
- [Runner labels](#runner-labels)
- [Runner groups](#runner-groups)
- [Using EKS IAM role for service accounts](#using-eks-iam-role-for-service-accounts)
- [Software installed in the runner image](#software-installed-in-the-runner-image)
- [Common errors](#common-errors)
- [Developing](#developing)
- [Alternatives](#alternatives)
## Motivation
[GitHub Actions](https://github.com/features/actions) is a very useful tool for automating development. GitHub Actions jobs are run in the cloud by default, but you may want to run your jobs in your own environment. A [self-hosted runner](https://github.com/actions/runner) can be used for such use cases, but it requires the provisioning and configuration of a virtual machine instance. If you already have a Kubernetes cluster, it instead makes more sense to run the self-hosted runner on top of it.
@@ -14,30 +40,89 @@ actions-runner-controller uses [cert-manager](https://cert-manager.io/docs/insta
- [Installing cert-manager on Kubernetes](https://cert-manager.io/docs/installation/kubernetes/)
Install the custom resource and actions-runner-controller with `kubectl` or `helm`. This will create an actions-runner-system namespace in your Kubernetes cluster and deploy the required resources.

`kubectl`:

```shell
# REPLACE "v0.17.0" with the version you wish to deploy
kubectl apply -f https://github.com/summerwind/actions-runner-controller/releases/download/v0.17.0/actions-runner-controller.yaml
```
`helm`:
```shell
helm repo add actions-runner-controller https://summerwind.github.io/actions-runner-controller
helm upgrade --install -n actions-runner-system --create-namespace actions-runner-controller actions-runner-controller/actions-runner-controller
```
### GitHub Enterprise support

If you use either GitHub Enterprise Cloud or Server, you can use **actions-runner-controller** with those, too.
Authentication works the same way as with public GitHub (repo and organization level).
The minimum supported version of GitHub Enterprise Server is 3.0.0 (or rc1/rc2).

__**NOTE: The maintainers do not have an Enterprise environment to test changes against and so rely on the community for testing; support is on a best-endeavours basis only and is community driven**__
```shell
kubectl set env deploy controller-manager -c manager GITHUB_ENTERPRISE_URL=<GHEC/S URL> --namespace actions-runner-system
```
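The command above injects an environment variable into the manager container. Expressed as plain YAML, the resulting env entry would look like the minimal sketch below; the URL is a placeholder for your own GHEC/S instance:

```yaml
# Sketch of the env entry set on the manager container by the command above
env:
  - name: GITHUB_ENTERPRISE_URL
    value: https://ghe.example.com  # placeholder
```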
#### Enterprise runners usage

In order to use enterprise runners you must have admin access to GitHub Enterprise, and you should create a Personal Access Token (PAT)
with the `admin:enterprise` scope. Enterprise runners cannot be registered with a GitHub App or any other set of permissions.

When you use enterprise runners, those runners get access to your GitHub organisations. However, access to the repositories is **NOT**
allowed by default; each GitHub organisation must allow enterprise runner groups to be used in its repositories.
This is needed only one time and is permanent after that.
Example:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: ghe-runner-deployment
spec:
replicas: 2
template:
spec:
enterprise: your-enterprise-name
dockerdWithinRunnerContainer: true
resources:
limits:
cpu: "4000m"
memory: "2Gi"
requests:
cpu: "200m"
memory: "200Mi"
volumeMounts:
- mountPath: /runner
name: runner
volumes:
- name: runner
emptyDir: {}
```
## Setting up authentication with GitHub API
There are two ways for actions-runner-controller to authenticate with the GitHub API (only one can be configured at a time, however):
1. Using GitHub App.
2. Using Personal Access Token.
**NOTE: It is extremely important to only follow one of the sections below and not both.**
Functionality-wise, there isn't a difference between the two authentication methods. There are, however, some benefits to using a GitHub App for authentication over a PAT, such as an [increased API quota](https://docs.github.com/en/developers/apps/rate-limits-for-github-apps); if you run into rate limiting, consider deploying this solution using GitHub App authentication instead.
### Deploying using GitHub App Authentication
You can create a GitHub App for either your account or any organization. If you want to create a GitHub App for your account, open the following link to the creation page, enter any unique name in the "GitHub App name" field, and hit the "Create GitHub App" button at the bottom of the page.
- [Create GitHub Apps on your account](https://github.com/settings/apps/new?url=http://github.com/summerwind/actions-runner-controller&webhook_active=false&public=false&administration=write&actions=read)
If you want to create a GitHub App for your organization, replace the `:org` part of the following URL with your organization name before opening it. Then enter any unique name in the "GitHub App name" field, and hit the "Create GitHub App" button at the bottom of the page to create a GitHub App.
- [Create GitHub Apps on your organization](https://github.com/organizations/:org/settings/apps/new?url=http://github.com/summerwind/actions-runner-controller&webhook_active=false&public=false&administration=write&organization_self_hosted_runners=write&actions=read)
You will see an *App ID* on the page of the GitHub App you created as follows; the value of this App ID will be used later.
@@ -58,7 +143,7 @@ When the installation is complete, you will be taken to a URL in one of the foll
Finally, register the App ID (`APP_ID`), Installation ID (`INSTALLATION_ID`), and the downloaded private key file (`PRIVATE_KEY_FILE_PATH`) to Kubernetes as a Secret.

```shell
$ kubectl create secret generic controller-manager \
-n actions-runner-system \
--from-literal=github_app_id=${APP_ID} \
@@ -66,22 +151,32 @@ $ kubectl create secret generic controller-manager \
--from-file=github_app_private_key=${PRIVATE_KEY_FILE_PATH}
```
### Deploying using PAT Authentication

Personal Access Tokens can be used to register a self-hosted runner with *actions-runner-controller*.

Self-hosted runners in GitHub can either be connected to a single repository, or to a GitHub organization (so they are available to all repositories in the organization). How you plan on using the runner will affect what scopes are needed for the token.

Log in to a GitHub account that has `admin` privileges for the repository, and [create a personal access token](https://github.com/settings/tokens/new) with the appropriate scopes listed below:

**Scopes for a Repository Runner**

* repo (Full control)

**Scopes for an Organisation Runner**

* repo (Full control)
* admin:org (Full control)
* admin:public_key - read:public_key
* admin:repo_hook - read:repo_hook
* admin:org_hook
* notifications
* workflow

Once you have created the appropriate token, deploy it as a secret to the Kubernetes cluster that you are going to deploy the solution on:

```shell
kubectl create secret generic controller-manager \
-n actions-runner-system \
--from-literal=github_token=${GITHUB_TOKEN}
```
@@ -93,11 +188,11 @@ There are two ways to use this controller:
- Manage runners one by one with `Runner`.
- Manage a set of runners with `RunnerDeployment`.
### Repository Runners
To launch a single self-hosted runner, you need to create a manifest file that includes a *Runner* resource as follows. This example launches a self-hosted runner with the name *example-runner* for the *summerwind/actions-runner-controller* repository.
```yaml
# runner.yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
@@ -110,14 +205,14 @@ spec:
Apply the created manifest file to your Kubernetes cluster.

```shell
$ kubectl apply -f runner.yaml
runner.actions.summerwind.dev/example-runner created
```
You can see that the Runner resource has been created.
```shell
$ kubectl get runners
NAME REPOSITORY STATUS
example-runner summerwind/actions-runner-controller Running
@@ -125,7 +220,7 @@ example-runner summerwind/actions-runner-controller Running
You can also see that the runner pod is running.

```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
example-runner 2/2 Running 0 1m
@@ -141,7 +236,7 @@ Now you can use your self-hosted runner. See the [official documentation](https:
To add the runner to an organization, you only need to replace the `repository` field with `organization`; the runner will then register itself with the organization.

```yaml
# runner.yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: Runner
@@ -175,14 +270,14 @@ spec:
Apply the manifest file to your cluster:
```shell
$ kubectl apply -f runner.yaml
runnerdeployment.actions.summerwind.dev/example-runnerdeploy created
```
You can see that 2 runners have been created as specified by `replicas: 2`:
```shell
$ kubectl get runners
NAME REPOSITORY STATUS
example-runnerdeploy2475h595fr mumoshu/actions-runner-controller-ci Running
@@ -191,11 +286,35 @@ example-runnerdeploy2475ht2qbr mumoshu/actions-runner-controller-ci Running
#### Autoscaling
A `RunnerDeployment` can scale the number of runners between the `minReplicas` and `maxReplicas` fields, based on the chosen scaling metric as defined in the `metrics` attribute.

**Scaling Metrics**

**TotalNumberOfQueuedAndInProgressWorkflowRuns**

In the below example, `actions-runner` will poll GitHub for all pending workflow runs, with the poll period defined by the sync period configuration. It will then scale to e.g. 3 if there are 3 pending jobs at sync time.
This scaling metric requires you to define a list of repositories within the metric.
The scale-out performance is controlled via the manager container's startup `--sync-period` argument. The default value is set to 10 minutes to prevent default deployments from rate-limiting themselves against the GitHub API.

**Kustomize Config:** The period can be customised in the `config/default/manager_auth_proxy_patch.yaml` patch<br />
**Helm Config:** `syncPeriod`
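For example, with the Helm chart the sync period can be set in your values file. A minimal sketch; `5m` mirrors the value used by the acceptance-test script later in this diff, and you should pick a period that fits your API rate budget:

```yaml
# values.yaml
syncPeriod: 5m
```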
**Benefits of this metric**

1. Supports named repositories, allowing you to restrict the runner to a specified set of repositories server-side.
2. Scales the runner count based on the actual queue depth of the jobs, meaning a more 1:1 scaling of runners to queued jobs.
3. Like all scaling metrics, you can manage workflow allocation to the RunnerDeployment through the use of [GitHub labels](#runner-labels).

**Drawbacks of this metric**

1. Repositories must be named within the scaling metric; maintaining a list of repositories may not be viable in larger or self-serve environments.
2. May not scale quickly enough for some users' needs. This metric is pull-based, so the queue depth is polled as configured by the sync period; as a result, scaling performance is bound by the sync period, meaning there is a lag to scaling activity.
3. A relatively large number of API requests is required to maintain this metric; you may run into API rate-limiting issues depending on the size of your environment and how aggressive your sync period configuration is.
Example `RunnerDeployment` backed by a `HorizontalRunnerAutoscaler`
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
@@ -220,38 +339,215 @@ spec:
- summerwind/actions-runner-controller
```
Please also note that the sync period is set to 10 minutes by default and is configurable via the `--sync-period` flag.

Additionally, the autoscaling feature has an anti-flapping option that prevents a periodic loop of scaling up and down.
By default, it doesn't scale down until a grace period of 10 minutes passes after a scale up. The grace period can be configured by setting `scaleDownDelaySecondsAfterScaleOut`:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runner-deployment
spec:
  template:
    spec:
      repository: summerwind/actions-runner-controller
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-runner-deployment-autoscaler
spec:
  scaleTargetRef:
    name: example-runner-deployment
  minReplicas: 1
  maxReplicas: 3
  scaleDownDelaySecondsAfterScaleOut: 60
  metrics:
  - type: TotalNumberOfQueuedAndInProgressWorkflowRuns
    repositoryNames:
    - summerwind/actions-runner-controller
```
**PercentageRunnersBusy**

The `HorizontalRunnerAutoscaler` will poll GitHub, based on the configured sync period, for the number of busy runners which live in the RunnerDeployment's namespace, and scale based on the settings.
**Kustomize Config:** The period can be customised in the `config/default/manager_auth_proxy_patch.yaml` patch<br />
**Helm Config:** `syncPeriod`
**Benefits of this metric**

1. Supports named repositories server-side, the same as the `TotalNumberOfQueuedAndInProgressWorkflowRuns` metric [#313](https://github.com/summerwind/actions-runner-controller/pull/313)
2. Supports GitHub organisation-wide scaling without maintaining an explicit list of repositories, which is especially useful for those working at a larger scale. [#223](https://github.com/summerwind/actions-runner-controller/pull/223)
3. Like all scaling metrics, you can manage workflow allocation to the RunnerDeployment through the use of [GitHub labels](#runner-labels)
4. Supports scaling the desired runner count on both a percentage increase/decrease basis as well as on a fixed increase/decrease count basis [#223](https://github.com/summerwind/actions-runner-controller/pull/223) [#315](https://github.com/summerwind/actions-runner-controller/pull/315)

**Drawbacks of this metric**

1. May not scale quickly enough for some users' needs. This metric is pull-based, so the number of busy runners is polled as configured by the sync period; as a result, scaling performance is bound by the sync period, meaning there is a lag to scaling activity.
2. Scaling up and down is based on indicative information rather than a count of the actual number of queued jobs, so the desired runner count is likely to under- or over-provision runners relative to the actual job queue depth; this may or may not be a problem for you.
Examples of each scaling type implemented with a `RunnerDeployment` backed by a `HorizontalRunnerAutoscaler`:
```yaml
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name: example-runner-deployment-autoscaler
spec:
scaleTargetRef:
name: example-runner-deployment
minReplicas: 1
maxReplicas: 3
metrics:
- type: PercentageRunnersBusy
scaleUpThreshold: '0.75' # The percentage of busy runners at which the number of desired runners are re-evaluated to scale up
scaleDownThreshold: '0.3' # The percentage of busy runners at which the number of desired runners are re-evaluated to scale down
scaleUpFactor: '1.4' # The scale up multiplier factor applied to desired count
scaleDownFactor: '0.7' # The scale down multiplier factor applied to desired count
```
```yaml
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
name: example-runner-deployment-autoscaler
spec:
scaleTargetRef:
name: example-runner-deployment
minReplicas: 1
maxReplicas: 3
metrics:
- type: PercentageRunnersBusy
scaleUpThreshold: '0.75' # The percentage of busy runners at which the number of desired runners are re-evaluated to scale up
scaleDownThreshold: '0.3' # The percentage of busy runners at which the number of desired runners are re-evaluated to scale down
scaleUpAdjustment: 2 # The scale up runner count added to desired count
scaleDownAdjustment: 1 # The scale down runner count subtracted from the desired count
```
Like the previous metric, scale down respects the anti-flapping configuration applied to the `HorizontalRunnerAutoscaler`, as mentioned previously:
```yaml
spec:
scaleDownDelaySecondsAfterScaleOut: 60
```
#### Faster Autoscaling with GitHub Webhook
> This feature is an ADVANCED feature which may require more work to set up.
> Please be prepared to put some time and effort into learning and leveraging this feature!
`actions-runner-controller` has an optional Webhook server that receives GitHub Webhook events and scales
[`RunnerDeployment`s](#runnerdeployments) by updating the corresponding [`HorizontalRunnerAutoscaler`s](#autoscaling).

Today, the Webhook server can be configured to respond to GitHub `check_run`, `pull_request`, and `push` events
by scaling up the matching `HorizontalRunnerAutoscaler` by N replica(s), where `N` is configurable within
the `HorizontalRunnerAutoscaler`'s `Spec`.
More concretely, you can configure the targeted GitHub event types and the `N` in
`scaleUpTriggers`:
```yaml
kind: HorizontalRunnerAutoscaler
spec:
scaleTargetRef:
name: myrunners
scaleUpTriggers:
- githubEvent:
checkRun:
types: ["created"]
status: "queued"
amount: 1
duration: "5m"
```
With the above example, the webhook server scales `myrunners` by `1` replica for 5 minutes on each `check_run` event received with the type `created` and the status `queued`.
The primary benefit of autoscaling on Webhooks compared to the standard autoscaling is that the former allows you to
immediately add "resource slack" for future GitHub Actions job runs.

In contrast, the standard autoscaling requires you to wait for the next sync period to add the missing runners. You can definitely shorten the sync period to make the standard autoscaling more responsive, but doing so will eventually render the controller non-functional due to the GitHub API rate limit.
> You can learn the implementation details in #282
To enable this feature, you firstly need to install the webhook server. Currently, only our Helm chart has the ability to install it.
```console
$ helm upgrade --install actions-runner-controller actions-runner-controller/actions-runner-controller \
  --set githubWebhookServer.enabled=true \
  --set "githubWebhookServer.ports[0].nodePort=33080"
```
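If you prefer a values file over `--set` flags, the equivalent configuration would look like the sketch below, derived from the flags above; consult the chart's values file for the authoritative schema:

```yaml
# values.yaml
githubWebhookServer:
  enabled: true
  ports:
    - nodePort: 33080
```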
The above command will expose node port 33080 for Webhook events. Usually, you need to create an external loadbalancer targeting the node port, and register the hostname or the IP address of the external loadbalancer with the GitHub Webhook.
Once you have confirmed that the Webhook server is ready and running from GitHub - this is usually verified by GitHub sending PING events to the Webhook server - create or update your `HorizontalRunnerAutoscaler` resources using the following configuration examples:
- [Example 1: Scale up on each `check_run` event](#example-1-scale-up-on-each-check_run-event)
- [Example 2: Scale on each `pull_request` event against `develop` or `main` branches](#example-2-scale-on-each-pull_request-event-against-develop-or-main-branches)
##### Example 1: Scale up on each `check_run` event
> Note: This should work almost like https://github.com/philips-labs/terraform-aws-github-runner
To scale up replicas of the runners for `example/myrepo` by 1 for 5 minutes on each `check_run`, you write manifests like the below:
```yaml
kind: RunnerDeployment
metadata:
name: myrunners
spec:
repository: example/myrepo
---
kind: HorizontalRunnerAutoscaler
spec:
scaleTargetRef:
name: myrunners
scaleUpTriggers:
- githubEvent:
checkRun:
types: ["created"]
status: "queued"
amount: 1
duration: "5m"
```
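The `checkRun` trigger also supports a `names` field, a list of glob patterns matched against the check_run name (which typically equals the job name), so you can scale only for specific jobs. A minimal sketch, where `build` is a hypothetical job name:

```yaml
kind: HorizontalRunnerAutoscaler
spec:
  scaleTargetRef:
    name: myrunners
  scaleUpTriggers:
  - githubEvent:
      checkRun:
        types: ["created"]
        status: "queued"
        # Glob patterns matched against the check_run (job) name; "build" is hypothetical
        names: ["build"]
    amount: 1
    duration: "5m"
```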
##### Example 2: Scale on each `pull_request` event against `develop` or `main` branches
```yaml
kind: RunnerDeployment
metadata:
name: myrunners
spec:
repository: example/myrepo
---
kind: HorizontalRunnerAutoscaler
spec:
scaleTargetRef:
name: myrunners
scaleUpTriggers:
- githubEvent:
pullRequest:
types: ["synchronize"]
branches: ["main", "develop"]
amount: 1
duration: "5m"
```
See ["activity types"](https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request) for the list of valid values for `scaleUpTriggers[].githubEvent.pullRequest.types`.
### Runner with DinD
When using the default runner, the runner pod starts up 2 containers: runner and DinD (Docker-in-Docker). This might create issues if there's a `LimitRange` set on the namespace.
```yaml
# dindrunnerdeployment.yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-dindrunnerdeploy
spec:
replicas: 2
template:
spec:
image: summerwind/actions-runner-dind
dockerdWithinRunnerContainer: true
repository: mumoshu/actions-runner-controller-ci
env: []
```
This also helps with resources, as you don't need to give resources separately to docker and runner.
### Additional tweaks
You can pass details through the spec selector. Here's an example of what you may like to do:
@@ -283,6 +579,28 @@ spec:
requests:
cpu: "2.0"
memory: "4Gi"
# Timeout after a node has crashed or become unreachable before your pods are evicted somewhere else (default 5 minutes)
tolerations:
- key: "node.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 10
# true (default) = A privileged docker sidecar container is included in the runner pod.
# false = No docker sidecar container is included in the runner pod; there is no privileged container and you cannot use docker.
dockerEnabled: false
# false (default) = Docker support is provided by a sidecar container deployed in the runner pod.
# true = No docker sidecar container is deployed in the runner pod, but docker can be used within the runner container instead. The image summerwind/actions-runner-dind is used by default.
dockerdWithinRunnerContainer: true
# Docker sidecar container tweak examples below; only applicable if dockerdWithinRunnerContainer = false
dockerdContainerResources:
limits:
cpu: "4.0"
memory: "8Gi"
requests:
cpu: "2.0"
memory: "4Gi"
# Any number of additional sidecar containers
sidecarContainers:
- name: mysql
image: mysql:5.7
@@ -291,9 +609,13 @@ spec:
value: abcd1234
securityContext:
runAsUser: 0
# workDir, if not specified, defaults to /runner/_work.
# You can customise this setting to change the default working directory location;
# for example, the setting below matches the ubuntu-18.04 image
workDir: /home/runner/work
```
### Runner labels
To run a workflow job on a self-hosted runner, you can use the following syntax in your workflow:
@@ -330,27 +652,72 @@ jobs:
Note that if you specify `self-hosted` in your workflow, then this will run your job on _any_ self-hosted runner, regardless of the labels that they have.
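For instance, a workflow can target the runners above by listing their labels in `runs-on`. A minimal sketch, where `custom-runner` is a hypothetical label you would have set on your `Runner` or `RunnerDeployment`:

```yaml
# .github/workflows/example.yaml (sketch)
name: Example
on: push
jobs:
  build:
    # Matches only self-hosted runners that also carry the custom-runner label
    runs-on: [self-hosted, custom-runner]
    steps:
      - uses: actions/checkout@v2
```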
### Runner Groups

Runner groups can be used to limit which repositories are able to use the GitHub Runner at an organisation level. Runner groups have to be [created in GitHub first](https://docs.github.com/en/actions/hosting-your-own-runners/managing-access-to-self-hosted-runners-using-groups) before they can be referenced.
To add the runner to the group `NewGroup`, specify the group in your `Runner` or `RunnerDeployment` spec.
```yaml
# runnerdeployment.yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: custom-runner
spec:
replicas: 1
template:
spec:
group: NewGroup
```
### Using EKS IAM role for service accounts
`actions-runner-controller` v0.15.0 or later has support for EKS IAM role for service accounts.
As with regular pods and deployments, you firstly need an existing service account with the IAM role associated.
Create one using e.g. `eksctl`. You can refer to [the EKS documentation](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html) for more details.

Once you have set up the service account, all you need is to add `serviceAccountName` and `fsGroup` to any pods that use
the IAM-role-enabled service account.
For `RunnerDeployment`, you can set those two fields under the runner spec at `RunnerDeployment.Spec.Template`:
```yaml
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runnerdeploy
spec:
template:
spec:
repository: USER/REPO
serviceAccountName: my-service-account
securityContext:
fsGroup: 1447
```
### Software installed in the runner image
The GitHub hosted runners include a large amount of pre-installed software packages. For Ubuntu 18.04, this list can be found at <https://github.com/actions/virtual-environments/blob/master/images/linux/Ubuntu1804-README.md>
The container image is based on Ubuntu 18.04, but it does not contain all of the software installed on the GitHub runners. It contains the following subset of packages from the GitHub runners:
- Basic CLI packages
- git (2.26)
- docker
- build-essentials
The virtual environments from GitHub contain a lot more software packages (different versions of Java, Node.js, Golang, .NET, etc) which are not provided in the runner image. Most of these have dedicated setup actions which allow the tools to be installed on-demand in a workflow, for example: `actions/setup-java` or `actions/setup-node`.
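For instance, a workflow can install Node.js on demand instead of baking it into the runner image. A minimal sketch:

```yaml
steps:
  - uses: actions/checkout@v2
  # Installs Node.js on demand in the workflow rather than in the runner image
  - uses: actions/setup-node@v2
    with:
      node-version: '14'
```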
If there is a need to include packages in the runner image for which there is no setup action, then this can be achieved by building a custom container image for the runner. The easiest way is to start with the `summerwind/actions-runner` image and install the extra dependencies directly in the docker image:
```shell
FROM summerwind/actions-runner:latest

RUN sudo apt update -y \
  && sudo apt install YOUR_PACKAGE \
  && sudo rm -rf /var/lib/apt/lists/*
```
You can then configure the runner to use a custom docker image by configuring the `image` field of a `Runner` or `RunnerDeployment`:
@@ -364,3 +731,68 @@ spec:
repository: summerwind/actions-runner-controller
image: YOUR_CUSTOM_DOCKER_IMAGE
```
### Common Errors
#### invalid header field value
```json
2020-11-12T22:17:30.693Z ERROR controller-runtime.controller Reconciler error {"controller": "runner", "request": "actions-runner-system/runner-deployment-dk7q8-dk5c9", "error": "failed to create registration token: Post \"https://api.github.com/orgs/$YOUR_ORG_HERE/actions/runners/registration-token\": net/http: invalid header field value \"Bearer $YOUR_TOKEN_HERE\\n\" for key Authorization"}
```
**Solutions**<br />
Your base64'ed PAT token has a new line at the end; it needs to be created without a trailing `\n`:

* `echo -n $TOKEN | base64`
* Create the secret as described in the docs, using the shell and the documented flags
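As an illustration, a hand-written token Secret must carry a base64 value with no trailing newline. A minimal sketch; the data value is a placeholder:

```yaml
# secret.yaml (sketch); generate the value with `echo -n $TOKEN | base64`
apiVersion: v1
kind: Secret
metadata:
  name: controller-manager
  namespace: actions-runner-system
data:
  github_token: PLACEHOLDER_BASE64_VALUE
```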
## Developing
If you'd like to modify the controller, e.g. to fork or contribute, I'd suggest using the following snippet for running
the acceptance test:
```shell
# This sets `VERSION` envvar to some appropriate value
. hack/make-env.sh
NAME=$DOCKER_USER/actions-runner-controller \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
make docker-build docker-push acceptance
```
Please follow the instructions explained in [Deploying using PAT Authentication](#deploying-using-pat-authentication) to obtain
`GITHUB_TOKEN`, and those in [Deploying using GitHub App Authentication](#deploying-using-github-app-authentication) to obtain `APP_ID`, `INSTALLATION_ID`, and
`PRIVATE_KEY_FILE_PATH`.
The test creates a one-off `kind` cluster, deploys `cert-manager` and `actions-runner-controller`,
creates a `RunnerDeployment` custom resource for a public Git repository to confirm that the
controller is able to bring up a runner pod with the actions runner registration token installed.
If you prefer to test in a non-kind cluster, you can instead run:
```shell
KUBECONFIG=path/to/kubeconfig \
NAME=$DOCKER_USER/actions-runner-controller \
GITHUB_TOKEN=*** \
APP_ID=*** \
PRIVATE_KEY_FILE_PATH=path/to/pem/file \
INSTALLATION_ID=*** \
ACCEPTANCE_TEST_SECRET_TYPE=token \
make docker-build docker-push \
acceptance/setup acceptance/tests
```
## Alternatives
The following is a list of alternative solutions that may better fit you depending on your use-case:
- <https://github.com/evryfs/github-actions-runner-operator/>
- <https://github.com/philips-labs/terraform-aws-github-runner/>
Although the situation can change over time, as of writing this sentence, the benefits of using `actions-runner-controller` over the alternatives are:
- `actions-runner-controller` has the ability to autoscale runners based on the number of pending/in-progress jobs (#99)
- `actions-runner-controller` is able to gracefully stop runners (#103)
- `actions-runner-controller` has ARM support
- `actions-runner-controller` has GitHub Enterprise support (see [GitHub Enterprise support](#github-enterprise-support) section for caveats)

29
acceptance/checks.sh Executable file

@@ -0,0 +1,29 @@
#!/usr/bin/env bash
set -e
runner_name=
while [ -z "${runner_name}" ]; do
echo Finding the runner... 1>&2
sleep 1
runner_name=$(kubectl get runner --output=jsonpath="{.items[*].metadata.name}")
done
echo Found runner ${runner_name}.
pod_name=
while [ -z "${pod_name}" ]; do
echo Finding the runner pod... 1>&2
sleep 1
pod_name=$(kubectl get pod --output=jsonpath="{.items[*].metadata.name}" | grep ${runner_name})
done
echo Found pod ${pod_name}.
echo Waiting for pod ${pod_name} to become ready... 1>&2
kubectl wait pod/${pod_name} --for condition=ready --timeout 180s
echo All tests passed. 1>&2

42
acceptance/deploy.sh Executable file

@@ -0,0 +1,42 @@
#!/usr/bin/env bash
set -e
tpe=${ACCEPTANCE_TEST_SECRET_TYPE}
if [ "${tpe}" == "token" ]; then
kubectl create secret generic controller-manager \
-n actions-runner-system \
--from-literal=github_token=${GITHUB_TOKEN:?GITHUB_TOKEN must not be empty}
elif [ "${tpe}" == "app" ]; then
kubectl create secret generic controller-manager \
-n actions-runner-system \
--from-literal=github_app_id=${APP_ID:?must not be empty} \
--from-literal=github_app_installation_id=${INSTALLATION_ID:?must not be empty} \
--from-file=github_app_private_key=${PRIVATE_KEY_FILE_PATH:?must not be empty}
else
echo "ACCEPTANCE_TEST_SECRET_TYPE must be set to either \"token\" or \"app\"" 1>&2
exit 1
fi
tool=${ACCEPTANCE_TEST_DEPLOYMENT_TOOL}
if [ "${tool}" == "helm" ]; then
helm upgrade --install actions-runner-controller \
charts/actions-runner-controller \
-n actions-runner-system \
--create-namespace \
--set syncPeriod=5m
kubectl -n actions-runner-system wait deploy/actions-runner-controller --for condition=available
else
kubectl apply \
-n actions-runner-system \
-f release/actions-runner-controller.yaml
kubectl -n actions-runner-system wait deploy/controller-manager --for condition=available --timeout 60s
fi
# Ad-hoc wait for some time until actions-runner-controller's admission webhook gets ready
sleep 20
kubectl apply \
-f acceptance/testdata/runnerdeploy.yaml

9
acceptance/testdata/runnerdeploy.yaml vendored Normal file

@@ -0,0 +1,9 @@
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
name: example-runnerdeploy
spec:
# replicas: 1
template:
spec:
repository: mumoshu/actions-runner-controller-ci

api/v1alpha1/horizontalrunnerautoscaler_types.go

@@ -41,6 +41,62 @@ type HorizontalRunnerAutoscalerSpec struct {
// Metrics is the collection of various metric targets to calculate desired number of runners
// +optional
Metrics []MetricSpec `json:"metrics,omitempty"`
// ScaleUpTriggers is an experimental feature to increase the desired replicas by 1
// on each webhook request received by the webhookBasedAutoscaler.
//
// This feature requires you to also enable and deploy the webhookBasedAutoscaler onto your cluster.
//
// Note that the added runners remain until the next sync period at least,
// and they may or may not be used by GitHub Actions depending on the timing.
// They are intended to be used to gain "resource slack" immediately after you
// receive a webhook from GitHub, so that you can loosely expect MinReplicas runners to be always available.
ScaleUpTriggers []ScaleUpTrigger `json:"scaleUpTriggers,omitempty"`
CapacityReservations []CapacityReservation `json:"capacityReservations,omitempty" patchStrategy:"merge" patchMergeKey:"name"`
}
type ScaleUpTrigger struct {
GitHubEvent *GitHubEventScaleUpTriggerSpec `json:"githubEvent,omitempty"`
Amount int `json:"amount,omitempty"`
Duration metav1.Duration `json:"duration,omitempty"`
}
type GitHubEventScaleUpTriggerSpec struct {
CheckRun *CheckRunSpec `json:"checkRun,omitempty"`
PullRequest *PullRequestSpec `json:"pullRequest,omitempty"`
Push *PushSpec `json:"push,omitempty"`
}
// https://docs.github.com/en/actions/reference/events-that-trigger-workflows#check_run
type CheckRunSpec struct {
Types []string `json:"types,omitempty"`
Status string `json:"status,omitempty"`
// Names is a list of GitHub Actions glob patterns.
// Any check_run event whose name matches one of the patterns in the list can trigger autoscaling.
// Note that the check_run name seems to equal the job name you've defined in your actions workflow yaml file,
// so it is very likely that you can utilize this to trigger scaling depending on the job.
Names []string `json:"names,omitempty"`
}
// https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request
type PullRequestSpec struct {
Types []string `json:"types,omitempty"`
Branches []string `json:"branches,omitempty"`
}
// PushSpec is the condition for triggering scale-up on push event
// Also see https://docs.github.com/en/actions/reference/events-that-trigger-workflows#push
type PushSpec struct {
}
// CapacityReservation specifies the number of replicas temporarily added
// to the scale target until ExpirationTime.
type CapacityReservation struct {
Name string `json:"name,omitempty"`
ExpirationTime metav1.Time `json:"expirationTime,omitempty"`
Replicas int `json:"replicas,omitempty"`
}
type ScaleTargetRef struct {
@@ -56,6 +112,36 @@ type MetricSpec struct {
// For example, a repository name is the REPO part of `github.com/USER/REPO`.
// +optional
RepositoryNames []string `json:"repositoryNames,omitempty"`
// ScaleUpThreshold is the percentage of busy runners greater than which will
// trigger the hpa to scale runners up.
// +optional
ScaleUpThreshold string `json:"scaleUpThreshold,omitempty"`
// ScaleDownThreshold is the percentage of busy runners less than which will
// trigger the hpa to scale the runners down.
// +optional
ScaleDownThreshold string `json:"scaleDownThreshold,omitempty"`
// ScaleUpFactor is the multiplicative factor applied to the current number of runners used
// to determine how many pods should be added.
// +optional
ScaleUpFactor string `json:"scaleUpFactor,omitempty"`
// ScaleDownFactor is the multiplicative factor applied to the current number of runners used
// to determine how many pods should be removed.
// +optional
ScaleDownFactor string `json:"scaleDownFactor,omitempty"`
// ScaleUpAdjustment is the number of runners added on scale-up.
// You can only specify either ScaleUpFactor or ScaleUpAdjustment.
// +optional
ScaleUpAdjustment int `json:"scaleUpAdjustment,omitempty"`
// ScaleDownAdjustment is the number of runners removed on scale-down.
// You can only specify either ScaleDownFactor or ScaleDownAdjustment.
// +optional
ScaleDownAdjustment int `json:"scaleDownAdjustment,omitempty"`
}
type HorizontalRunnerAutoscalerStatus struct {
@@ -71,6 +157,17 @@ type HorizontalRunnerAutoscalerStatus struct {
// +optional
LastSuccessfulScaleOutTime *metav1.Time `json:"lastSuccessfulScaleOutTime,omitempty"`
// +optional
CacheEntries []CacheEntry `json:"cacheEntries,omitempty"`
}
const CacheEntryKeyDesiredReplicas = "desiredReplicas"
type CacheEntry struct {
Key string `json:"key,omitempty"`
Value int `json:"value,omitempty"`
ExpirationTime metav1.Time `json:"expirationTime,omitempty"`
}
// +kubebuilder:object:root=true

api/v1alpha1/runner_types.go

@@ -25,6 +25,10 @@ import (
// RunnerSpec defines the desired state of Runner
type RunnerSpec struct {
// +optional
// +kubebuilder:validation:Pattern=`^[^/]+$`
Enterprise string `json:"enterprise,omitempty"`
// +optional
// +kubebuilder:validation:Pattern=`^[^/]+$`
Organization string `json:"organization,omitempty"`
@@ -36,9 +40,14 @@ type RunnerSpec struct {
// +optional
Labels []string `json:"labels,omitempty"`
// +optional
Group string `json:"group,omitempty"`
// +optional
Containers []corev1.Container `json:"containers,omitempty"`
// +optional
DockerdContainerResources corev1.ResourceRequirements `json:"dockerdContainerResources,omitempty"`
// +optional
Resources corev1.ResourceRequirements `json:"resources,omitempty"`
// +optional
VolumeMounts []corev1.VolumeMount `json:"volumeMounts,omitempty"`
@@ -54,6 +63,8 @@ type RunnerSpec struct {
// +optional
Volumes []corev1.Volume `json:"volumes,omitempty"`
// +optional
WorkDir string `json:"workDir,omitempty"`
// +optional
InitContainers []corev1.Container `json:"initContainers,omitempty"`
@@ -77,16 +88,32 @@ type RunnerSpec struct {
EphemeralContainers []corev1.EphemeralContainer `json:"ephemeralContainers,omitempty"`
// +optional
TerminationGracePeriodSeconds *int64 `json:"terminationGracePeriodSeconds,omitempty"`
// +optional
DockerdWithinRunnerContainer *bool `json:"dockerdWithinRunnerContainer,omitempty"`
// +optional
DockerEnabled *bool `json:"dockerEnabled,omitempty"`
// +optional
DockerMTU *int64 `json:"dockerMTU,omitempty"`
}
// ValidateRepository validates repository field.
func (rs *RunnerSpec) ValidateRepository() error {
	// Enterprise, Organization and Repository are mutually exclusive.
	foundCount := 0
	if len(rs.Organization) > 0 {
		foundCount += 1
	}
	if len(rs.Repository) > 0 {
		foundCount += 1
	}
	if len(rs.Enterprise) > 0 {
		foundCount += 1
	}
	if foundCount == 0 {
		return errors.New("Spec needs enterprise, organization or repository")
	}
	if foundCount > 1 {
		return errors.New("Spec cannot have more than one of enterprise, organization and repository")
	}
return nil
@@ -102,6 +129,7 @@ type RunnerStatus struct {
// RunnerStatusRegistration contains runner registration status
type RunnerStatusRegistration struct {
Enterprise string `json:"enterprise,omitempty"`
Organization string `json:"organization,omitempty"`
Repository string `json:"repository,omitempty"`
Labels []string `json:"labels,omitempty"`
@@ -111,6 +139,7 @@ type RunnerStatusRegistration struct {
// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:printcolumn:JSONPath=".spec.enterprise",name=Enterprise,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.organization",name=Organization,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.repository",name=Repository,type=string
// +kubebuilder:printcolumn:JSONPath=".spec.labels",name=Labels,type=string

api/v1alpha1/runnerdeployment_types.go

@@ -22,14 +22,19 @@ import (
const (
AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns = "TotalNumberOfQueuedAndInProgressWorkflowRuns"
AutoscalingMetricTypePercentageRunnersBusy = "PercentageRunnersBusy"
)
// RunnerDeploymentSpec defines the desired state of RunnerDeployment
type RunnerDeploymentSpec struct {
	// +optional
	// +nullable
	Replicas *int `json:"replicas,omitempty"`

	// +optional
	// +nullable
	Selector *metav1.LabelSelector `json:"selector"`

	Template RunnerTemplate `json:"template"`
}
type RunnerDeploymentStatus struct {

api/v1alpha1/runnerreplicaset_types.go

@@ -22,9 +22,14 @@ import (
// RunnerReplicaSetSpec defines the desired state of RunnerReplicaSet
type RunnerReplicaSetSpec struct {
	// +optional
	// +nullable
	Replicas *int `json:"replicas,omitempty"`

	// +optional
	// +nullable
	Selector *metav1.LabelSelector `json:"selector"`

	Template RunnerTemplate `json:"template"`
}
type RunnerReplicaSetStatus struct {

api/v1alpha1/zz_generated.deepcopy.go

@@ -22,9 +22,97 @@ package v1alpha1
import (
"k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
)
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CacheEntry) DeepCopyInto(out *CacheEntry) {
*out = *in
in.ExpirationTime.DeepCopyInto(&out.ExpirationTime)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CacheEntry.
func (in *CacheEntry) DeepCopy() *CacheEntry {
if in == nil {
return nil
}
out := new(CacheEntry)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CapacityReservation) DeepCopyInto(out *CapacityReservation) {
*out = *in
in.ExpirationTime.DeepCopyInto(&out.ExpirationTime)
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CapacityReservation.
func (in *CapacityReservation) DeepCopy() *CapacityReservation {
if in == nil {
return nil
}
out := new(CapacityReservation)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *CheckRunSpec) DeepCopyInto(out *CheckRunSpec) {
*out = *in
if in.Types != nil {
in, out := &in.Types, &out.Types
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Names != nil {
in, out := &in.Names, &out.Names
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new CheckRunSpec.
func (in *CheckRunSpec) DeepCopy() *CheckRunSpec {
if in == nil {
return nil
}
out := new(CheckRunSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *GitHubEventScaleUpTriggerSpec) DeepCopyInto(out *GitHubEventScaleUpTriggerSpec) {
*out = *in
if in.CheckRun != nil {
in, out := &in.CheckRun, &out.CheckRun
*out = new(CheckRunSpec)
(*in).DeepCopyInto(*out)
}
if in.PullRequest != nil {
in, out := &in.PullRequest, &out.PullRequest
*out = new(PullRequestSpec)
(*in).DeepCopyInto(*out)
}
if in.Push != nil {
in, out := &in.Push, &out.Push
*out = new(PushSpec)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new GitHubEventScaleUpTriggerSpec.
func (in *GitHubEventScaleUpTriggerSpec) DeepCopy() *GitHubEventScaleUpTriggerSpec {
if in == nil {
return nil
}
out := new(GitHubEventScaleUpTriggerSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *HorizontalRunnerAutoscaler) DeepCopyInto(out *HorizontalRunnerAutoscaler) {
*out = *in
@@ -110,6 +198,20 @@ func (in *HorizontalRunnerAutoscalerSpec) DeepCopyInto(out *HorizontalRunnerAuto
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.ScaleUpTriggers != nil {
in, out := &in.ScaleUpTriggers, &out.ScaleUpTriggers
*out = make([]ScaleUpTrigger, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
if in.CapacityReservations != nil {
in, out := &in.CapacityReservations, &out.CapacityReservations
*out = make([]CapacityReservation, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HorizontalRunnerAutoscalerSpec.
@@ -134,6 +236,13 @@ func (in *HorizontalRunnerAutoscalerStatus) DeepCopyInto(out *HorizontalRunnerAu
in, out := &in.LastSuccessfulScaleOutTime, &out.LastSuccessfulScaleOutTime
*out = (*in).DeepCopy()
}
if in.CacheEntries != nil {
in, out := &in.CacheEntries, &out.CacheEntries
*out = make([]CacheEntry, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new HorizontalRunnerAutoscalerStatus.
@@ -166,6 +275,46 @@ func (in *MetricSpec) DeepCopy() *MetricSpec {
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PullRequestSpec) DeepCopyInto(out *PullRequestSpec) {
*out = *in
if in.Types != nil {
in, out := &in.Types, &out.Types
*out = make([]string, len(*in))
copy(*out, *in)
}
if in.Branches != nil {
in, out := &in.Branches, &out.Branches
*out = make([]string, len(*in))
copy(*out, *in)
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PullRequestSpec.
func (in *PullRequestSpec) DeepCopy() *PullRequestSpec {
if in == nil {
return nil
}
out := new(PullRequestSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *PushSpec) DeepCopyInto(out *PushSpec) {
*out = *in
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PushSpec.
func (in *PushSpec) DeepCopy() *PushSpec {
if in == nil {
return nil
}
out := new(PushSpec)
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Runner) DeepCopyInto(out *Runner) {
*out = *in
@@ -260,6 +409,11 @@ func (in *RunnerDeploymentSpec) DeepCopyInto(out *RunnerDeploymentSpec) {
*out = new(int)
**out = **in
}
if in.Selector != nil {
in, out := &in.Selector, &out.Selector
*out = new(metav1.LabelSelector)
(*in).DeepCopyInto(*out)
}
in.Template.DeepCopyInto(&out.Template)
}
@@ -392,6 +546,11 @@ func (in *RunnerReplicaSetSpec) DeepCopyInto(out *RunnerReplicaSetSpec) {
*out = new(int)
**out = **in
}
if in.Selector != nil {
in, out := &in.Selector, &out.Selector
*out = new(metav1.LabelSelector)
(*in).DeepCopyInto(*out)
}
in.Template.DeepCopyInto(&out.Template)
}
@@ -435,6 +594,7 @@ func (in *RunnerSpec) DeepCopyInto(out *RunnerSpec) {
(*in)[i].DeepCopyInto(&(*out)[i])
}
}
in.DockerdContainerResources.DeepCopyInto(&out.DockerdContainerResources)
in.Resources.DeepCopyInto(&out.Resources)
if in.VolumeMounts != nil {
in, out := &in.VolumeMounts, &out.VolumeMounts
@@ -524,6 +684,21 @@ func (in *RunnerSpec) DeepCopyInto(out *RunnerSpec) {
*out = new(int64)
**out = **in
}
if in.DockerdWithinRunnerContainer != nil {
in, out := &in.DockerdWithinRunnerContainer, &out.DockerdWithinRunnerContainer
*out = new(bool)
**out = **in
}
if in.DockerEnabled != nil {
in, out := &in.DockerEnabled, &out.DockerEnabled
*out = new(bool)
**out = **in
}
if in.DockerMTU != nil {
in, out := &in.DockerMTU, &out.DockerMTU
*out = new(int64)
**out = **in
}
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new RunnerSpec.
@@ -604,3 +779,24 @@ func (in *ScaleTargetRef) DeepCopy() *ScaleTargetRef {
in.DeepCopyInto(out)
return out
}
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ScaleUpTrigger) DeepCopyInto(out *ScaleUpTrigger) {
*out = *in
if in.GitHubEvent != nil {
in, out := &in.GitHubEvent, &out.GitHubEvent
*out = new(GitHubEventScaleUpTriggerSpec)
(*in).DeepCopyInto(*out)
}
out.Duration = in.Duration
}
// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ScaleUpTrigger.
func (in *ScaleUpTrigger) DeepCopy() *ScaleUpTrigger {
if in == nil {
return nil
}
out := new(ScaleUpTrigger)
in.DeepCopyInto(out)
return out
}

charts/.ci/ct-config.yaml

@@ -0,0 +1,4 @@
# This file defines the config for "ct" (chart tester) used by the helm linting GitHub workflow
lint-conf: charts/.ci/lint-config.yaml
chart-repos:
- jetstack=https://charts.jetstack.io

charts/.ci/lint-config.yaml

@@ -0,0 +1,6 @@
rules:
# One blank line is OK
empty-lines:
max-start: 1
max-end: 1
max: 1

View File

@@ -0,0 +1,3 @@
#!/bin/bash
docker run --rm -it -w /repo -v $(pwd):/repo quay.io/helmpack/chart-testing ct lint --all --config charts/.ci/ct-config.yaml

View File

@@ -0,0 +1,15 @@
#!/bin/bash
for chart in `ls charts`;
do
helm template --values charts/$chart/ci/ci-values.yaml charts/$chart | kube-score score - \
--ignore-test pod-networkpolicy \
--ignore-test deployment-has-poddisruptionbudget \
--ignore-test deployment-has-host-podantiaffinity \
--ignore-test pod-probes \
--ignore-test container-image-tag \
--enable-optional-test container-security-context-privileged \
--enable-optional-test container-security-context-readonlyrootfilesystem \
--ignore-test container-security-context
done

View File

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

charts/actions-runner-controller/.helmignore

@@ -0,0 +1,31 @@
apiVersion: v2
name: actions-runner-controller
description: A Kubernetes controller that operates self-hosted runners for GitHub Actions on your Kubernetes cluster.
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.9.0
home: https://github.com/summerwind/actions-runner-controller
sources:
- https://github.com/summerwind/actions-runner-controller
maintainers:
- name: summerwind
email: contact@summerwind.jp
url: https://github.com/summerwind
- name: funkypenguin
email: davidy@funkypenguin.co.nz
url: https://www.funkypenguin.co.nz


@@ -0,0 +1,30 @@
# This file sets some opinionated values for kube-score to use
# when parsing the chart
image:
pullPolicy: Always
podSecurityContext:
fsGroup: 2000
securityContext:
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 2000
resources:
limits:
cpu: 100m
memory: 128Mi
requests:
cpu: 100m
memory: 128Mi
authSecret:
create: false
# Set the following to true to create a dummy secret, allowing the manager pod to start
# This is only useful in CI
createDummySecret: true


@@ -0,0 +1,229 @@
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.3.0
creationTimestamp: null
name: horizontalrunnerautoscalers.actions.summerwind.dev
spec:
additionalPrinterColumns:
- JSONPath: .spec.minReplicas
name: Min
type: number
- JSONPath: .spec.maxReplicas
name: Max
type: number
- JSONPath: .status.desiredReplicas
name: Desired
type: number
group: actions.summerwind.dev
names:
kind: HorizontalRunnerAutoscaler
listKind: HorizontalRunnerAutoscalerList
plural: horizontalrunnerautoscalers
singular: horizontalrunnerautoscaler
scope: Namespaced
subresources:
status: {}
validation:
openAPIV3Schema:
description: HorizontalRunnerAutoscaler is the Schema for the horizontalrunnerautoscaler
API
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
kind:
description: 'Kind is a string value representing the REST resource this
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
metadata:
type: object
spec:
description: HorizontalRunnerAutoscalerSpec defines the desired state of
HorizontalRunnerAutoscaler
properties:
capacityReservations:
items:
description: CapacityReservation specifies the number of replicas
temporarily added to the scale target until ExpirationTime.
properties:
expirationTime:
format: date-time
type: string
name:
type: string
replicas:
type: integer
type: object
type: array
maxReplicas:
description: MaxReplicas is the maximum number of replicas the deployment
is allowed to scale to
type: integer
metrics:
description: Metrics is the collection of various metric targets used to
calculate the desired number of runners
items:
properties:
repositoryNames:
description: RepositoryNames is the list of repository names to
be used for calculating the metric. For example, a repository
name is the REPO part of `github.com/USER/REPO`.
items:
type: string
type: array
scaleDownAdjustment:
description: ScaleDownAdjustment is the number of runners removed
on scale-down. You can only specify either ScaleDownFactor or
ScaleDownAdjustment.
type: integer
scaleDownFactor:
description: ScaleDownFactor is the multiplicative factor applied
to the current number of runners used to determine how many
pods should be removed.
type: string
scaleDownThreshold:
description: ScaleDownThreshold is the percentage of busy runners
below which the autoscaler scales the runners down.
type: string
scaleUpAdjustment:
description: ScaleUpAdjustment is the number of runners added
on scale-up. You can only specify either ScaleUpFactor or ScaleUpAdjustment.
type: integer
scaleUpFactor:
description: ScaleUpFactor is the multiplicative factor applied
to the current number of runners used to determine how many
pods should be added.
type: string
scaleUpThreshold:
description: ScaleUpThreshold is the percentage of busy runners
above which the autoscaler scales the runners up.
type: string
type:
description: Type is the type of metric to be used for autoscaling.
Supported Types are TotalNumberOfQueuedAndInProgressWorkflowRuns and PercentageRunnersBusy
type: string
type: object
type: array
minReplicas:
description: MinReplicas is the minimum number of replicas the deployment
is allowed to scale to
type: integer
scaleDownDelaySecondsAfterScaleOut:
description: ScaleDownDelaySecondsAfterScaleOut is the approximate delay
before a scale down that follows a scale out, used to prevent flapping (down->up->down->...
loop)
type: integer
scaleTargetRef:
description: ScaleTargetRef is the reference to the scaled resource, such
as a RunnerDeployment
properties:
name:
type: string
type: object
scaleUpTriggers:
description: "ScaleUpTriggers is an experimental feature to increase
the desired replicas by 1 on each webhook request received by the
webhookBasedAutoscaler. \n This feature requires you to also enable
and deploy the webhookBasedAutoscaler onto your cluster. \n Note that
the added runners remain until the next sync period at least, and
they may or may not be used by GitHub Actions depending on the timing.
They are intended to be used to gain \"resource slack\" immediately
after you receive a webhook from GitHub, so that you can loosely expect
MinReplicas runners to be always available."
items:
properties:
amount:
type: integer
duration:
type: string
githubEvent:
properties:
checkRun:
description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#check_run
properties:
names:
description: Names is a list of GitHub Actions glob patterns.
Any check_run event whose name matches one of the patterns
in the list can trigger autoscaling. Note that the check_run
name appears to equal the job name you've defined in
your Actions workflow YAML file, so it is very likely
that you can use this to trigger scaling depending on the
job.
items:
type: string
type: array
status:
type: string
types:
items:
type: string
type: array
type: object
pullRequest:
description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request
properties:
branches:
items:
type: string
type: array
types:
items:
type: string
type: array
type: object
push:
description: PushSpec is the condition for triggering scale-up
on a push event. Also see https://docs.github.com/en/actions/reference/events-that-trigger-workflows#push
type: object
type: object
type: object
type: array
type: object
status:
properties:
cacheEntries:
items:
properties:
expirationTime:
format: date-time
type: string
key:
type: string
value:
type: integer
type: object
type: array
desiredReplicas:
description: DesiredReplicas is the total number of desired, non-terminated
and latest pods to be set for the primary RunnerSet. This doesn't include
outdated pods while upgrading the deployment and replacing the runnerset.
type: integer
lastSuccessfulScaleOutTime:
format: date-time
type: string
observedGeneration:
description: ObservedGeneration is the most recent generation observed
for the target. It corresponds to e.g. RunnerDeployment's generation,
which is updated on mutation by the API Server.
format: int64
type: integer
type: object
type: object
version: v1alpha1
versions:
- name: v1alpha1
served: true
storage: true
status:
acceptedNames:
kind: ""
plural: ""
conditions: []
storedVersions: []
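
For orientation, here is a minimal HorizontalRunnerAutoscaler sketch exercising the schema above; the deployment name, repository name, and trigger values are hypothetical placeholders rather than anything defined in this repository:

apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: example-autoscaler  # placeholder name
spec:
  scaleTargetRef:
    name: example-runnerdeploy  # placeholder RunnerDeployment
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: TotalNumberOfQueuedAndInProgressWorkflowRuns
    repositoryNames:
    - example-repo  # placeholder repository
  scaleUpTriggers:
  - githubEvent:
      checkRun:
        types: ["created"]
        status: "queued"
    amount: 1
    duration: "5m"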

File diff suppressed because it is too large


@@ -0,0 +1,22 @@
1. Get the application URL by running these commands:
{{- if .Values.githubWebhookServer.ingress.enabled }}
{{- range $host := .Values.githubWebhookServer.ingress.hosts }}
{{- range .paths }}
http{{ if $.Values.githubWebhookServer.ingress.tls }}s{{ end }}://{{ $host.host }}{{ . }}
{{- end }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ include "actions-runner-controller.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch its status by running 'kubectl get --namespace {{ .Release.Namespace }} svc -w {{ include "actions-runner-controller.fullname" . }}'
export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ include "actions-runner-controller.fullname" . }} --template "{{"{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}"}}")
echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app.kubernetes.io/name={{ include "actions-runner-controller.name" . }},app.kubernetes.io/instance={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
export CONTAINER_PORT=$(kubectl get pod --namespace {{ .Release.Namespace }} $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace {{ .Release.Namespace }} port-forward $POD_NAME 8080:$CONTAINER_PORT
{{- end }}


@@ -0,0 +1,56 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "actions-runner-controller-github-webhook-server.name" -}}
{{- default .Chart.Name .Values.githubWebhookServer.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.instance" -}}
{{- printf "%s-%s" .Release.Name "github-webhook-server" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "actions-runner-controller-github-webhook-server.fullname" -}}
{{- if .Values.githubWebhookServer.fullnameOverride }}
{{- .Values.githubWebhookServer.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.githubWebhookServer.nameOverride }}
{{- $instance := include "actions-runner-controller-github-webhook-server.instance" . }}
{{- if contains $name $instance }}
{{- $instance | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s-%s" .Release.Name $name "github-webhook-server" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "actions-runner-controller-github-webhook-server.selectorLabels" -}}
app.kubernetes.io/name: {{ include "actions-runner-controller-github-webhook-server.name" . }}
app.kubernetes.io/instance: {{ include "actions-runner-controller-github-webhook-server.instance" . }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "actions-runner-controller-github-webhook-server.serviceAccountName" -}}
{{- if .Values.githubWebhookServer.serviceAccount.create }}
{{- default (include "actions-runner-controller-github-webhook-server.fullname" .) .Values.githubWebhookServer.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.githubWebhookServer.serviceAccount.name }}
{{- end }}
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.secretName" -}}
{{- default (include "actions-runner-controller-github-webhook-server.fullname" .) .Values.githubWebhookServer.secret.name }}
{{- end }}
{{- define "actions-runner-controller-github-webhook-server.roleName" -}}
{{- include "actions-runner-controller-github-webhook-server.fullname" . }}
{{- end }}


@@ -0,0 +1,105 @@
{{/*
Expand the name of the chart.
*/}}
{{- define "actions-runner-controller.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "actions-runner-controller.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "actions-runner-controller.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "actions-runner-controller.labels" -}}
helm.sh/chart: {{ include "actions-runner-controller.chart" . }}
{{ include "actions-runner-controller.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- range $k, $v := .Values.labels }}
{{ $k }}: {{ $v }}
{{- end }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "actions-runner-controller.selectorLabels" -}}
app.kubernetes.io/name: {{ include "actions-runner-controller.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "actions-runner-controller.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "actions-runner-controller.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{- define "actions-runner-controller.secretName" -}}
{{- default (include "actions-runner-controller.fullname" .) .Values.authSecret.name -}}
{{- end }}
{{- define "actions-runner-controller.leaderElectionRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-leader-election
{{- end }}
{{- define "actions-runner-controller.authProxyRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-proxy
{{- end }}
{{- define "actions-runner-controller.managerRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-manager
{{- end }}
{{- define "actions-runner-controller.runnerEditorRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-runner-editor
{{- end }}
{{- define "actions-runner-controller.runnerViewerRoleName" -}}
{{- include "actions-runner-controller.fullname" . }}-runner-viewer
{{- end }}
{{- define "actions-runner-controller.webhookServiceName" -}}
{{- include "actions-runner-controller.fullname" . | trunc 55 }}-webhook
{{- end }}
{{- define "actions-runner-controller.authProxyServiceName" -}}
{{- include "actions-runner-controller.fullname" . | trunc 47 }}-metrics-service
{{- end }}
{{- define "actions-runner-controller.selfsignedIssuerName" -}}
{{- include "actions-runner-controller.fullname" . }}-selfsigned-issuer
{{- end }}
{{- define "actions-runner-controller.servingCertName" -}}
{{- include "actions-runner-controller.fullname" . }}-serving-cert
{{- end }}


@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "actions-runner-controller.authProxyRoleName" . }}
rules:
- apiGroups: ["authentication.k8s.io"]
resources:
- tokenreviews
verbs: ["create"]
- apiGroups: ["authorization.k8s.io"]
resources:
- subjectaccessreviews
verbs: ["create"]


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "actions-runner-controller.authProxyRoleName" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "actions-runner-controller.authProxyRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}


@@ -0,0 +1,14 @@
apiVersion: v1
kind: Service
metadata:
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
name: {{ include "actions-runner-controller.authProxyServiceName" . }}
namespace: {{ .Release.Namespace }}
spec:
ports:
- name: https
port: 8443
targetPort: https
selector:
{{- include "actions-runner-controller.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,24 @@
# The following manifests contain a self-signed issuer CR and a certificate CR.
# More documentation can be found at https://docs.cert-manager.io
# WARNING: Targets CertManager 0.11 check https://docs.cert-manager.io/en/latest/tasks/upgrading/index.html for breaking changes
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: {{ include "actions-runner-controller.selfsignedIssuerName" . }}
namespace: {{ .Release.Namespace }}
spec:
selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: {{ include "actions-runner-controller.servingCertName" . }}
namespace: {{ .Release.Namespace }}
spec:
dnsNames:
- {{ include "actions-runner-controller.webhookServiceName" . }}.{{ .Release.Namespace }}.svc
- {{ include "actions-runner-controller.webhookServiceName" . }}.{{ .Release.Namespace }}.svc.cluster.local
issuerRef:
kind: Issuer
name: {{ include "actions-runner-controller.selfsignedIssuerName" . }}
secretName: webhook-server-cert # this secret will not be prefixed, since it's not managed by kustomize


@@ -0,0 +1,10 @@
# This template only exists to facilitate CI testing of the chart, since
# a secret is expected to be found in the namespace by the controller manager
{{ if .Values.createDummySecret -}}
apiVersion: v1
data:
github_token: dGVzdA==
kind: Secret
metadata:
name: controller-manager
{{- end }}


@@ -0,0 +1,125 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "actions-runner-controller.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
{{- include "actions-runner-controller.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "actions-runner-controller.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "actions-runner-controller.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
{{- with .Values.priorityClassName }}
priorityClassName: "{{ . }}"
{{- end }}
containers:
- args:
- "--metrics-addr=127.0.0.1:8080"
- "--enable-leader-election"
- "--sync-period={{ .Values.syncPeriod }}"
- "--docker-image={{ .Values.image.dindSidecarRepositoryAndTag }}"
{{- if .Values.scope.singleNamespace }}
- "--watch-namespace={{ default .Release.Namespace .Values.scope.watchNamespace }}"
{{- end }}
command:
- "/manager"
env:
- name: GITHUB_TOKEN
valueFrom:
secretKeyRef:
key: github_token
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
- name: GITHUB_APP_ID
valueFrom:
secretKeyRef:
key: github_app_id
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
- name: GITHUB_APP_INSTALLATION_ID
valueFrom:
secretKeyRef:
key: github_app_installation_id
name: {{ include "actions-runner-controller.secretName" . }}
optional: true
- name: GITHUB_APP_PRIVATE_KEY
value: /etc/actions-runner-controller/github_app_private_key
{{- range $key, $val := .Values.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
name: manager
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 9443
name: webhook-server
protocol: TCP
resources:
{{- toYaml .Values.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
volumeMounts:
- mountPath: "/etc/actions-runner-controller"
name: secret
readOnly: true
- mountPath: /tmp
name: tmp
- mountPath: /tmp/k8s-webhook-server/serving-certs
name: cert
readOnly: true
- args:
- "--secure-listen-address=0.0.0.0:8443"
- "--upstream=http://127.0.0.1:8080/"
- "--logtostderr=true"
- "--v=10"
image: "{{ .Values.kube_rbac_proxy.image.repository }}:{{ .Values.kube_rbac_proxy.image.tag }}"
name: kube-rbac-proxy
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 8443
name: https
resources:
{{- toYaml .Values.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
terminationGracePeriodSeconds: 10
volumes:
- name: secret
secret:
secretName: {{ include "actions-runner-controller.secretName" . }}
- name: cert
secret:
defaultMode: 420
secretName: webhook-server-cert
- name: tmp
emptyDir: {}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}


@@ -0,0 +1,89 @@
{{- if .Values.githubWebhookServer.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "actions-runner-controller-github-webhook-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.githubWebhookServer.replicaCount }}
selector:
matchLabels:
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.githubWebhookServer.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.githubWebhookServer.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "actions-runner-controller-github-webhook-server.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.githubWebhookServer.podSecurityContext | nindent 8 }}
{{- with .Values.githubWebhookServer.priorityClassName }}
priorityClassName: "{{ . }}"
{{- end }}
containers:
- args:
- "--metrics-addr=127.0.0.1:8080"
- "--sync-period={{ .Values.githubWebhookServer.syncPeriod }}"
command:
- "/github-webhook-server"
env:
- name: GITHUB_WEBHOOK_SECRET_TOKEN
valueFrom:
secretKeyRef:
key: github_webhook_secret_token
name: {{ include "actions-runner-controller-github-webhook-server.secretName" . }}
optional: true
{{- range $key, $val := .Values.githubWebhookServer.env }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
name: github-webhook-server
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 8000
name: http
protocol: TCP
resources:
{{- toYaml .Values.githubWebhookServer.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.githubWebhookServer.securityContext | nindent 12 }}
- args:
- "--secure-listen-address=0.0.0.0:8443"
- "--upstream=http://127.0.0.1:8080/"
- "--logtostderr=true"
- "--v=10"
image: "{{ .Values.kube_rbac_proxy.image.repository }}:{{ .Values.kube_rbac_proxy.image.tag }}"
name: kube-rbac-proxy
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 8443
name: https
resources:
{{- toYaml .Values.resources | nindent 12 }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
terminationGracePeriodSeconds: 10
{{- with .Values.githubWebhookServer.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.githubWebhookServer.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.githubWebhookServer.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}


@@ -0,0 +1,41 @@
{{- if .Values.githubWebhookServer.ingress.enabled -}}
{{- $fullName := include "actions-runner-controller-github-webhook-server.fullname" . -}}
{{- $svcPort := (index .Values.githubWebhookServer.service.ports 0).port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.githubWebhookServer.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.githubWebhookServer.ingress.tls }}
tls:
{{- range .Values.githubWebhookServer.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.githubWebhookServer.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
backend:
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}


@@ -0,0 +1,70 @@
{{- if .Values.githubWebhookServer.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller-github-webhook-server.roleName" . }}
rules:
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers
verbs:
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments/status
verbs:
- get
- patch
- update
{{- end }}


@@ -0,0 +1,14 @@
{{- if .Values.githubWebhookServer.enabled }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "actions-runner-controller-github-webhook-server.roleName" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "actions-runner-controller-github-webhook-server.roleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller-github-webhook-server.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
{{- end }}


@@ -0,0 +1,16 @@
{{- if .Values.githubWebhookServer.enabled }}
{{- if .Values.githubWebhookServer.secret.create }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "actions-runner-controller-github-webhook-server.secretName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
type: Opaque
data:
{{- if .Values.githubWebhookServer.secret.github_webhook_secret_token }}
github_webhook_secret_token: {{ .Values.githubWebhookServer.secret.github_webhook_secret_token | toString | b64enc }}
{{- end }}
{{- end }}
{{- end }}


@@ -0,0 +1,17 @@
{{- if .Values.githubWebhookServer.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "actions-runner-controller-github-webhook-server.fullname" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
spec:
type: {{ .Values.githubWebhookServer.service.type }}
ports:
{{ range $_, $port := .Values.githubWebhookServer.service.ports -}}
- {{ $port | toYaml | nindent 6 }}
{{- end }}
selector:
{{- include "actions-runner-controller-github-webhook-server.selectorLabels" . | nindent 4 }}
{{- end }}


@@ -0,0 +1,15 @@
{{- if .Values.githubWebhookServer.enabled -}}
{{- if .Values.githubWebhookServer.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "actions-runner-controller-github-webhook-server.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.githubWebhookServer.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}
{{- end }}


@@ -0,0 +1,33 @@
# permissions to do leader election.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: {{ include "actions-runner-controller.leaderElectionRoleName" . }}
namespace: {{ .Release.Namespace }}
rules:
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- list
- watch
- create
- update
- patch
- delete
- apiGroups:
- ""
resources:
- configmaps/status
verbs:
- get
- update
- patch
- apiGroups:
- ""
resources:
- events
verbs:
- create


@@ -0,0 +1,13 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: {{ include "actions-runner-controller.leaderElectionRoleName" . }}
namespace: {{ .Release.Namespace }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: {{ include "actions-runner-controller.leaderElectionRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}


@@ -0,0 +1,165 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller.managerRoleName" . }}
rules:
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- horizontalrunnerautoscalers/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerdeployments/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.summerwind.dev
resources:
- runnerreplicasets
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerreplicasets/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runnerreplicasets/status
verbs:
- get
- patch
- update
- apiGroups:
- actions.summerwind.dev
resources:
- runners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runners/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runners/status
verbs:
- get
- patch
- update
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- ""
resources:
- pods
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- ""
resources:
- pods/finalizers
verbs:
- create
- delete
- get
- list
- patch
- update
- watch


@@ -0,0 +1,12 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: {{ include "actions-runner-controller.managerRoleName" . }}
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ include "actions-runner-controller.managerRoleName" . }}
subjects:
- kind: ServiceAccount
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}


@@ -0,0 +1,23 @@
{{- if .Values.authSecret.create }}
apiVersion: v1
kind: Secret
metadata:
name: {{ include "actions-runner-controller.secretName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
type: Opaque
data:
{{- if .Values.authSecret.github_app_id }}
github_app_id: {{ .Values.authSecret.github_app_id | toString | b64enc }}
{{- end }}
{{- if .Values.authSecret.github_app_installation_id }}
github_app_installation_id: {{ .Values.authSecret.github_app_installation_id | toString | b64enc }}
{{- end }}
{{- if .Values.authSecret.github_app_private_key }}
github_app_private_key: {{ .Values.authSecret.github_app_private_key | toString | b64enc }}
{{- end }}
{{- if .Values.authSecret.github_token }}
github_token: {{ .Values.authSecret.github_token | toString | b64enc }}
{{- end }}
{{- end }}
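
For reference, a hedged values.yaml sketch that would render this Secret with GitHub App credentials; the IDs and key material below are placeholders, not real credentials:

authSecret:
  create: true
  name: "controller-manager"
  github_app_id: "12345"                # placeholder
  github_app_installation_id: "67890"   # placeholder
  github_app_private_key: |
    -----BEGIN RSA PRIVATE KEY-----
    (placeholder key material)
    -----END RSA PRIVATE KEY-----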


@@ -0,0 +1,26 @@
# permissions to edit runners.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "actions-runner-controller.runnerEditorRoleName" . }}
rules:
- apiGroups:
- actions.summerwind.dev
resources:
- runners
verbs:
- create
- delete
- get
- list
- patch
- update
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runners/status
verbs:
- get
- patch
- update


@@ -0,0 +1,20 @@
# permissions to view runners.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ include "actions-runner-controller.runnerViewerRoleName" . }}
rules:
- apiGroups:
- actions.summerwind.dev
resources:
- runners
verbs:
- get
- list
- watch
- apiGroups:
- actions.summerwind.dev
resources:
- runners/status
verbs:
- get


@@ -0,0 +1,13 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "actions-runner-controller.serviceAccountName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
{{- end }}


@@ -0,0 +1,128 @@
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller.fullname" . }}-mutating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "actions-runner-controller.servingCertName" . }}
webhooks:
- clientConfig:
caBundle: Cg==
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-actions-summerwind-dev-v1alpha1-runner
failurePolicy: Fail
name: mutate.runner.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runners
- clientConfig:
caBundle: Cg==
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-actions-summerwind-dev-v1alpha1-runnerdeployment
failurePolicy: Fail
name: mutate.runnerdeployment.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runnerdeployments
- clientConfig:
caBundle: Cg==
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /mutate-actions-summerwind-dev-v1alpha1-runnerreplicaset
failurePolicy: Fail
name: mutate.runnerreplicaset.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runnerreplicasets
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
creationTimestamp: null
name: {{ include "actions-runner-controller.fullname" . }}-validating-webhook-configuration
annotations:
cert-manager.io/inject-ca-from: {{ .Release.Namespace }}/{{ include "actions-runner-controller.servingCertName" . }}
webhooks:
- clientConfig:
caBundle: Cg==
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate-actions-summerwind-dev-v1alpha1-runner
failurePolicy: Fail
name: validate.runner.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runners
- clientConfig:
caBundle: Cg==
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate-actions-summerwind-dev-v1alpha1-runnerdeployment
failurePolicy: Fail
name: validate.runnerdeployment.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runnerdeployments
- clientConfig:
caBundle: Cg==
service:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
path: /validate-actions-summerwind-dev-v1alpha1-runnerreplicaset
failurePolicy: Fail
name: validate.runnerreplicaset.actions.summerwind.dev
rules:
- apiGroups:
- actions.summerwind.dev
apiVersions:
- v1alpha1
operations:
- CREATE
- UPDATE
resources:
- runnerreplicasets


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: {{ include "actions-runner-controller.webhookServiceName" . }}
namespace: {{ .Release.Namespace }}
labels:
{{- include "actions-runner-controller.labels" . | nindent 4 }}
spec:
type: {{ .Values.service.type }}
ports:
- port: 443
targetPort: 9443
protocol: TCP
name: https
selector:
{{- include "actions-runner-controller.selectorLabels" . | nindent 4 }}


@@ -0,0 +1,160 @@
# Default values for actions-runner-controller.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
labels: {}
replicaCount: 1
syncPeriod: 10m
# Only 1 authentication method can be deployed at a time
# Uncomment the configuration you are applying and fill in the details
authSecret:
create: true
name: "controller-manager"
### GitHub Apps Configuration
#github_app_id: ""
#github_app_installation_id: ""
#github_app_private_key: |
### GitHub PAT Configuration
#github_token: ""
image:
repository: summerwind/actions-runner-controller
tag: "v0.17.0"
dindSidecarRepositoryAndTag: "docker:dind"
pullPolicy: IfNotPresent
kube_rbac_proxy:
image:
repository: quay.io/brancz/kube-rbac-proxy
tag: v0.8.0
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext:
{}
# fsGroup: 2000
securityContext:
{}
# capabilities:
# drop:
# - ALL
# readOnlyRootFilesystem: true
# runAsNonRoot: true
# runAsUser: 1000
service:
type: ClusterIP
port: 443
resources:
{}
# We usually recommend not to specify default resources and to leave this as a conscious
# choice for the user. This also increases the chances charts run on environments with little
# resources, such as Minikube. If you do want to specify resources, uncomment the following
# lines, adjust them as necessary, and remove the curly braces after 'resources:'.
# limits:
# cpu: 100m
# memory: 128Mi
# requests:
# cpu: 100m
# memory: 128Mi
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
# Leverage a PriorityClass to ensure your pods survive resource shortages
# ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
# PriorityClass: system-cluster-critical
priorityClassName: ""
env:
{}
# http_proxy: "proxy.com:8080"
# https_proxy: "proxy.com:8080"
# no_proxy: ""
scope:
# If true, the controller will only watch custom resources in a single namespace
singleNamespace: false
# If `scope.singleNamespace=true`, the controller will only watch custom resources in this namespace
# The default value is "", which means the namespace of the controller
watchNamespace: ""
githubWebhookServer:
enabled: false
labels: {}
replicaCount: 1
syncPeriod: 10m
secret:
create: true
name: "github-webhook-server"
### GitHub Webhook Configuration
#github_webhook_secret_token: ""
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
serviceAccount:
# Specifies whether a service account should be created
create: true
# Annotations to add to the service account
annotations: {}
# The name of the service account to use.
# If not set and create is true, a name is generated using the fullname template
name: ""
podAnnotations: {}
podSecurityContext: {}
# fsGroup: 2000
securityContext: {}
resources: {}
nodeSelector: {}
tolerations: []
affinity: {}
priorityClassName: ""
service:
type: ClusterIP
ports:
- port: 80
targetPort: http
protocol: TCP
name: http
#nodePort: someFixedPortForUseWithTerraformCdkCfnEtc
ingress:
enabled: false
annotations:
{}
# kubernetes.io/ingress.class: nginx
# kubernetes.io/tls-acme: "true"
hosts:
- host: chart-example.local
paths: []
tls: []
# - secretName: chart-example-tls
# hosts:
# - chart-example.local
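
As a usage sketch, a values override enabling the webhook server behind an ingress might look like the following; the hostname and secret token are placeholders:

githubWebhookServer:
  enabled: true
  secret:
    create: true
    github_webhook_secret_token: "replace-with-a-real-secret"  # placeholder
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: nginx
    hosts:
      - host: webhook.example.com  # placeholder
        paths:
          - path: /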


@@ -0,0 +1,169 @@
/*
Copyright 2021 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package main
import (
"context"
"errors"
"flag"
"net/http"
"os"
"sync"
"time"
actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1"
"github.com/summerwind/actions-runner-controller/controllers"
"k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
_ "k8s.io/client-go/plugin/pkg/client/auth/exec"
_ "k8s.io/client-go/plugin/pkg/client/auth/gcp"
_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/log/zap"
// +kubebuilder:scaffold:imports
)
var (
scheme = runtime.NewScheme()
setupLog = ctrl.Log.WithName("setup")
)
func init() {
_ = clientgoscheme.AddToScheme(scheme)
_ = actionsv1alpha1.AddToScheme(scheme)
// +kubebuilder:scaffold:scheme
}
func main() {
var (
err error
webhookAddr string
metricsAddr string
// The secret token of the GitHub Webhook. See https://docs.github.com/en/developers/webhooks-and-events/securing-your-webhooks
webhookSecretToken string
watchNamespace string
enableLeaderElection bool
syncPeriod time.Duration
)
webhookSecretToken = os.Getenv("GITHUB_WEBHOOK_SECRET_TOKEN")
flag.StringVar(&webhookAddr, "webhook-addr", ":8000", "The address the webhook endpoint binds to.")
flag.StringVar(&metricsAddr, "metrics-addr", ":8080", "The address the metric endpoint binds to.")
flag.StringVar(&watchNamespace, "watch-namespace", "", "The namespace to watch for HorizontalRunnerAutoscaler's to scale on Webhook. Set to empty for letting it watch for all namespaces.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false,
"Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.")
flag.DurationVar(&syncPeriod, "sync-period", 10*time.Minute, "Determines the minimum frequency at which K8s resources managed by this controller are reconciled. When you use autoscaling, set this to a lower value like 10 minutes, because it corresponds to the minimum time needed to react to demand changes")
flag.Parse()
if webhookSecretToken == "" {
setupLog.Info("-webhook-secret-token is missing or empty. Create one following https://docs.github.com/en/developers/webhooks-and-events/securing-your-webhooks")
}
if watchNamespace == "" {
setupLog.Info("-watch-namespace is empty. HorizontalRunnerAutoscalers in all the namespaces are watched, cached, and considered as scale targets.")
} else {
setupLog.Info("-watch-namespace is %q. Only HorizontalRunnerAutoscalers in %q are watched, cached, and considered as scale targets.")
}
logger := zap.New(func(o *zap.Options) {
o.Development = true
})
ctrl.SetLogger(logger)
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
SyncPeriod: &syncPeriod,
LeaderElection: enableLeaderElection,
Namespace: watchNamespace,
MetricsBindAddress: metricsAddr,
Port: 9443,
})
if err != nil {
setupLog.Error(err, "unable to start manager")
os.Exit(1)
}
hraGitHubWebhook := &controllers.HorizontalRunnerAutoscalerGitHubWebhook{
Client: mgr.GetClient(),
Log: ctrl.Log.WithName("controllers").WithName("Runner"),
Recorder: nil,
Scheme: mgr.GetScheme(),
SecretKeyBytes: []byte(webhookSecretToken),
Namespace: watchNamespace,
}
if err = hraGitHubWebhook.SetupWithManager(mgr); err != nil {
setupLog.Error(err, "unable to create controller", "controller", "Runner")
os.Exit(1)
}
var wg sync.WaitGroup
ctx, cancel := context.WithCancel(context.Background())
wg.Add(1)
go func() {
defer cancel()
defer wg.Done()
setupLog.Info("starting webhook server")
if err := mgr.Start(ctx.Done()); err != nil {
setupLog.Error(err, "problem running manager")
os.Exit(1)
}
}()
mux := http.NewServeMux()
mux.HandleFunc("/", hraGitHubWebhook.Handle)
srv := http.Server{
Addr: webhookAddr,
Handler: mux,
}
wg.Add(1)
go func() {
defer cancel()
defer wg.Done()
go func() {
<-ctx.Done()
srv.Shutdown(context.Background())
}()
if err := srv.ListenAndServe(); err != nil {
if !errors.Is(err, http.ErrServerClosed) {
setupLog.Error(err, "problem running http server")
}
}
}()
go func() {
<-ctrl.SetupSignalHandler()
cancel()
}()
wg.Wait()
}


@@ -48,6 +48,20 @@ spec:
description: HorizontalRunnerAutoscalerSpec defines the desired state of
HorizontalRunnerAutoscaler
properties:
capacityReservations:
items:
description: CapacityReservation specifies the number of replicas
temporarily added to the scale target until ExpirationTime.
properties:
expirationTime:
format: date-time
type: string
name:
type: string
replicas:
type: integer
type: object
type: array
maxReplicas:
description: MaxReplicas is the maximum number of replicas the deployment
is allowed to scale to
@@ -64,6 +78,33 @@ spec:
items:
type: string
type: array
scaleDownAdjustment:
description: ScaleDownAdjustment is the number of runners removed
on scale-down. You can only specify either ScaleDownFactor or
ScaleDownAdjustment.
type: integer
scaleDownFactor:
description: ScaleDownFactor is the multiplicative factor applied
to the current number of runners used to determine how many
pods should be removed.
type: string
scaleDownThreshold:
description: ScaleDownThreshold is the percentage of busy runners
below which the autoscaler scales the runners down.
type: string
scaleUpAdjustment:
description: ScaleUpAdjustment is the number of runners added
on scale-up. You can only specify either ScaleUpFactor or ScaleUpAdjustment.
type: integer
scaleUpFactor:
description: ScaleUpFactor is the multiplicative factor applied
to the current number of runners used to determine how many
pods should be added.
type: string
scaleUpThreshold:
description: ScaleUpThreshold is the percentage of busy runners
above which the autoscaler scales the runners up.
type: string
type:
description: Type is the type of metric to be used for autoscaling.
Supported Types are TotalNumberOfQueuedAndInProgressWorkflowRuns and PercentageRunnersBusy
@@ -86,9 +127,79 @@ spec:
name:
type: string
type: object
scaleUpTriggers:
description: "ScaleUpTriggers is an experimental feature to increase
the desired replicas by 1 on each webhook request received by the
webhookBasedAutoscaler. \n This feature requires you to also enable
and deploy the webhookBasedAutoscaler onto your cluster. \n Note that
the added runners remain until the next sync period at least, and
they may or may not be used by GitHub Actions depending on the timing.
They are intended to be used to gain \"resource slack\" immediately
after you receive a webhook from GitHub, so that you can loosely expect
MinReplicas runners to be always available."
items:
properties:
amount:
type: integer
duration:
type: string
githubEvent:
properties:
checkRun:
description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#check_run
properties:
names:
description: Names is a list of GitHub Actions glob patterns.
Any check_run event whose name matches one of the patterns
in the list can trigger autoscaling. Note that the check_run
name appears to equal the job name you've defined in
your Actions workflow YAML file, so it is very likely
that you can use this to trigger scaling depending on the
job.
items:
type: string
type: array
status:
type: string
types:
items:
type: string
type: array
type: object
pullRequest:
description: https://docs.github.com/en/actions/reference/events-that-trigger-workflows#pull_request
properties:
branches:
items:
type: string
type: array
types:
items:
type: string
type: array
type: object
push:
description: PushSpec is the condition for triggering scale-up
on a push event. Also see https://docs.github.com/en/actions/reference/events-that-trigger-workflows#push
type: object
type: object
type: object
type: array
type: object
status:
properties:
cacheEntries:
items:
properties:
expirationTime:
format: date-time
type: string
key:
type: string
value:
type: integer
type: object
type: array
desiredReplicas:
description: DesiredReplicas is the total number of desired, non-terminated
and latest pods to be set for the primary RunnerSet. This doesn't include

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -10,7 +10,7 @@ spec:
spec:
containers:
- name: kube-rbac-proxy
image: gcr.io/kubebuilder/kube-rbac-proxy:v0.4.1
image: quay.io/brancz/kube-rbac-proxy:v0.8.0
args:
- "--secure-listen-address=0.0.0.0:8443"
- "--upstream=http://127.0.0.1:8080/"
@@ -23,3 +23,4 @@ spec:
args:
- "--metrics-addr=127.0.0.1:8080"
- "--enable-leader-election"
- "--sync-period=10m"


@@ -57,7 +57,7 @@ spec:
resources:
limits:
cpu: 100m
memory: 30Mi
memory: 100Mi
requests:
cpu: 100m
memory: 20Mi


@@ -4,11 +4,65 @@ import (
"context"
"errors"
"fmt"
"math"
"strconv"
"strings"
"time"
"github.com/summerwind/actions-runner-controller/api/v1alpha1"
kerrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"sigs.k8s.io/controller-runtime/pkg/client"
)
const (
defaultScaleUpThreshold = 0.8
defaultScaleDownThreshold = 0.3
defaultScaleUpFactor = 1.3
defaultScaleDownFactor = 0.7
)
func getValueAvailableAt(now time.Time, from, to *time.Time, reservedValue int) *int {
if to != nil && now.After(*to) {
return nil
}
if from != nil && now.Before(*from) {
return nil
}
return &reservedValue
}
func (r *HorizontalRunnerAutoscalerReconciler) getDesiredReplicasFromCache(hra v1alpha1.HorizontalRunnerAutoscaler) *int {
var entry *v1alpha1.CacheEntry
for i := range hra.Status.CacheEntries {
ent := hra.Status.CacheEntries[i]
if ent.Key != v1alpha1.CacheEntryKeyDesiredReplicas {
continue
}
if !time.Now().Before(ent.ExpirationTime.Time) {
continue
}
entry = &ent
break
}
if entry != nil {
v := getValueAvailableAt(time.Now(), nil, &entry.ExpirationTime.Time, entry.Value)
if v != nil {
return v
}
}
return nil
}
func (r *HorizontalRunnerAutoscalerReconciler) determineDesiredReplicas(rd v1alpha1.RunnerDeployment, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) {
if hra.Spec.MinReplicas == nil {
return nil, fmt.Errorf("horizontalrunnerautoscaler %s/%s is missing minReplicas", hra.Namespace, hra.Name)
@@ -16,8 +70,20 @@ func (r *HorizontalRunnerAutoscalerReconciler) determineDesiredReplicas(rd v1alp
return nil, fmt.Errorf("horizontalrunnerautoscaler %s/%s is missing maxReplicas", hra.Namespace, hra.Name)
}
var repos [][]string
metrics := hra.Spec.Metrics
if len(metrics) == 0 || metrics[0].Type == v1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns {
return r.calculateReplicasByQueuedAndInProgressWorkflowRuns(rd, hra)
} else if metrics[0].Type == v1alpha1.AutoscalingMetricTypePercentageRunnersBusy {
return r.calculateReplicasByPercentageRunnersBusy(rd, hra)
} else {
return nil, fmt.Errorf("validting autoscaling metrics: unsupported metric type %q", metrics[0].Type)
}
}
func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByQueuedAndInProgressWorkflowRuns(rd v1alpha1.RunnerDeployment, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) {
var repos [][]string
metrics := hra.Spec.Metrics
repoID := rd.Spec.Template.Spec.Repository
if repoID == "" {
orgName := rd.Spec.Template.Spec.Organization
@@ -25,13 +91,14 @@ func (r *HorizontalRunnerAutoscalerReconciler) determineDesiredReplicas(rd v1alp
return nil, fmt.Errorf("asserting runner deployment spec to detect bug: spec.template.organization should not be empty on this code path")
}
metrics := hra.Spec.Metrics
// In case it's an organizational runners deployment without any scaling metrics defined,
// we assume that the desired replicas should always be `minReplicas + capacityReservedThroughWebhook`.
// See https://github.com/summerwind/actions-runner-controller/issues/377#issuecomment-793372693
if len(metrics) == 0 {
return nil, fmt.Errorf("validating autoscaling metrics: one or more metrics is required")
} else if tpe := metrics[0].Type; tpe != v1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns {
return nil, fmt.Errorf("validting autoscaling metrics: unsupported metric type %q: only supported value is %s", tpe, v1alpha1.AutoscalingMetricTypeTotalNumberOfQueuedAndInProgressWorkflowRuns)
} else if len(metrics[0].RepositoryNames) == 0 {
return hra.Spec.MinReplicas, nil
}
if len(metrics[0].RepositoryNames) == 0 {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].repositoryNames is required and must have one more more entries for organizational runner deployment")
}
@@ -80,12 +147,12 @@ func (r *HorizontalRunnerAutoscalerReconciler) determineDesiredReplicas(rd v1alp
for _, repo := range repos {
user, repoName := repo[0], repo[1]
list, _, err := r.GitHubClient.Actions.ListRepositoryWorkflowRuns(context.TODO(), user, repoName, nil)
workflowRuns, err := r.GitHubClient.ListRepositoryWorkflowRuns(context.TODO(), user, repoName)
if err != nil {
return nil, err
}
for _, run := range list.WorkflowRuns {
for _, run := range workflowRuns {
total++
// As of May 2020, there are only 3 statuses.
@@ -131,7 +198,195 @@ func (r *HorizontalRunnerAutoscalerReconciler) determineDesiredReplicas(rd v1alp
"workflow_runs_in_progress", inProgress,
"workflow_runs_queued", queued,
"workflow_runs_unknown", unknown,
"namespace", hra.Namespace,
"runner_deployment", rd.Name,
"horizontal_runner_autoscaler", hra.Name,
)
return &replicas, nil
}
func (r *HorizontalRunnerAutoscalerReconciler) calculateReplicasByPercentageRunnersBusy(rd v1alpha1.RunnerDeployment, hra v1alpha1.HorizontalRunnerAutoscaler) (*int, error) {
ctx := context.Background()
minReplicas := *hra.Spec.MinReplicas
maxReplicas := *hra.Spec.MaxReplicas
metrics := hra.Spec.Metrics[0]
scaleUpThreshold := defaultScaleUpThreshold
scaleDownThreshold := defaultScaleDownThreshold
scaleUpFactor := defaultScaleUpFactor
scaleDownFactor := defaultScaleDownFactor
if metrics.ScaleUpThreshold != "" {
sut, err := strconv.ParseFloat(metrics.ScaleUpThreshold, 64)
if err != nil {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleUpThreshold cannot be parsed into a float64")
}
scaleUpThreshold = sut
}
if metrics.ScaleDownThreshold != "" {
sdt, err := strconv.ParseFloat(metrics.ScaleDownThreshold, 64)
if err != nil {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleDownThreshold cannot be parsed into a float64")
}
scaleDownThreshold = sdt
}
scaleUpAdjustment := metrics.ScaleUpAdjustment
if scaleUpAdjustment != 0 {
if metrics.ScaleUpAdjustment < 0 {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleUpAdjustment cannot be lower than 0")
}
if metrics.ScaleUpFactor != "" {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[]: scaleUpAdjustment and scaleUpFactor cannot be specified together")
}
} else if metrics.ScaleUpFactor != "" {
suf, err := strconv.ParseFloat(metrics.ScaleUpFactor, 64)
if err != nil {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleUpFactor cannot be parsed into a float64")
}
scaleUpFactor = suf
}
scaleDownAdjustment := metrics.ScaleDownAdjustment
if scaleDownAdjustment != 0 {
if metrics.ScaleDownAdjustment < 0 {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleDownAdjustment cannot be lower than 0")
}
if metrics.ScaleDownFactor != "" {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[]: scaleDownAdjustment and scaleDownFactor cannot be specified together")
}
} else if metrics.ScaleDownFactor != "" {
sdf, err := strconv.ParseFloat(metrics.ScaleDownFactor, 64)
if err != nil {
return nil, errors.New("validating autoscaling metrics: spec.autoscaling.metrics[].scaleDownFactor cannot be parsed into a float64")
}
scaleDownFactor = sdf
}
// List the runners in the namespace: the Horizontal Runner Autoscaler should only be responsible for scaling resources in its own namespace.
var runnerList v1alpha1.RunnerList
var opts []client.ListOption
opts = append(opts, client.InNamespace(rd.Namespace))
selector, err := metav1.LabelSelectorAsSelector(getSelector(&rd))
if err != nil {
return nil, err
}
opts = append(opts, client.MatchingLabelsSelector{Selector: selector})
r.Log.V(2).Info("Finding runners with selector", "ns", rd.Namespace)
if err := r.List(
ctx,
&runnerList,
opts...,
); err != nil {
if !kerrors.IsNotFound(err) {
return nil, err
}
}
runnerMap := make(map[string]struct{})
for _, item := range runnerList.Items {
runnerMap[item.Name] = struct{}{}
}
var (
enterprise = rd.Spec.Template.Spec.Enterprise
organization = rd.Spec.Template.Spec.Organization
repository = rd.Spec.Template.Spec.Repository
)
// ListRunners will return all runners managed by GitHub - not restricted to ns
runners, err := r.GitHubClient.ListRunners(
ctx,
enterprise,
organization,
repository)
if err != nil {
return nil, err
}
var desiredReplicasBefore int
if v := rd.Spec.Replicas; v == nil {
desiredReplicasBefore = 1
} else {
desiredReplicasBefore = *v
}
var (
numRunners int
numRunnersRegistered int
numRunnersBusy int
)
numRunners = len(runnerList.Items)
for _, runner := range runners {
if _, ok := runnerMap[*runner.Name]; ok {
numRunnersRegistered++
if runner.GetBusy() {
numRunnersBusy++
}
}
}
var desiredReplicas int
fractionBusy := float64(numRunnersBusy) / float64(desiredReplicasBefore)
if fractionBusy >= scaleUpThreshold {
if scaleUpAdjustment > 0 {
desiredReplicas = desiredReplicasBefore + scaleUpAdjustment
} else {
desiredReplicas = int(math.Ceil(float64(desiredReplicasBefore) * scaleUpFactor))
}
} else if fractionBusy < scaleDownThreshold {
if scaleDownAdjustment > 0 {
desiredReplicas = desiredReplicasBefore - scaleDownAdjustment
} else {
desiredReplicas = int(float64(desiredReplicasBefore) * scaleDownFactor)
}
} else {
desiredReplicas = *rd.Spec.Replicas
}
if desiredReplicas < minReplicas {
desiredReplicas = minReplicas
} else if desiredReplicas > maxReplicas {
desiredReplicas = maxReplicas
}
// NOTES for operators:
//
// - num_runners can be up to twice as large as replicas_desired_before while
// the runnerdeployment controller is replacing the RunnerReplicaSet for a runner update.
r.Log.V(1).Info(
"Calculated desired replicas",
"replicas_min", minReplicas,
"replicas_max", maxReplicas,
"replicas_desired_before", desiredReplicasBefore,
"replicas_desired", desiredReplicas,
"num_runners", numRunners,
"num_runners_registered", numRunnersRegistered,
"num_runners_busy", numRunnersBusy,
"namespace", hra.Namespace,
"runner_deployment", rd.Name,
"horizontal_runner_autoscaler", hra.Name,
"enterprise", enterprise,
"organization", organization,
"repository", repository,
)
rd.Status.Replicas = &desiredReplicas
replicas := desiredReplicas
return &replicas, nil
}
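For intuition, the percentage-based calculation above can be condensed as follows. This is a hedged sketch: the threshold and factor constants (0.8, 0.3, 1.3, 0.7) are assumed stand-ins for defaultScaleUpThreshold and friends, whose values are not shown in this diff, and integer truncation on scale-down mirrors the `int(float64(...) * scaleDownFactor)` conversion above.

```go
package main

import (
	"fmt"
	"math"
)

// scale sketches the PercentageRunnersBusy decision: compare the busy
// fraction against thresholds, apply a multiplicative factor, then
// clamp into [min, max]. Fixed scaleUp/scaleDownAdjustment handling is
// omitted for brevity.
func scale(busy, current, min, max int) int {
	const (
		scaleUpThreshold   = 0.8 // assumed default
		scaleDownThreshold = 0.3 // assumed default
		scaleUpFactor      = 1.3 // assumed default
		scaleDownFactor    = 0.7 // assumed default
	)
	fractionBusy := float64(busy) / float64(current)
	desired := current
	if fractionBusy >= scaleUpThreshold {
		desired = int(math.Ceil(float64(current) * scaleUpFactor))
	} else if fractionBusy < scaleDownThreshold {
		desired = int(float64(current) * scaleDownFactor)
	}
	if desired < min {
		desired = min
	} else if desired > max {
		desired = max
	}
	return desired
}

func main() {
	fmt.Println(scale(4, 4, 1, 10)) // all busy: ceil(4*1.3) = 6
	fmt.Println(scale(0, 4, 1, 10)) // all idle: int(4*0.7) = 2
}
```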

View File

@@ -16,7 +16,10 @@ import (
)
func newGithubClient(server *httptest.Server) *github.Client {
client, err := github.NewClientWithAccessToken("token")
c := github.Config{
Token: "token",
}
client, err := c.NewClient()
if err != nil {
panic(err)
}
@@ -37,14 +40,18 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
metav1Now := metav1.Now()
testcases := []struct {
repo string
org string
fixed *int
max *int
min *int
sReplicas *int
sTime *metav1.Time
workflowRuns string
repo string
org string
fixed *int
max *int
min *int
sReplicas *int
sTime *metav1.Time
workflowRuns string
workflowRuns_queued string
workflowRuns_in_progress string
workflowJobs map[int]string
want int
err string
@@ -52,87 +59,107 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
// Legacy functionality
// 3 demanded, max at 3
{
repo: "test/valid",
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3,
repo: "test/valid",
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 3,
},
// 2 demanded, max at 3, currently 3, delay scaling down due to grace period
{
repo: "test/valid",
min: intPtr(2),
max: intPtr(3),
sReplicas: intPtr(3),
sTime: &metav1Now,
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3,
repo: "test/valid",
min: intPtr(2),
max: intPtr(3),
sReplicas: intPtr(3),
sTime: &metav1Now,
workflowRuns: `{"total_count": 3, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 3,
},
// 3 demanded, max at 2
{
repo: "test/valid",
min: intPtr(2),
max: intPtr(2),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2,
repo: "test/valid",
min: intPtr(2),
max: intPtr(2),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 2,
},
// 2 demanded, min at 2
{
repo: "test/valid",
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 3, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2,
repo: "test/valid",
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 3, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 2,
},
// 1 demanded, min at 2
{
repo: "test/valid",
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
want: 2,
repo: "test/valid",
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 0, "workflow_runs":[]}"`,
want: 2,
},
// 1 demanded, min at 2
{
repo: "test/valid",
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2,
repo: "test/valid",
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 2,
},
// 1 demanded, min at 1
{
repo: "test/valid",
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
want: 1,
repo: "test/valid",
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 0, "workflow_runs":[]}"`,
want: 1,
},
// 1 demanded, min at 1
{
repo: "test/valid",
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
want: 1,
repo: "test/valid",
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 1,
},
// fixed at 3
{
repo: "test/valid",
min: intPtr(1),
max: intPtr(3),
fixed: intPtr(3),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3,
repo: "test/valid",
min: intPtr(1),
max: intPtr(3),
fixed: intPtr(3),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 3, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 3,
},
// Job-level autoscaling
// 5 requested from 3 workflows
{
repo: "test/valid",
min: intPtr(2),
max: intPtr(10),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"id": 1, "status":"queued"}, {"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}, {"status":"completed"}]}"`,
repo: "test/valid",
min: intPtr(2),
max: intPtr(10),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"id": 1, "status":"queued"}, {"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"id": 1, "status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}]}"`,
workflowJobs: map[int]string{
1: `{"jobs": [{"status":"queued"}, {"status":"queued"}]}`,
2: `{"jobs": [{"status": "in_progress"}, {"status":"completed"}]}`,
@@ -154,7 +181,11 @@ func TestDetermineDesiredReplicas_RepositoryRunner(t *testing.T) {
_ = v1alpha1.AddToScheme(scheme)
t.Run(fmt.Sprintf("case %d", i), func(t *testing.T) {
server := fake.NewServer(fake.WithListRepositoryWorkflowRunsResponse(200, tc.workflowRuns), fake.WithListWorkflowJobsResponse(200, tc.workflowJobs))
server := fake.NewServer(
fake.WithListRepositoryWorkflowRunsResponse(200, tc.workflowRuns, tc.workflowRuns_queued, tc.workflowRuns_in_progress),
fake.WithListWorkflowJobsResponse(200, tc.workflowJobs),
fake.WithListRunnersResponse(200, fake.RunnersListBody),
)
defer server.Close()
client := newGithubClient(server)
@@ -221,129 +252,157 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
metav1Now := metav1.Now()
testcases := []struct {
repos []string
org string
fixed *int
max *int
min *int
sReplicas *int
sTime *metav1.Time
workflowRuns string
repos []string
org string
fixed *int
max *int
min *int
sReplicas *int
sTime *metav1.Time
workflowRuns string
workflowRuns_queued string
workflowRuns_in_progress string
workflowJobs map[int]string
want int
err string
}{
// 3 demanded, max at 3
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3,
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 3,
},
// 2 demanded, max at 3, currently 3, delay scaling down due to grace period
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
sReplicas: intPtr(3),
sTime: &metav1Now,
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3,
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
sReplicas: intPtr(3),
sTime: &metav1Now,
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 3,
},
// 3 demanded, max at 2
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(2),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2,
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(2),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}]}"`,
want: 2,
},
// 2 demanded, min at 2
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 3, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2,
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 3, "workflow_runs":[{"status":"queued"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 2,
},
// 1 demanded, min at 2
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
want: 2,
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 0, "workflow_runs":[]}"`,
want: 2,
},
// 1 demanded, min at 2
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
want: 2,
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 2,
},
// 1 demanded, min at 1
{
org: "test",
repos: []string{"valid"},
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
want: 1,
org: "test",
repos: []string{"valid"},
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"queued"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 0, "workflow_runs":[]}"`,
want: 1,
},
// 1 demanded, min at 1
{
org: "test",
repos: []string{"valid"},
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
want: 1,
org: "test",
repos: []string{"valid"},
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
want: 1,
},
// fixed at 3
{
org: "test",
repos: []string{"valid"},
fixed: intPtr(1),
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3,
org: "test",
repos: []string{"valid"},
fixed: intPtr(1),
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 3, "workflow_runs":[{"status":"in_progress"},{"status":"in_progress"},{"status":"in_progress"}]}"`,
want: 3,
},
// org runner, fixed at 3
{
org: "test",
repos: []string{"valid"},
fixed: intPtr(1),
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
want: 3,
org: "test",
repos: []string{"valid"},
fixed: intPtr(1),
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"status":"in_progress"}, {"status":"in_progress"}, {"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 3, "workflow_runs":[{"status":"in_progress"},{"status":"in_progress"},{"status":"in_progress"}]}"`,
want: 3,
},
// org runner, 1 demanded, min at 1, no repos
{
org: "test",
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
err: "validating autoscaling metrics: spec.autoscaling.metrics[].repositoryNames is required and must have one more more entries for organizational runner deployment",
org: "test",
min: intPtr(1),
max: intPtr(3),
workflowRuns: `{"total_count": 2, "workflow_runs":[{"status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 0, "workflow_runs":[]}"`,
workflowRuns_in_progress: `{"total_count": 1, "workflow_runs":[{"status":"in_progress"}]}"`,
err: "validating autoscaling metrics: spec.autoscaling.metrics[].repositoryNames is required and must have one more more entries for organizational runner deployment",
},
// Job-level autoscaling
// 5 requested from 3 workflows
{
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(10),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"id": 1, "status":"queued"}, {"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}, {"status":"completed"}]}"`,
org: "test",
repos: []string{"valid"},
min: intPtr(2),
max: intPtr(10),
workflowRuns: `{"total_count": 4, "workflow_runs":[{"id": 1, "status":"queued"}, {"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}, {"status":"completed"}]}"`,
workflowRuns_queued: `{"total_count": 1, "workflow_runs":[{"id": 1, "status":"queued"}]}"`,
workflowRuns_in_progress: `{"total_count": 2, "workflow_runs":[{"id": 2, "status":"in_progress"}, {"id": 3, "status":"in_progress"}, {"status":"completed"}]}"`,
workflowJobs: map[int]string{
1: `{"jobs": [{"status":"queued"}, {"status":"queued"}]}`,
2: `{"jobs": [{"status": "in_progress"}, {"status":"completed"}]}`,
@@ -365,7 +424,11 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
_ = v1alpha1.AddToScheme(scheme)
t.Run(fmt.Sprintf("case %d", i), func(t *testing.T) {
server := fake.NewServer(fake.WithListRepositoryWorkflowRunsResponse(200, tc.workflowRuns), fake.WithListWorkflowJobsResponse(200, tc.workflowJobs))
server := fake.NewServer(
fake.WithListRepositoryWorkflowRunsResponse(200, tc.workflowRuns, tc.workflowRuns_queued, tc.workflowRuns_in_progress),
fake.WithListWorkflowJobsResponse(200, tc.workflowJobs),
fake.WithListRunnersResponse(200, fake.RunnersListBody),
)
defer server.Close()
client := newGithubClient(server)
@@ -380,7 +443,17 @@ func TestDetermineDesiredReplicas_OrganizationalRunner(t *testing.T) {
Name: "testrd",
},
Spec: v1alpha1.RunnerDeploymentSpec{
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
Template: v1alpha1.RunnerTemplate{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"foo": "bar",
},
},
Spec: v1alpha1.RunnerSpec{
Organization: tc.org,
},

View File

@@ -0,0 +1,458 @@
/*
Copyright 2020 The actions-runner-controller authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package controllers
import (
"context"
"fmt"
"io/ioutil"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/types"
"net/http"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
"strings"
"time"
"github.com/go-logr/logr"
gogithub "github.com/google/go-github/v33/github"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
"github.com/summerwind/actions-runner-controller/api/v1alpha1"
)
const (
scaleTargetKey = "scaleTarget"
)
// HorizontalRunnerAutoscalerGitHubWebhook autoscales a HorizontalRunnerAutoscaler and the RunnerDeployment on each
// GitHub Webhook received
type HorizontalRunnerAutoscalerGitHubWebhook struct {
client.Client
Log logr.Logger
Recorder record.EventRecorder
Scheme *runtime.Scheme
// SecretKeyBytes is the byte representation of the webhook secret token
// that the administrator generated and specified in the GitHub Web UI.
SecretKeyBytes []byte
// Namespace is the namespace to watch for HorizontalRunnerAutoscalers to be
// scaled on webhook events.
// Leave empty to watch all namespaces.
Namespace string
Name string
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Reconcile(request reconcile.Request) (reconcile.Result, error) {
return ctrl.Result{}, nil
}
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=horizontalrunnerautoscalers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=horizontalrunnerautoscalers/finalizers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=horizontalrunnerautoscalers/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=core,resources=events,verbs=create;patch
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) Handle(w http.ResponseWriter, r *http.Request) {
var (
ok bool
err error
)
defer func() {
if !ok {
w.WriteHeader(http.StatusInternalServerError)
if err != nil {
msg := err.Error()
if written, err := w.Write([]byte(msg)); err != nil {
autoscaler.Log.Error(err, "failed writing http error response", "msg", msg, "written", written)
}
}
}
}()
defer func() {
if r.Body != nil {
r.Body.Close()
}
}()
// Respond OK to GET / (e.g. for health checks)
if r.Method == http.MethodGet {
fmt.Fprintln(w, "webhook server is running")
return
}
var payload []byte
if len(autoscaler.SecretKeyBytes) > 0 {
payload, err = gogithub.ValidatePayload(r, autoscaler.SecretKeyBytes)
if err != nil {
autoscaler.Log.Error(err, "error validating request body")
return
}
} else {
payload, err = ioutil.ReadAll(r.Body)
if err != nil {
autoscaler.Log.Error(err, "error reading request body")
return
}
}
webhookType := gogithub.WebHookType(r)
event, err := gogithub.ParseWebHook(webhookType, payload)
if err != nil {
var s string
if payload != nil {
s = string(payload)
}
autoscaler.Log.Error(err, "could not parse webhook", "webhookType", webhookType, "payload", s)
return
}
var target *ScaleTarget
log := autoscaler.Log.WithValues(
"event", webhookType,
"hookID", r.Header.Get("X-GitHub-Hook-ID"),
"delivery", r.Header.Get("X-GitHub-Delivery"),
)
switch e := event.(type) {
case *gogithub.PushEvent:
target, err = autoscaler.getScaleUpTarget(
context.TODO(),
log,
e.Repo.GetName(),
e.Repo.Owner.GetLogin(),
e.Repo.Owner.GetType(),
autoscaler.MatchPushEvent(e),
)
case *gogithub.PullRequestEvent:
target, err = autoscaler.getScaleUpTarget(
context.TODO(),
log,
e.Repo.GetName(),
e.Repo.Owner.GetLogin(),
e.Repo.Owner.GetType(),
autoscaler.MatchPullRequestEvent(e),
)
if pullRequest := e.PullRequest; pullRequest != nil {
log = log.WithValues(
"pullRequest.base.ref", e.PullRequest.Base.GetRef(),
"action", e.GetAction(),
)
}
case *gogithub.CheckRunEvent:
target, err = autoscaler.getScaleUpTarget(
context.TODO(),
log,
e.Repo.GetName(),
e.Repo.Owner.GetLogin(),
e.Repo.Owner.GetType(),
autoscaler.MatchCheckRunEvent(e),
)
if checkRun := e.GetCheckRun(); checkRun != nil {
log = log.WithValues(
"checkRun.status", checkRun.GetStatus(),
"action", e.GetAction(),
)
}
case *gogithub.PingEvent:
ok = true
w.WriteHeader(http.StatusOK)
msg := "pong"
if written, err := w.Write([]byte(msg)); err != nil {
log.Error(err, "failed writing http response", "msg", msg, "written", written)
}
log.Info("received ping event")
return
default:
log.Info("unknown event type", "eventType", webhookType)
return
}
if err != nil {
log.Error(err, "handling check_run event")
return
}
if target == nil {
log.Info(
"Scale target not found. If this is unexpected, ensure that there is exactly one repository-wide or organizational runner deployment that matches this webhook event",
)
msg := "no horizontalrunnerautoscaler to scale for this github event"
ok = true
w.WriteHeader(http.StatusOK)
if written, err := w.Write([]byte(msg)); err != nil {
log.Error(err, "failed writing http response", "msg", msg, "written", written)
}
return
}
if err := autoscaler.tryScaleUp(context.TODO(), target); err != nil {
log.Error(err, "could not scale up")
return
}
ok = true
w.WriteHeader(http.StatusOK)
msg := fmt.Sprintf("scaled %s by 1", target.Name)
autoscaler.Log.Info(msg)
if written, err := w.Write([]byte(msg)); err != nil {
log.Error(err, "failed writing http response", "msg", msg, "written", written)
}
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) findHRAsByKey(ctx context.Context, value string) ([]v1alpha1.HorizontalRunnerAutoscaler, error) {
ns := autoscaler.Namespace
var defaultListOpts []client.ListOption
if ns != "" {
defaultListOpts = append(defaultListOpts, client.InNamespace(ns))
}
var hras []v1alpha1.HorizontalRunnerAutoscaler
if value != "" {
opts := append([]client.ListOption{}, defaultListOpts...)
opts = append(opts, client.MatchingFields{scaleTargetKey: value})
if autoscaler.Namespace != "" {
opts = append(opts, client.InNamespace(autoscaler.Namespace))
}
var hraList v1alpha1.HorizontalRunnerAutoscalerList
if err := autoscaler.List(ctx, &hraList, opts...); err != nil {
return nil, err
}
for _, d := range hraList.Items {
hras = append(hras, d)
}
}
return hras, nil
}
func matchTriggerConditionAgainstEvent(types []string, eventAction *string) bool {
if len(types) == 0 {
return true
}
if eventAction == nil {
return false
}
for _, tpe := range types {
if tpe == *eventAction {
return true
}
}
return false
}
type ScaleTarget struct {
v1alpha1.HorizontalRunnerAutoscaler
v1alpha1.ScaleUpTrigger
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) searchScaleTargets(hras []v1alpha1.HorizontalRunnerAutoscaler, f func(v1alpha1.ScaleUpTrigger) bool) []ScaleTarget {
var matched []ScaleTarget
for _, hra := range hras {
if !hra.ObjectMeta.DeletionTimestamp.IsZero() {
continue
}
for _, scaleUpTrigger := range hra.Spec.ScaleUpTriggers {
if !f(scaleUpTrigger) {
continue
}
matched = append(matched, ScaleTarget{
HorizontalRunnerAutoscaler: hra,
ScaleUpTrigger: scaleUpTrigger,
})
}
}
return matched
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) getScaleTarget(ctx context.Context, name string, f func(v1alpha1.ScaleUpTrigger) bool) (*ScaleTarget, error) {
hras, err := autoscaler.findHRAsByKey(ctx, name)
if err != nil {
return nil, err
}
targets := autoscaler.searchScaleTargets(hras, f)
n := len(targets)
if n == 0 {
return nil, nil
}
if n > 1 {
var scaleTargetIDs []string
for _, t := range targets {
scaleTargetIDs = append(scaleTargetIDs, t.HorizontalRunnerAutoscaler.Name)
}
autoscaler.Log.Info(
"Found too many scale targets: "+
"It must be exactly one to avoid ambiguity. "+
"Either set Namespace for the webhook-based autoscaler to let it only find HRAs in the namespace, "+
"or update Repository or Organization fields in your RunnerDeployment resources to fix the ambiguity.",
"scaleTargets", strings.Join(scaleTargetIDs, ","))
return nil, nil
}
return &targets[0], nil
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) getScaleUpTarget(ctx context.Context, log logr.Logger, repo, owner, ownerType string, f func(v1alpha1.ScaleUpTrigger) bool) (*ScaleTarget, error) {
repositoryRunnerKey := owner + "/" + repo
if target, err := autoscaler.getScaleTarget(ctx, repositoryRunnerKey, f); err != nil {
autoscaler.Log.Info("finding repository-wide runner", "repository", repositoryRunnerKey)
return nil, err
} else if target != nil {
autoscaler.Log.Info("scale up target is repository-wide runners", "repository", repo)
return target, nil
}
if ownerType == "User" {
return nil, nil
}
if target, err := autoscaler.getScaleTarget(ctx, owner, f); err != nil {
log.Info("finding organizational runner", "organization", owner)
return nil, err
} else if target != nil {
log.Info("scale up target is organizational runners", "organization", owner)
return target, nil
}
return nil, nil
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) tryScaleUp(ctx context.Context, target *ScaleTarget) error {
if target == nil {
return nil
}
copy := target.HorizontalRunnerAutoscaler.DeepCopy()
amount := 1
if target.ScaleUpTrigger.Amount > 0 {
amount = target.ScaleUpTrigger.Amount
}
capacityReservations := getValidCapacityReservations(copy)
copy.Spec.CapacityReservations = append(capacityReservations, v1alpha1.CapacityReservation{
ExpirationTime: metav1.Time{Time: time.Now().Add(target.ScaleUpTrigger.Duration.Duration)},
Replicas: amount,
})
if err := autoscaler.Client.Patch(ctx, copy, client.MergeFrom(&target.HorizontalRunnerAutoscaler)); err != nil {
return fmt.Errorf("patching horizontalrunnerautoscaler to add capacity reservation: %w", err)
}
return nil
}
func getValidCapacityReservations(autoscaler *v1alpha1.HorizontalRunnerAutoscaler) []v1alpha1.CapacityReservation {
var capacityReservations []v1alpha1.CapacityReservation
now := time.Now()
for _, reservation := range autoscaler.Spec.CapacityReservations {
if reservation.ExpirationTime.Time.After(now) {
capacityReservations = append(capacityReservations, reservation)
}
}
return capacityReservations
}
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) SetupWithManager(mgr ctrl.Manager) error {
name := "webhookbasedautoscaler"
if autoscaler.Name != "" {
name = autoscaler.Name
}
autoscaler.Recorder = mgr.GetEventRecorderFor(name)
if err := mgr.GetFieldIndexer().IndexField(&v1alpha1.HorizontalRunnerAutoscaler{}, scaleTargetKey, func(rawObj runtime.Object) []string {
hra := rawObj.(*v1alpha1.HorizontalRunnerAutoscaler)
if hra.Spec.ScaleTargetRef.Name == "" {
return nil
}
var rd v1alpha1.RunnerDeployment
if err := autoscaler.Client.Get(context.Background(), types.NamespacedName{Namespace: hra.Namespace, Name: hra.Spec.ScaleTargetRef.Name}, &rd); err != nil {
return nil
}
return []string{rd.Spec.Template.Spec.Repository, rd.Spec.Template.Spec.Organization}
}); err != nil {
return err
}
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.HorizontalRunnerAutoscaler{}).
Named(name).
Complete(autoscaler)
}
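To see what tryScaleUp's capacity reservations amount to, here is a small, self-contained sketch. CapacityReservation and effectiveExtra are simplified stand-ins for the v1alpha1 type and for the summation the HRA reconciler performs over unexpired reservations.

```go
package main

import (
	"fmt"
	"time"
)

// CapacityReservation mimics the HRA capacity reservation added by
// tryScaleUp above: each webhook event reserves Replicas extra runners
// until ExpirationTime.
type CapacityReservation struct {
	ExpirationTime time.Time
	Replicas       int
}

// effectiveExtra sums the replicas of reservations that have not yet
// expired, which is how the HRA reconciler bumps its desired replicas.
func effectiveExtra(reservations []CapacityReservation, now time.Time) int {
	var total int
	for _, r := range reservations {
		if r.ExpirationTime.After(now) {
			total += r.Replicas
		}
	}
	return total
}

func main() {
	now := time.Now()
	reservations := []CapacityReservation{
		{ExpirationTime: now.Add(5 * time.Minute), Replicas: 1},  // fresh webhook event
		{ExpirationTime: now.Add(-1 * time.Minute), Replicas: 1}, // already expired
	}
	fmt.Println(effectiveExtra(reservations, now)) // 1
}
```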

View File

@@ -0,0 +1,43 @@
package controllers
import (
"github.com/google/go-github/v33/github"
"github.com/summerwind/actions-runner-controller/api/v1alpha1"
"github.com/summerwind/actions-runner-controller/pkg/actionsglob"
)
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) MatchCheckRunEvent(event *github.CheckRunEvent) func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {
return func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {
g := scaleUpTrigger.GitHubEvent
if g == nil {
return false
}
cr := g.CheckRun
if cr == nil {
return false
}
if !matchTriggerConditionAgainstEvent(cr.Types, event.Action) {
return false
}
if cr.Status != "" && (event.CheckRun == nil || event.CheckRun.Status == nil || *event.CheckRun.Status != cr.Status) {
return false
}
if checkRun := event.CheckRun; checkRun != nil && len(cr.Names) > 0 {
for _, pat := range cr.Names {
if r := actionsglob.Match(pat, checkRun.GetName()); r {
return true
}
}
return false
}
return true
}
}
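The CheckRun.Names filter above delegates pattern matching to actionsglob.Match. The sketch below illustrates the intended semantics with a deliberately simplified matcher (exact match plus a trailing `*` wildcard); the real actionsglob package may support richer patterns.

```go
package main

import (
	"fmt"
	"strings"
)

// matchName is a simplified stand-in for actionsglob.Match: it supports
// exact names plus a single trailing "*" wildcard.
func matchName(pattern, name string) bool {
	if strings.HasSuffix(pattern, "*") {
		return strings.HasPrefix(name, strings.TrimSuffix(pattern, "*"))
	}
	return pattern == name
}

// scaleUpWanted mirrors how MatchCheckRunEvent consults CheckRun.Names:
// no configured names means every check run matches; otherwise at least
// one pattern must match the check run's name.
func scaleUpWanted(names []string, checkRunName string) bool {
	if len(names) == 0 {
		return true
	}
	for _, pat := range names {
		if matchName(pat, checkRunName) {
			return true
		}
	}
	return false
}

func main() {
	names := []string{"build", "test-*"}
	fmt.Println(scaleUpWanted(names, "test-integration")) // true
	fmt.Println(scaleUpWanted(names, "lint"))             // false
}
```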

View File

@@ -0,0 +1,32 @@
package controllers
import (
"github.com/google/go-github/v33/github"
"github.com/summerwind/actions-runner-controller/api/v1alpha1"
)
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) MatchPullRequestEvent(event *github.PullRequestEvent) func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {
return func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {
g := scaleUpTrigger.GitHubEvent
if g == nil {
return false
}
pr := g.PullRequest
if pr == nil {
return false
}
if !matchTriggerConditionAgainstEvent(pr.Types, event.Action) {
return false
}
if !matchTriggerConditionAgainstEvent(pr.Branches, event.PullRequest.Base.Ref) {
return false
}
return true
}
}

View File

@@ -0,0 +1,24 @@
package controllers
import (
"github.com/google/go-github/v33/github"
"github.com/summerwind/actions-runner-controller/api/v1alpha1"
)
func (autoscaler *HorizontalRunnerAutoscalerGitHubWebhook) MatchPushEvent(event *github.PushEvent) func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {
return func(scaleUpTrigger v1alpha1.ScaleUpTrigger) bool {
g := scaleUpTrigger.GitHubEvent
if g == nil {
return false
}
push := g.Push
if push == nil {
return false
}
return true
}
}

View File

@@ -0,0 +1,314 @@
package controllers
import (
"bytes"
"encoding/json"
"fmt"
"github.com/go-logr/logr"
"github.com/google/go-github/v33/github"
actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1"
"io"
"io/ioutil"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
clientgoscheme "k8s.io/client-go/kubernetes/scheme"
"net/http"
"net/http/httptest"
"net/url"
"os"
"sigs.k8s.io/controller-runtime/pkg/client/fake"
"testing"
"time"
)
var (
sc = runtime.NewScheme()
)
func init() {
_ = clientgoscheme.AddToScheme(sc)
_ = actionsv1alpha1.AddToScheme(sc)
}
func TestOrgWebhookCheckRun(t *testing.T) {
f, err := os.Open("testdata/org_webhook_check_run_payload.json")
if err != nil {
t.Fatalf("could not open the fixture: %s", err)
}
defer f.Close()
var e github.CheckRunEvent
if err := json.NewDecoder(f).Decode(&e); err != nil {
t.Fatalf("invalid json: %s", err)
}
testServer(t,
"check_run",
&e,
200,
"no horizontalrunnerautoscaler to scale for this github event",
)
}
func TestRepoWebhookCheckRun(t *testing.T) {
f, err := os.Open("testdata/repo_webhook_check_run_payload.json")
if err != nil {
t.Fatalf("could not open the fixture: %s", err)
}
defer f.Close()
var e github.CheckRunEvent
if err := json.NewDecoder(f).Decode(&e); err != nil {
t.Fatalf("invalid json: %s", err)
}
testServer(t,
"check_run",
&e,
200,
"no horizontalrunnerautoscaler to scale for this github event",
)
}
func TestWebhookPullRequest(t *testing.T) {
testServer(t,
"pull_request",
&github.PullRequestEvent{
PullRequest: &github.PullRequest{
Base: &github.PullRequestBranch{
Ref: github.String("main"),
},
},
Repo: &github.Repository{
Name: github.String("myorg/myrepo"),
Organization: &github.Organization{
Name: github.String("myorg"),
},
},
Action: github.String("created"),
},
200,
"no horizontalrunnerautoscaler to scale for this github event",
)
}
func TestWebhookPush(t *testing.T) {
testServer(t,
"push",
&github.PushEvent{
Repo: &github.PushEventRepository{
Name: github.String("myrepo"),
Organization: github.String("myorg"),
},
},
200,
"no horizontalrunnerautoscaler to scale for this github event",
)
}
func TestWebhookPing(t *testing.T) {
testServer(t,
"ping",
&github.PingEvent{
Zen: github.String("zen"),
},
200,
"pong",
)
}
func TestGetRequest(t *testing.T) {
hra := HorizontalRunnerAutoscalerGitHubWebhook{}
request, _ := http.NewRequest(http.MethodGet, "/", nil)
recorder := httptest.ResponseRecorder{}
hra.Handle(&recorder, request)
response := recorder.Result()
if response.StatusCode != http.StatusOK {
t.Errorf("want %d, got %d", http.StatusOK, response.StatusCode)
}
}
func TestGetValidCapacityReservations(t *testing.T) {
now := time.Now()
hra := &actionsv1alpha1.HorizontalRunnerAutoscaler{
Spec: actionsv1alpha1.HorizontalRunnerAutoscalerSpec{
CapacityReservations: []actionsv1alpha1.CapacityReservation{
{
ExpirationTime: metav1.Time{Time: now.Add(-time.Second)},
Replicas: 1,
},
{
ExpirationTime: metav1.Time{Time: now},
Replicas: 2,
},
{
ExpirationTime: metav1.Time{Time: now.Add(time.Second)},
Replicas: 3,
},
},
},
}
revs := getValidCapacityReservations(hra)
var count int
for _, r := range revs {
count += r.Replicas
}
want := 3
if count != want {
t.Errorf("want %d, got %d", want, count)
}
}
func installTestLogger(webhook *HorizontalRunnerAutoscalerGitHubWebhook) *bytes.Buffer {
logs := &bytes.Buffer{}
log := testLogger{
name: "testlog",
writer: logs,
}
webhook.Log = &log
return logs
}
func testServer(t *testing.T, eventType string, event interface{}, wantCode int, wantBody string) {
t.Helper()
hraWebhook := &HorizontalRunnerAutoscalerGitHubWebhook{}
var initObjs []runtime.Object
client := fake.NewFakeClientWithScheme(sc, initObjs...)
logs := installTestLogger(hraWebhook)
defer func() {
if t.Failed() {
t.Logf("diagnostics: %s", logs.String())
}
}()
hraWebhook.Client = client
mux := http.NewServeMux()
mux.HandleFunc("/", hraWebhook.Handle)
server := httptest.NewServer(mux)
defer server.Close()
resp, err := sendWebhook(server, eventType, event)
if err != nil {
t.Fatal(err)
}
defer func() {
if resp != nil {
resp.Body.Close()
}
}()
if resp.StatusCode != wantCode {
t.Error("status:", resp.StatusCode)
}
respBody, err := ioutil.ReadAll(resp.Body)
if err != nil {
t.Fatal(err)
}
if string(respBody) != wantBody {
t.Fatal("body:", string(respBody))
}
}
func sendWebhook(server *httptest.Server, eventType string, event interface{}) (*http.Response, error) {
jsonBuf := &bytes.Buffer{}
enc := json.NewEncoder(jsonBuf)
enc.SetIndent(" ", "")
err := enc.Encode(event)
if err != nil {
return nil, fmt.Errorf("[bug in test] encoding event to json: %+v", err)
}
reqBody := jsonBuf.Bytes()
u, err := url.Parse(server.URL)
if err != nil {
return nil, fmt.Errorf("parsing server url: %v", err)
}
req := &http.Request{
Method: http.MethodPost,
URL: u,
Header: map[string][]string{
"X-GitHub-Event": {eventType},
"Content-Type": {"application/json"},
},
Body: ioutil.NopCloser(bytes.NewBuffer(reqBody)),
}
return http.DefaultClient.Do(req)
}
// testLogger is a sample logr.Logger that logs in-memory.
// It's only for testing log outputs.
type testLogger struct {
name string
keyValues map[string]interface{}
writer io.Writer
}
var _ logr.Logger = &testLogger{}
func (l *testLogger) Info(msg string, kvs ...interface{}) {
fmt.Fprintf(l.writer, "%s] %s\t", l.name, msg)
for k, v := range l.keyValues {
fmt.Fprintf(l.writer, "%s=%+v ", k, v)
}
for i := 0; i < len(kvs); i += 2 {
fmt.Fprintf(l.writer, "%s=%+v ", kvs[i], kvs[i+1])
}
fmt.Fprintf(l.writer, "\n")
}
func (_ *testLogger) Enabled() bool {
return true
}
func (l *testLogger) Error(err error, msg string, kvs ...interface{}) {
kvs = append(kvs, "error", err)
l.Info(msg, kvs...)
}
func (l *testLogger) V(_ int) logr.InfoLogger {
return l
}
func (l *testLogger) WithName(name string) logr.Logger {
return &testLogger{
name: l.name + "." + name,
keyValues: l.keyValues,
writer: l.writer,
}
}
func (l *testLogger) WithValues(kvs ...interface{}) logr.Logger {
newMap := make(map[string]interface{}, len(l.keyValues)+len(kvs)/2)
for k, v := range l.keyValues {
newMap[k] = v
}
for i := 0; i < len(kvs); i += 2 {
newMap[kvs[i].(string)] = kvs[i+1]
}
return &testLogger{
name: l.name,
keyValues: newMap,
writer: l.writer,
}
}

View File

@@ -18,6 +18,7 @@ package controllers
import (
"context"
"fmt"
"time"
"github.com/summerwind/actions-runner-controller/github"
@@ -46,6 +47,9 @@ type HorizontalRunnerAutoscalerReconciler struct {
Log logr.Logger
Recorder record.EventRecorder
Scheme *runtime.Scheme
CacheDuration time.Duration
Name string
}
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerdeployments,verbs=get;list;watch;update;patch
@@ -79,13 +83,23 @@ func (r *HorizontalRunnerAutoscalerReconciler) Reconcile(req ctrl.Request) (ctrl
return ctrl.Result{}, nil
}
replicas, err := r.computeReplicas(rd, hra)
if err != nil {
r.Recorder.Event(&hra, corev1.EventTypeNormal, "RunnerAutoscalingFailure", err.Error())
var replicas *int
log.Error(err, "Could not compute replicas")
replicasFromCache := r.getDesiredReplicasFromCache(hra)
return ctrl.Result{}, err
if replicasFromCache != nil {
replicas = replicasFromCache
} else {
var err error
replicas, err = r.computeReplicas(rd, hra)
if err != nil {
r.Recorder.Event(&hra, corev1.EventTypeNormal, "RunnerAutoscalingFailure", err.Error())
log.Error(err, "Could not compute replicas")
return ctrl.Result{}, err
}
}
const defaultReplicas = 1
@@ -93,46 +107,96 @@ func (r *HorizontalRunnerAutoscalerReconciler) Reconcile(req ctrl.Request) (ctrl
currentDesiredReplicas := getIntOrDefault(rd.Spec.Replicas, defaultReplicas)
newDesiredReplicas := getIntOrDefault(replicas, defaultReplicas)
now := time.Now()
for _, reservation := range hra.Spec.CapacityReservations {
if reservation.ExpirationTime.Time.After(now) {
newDesiredReplicas += reservation.Replicas
}
}
if hra.Spec.MaxReplicas != nil && *hra.Spec.MaxReplicas < newDesiredReplicas {
newDesiredReplicas = *hra.Spec.MaxReplicas
}
// Please add more conditions under which we can in-place update the newest runnerreplicaset without disruption
if currentDesiredReplicas != newDesiredReplicas {
copy := rd.DeepCopy()
copy.Spec.Replicas = &newDesiredReplicas
if err := r.Client.Update(ctx, copy); err != nil {
log.Error(err, "Failed to update runnerderployment resource")
return ctrl.Result{}, err
if err := r.Client.Patch(ctx, copy, client.MergeFrom(&rd)); err != nil {
return ctrl.Result{}, fmt.Errorf("patching runnerdeployment to have %d replicas: %w", newDesiredReplicas, err)
}
return ctrl.Result{}, err
}
if hra.Status.DesiredReplicas == nil || *hra.Status.DesiredReplicas != *replicas {
updated := hra.DeepCopy()
var updated *v1alpha1.HorizontalRunnerAutoscaler
if (hra.Status.DesiredReplicas == nil && *replicas > 1) ||
(hra.Status.DesiredReplicas != nil && *replicas > *hra.Status.DesiredReplicas) {
if hra.Status.DesiredReplicas == nil || *hra.Status.DesiredReplicas != newDesiredReplicas {
updated = hra.DeepCopy()
if (hra.Status.DesiredReplicas == nil && newDesiredReplicas > 1) ||
(hra.Status.DesiredReplicas != nil && newDesiredReplicas > *hra.Status.DesiredReplicas) {
updated.Status.LastSuccessfulScaleOutTime = &metav1.Time{Time: time.Now()}
}
updated.Status.DesiredReplicas = replicas
updated.Status.DesiredReplicas = &newDesiredReplicas
}
if err := r.Status().Update(ctx, updated); err != nil {
log.Error(err, "Failed to update horizontalrunnerautoscaler status")
if replicasFromCache == nil {
if updated == nil {
updated = hra.DeepCopy()
}
return ctrl.Result{}, err
cacheEntries := getValidCacheEntries(updated, now)
var cacheDuration time.Duration
if r.CacheDuration > 0 {
cacheDuration = r.CacheDuration
} else {
cacheDuration = 10 * time.Minute
}
updated.Status.CacheEntries = append(cacheEntries, v1alpha1.CacheEntry{
Key: v1alpha1.CacheEntryKeyDesiredReplicas,
Value: *replicas,
ExpirationTime: metav1.Time{Time: time.Now().Add(cacheDuration)},
})
}
if updated != nil {
if err := r.Status().Patch(ctx, updated, client.MergeFrom(&hra)); err != nil {
return ctrl.Result{}, fmt.Errorf("patching horizontalrunnerautoscaler status to add cache entry: %w", err)
}
}
return ctrl.Result{}, nil
}
func getValidCacheEntries(hra *v1alpha1.HorizontalRunnerAutoscaler, now time.Time) []v1alpha1.CacheEntry {
var cacheEntries []v1alpha1.CacheEntry
for _, ent := range hra.Status.CacheEntries {
if ent.ExpirationTime.After(now) {
cacheEntries = append(cacheEntries, ent)
}
}
return cacheEntries
}
func (r *HorizontalRunnerAutoscalerReconciler) SetupWithManager(mgr ctrl.Manager) error {
r.Recorder = mgr.GetEventRecorderFor("horizontalrunnerautoscaler-controller")
name := "horizontalrunnerautoscaler-controller"
if r.Name != "" {
name = r.Name
}
r.Recorder = mgr.GetEventRecorderFor(name)
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.HorizontalRunnerAutoscaler{}).
Named(name).
Complete(r)
}
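The cache entries added above let the reconciler reuse the last computed desired replica count instead of recomputing it (and hitting the GitHub API) on every reconcile. A minimal sketch of that lookup, with simplified stand-ins for v1alpha1.CacheEntry and getDesiredReplicasFromCache (whose body is not shown in this diff):

```go
package main

import (
	"fmt"
	"time"
)

// CacheEntry stands in for v1alpha1.CacheEntry: the reconciler stores the
// last computed desired replica count with an expiration time.
type CacheEntry struct {
	Key            string
	Value          int
	ExpirationTime time.Time
}

// cachedDesiredReplicas returns the cached value if one is still valid,
// mirroring the role of getDesiredReplicasFromCache referenced above.
func cachedDesiredReplicas(entries []CacheEntry, now time.Time) (int, bool) {
	for _, e := range entries {
		if e.Key == "desiredReplicas" && e.ExpirationTime.After(now) {
			return e.Value, true
		}
	}
	return 0, false
}

func main() {
	now := time.Now()
	entries := []CacheEntry{
		{Key: "desiredReplicas", Value: 3, ExpirationTime: now.Add(10 * time.Minute)},
	}
	if v, ok := cachedDesiredReplicas(entries, now); ok {
		fmt.Println("reusing cached desired replicas:", v) // 3
	}
}
```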

View File

@@ -0,0 +1,49 @@
package controllers
import (
"github.com/google/go-cmp/cmp"
actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"testing"
"time"
)
func TestGetValidCacheEntries(t *testing.T) {
now := time.Now()
hra := &actionsv1alpha1.HorizontalRunnerAutoscaler{
Status: actionsv1alpha1.HorizontalRunnerAutoscalerStatus{
CacheEntries: []actionsv1alpha1.CacheEntry{
{
Key: "foo",
Value: 1,
ExpirationTime: metav1.Time{Time: now.Add(-time.Second)},
},
{
Key: "foo",
Value: 2,
ExpirationTime: metav1.Time{Time: now},
},
{
Key: "foo",
Value: 3,
ExpirationTime: metav1.Time{Time: now.Add(time.Second)},
},
},
},
}
revs := getValidCacheEntries(hra, now)
counts := map[string]int{}
for _, r := range revs {
counts[r.Key] += r.Value
}
want := map[string]int{"foo": 3}
if d := cmp.Diff(want, counts); d != "" {
t.Errorf("%s", d)
}
}

File diff suppressed because it is too large

View File

@@ -18,12 +18,15 @@ package controllers
import (
"context"
"errors"
"fmt"
"reflect"
gogithub "github.com/google/go-github/v33/github"
"github.com/summerwind/actions-runner-controller/hash"
"strings"
"time"
"github.com/go-logr/logr"
"k8s.io/apimachinery/pkg/api/errors"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"
@@ -39,6 +42,10 @@ import (
const (
containerName = "runner"
finalizerName = "runner.actions.summerwind.dev"
LabelKeyPodTemplateHash = "pod-template-hash"
retryDelayOnGitHubAPIRateLimitError = 30 * time.Second
)
// RunnerReconciler reconciles a Runner object
@@ -93,9 +100,22 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
if removed {
if len(runner.Status.Registration.Token) > 0 {
ok, err := r.unregisterRunner(ctx, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
ok, err := r.unregisterRunner(ctx, runner.Spec.Enterprise, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
if err != nil {
log.Error(err, "Failed to unregister runner")
if errors.Is(err, &gogithub.RateLimitError{}) {
// We log the underlying error when we fail to call the GitHub API to list or unregister runners,
// or when the runner is still busy.
log.Error(
err,
fmt.Sprintf(
"Failed to unregister runner due to GitHub API rate limits. Delaying retry for %s to avoid excessive GitHub API calls",
retryDelayOnGitHubAPIRateLimitError,
),
)
return ctrl.Result{RequeueAfter: retryDelayOnGitHubAPIRateLimitError}, err
}
return ctrl.Result{}, err
}
@@ -120,39 +140,18 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
return ctrl.Result{}, nil
}
if !runner.IsRegisterable() {
rt, err := r.GitHubClient.GetRegistrationToken(ctx, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
if err != nil {
r.Recorder.Event(&runner, corev1.EventTypeWarning, "FailedUpdateRegistrationToken", "Updating registration token failed")
log.Error(err, "Failed to get new registration token")
return ctrl.Result{}, err
}
updated := runner.DeepCopy()
updated.Status.Registration = v1alpha1.RunnerStatusRegistration{
Organization: runner.Spec.Organization,
Repository: runner.Spec.Repository,
Labels: runner.Spec.Labels,
Token: rt.GetToken(),
ExpiresAt: metav1.NewTime(rt.GetExpiresAt().Time),
}
if err := r.Status().Update(ctx, updated); err != nil {
log.Error(err, "Failed to update runner status")
return ctrl.Result{}, err
}
r.Recorder.Event(&runner, corev1.EventTypeNormal, "RegistrationTokenUpdated", "Successfully update registration token")
log.Info("Updated registration token", "repository", runner.Spec.Repository)
return ctrl.Result{}, nil
}
var pod corev1.Pod
if err := r.Get(ctx, req.NamespacedName, &pod); err != nil {
if !errors.IsNotFound(err) {
if !kerrors.IsNotFound(err) {
return ctrl.Result{}, err
}
if updated, err := r.updateRegistrationToken(ctx, runner); err != nil {
return ctrl.Result{}, err
} else if updated {
return ctrl.Result{Requeue: true}, nil
}
newPod, err := r.newPod(runner)
if err != nil {
log.Error(err, "Could not create pod")
@@ -167,7 +166,11 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
r.Recorder.Event(&runner, corev1.EventTypeNormal, "PodCreated", fmt.Sprintf("Created pod '%s'", newPod.Name))
log.Info("Created runner pod", "repository", runner.Spec.Repository)
} else {
if runner.Status.Phase != string(pod.Status.Phase) {
// If pod has ended up succeeded we need to restart it
// Happens e.g. when dind is in runner and run completes
restart := pod.Status.Phase == corev1.PodSucceeded
if !restart && runner.Status.Phase != string(pod.Status.Phase) {
updated := runner.DeepCopy()
updated.Status.Phase = string(pod.Status.Phase)
updated.Status.Reason = pod.Status.Reason
@@ -182,10 +185,40 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
}
if !pod.ObjectMeta.DeletionTimestamp.IsZero() {
return ctrl.Result{}, err
}
deletionTimeout := 1 * time.Minute
currentTime := time.Now()
deletionDidTimeout := currentTime.Sub(pod.DeletionTimestamp.Add(deletionTimeout)) > 0
restart := false
if deletionDidTimeout {
log.Info(
"Pod failed to delete itself in a timely manner. "+
"This is typically the case when a Kubernetes node became unreachable "+
"and the kube controller started evicting nodes. Forcefully deleting the pod to not get stuck.",
"podDeletionTimestamp", pod.DeletionTimestamp,
"currentTime", currentTime,
"configuredDeletionTimeout", deletionTimeout,
)
var force int64 = 0
// forcefully delete runner as we would otherwise get stuck if the node stays unreachable
if err := r.Delete(ctx, &pod, &client.DeleteOptions{GracePeriodSeconds: &force}); err != nil {
// the pod has probably already been deleted; a NotFound error is fine here
if !kerrors.IsNotFound(err) {
log.Error(err, "Failed to forcefully delete pod resource")
return ctrl.Result{}, err
}
// forceful deletion finally succeeded
return ctrl.Result{Requeue: true}, nil
}
r.Recorder.Event(&runner, corev1.EventTypeNormal, "PodDeleted", fmt.Sprintf("Forcefully deleted pod '%s'", pod.Name))
log.Info("Forcefully deleted runner pod", "repository", runner.Spec.Repository)
// give kube manager a little time to forcefully delete the stuck pod
return ctrl.Result{RequeueAfter: 3 * time.Second}, err
} else {
return ctrl.Result{}, err
}
}
if pod.Status.Phase == corev1.PodRunning {
for _, status := range pod.Status.ContainerStatuses {
@@ -199,26 +232,105 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
}
}
if updated, err := r.updateRegistrationToken(ctx, runner); err != nil {
return ctrl.Result{}, err
} else if updated {
return ctrl.Result{Requeue: true}, nil
}
newPod, err := r.newPod(runner)
if err != nil {
log.Error(err, "Could not create pod")
return ctrl.Result{}, err
}
runnerBusy, err := r.isRunnerBusy(ctx, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
if err != nil {
log.Error(err, "Failed to check if runner is busy")
// The checks below only decide whether a restart is needed.
// If a restart was already decided above, there is no need to run them,
// which saves API calls and avoids scary log messages.
if !restart {
notRegistered := false
offline := false
runnerBusy, err := r.GitHubClient.IsRunnerBusy(ctx, runner.Spec.Enterprise, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
if err != nil {
var notFoundException *github.RunnerNotFound
var offlineException *github.RunnerOffline
if errors.As(err, &notFoundException) {
log.V(1).Info("Failed to check if runner is busy. Either this runner has never been successfully registered to GitHub or it still needs more time.", "runnerName", runner.Name)
notRegistered = true
} else if errors.As(err, &offlineException) {
log.V(1).Info("GitHub runner appears to be offline, waiting for runner to get online ...", "runnerName", runner.Name)
offline = true
} else {
var e *gogithub.RateLimitError
if errors.As(err, &e) {
// We log the underlying error when we fail to call the GitHub API to list or unregister runners,
// or when the runner is still busy.
log.Error(
err,
fmt.Sprintf(
"Failed to check if runner is busy due to Github API rate limit. Retrying in %s to avoid excessive GitHub API calls",
retryDelayOnGitHubAPIRateLimitError,
),
)
return ctrl.Result{RequeueAfter: retryDelayOnGitHubAPIRateLimitError}, err
}
return ctrl.Result{}, err
}
}
// See the `newPod` function called above for more information
// about when this hash changes.
curHash := pod.Labels[LabelKeyPodTemplateHash]
newHash := newPod.Labels[LabelKeyPodTemplateHash]
if !runnerBusy && curHash != newHash {
restart = true
}
registrationTimeout := 10 * time.Minute
currentTime := time.Now()
registrationDidTimeout := currentTime.Sub(pod.CreationTimestamp.Add(registrationTimeout)) > 0
if notRegistered && registrationDidTimeout {
log.Info(
"Runner failed to register itself to GitHub in timely manner. "+
"Recreating the pod to see if it resolves the issue. "+
"CAUTION: If you see this a lot, you should investigate the root cause. "+
"See https://github.com/summerwind/actions-runner-controller/issues/288",
"podCreationTimestamp", pod.CreationTimestamp,
"currentTime", currentTime,
"configuredRegistrationTimeout", registrationTimeout,
)
restart = true
}
if offline && registrationDidTimeout {
log.Info(
"Already existing GitHub runner still appears offline . "+
"Recreating the pod to see if it resolves the issue. "+
"CAUTION: If you see this a lot, you should investigate the root cause. ",
"podCreationTimestamp", pod.CreationTimestamp,
"currentTime", currentTime,
"configuredRegistrationTimeout", registrationTimeout,
)
restart = true
}
}
// Don't do anything if there's no need to restart the runner
if !restart {
return ctrl.Result{}, nil
}
if !runnerBusy && (!reflect.DeepEqual(pod.Spec.Containers[0].Env, newPod.Spec.Containers[0].Env) || pod.Spec.Containers[0].Image != newPod.Spec.Containers[0].Image) {
restart = true
}
if !restart {
return ctrl.Result{}, err
}
// Delete current pod if recreation is needed
if err := r.Delete(ctx, &pod); err != nil {
log.Error(err, "Failed to delete pod resource")
return ctrl.Result{}, err
@@ -231,23 +343,8 @@ func (r *RunnerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
return ctrl.Result{}, nil
}
func (r *RunnerReconciler) isRunnerBusy(ctx context.Context, org, repo, name string) (bool, error) {
runners, err := r.GitHubClient.ListRunners(ctx, org, repo)
if err != nil {
return false, err
}
for _, runner := range runners {
if runner.GetName() == name {
return runner.GetBusy(), nil
}
}
return false, fmt.Errorf("runner not found")
}
func (r *RunnerReconciler) unregisterRunner(ctx context.Context, org, repo, name string) (bool, error) {
runners, err := r.GitHubClient.ListRunners(ctx, org, repo)
func (r *RunnerReconciler) unregisterRunner(ctx context.Context, enterprise, org, repo, name string) (bool, error) {
runners, err := r.GitHubClient.ListRunners(ctx, enterprise, org, repo)
if err != nil {
return false, err
}
@@ -267,17 +364,52 @@ func (r *RunnerReconciler) unregisterRunner(ctx context.Context, org, repo, name
return false, nil
}
if err := r.GitHubClient.RemoveRunner(ctx, org, repo, id); err != nil {
if err := r.GitHubClient.RemoveRunner(ctx, enterprise, org, repo, id); err != nil {
return false, err
}
return true, nil
}
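For readers skimming the diff, the unregistration flow is: list runners, find the one with the matching name, and remove it by ID. The sketch below condenses that flow; the busy-runner guard is an assumption suggested by the surrounding rate-limit comments (the relevant hunk lines are elided), and the remove callback stands in for GitHubClient.RemoveRunner.

```go
package main

import (
	"errors"
	"fmt"
)

type runner struct {
	ID   int64
	Name string
	Busy bool
}

// unregister sketches unregisterRunner above: look the runner up by name,
// refuse to remove it while it is busy (assumed behaviour), and otherwise
// delete it via the remove callback.
func unregister(runners []runner, name string, remove func(id int64) error) (bool, error) {
	for _, r := range runners {
		if r.Name != name {
			continue
		}
		if r.Busy {
			return false, errors.New("runner is busy")
		}
		if err := remove(r.ID); err != nil {
			return false, err
		}
		return true, nil
	}
	return false, nil // not registered (yet); nothing to do
}

func main() {
	runners := []runner{{ID: 42, Name: "example-runner", Busy: false}}
	ok, err := unregister(runners, "example-runner", func(id int64) error {
		fmt.Println("removing runner", id)
		return nil
	})
	fmt.Println(ok, err) // true <nil>
}
```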
func (r *RunnerReconciler) updateRegistrationToken(ctx context.Context, runner v1alpha1.Runner) (bool, error) {
if runner.IsRegisterable() {
return false, nil
}
log := r.Log.WithValues("runner", runner.Name)
rt, err := r.GitHubClient.GetRegistrationToken(ctx, runner.Spec.Enterprise, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
if err != nil {
r.Recorder.Event(&runner, corev1.EventTypeWarning, "FailedUpdateRegistrationToken", "Updating registration token failed")
log.Error(err, "Failed to get new registration token")
return false, err
}
updated := runner.DeepCopy()
updated.Status.Registration = v1alpha1.RunnerStatusRegistration{
Organization: runner.Spec.Organization,
Repository: runner.Spec.Repository,
Labels: runner.Spec.Labels,
Token: rt.GetToken(),
ExpiresAt: metav1.NewTime(rt.GetExpiresAt().Time),
}
if err := r.Status().Update(ctx, updated); err != nil {
log.Error(err, "Failed to update runner status")
return false, err
}
r.Recorder.Event(&runner, corev1.EventTypeNormal, "RegistrationTokenUpdated", "Successfully updated registration token")
log.Info("Updated registration token", "repository", runner.Spec.Repository)
return true, nil
}
func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
var (
privileged bool = true
group int64 = 0
privileged bool = true
dockerdInRunner bool = runner.Spec.DockerdWithinRunnerContainer != nil && *runner.Spec.DockerdWithinRunnerContainer
dockerEnabled bool = runner.Spec.DockerEnabled == nil || *runner.Spec.DockerEnabled
)
runnerImage := runner.Spec.Image
@@ -285,6 +417,11 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
runnerImage = r.RunnerImage
}
workDir := runner.Spec.WorkDir
if workDir == "" {
workDir = "/runner/_work"
}
runnerImagePullPolicy := runner.Spec.ImagePullPolicy
if runnerImagePullPolicy == "" {
runnerImagePullPolicy = corev1.PullAlways
@@ -303,22 +440,75 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
Name: "RUNNER_REPO",
Value: runner.Spec.Repository,
},
{
Name: "RUNNER_ENTERPRISE",
Value: runner.Spec.Enterprise,
},
{
Name: "RUNNER_LABELS",
Value: strings.Join(runner.Spec.Labels, ","),
},
{
Name: "RUNNER_GROUP",
Value: runner.Spec.Group,
},
{
Name: "RUNNER_TOKEN",
Value: runner.Status.Registration.Token,
},
{
Name: "DOCKERD_IN_RUNNER",
Value: fmt.Sprintf("%v", dockerdInRunner),
},
{
Name: "GITHUB_URL",
Value: r.GitHubClient.GithubBaseURL,
},
{
Name: "RUNNER_WORKDIR",
Value: workDir,
},
}
env = append(env, runner.Spec.Env...)
labels := map[string]string{}
for k, v := range runner.Labels {
labels[k] = v
}
// This implies that...
//
// (1) We recreate the runner pod whenever the runner has changes in:
// - metadata.labels (excluding "runner-template-hash" added by the parent RunnerReplicaSet)
// - metadata.annotations
// - spec (including image, env, organization, repository, group, and so on)
// - GithubBaseURL setting of the controller (can be configured via GITHUB_ENTERPRISE_URL)
//
// (2) We don't recreate the runner pod when there are changes in:
// - runner.status.registration.token
// - This token expires and changes hourly, but you don't need to recreate the pod due to that.
// Quite the opposite: an unexpired token is required only while the runner agent
// is registering itself on launch.
//
// In other words, the registered runner doesn't get invalidated on registration token expiration.
// A registered runner's session and a registration token seem to have two different and independent
// lifecycles.
//
// See https://github.com/summerwind/actions-runner-controller/issues/143 for more context.
labels[LabelKeyPodTemplateHash] = hash.FNVHashStringObjects(
filterLabels(runner.Labels, LabelKeyRunnerTemplateHash),
runner.Annotations,
runner.Spec,
r.GitHubClient.GithubBaseURL,
)
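// Illustrative sketch (not part of this diff): assuming hash.FNVHashStringObjects behaves
// roughly like the stdlib-only helper below, it folds a deterministic textual dump of each
// argument into a single FNV-1a sum, so the label value changes exactly when one of the
// hashed inputs (labels, annotations, spec, GitHub base URL) changes:
//
// func fnvHashStringObjects(objs ...interface{}) string {
//     h := fnv.New32a() // "hash/fnv"
//     for _, o := range objs {
//         fmt.Fprintf(h, "%#v", o) // fmt prints map keys in sorted order, keeping the dump stable
//     }
//     return fmt.Sprint(h.Sum32())
// }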
pod := corev1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: runner.Name,
Namespace: runner.Namespace,
Labels: runner.Labels,
Labels: labels,
Annotations: runner.Annotations,
},
Spec: corev1.PodSpec{
@@ -330,58 +520,125 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
ImagePullPolicy: runnerImagePullPolicy,
Env: env,
EnvFrom: runner.Spec.EnvFrom,
VolumeMounts: []corev1.VolumeMount{
{
Name: "work",
MountPath: "/runner/_work",
},
{
Name: "docker",
MountPath: "/var/run",
},
},
SecurityContext: &corev1.SecurityContext{
RunAsGroup: &group,
// The runner needs to run privileged if it contains DinD
Privileged: runner.Spec.DockerdWithinRunnerContainer,
},
Resources: runner.Spec.Resources,
},
{
Name: "docker",
Image: r.DockerImage,
VolumeMounts: []corev1.VolumeMount{
{
Name: "work",
MountPath: "/runner/_work",
},
{
Name: "docker",
MountPath: "/var/run",
},
},
SecurityContext: &corev1.SecurityContext{
Privileged: &privileged,
},
},
},
Volumes: []corev1.Volume{
{
Name: "work",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
{
Name: "docker",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
},
},
}
if mtu := runner.Spec.DockerMTU; mtu != nil && dockerdInRunner {
pod.Spec.Containers[0].Env = append(pod.Spec.Containers[0].Env, []corev1.EnvVar{
{
Name: "MTU",
Value: fmt.Sprintf("%d", *runner.Spec.DockerMTU),
},
}...)
}
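// Note that the MTU env var above only covers the dockerd-in-runner case; when dockerd runs
// as a sidecar instead, the MTU is passed to the docker container further below via
// DOCKERD_ROOTLESS_ROOTLESSKIT_MTU.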
if !dockerdInRunner && dockerEnabled {
runnerVolumeName := "runner"
runnerVolumeMountPath := "/runner"
pod.Spec.Volumes = []corev1.Volume{
{
Name: "work",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
{
Name: runnerVolumeName,
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
{
Name: "certs-client",
VolumeSource: corev1.VolumeSource{
EmptyDir: &corev1.EmptyDirVolumeSource{},
},
},
}
pod.Spec.Containers[0].VolumeMounts = []corev1.VolumeMount{
{
Name: "work",
MountPath: workDir,
},
{
Name: runnerVolumeName,
MountPath: runnerVolumeMountPath,
},
{
Name: "certs-client",
MountPath: "/certs/client",
ReadOnly: true,
},
}
pod.Spec.Containers[0].Env = append(pod.Spec.Containers[0].Env, []corev1.EnvVar{
{
Name: "DOCKER_HOST",
Value: "tcp://localhost:2376",
},
{
Name: "DOCKER_TLS_VERIFY",
Value: "1",
},
{
Name: "DOCKER_CERT_PATH",
Value: "/certs/client",
},
}...)
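// Note: tcp://localhost:2376 is the conventional dockerd TLS port, and /certs/client is
// where the dockerd sidecar below writes client certificates once it is started with
// DOCKER_TLS_CERTDIR=/certs.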
pod.Spec.Containers = append(pod.Spec.Containers, corev1.Container{
Name: "docker",
Image: r.DockerImage,
VolumeMounts: []corev1.VolumeMount{
{
Name: "work",
MountPath: workDir,
},
{
Name: runnerVolumeName,
MountPath: runnerVolumeMountPath,
},
{
Name: "certs-client",
MountPath: "/certs/client",
},
},
Env: []corev1.EnvVar{
{
Name: "DOCKER_TLS_CERTDIR",
Value: "/certs",
},
},
SecurityContext: &corev1.SecurityContext{
Privileged: &privileged,
},
Resources: runner.Spec.DockerdContainerResources,
})
if mtu := runner.Spec.DockerMTU; mtu != nil {
pod.Spec.Containers[1].Env = append(pod.Spec.Containers[1].Env, []corev1.EnvVar{
{
Name: "DOCKERD_ROOTLESS_ROOTLESSKIT_MTU",
Value: fmt.Sprintf("%d", *runner.Spec.DockerMTU),
},
}...)
}
}
if len(runner.Spec.Containers) != 0 {
pod.Spec.Containers = runner.Spec.Containers
for i := 0; i < len(pod.Spec.Containers); i++ {
if pod.Spec.Containers[i].Name == containerName {
pod.Spec.Containers[i].Env = append(pod.Spec.Containers[i].Env, env...)
}
}
}
if len(runner.Spec.VolumeMounts) != 0 {
@@ -441,11 +698,14 @@ func (r *RunnerReconciler) newPod(runner v1alpha1.Runner) (corev1.Pod, error) {
}
func (r *RunnerReconciler) SetupWithManager(mgr ctrl.Manager) error {
r.Recorder = mgr.GetEventRecorderFor("runner-controller")
name := "runner-controller"
r.Recorder = mgr.GetEventRecorderFor(name)
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.Runner{}).
Owns(&corev1.Pod{}).
Named(name).
Complete(r)
}

@@ -20,6 +20,7 @@ import (
"context"
"fmt"
"hash/fnv"
"reflect"
"sort"
"time"
@@ -40,7 +41,8 @@ import (
)
const (
LabelKeyRunnerTemplateHash = "runner-template-hash"
LabelKeyRunnerTemplateHash = "runner-template-hash"
LabelKeyRunnerDeploymentName = "runner-deployment-name"
runnerSetOwnerKey = ".metadata.controller"
)
@@ -48,9 +50,11 @@ const (
// RunnerDeploymentReconciler reconciles a Runner object
type RunnerDeploymentReconciler struct {
client.Client
Log logr.Logger
Recorder record.EventRecorder
Scheme *runtime.Scheme
Log logr.Logger
Recorder record.EventRecorder
Scheme *runtime.Scheme
CommonRunnerLabels []string
Name string
}
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerdeployments,verbs=get;list;watch;create;update;patch;delete
@@ -141,6 +145,28 @@ func (r *RunnerDeploymentReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
return ctrl.Result{RequeueAfter: 5 * time.Second}, nil
}
if !reflect.DeepEqual(newestSet.Spec.Selector, desiredRS.Spec.Selector) {
updateSet := newestSet.DeepCopy()
updateSet.Spec = *desiredRS.Spec.DeepCopy()
// A selector update alone doesn't trigger replicaset replacement,
// but we still need to update the existing replicaset with it.
// Otherwise the selector-based runner query would never work on replicasets created before controller v0.17.0.
// See https://github.com/summerwind/actions-runner-controller/pull/355#discussion_r585379259
if err := r.Client.Update(ctx, updateSet); err != nil {
log.Error(err, "Failed to update runnerreplicaset resource")
return ctrl.Result{}, err
}
// At this point, we are already sure that there's no need to create a new replicaset
// as the runner template hash is not changed.
//
// But we still need to requeue for the (possibly rare) cases where there are still old replicasets that need
// to be cleaned up.
return ctrl.Result{RequeueAfter: 5 * time.Second}, nil
}
const defaultReplicas = 1
currentDesiredReplicas := getIntOrDefault(newestSet.Spec.Replicas, defaultReplicas)
@@ -177,7 +203,7 @@ func (r *RunnerDeploymentReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
rs := oldSets[i]
if err := r.Client.Delete(ctx, &rs); err != nil {
log.Error(err, "Failed to delete runner resource")
log.Error(err, "Failed to delete runnerreplicaset resource")
return ctrl.Result{}, err
}
@@ -256,28 +282,94 @@ func CloneAndAddLabel(labels map[string]string, labelKey, labelValue string) map
return newLabels
}
func (r *RunnerDeploymentReconciler) newRunnerReplicaSet(rd v1alpha1.RunnerDeployment) (*v1alpha1.RunnerReplicaSet, error) {
newRSTemplate := *rd.Spec.Template.DeepCopy()
templateHash := ComputeHash(&newRSTemplate)
// Add template hash label to selector.
labels := CloneAndAddLabel(rd.Spec.Template.Labels, LabelKeyRunnerTemplateHash, templateHash)
// CloneSelectorAndAddLabel clones the given selector and returns a new selector with the given key and value added.
// It returns the given selector unmodified if labelKey is empty.
//
// Proudly copied from k8s.io/kubernetes/pkg/util/labels.CloneSelectorAndAddLabel
func CloneSelectorAndAddLabel(selector *metav1.LabelSelector, labelKey, labelValue string) *metav1.LabelSelector {
if labelKey == "" {
// Don't need to add a label.
return selector
}
newRSTemplate.Labels = labels
// Clone.
newSelector := new(metav1.LabelSelector)
newSelector.MatchLabels = make(map[string]string)
if selector.MatchLabels != nil {
for key, val := range selector.MatchLabels {
newSelector.MatchLabels[key] = val
}
}
newSelector.MatchLabels[labelKey] = labelValue
if selector.MatchExpressions != nil {
newMExps := make([]metav1.LabelSelectorRequirement, len(selector.MatchExpressions))
for i, me := range selector.MatchExpressions {
newMExps[i].Key = me.Key
newMExps[i].Operator = me.Operator
if me.Values != nil {
newMExps[i].Values = make([]string, len(me.Values))
copy(newMExps[i].Values, me.Values)
} else {
newMExps[i].Values = nil
}
}
newSelector.MatchExpressions = newMExps
} else {
newSelector.MatchExpressions = nil
}
return newSelector
}
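// Illustrative usage (not part of this diff; the hash value is made up):
//
// sel := &metav1.LabelSelector{MatchLabels: map[string]string{"foo": "bar"}}
// out := CloneSelectorAndAddLabel(sel, LabelKeyRunnerTemplateHash, "6d5f8b")
// // out matches foo=bar AND runner-template-hash=6d5f8b, sel itself is left untouched,
// // and any MatchExpressions are deep-copied as well.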
func (r *RunnerDeploymentReconciler) newRunnerReplicaSet(rd v1alpha1.RunnerDeployment) (*v1alpha1.RunnerReplicaSet, error) {
return newRunnerReplicaSet(&rd, r.CommonRunnerLabels, r.Scheme)
}
func getSelector(rd *v1alpha1.RunnerDeployment) *metav1.LabelSelector {
selector := rd.Spec.Selector
if selector == nil {
selector = &metav1.LabelSelector{MatchLabels: map[string]string{LabelKeyRunnerDeploymentName: rd.Name}}
}
return selector
}
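// For example, a RunnerDeployment named "example" with no explicit selector falls back to
// the selector {runner-deployment-name: example}, which matches the label injected into
// every replica set template in newRunnerReplicaSet below.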
func newRunnerReplicaSet(rd *v1alpha1.RunnerDeployment, commonRunnerLabels []string, scheme *runtime.Scheme) (*v1alpha1.RunnerReplicaSet, error) {
newRSTemplate := *rd.Spec.Template.DeepCopy()
for _, l := range commonRunnerLabels {
newRSTemplate.Spec.Labels = append(newRSTemplate.Spec.Labels, l)
}
templateHash := ComputeHash(&newRSTemplate)
// Add template hash label to selector.
newRSTemplate.ObjectMeta.Labels = CloneAndAddLabel(newRSTemplate.ObjectMeta.Labels, LabelKeyRunnerTemplateHash, templateHash)
// This label selector is used by default when rd.Spec.Selector is empty.
newRSTemplate.ObjectMeta.Labels = CloneAndAddLabel(newRSTemplate.ObjectMeta.Labels, LabelKeyRunnerDeploymentName, rd.Name)
selector := getSelector(rd)
newRSSelector := CloneSelectorAndAddLabel(selector, LabelKeyRunnerTemplateHash, templateHash)
rs := v1alpha1.RunnerReplicaSet{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
GenerateName: rd.ObjectMeta.Name + "-",
Namespace: rd.ObjectMeta.Namespace,
Labels: labels,
Labels: newRSTemplate.ObjectMeta.Labels,
},
Spec: v1alpha1.RunnerReplicaSetSpec{
Replicas: rd.Spec.Replicas,
Selector: newRSSelector,
Template: newRSTemplate,
},
}
if err := ctrl.SetControllerReference(&rd, &rs, r.Scheme); err != nil {
if err := ctrl.SetControllerReference(rd, &rs, scheme); err != nil {
return &rs, err
}
@@ -285,7 +377,12 @@ func (r *RunnerDeploymentReconciler) newRunnerReplicaSet(rd v1alpha1.RunnerDeplo
}
func (r *RunnerDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
r.Recorder = mgr.GetEventRecorderFor("runnerdeployment-controller")
name := "runnerdeployment-controller"
if r.Name != "" {
name = r.Name
}
r.Recorder = mgr.GetEventRecorderFor(name)
if err := mgr.GetFieldIndexer().IndexField(&v1alpha1.RunnerReplicaSet{}, runnerSetOwnerKey, func(rawObj runtime.Object) []string {
runnerSet := rawObj.(*v1alpha1.RunnerReplicaSet)
@@ -306,5 +403,6 @@ func (r *RunnerDeploymentReconciler) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.RunnerDeployment{}).
Owns(&v1alpha1.RunnerReplicaSet{}).
Named(name).
Complete(r)
}

@@ -2,8 +2,13 @@ package controllers
import (
"context"
"fmt"
"testing"
"time"
"github.com/google/go-cmp/cmp"
"k8s.io/apimachinery/pkg/runtime"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/kubernetes/scheme"
@@ -18,6 +23,103 @@ import (
actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1"
)
func TestNewRunnerReplicaSet(t *testing.T) {
scheme := runtime.NewScheme()
if err := actionsv1alpha1.AddToScheme(scheme); err != nil {
t.Fatalf("%v", err)
}
r := &RunnerDeploymentReconciler{
CommonRunnerLabels: []string{"dev"},
Scheme: scheme,
}
rd := actionsv1alpha1.RunnerDeployment{
ObjectMeta: metav1.ObjectMeta{
Name: "example",
},
Spec: actionsv1alpha1.RunnerDeploymentSpec{
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
Template: actionsv1alpha1.RunnerTemplate{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"foo": "bar",
},
},
Spec: actionsv1alpha1.RunnerSpec{
Labels: []string{"project1"},
},
},
},
}
rs, err := r.newRunnerReplicaSet(rd)
if err != nil {
t.Fatalf("%v", err)
}
if val, ok := rs.Labels["foo"]; ok {
if val != "bar" {
t.Errorf("foo label does not have bar but %v", val)
}
} else {
t.Errorf("foo label does not exist")
}
hash1, ok := rs.Labels[LabelKeyRunnerTemplateHash]
if !ok {
t.Errorf("missing runner-template-hash label")
}
runnerLabel := []string{"project1", "dev"}
if d := cmp.Diff(runnerLabel, rs.Spec.Template.Spec.Labels); d != "" {
t.Errorf("%s", d)
}
rd2 := rd.DeepCopy()
rd2.Spec.Template.Spec.Labels = []string{"project2"}
rs2, err := r.newRunnerReplicaSet(*rd2)
if err != nil {
t.Fatalf("%v", err)
}
hash2, ok := rs2.Labels[LabelKeyRunnerTemplateHash]
if !ok {
t.Errorf("missing runner-template-hash label")
}
if hash1 == hash2 {
t.Errorf(
"runner replica sets from runner deployments with varying labels must have different template hash, but got %s and %s",
hash1, hash2,
)
}
rd3 := rd.DeepCopy()
rd3.Spec.Template.Labels["foo"] = "baz"
rs3, err := r.newRunnerReplicaSet(*rd3)
if err != nil {
t.Fatalf("%v", err)
}
hash3, ok := rs3.Labels[LabelKeyRunnerTemplateHash]
if !ok {
t.Errorf("missing runner-template-hash label")
}
if hash1 == hash3 {
t.Errorf(
"runner replica sets from runner deployments with varying meta labels must have different template hash, but got %s and %s",
hash1, hash3,
)
}
}
// SetupDeploymentTest will set up a testing environment.
// This includes:
// * creating a Namespace to be used during the test
@@ -45,6 +147,7 @@ func SetupDeploymentTest(ctx context.Context) *corev1.Namespace {
Scheme: scheme.Scheme,
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
Name: "runnerdeployment-" + ns.Name,
}
err = controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
@@ -74,7 +177,113 @@ var _ = Context("Inside of a new namespace", func() {
Describe("when no existing resources exist", func() {
It("should create a new RunnerReplicaSet resource from the specified template, add a another RunnerReplicaSet on template modification, and eventually removes old runnerreplicasets", func() {
name := "example-runnerdeploy"
name := "example-runnerdeploy-1"
{
rs := &actionsv1alpha1.RunnerDeployment{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.RunnerDeploymentSpec{
Replicas: intPtr(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
Template: actionsv1alpha1.RunnerTemplate{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"foo": "bar",
},
},
Spec: actionsv1alpha1.RunnerSpec{
Repository: "foo/bar",
Image: "bar",
Env: []corev1.EnvVar{
{Name: "FOO", Value: "FOOVALUE"},
},
},
},
},
}
err := k8sClient.Create(ctx, rs)
Expect(err).NotTo(HaveOccurred(), "failed to create test RunnerReplicaSet resource")
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
Eventually(
func() (int, error) {
selector, err := metav1.LabelSelectorAsSelector(rs.Spec.Selector)
if err != nil {
return 0, err
}
err = k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
client.MatchingLabelsSelector{Selector: selector},
)
if err != nil {
return 0, err
}
if len(runnerSets.Items) != 1 {
return 0, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
}
return *runnerSets.Items[0].Spec.Replicas, nil
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
}
{
// We wrap the update in the Eventually block to avoid the below error that occurs due to concurrent modification
// made by the controller to update .Status.AvailableReplicas and .Status.ReadyReplicas
// Operation cannot be fulfilled on runnersets.actions.summerwind.dev "example-runnerset": the object has been modified; please apply your changes to the latest version and try again
var rd actionsv1alpha1.RunnerDeployment
Eventually(func() error {
err := k8sClient.Get(ctx, types.NamespacedName{Namespace: ns.Name, Name: name}, &rd)
if err != nil {
return fmt.Errorf("failed to get test RunnerReplicaSet resource: %v\n", err)
}
rd.Spec.Replicas = intPtr(2)
return k8sClient.Update(ctx, &rd)
},
time.Second*1, time.Millisecond*500).Should(BeNil())
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
Eventually(
func() (int, error) {
selector, err := metav1.LabelSelectorAsSelector(rd.Spec.Selector)
if err != nil {
return 0, err
}
err = k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
client.MatchingLabelsSelector{Selector: selector},
)
if err != nil {
return 0, err
}
if len(runnerSets.Items) != 1 {
return 0, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
}
return *runnerSets.Items[0].Spec.Replicas, nil
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(2))
}
})
It("should create a new RunnerReplicaSet resource from the specified template without labels and selector, add a another RunnerReplicaSet on template modification, and eventually removes old runnerreplicasets", func() {
name := "example-runnerdeploy-2"
{
rs := &actionsv1alpha1.RunnerDeployment{
@@ -103,29 +312,25 @@ var _ = Context("Inside of a new namespace", func() {
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
Eventually(
func() int {
err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
func() (int, error) {
selector, err := metav1.LabelSelectorAsSelector(rs.Spec.Selector)
if err != nil {
logf.Log.Error(err, "list runner sets")
return 0, err
}
return len(runnerSets.Items)
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
Eventually(
func() int {
err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
err = k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
client.MatchingLabelsSelector{Selector: selector},
)
if err != nil {
logf.Log.Error(err, "list runner sets")
return 0, err
}
if len(runnerSets.Items) != 1 {
return 0, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
}
if len(runnerSets.Items) == 0 {
logf.Log.Info("No runnerreplicasets exist yet")
return -1
}
return *runnerSets.Items[0].Spec.Replicas
return *runnerSets.Items[0].Spec.Replicas, nil
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
}
@@ -134,13 +339,12 @@ var _ = Context("Inside of a new namespace", func() {
// We wrap the update in the Eventually block to avoid the below error that occurs due to concurrent modification
// made by the controller to update .Status.AvailableReplicas and .Status.ReadyReplicas
// Operation cannot be fulfilled on runnersets.actions.summerwind.dev "example-runnerset": the object has been modified; please apply your changes to the latest version and try again
var rd actionsv1alpha1.RunnerDeployment
Eventually(func() error {
var rd actionsv1alpha1.RunnerDeployment
err := k8sClient.Get(ctx, types.NamespacedName{Namespace: ns.Name, Name: name}, &rd)
Expect(err).NotTo(HaveOccurred(), "failed to get test RunnerReplicaSet resource")
if err != nil {
return fmt.Errorf("failed to get test RunnerReplicaSet resource: %v\n", err)
}
rd.Spec.Replicas = intPtr(2)
return k8sClient.Update(ctx, &rd)
@@ -150,27 +354,126 @@ var _ = Context("Inside of a new namespace", func() {
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
Eventually(
func() int {
err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
func() (int, error) {
selector, err := metav1.LabelSelectorAsSelector(rd.Spec.Selector)
if err != nil {
logf.Log.Error(err, "list runner sets")
return 0, err
}
err = k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
client.MatchingLabelsSelector{Selector: selector},
)
if err != nil {
return 0, err
}
if len(runnerSets.Items) != 1 {
return 0, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
}
return len(runnerSets.Items)
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(1))
Eventually(
func() int {
err := k8sClient.List(ctx, &runnerSets, client.InNamespace(ns.Name))
if err != nil {
logf.Log.Error(err, "list runner sets")
}
return *runnerSets.Items[0].Spec.Replicas
return *runnerSets.Items[0].Spec.Replicas, nil
},
time.Second*5, time.Millisecond*500).Should(BeEquivalentTo(2))
}
})
It("should adopt RunnerReplicaSet created before 0.18.0 to have Spec.Selector", func() {
name := "example-runnerdeploy-2"
{
rd := &actionsv1alpha1.RunnerDeployment{
ObjectMeta: metav1.ObjectMeta{
Name: name,
Namespace: ns.Name,
},
Spec: actionsv1alpha1.RunnerDeploymentSpec{
Replicas: intPtr(1),
Template: actionsv1alpha1.RunnerTemplate{
Spec: actionsv1alpha1.RunnerSpec{
Repository: "foo/bar",
Image: "bar",
Env: []corev1.EnvVar{
{Name: "FOO", Value: "FOOVALUE"},
},
},
},
},
}
createRDErr := k8sClient.Create(ctx, rd)
Expect(createRDErr).NotTo(HaveOccurred(), "failed to create test RunnerReplicaSet resource")
Eventually(
func() (int, error) {
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
err := k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
)
if err != nil {
return 0, err
}
return len(runnerSets.Items), nil
},
time.Second*1, time.Millisecond*500).Should(BeEquivalentTo(1))
var rs17 *actionsv1alpha1.RunnerReplicaSet
Consistently(
func() (*metav1.LabelSelector, error) {
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
err := k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
)
if err != nil {
return nil, err
}
if len(runnerSets.Items) != 1 {
return nil, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
}
rs17 = &runnerSets.Items[0]
return runnerSets.Items[0].Spec.Selector, nil
},
time.Second*1, time.Millisecond*500).Should(Not(BeNil()))
// We simulate the old, pre 0.18.0 RunnerReplicaSet by updating it.
// I tried using controllerutil.Set{Owner,Controller}Reference and k8sClient.Create(rs17),
// but it didn't work due to the missing RD UID: the UID is generated by the K8s API server on k8sClient.Create(rd)
rs17.Spec.Selector = nil
updateRSErr := k8sClient.Update(ctx, rs17)
Expect(updateRSErr).NotTo(HaveOccurred())
Eventually(
func() (*metav1.LabelSelector, error) {
runnerSets := actionsv1alpha1.RunnerReplicaSetList{Items: []actionsv1alpha1.RunnerReplicaSet{}}
err := k8sClient.List(
ctx,
&runnerSets,
client.InNamespace(ns.Name),
)
if err != nil {
return nil, err
}
if len(runnerSets.Items) != 1 {
return nil, fmt.Errorf("runnerreplicasets is not 1 but %d", len(runnerSets.Items))
}
return runnerSets.Items[0].Spec.Selector, nil
},
time.Second*1, time.Millisecond*500).Should(Not(BeNil()))
}
})
})
})

@@ -18,10 +18,14 @@ package controllers
import (
"context"
"errors"
"fmt"
"time"
gogithub "github.com/google/go-github/v33/github"
"github.com/go-logr/logr"
"k8s.io/apimachinery/pkg/api/errors"
kerrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/tools/record"
ctrl "sigs.k8s.io/controller-runtime"
@@ -41,6 +45,7 @@ type RunnerReplicaSetReconciler struct {
Recorder record.EventRecorder
Scheme *runtime.Scheme
GitHubClient *github.Client
Name string
}
// +kubebuilder:rbac:groups=actions.summerwind.dev,resources=runnerreplicasets,verbs=get;list;watch;create;update;patch;delete
@@ -52,7 +57,7 @@ type RunnerReplicaSetReconciler struct {
func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
ctx := context.Background()
log := r.Log.WithValues("runner", req.NamespacedName)
log := r.Log.WithValues("runnerreplicaset", req.NamespacedName)
var rs v1alpha1.RunnerReplicaSet
if err := r.Get(ctx, req.NamespacedName, &rs); err != nil {
@@ -63,18 +68,33 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
return ctrl.Result{}, nil
}
selector, err := metav1.LabelSelectorAsSelector(rs.Spec.Selector)
if err != nil {
return ctrl.Result{}, err
}
// Get the Runners managed by the target RunnerReplicaSet
var allRunners v1alpha1.RunnerList
if err := r.List(ctx, &allRunners, client.InNamespace(req.Namespace)); err != nil {
if !errors.IsNotFound(err) {
if err := r.List(
ctx,
&allRunners,
client.InNamespace(req.Namespace),
client.MatchingLabelsSelector{Selector: selector},
); err != nil {
if !kerrors.IsNotFound(err) {
return ctrl.Result{}, err
}
}
var myRunners []v1alpha1.Runner
var available, ready int
var (
available int
ready int
)
for _, r := range allRunners.Items {
// This guard is required so that a RunnerReplicaSet created by controller v0.17.0 or earlier
// doesn't treat all the runners in the namespace as its children.
if metav1.IsControlledBy(&r, &rs) {
myRunners = append(myRunners, r)
@@ -101,13 +121,61 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
// get runners that are currently not busy
var notBusy []v1alpha1.Runner
for _, runner := range myRunners {
busy, err := r.isRunnerBusy(ctx, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
for _, runner := range allRunners.Items {
busy, err := r.GitHubClient.IsRunnerBusy(ctx, runner.Spec.Enterprise, runner.Spec.Organization, runner.Spec.Repository, runner.Name)
if err != nil {
log.Error(err, "Failed to check if runner is busy")
return ctrl.Result{}, err
}
if !busy {
notRegistered := false
offline := false
var notFoundException *github.RunnerNotFound
var offlineException *github.RunnerOffline
if errors.As(err, &notFoundException) {
log.V(1).Info("Failed to check if runner is busy. Either this runner has never been successfully registered to GitHub or it still needs more time.", "runnerName", runner.Name)
notRegistered = true
} else if errors.As(err, &offlineException) {
offline = true
} else {
var e *gogithub.RateLimitError
if errors.As(err, &e) {
// We log the underlying error when we fail calling the GitHub API to list or unregister runners,
// or when the runner is still busy.
log.Error(
err,
fmt.Sprintf(
"Failed to check if runner is busy due to GitHub API rate limit. Retrying in %s to avoid excessive GitHub API calls",
retryDelayOnGitHubAPIRateLimitError,
),
)
return ctrl.Result{RequeueAfter: retryDelayOnGitHubAPIRateLimitError}, err
}
return ctrl.Result{}, err
}
registrationTimeout := 15 * time.Minute
currentTime := time.Now()
registrationDidTimeout := currentTime.Sub(runner.CreationTimestamp.Add(registrationTimeout)) > 0
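// Note: this scale-down registration timeout (15 minutes) is longer than the runner
// controller's pod-recreation timeout (10 minutes), presumably so that a never-registered
// runner gets at least one pod recreation attempt before becoming a scale-down candidate.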
if notRegistered && registrationDidTimeout {
log.Info(
"Runner failed to register itself to GitHub in timely manner. "+
"Marking the runner for scale down. "+
"CAUTION: If you see this a lot, you should investigate the root cause. "+
"See https://github.com/summerwind/actions-runner-controller/issues/288",
"runnerCreationTimestamp", runner.CreationTimestamp,
"currentTime", currentTime,
"configuredRegistrationTimeout", registrationTimeout,
)
notBusy = append(notBusy, runner)
}
// Offline runners are always good candidates for scale down
if offline {
notBusy = append(notBusy, runner)
}
} else if !busy {
notBusy = append(notBusy, runner)
}
}
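// At this point notBusy holds every scale-down candidate: runners that never registered
// within the timeout, runners reported offline, and registered runners that are idle.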
@@ -117,13 +185,13 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
}
for i := 0; i < n; i++ {
if err := r.Client.Delete(ctx, &notBusy[i]); err != nil {
if err := r.Client.Delete(ctx, &notBusy[i]); client.IgnoreNotFound(err) != nil {
log.Error(err, "Failed to delete runner resource")
return ctrl.Result{}, err
}
r.Recorder.Event(&rs, corev1.EventTypeNormal, "RunnerDeleted", fmt.Sprintf("Deleted runner '%s'", myRunners[i].Name))
r.Recorder.Event(&rs, corev1.EventTypeNormal, "RunnerDeleted", fmt.Sprintf("Deleted runner '%s'", notBusy[i].Name))
log.Info("Deleted runner", "runnerreplicaset", rs.ObjectMeta.Name)
}
} else if desired > available {
@@ -151,8 +219,10 @@ func (r *RunnerReplicaSetReconciler) Reconcile(req ctrl.Request) (ctrl.Result, e
updated.Status.ReadyReplicas = ready
if err := r.Status().Update(ctx, updated); err != nil {
log.Error(err, "Failed to update runner status")
return ctrl.Result{}, err
log.Info("Failed to update status. Retrying immediately", "error", err.Error())
return ctrl.Result{
Requeue: true,
}, nil
}
}
@@ -179,26 +249,16 @@ func (r *RunnerReplicaSetReconciler) newRunner(rs v1alpha1.RunnerReplicaSet) (v1
}
func (r *RunnerReplicaSetReconciler) SetupWithManager(mgr ctrl.Manager) error {
r.Recorder = mgr.GetEventRecorderFor("runnerreplicaset-controller")
name := "runnerreplicaset-controller"
if r.Name != "" {
name = r.Name
}
r.Recorder = mgr.GetEventRecorderFor(name)
return ctrl.NewControllerManagedBy(mgr).
For(&v1alpha1.RunnerReplicaSet{}).
Owns(&v1alpha1.Runner{}).
Named(name).
Complete(r)
}
func (r *RunnerReplicaSetReconciler) isRunnerBusy(ctx context.Context, org, repo, name string) (bool, error) {
runners, err := r.GitHubClient.ListRunners(ctx, org, repo)
r.Log.Info("runners", "github", runners)
if err != nil {
return false, err
}
for _, runner := range runners {
if runner.GetName() == name {
return runner.GetBusy(), nil
}
}
return false, fmt.Errorf("runner not found")
}

@@ -6,7 +6,7 @@ import (
"net/http/httptest"
"time"
"github.com/google/go-github/v32/github"
"github.com/google/go-github/v33/github"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
"k8s.io/client-go/kubernetes/scheme"
@@ -60,6 +60,7 @@ func SetupTest(ctx context.Context) *corev1.Namespace {
Log: logf.Log,
Recorder: mgr.GetEventRecorderFor("runnerreplicaset-controller"),
GitHubClient: ghClient,
Name: "runnerreplicaset-" + ns.Name,
}
err = controller.SetupWithManager(mgr)
Expect(err).NotTo(HaveOccurred(), "failed to setup controller")
@@ -114,7 +115,17 @@ var _ = Context("Inside of a new namespace", func() {
},
Spec: actionsv1alpha1.RunnerReplicaSetSpec{
Replicas: intPtr(1),
Selector: &metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
Template: actionsv1alpha1.RunnerTemplate{
ObjectMeta: metav1.ObjectMeta{
Labels: map[string]string{
"foo": "bar",
},
},
Spec: actionsv1alpha1.RunnerSpec{
Repository: "foo/bar",
Image: "bar",
@@ -134,9 +145,26 @@ var _ = Context("Inside of a new namespace", func() {
Eventually(
func() int {
err := k8sClient.List(ctx, &runners, client.InNamespace(ns.Name))
selector, err := metav1.LabelSelectorAsSelector(
&metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
)
if err != nil {
logf.Log.Error(err, "failed to create labelselector")
return -1
}
err = k8sClient.List(
ctx,
&runners,
client.InNamespace(ns.Name),
client.MatchingLabelsSelector{Selector: selector},
)
if err != nil {
logf.Log.Error(err, "list runners")
return -1
}
for i, runner := range runners.Items {
@@ -175,7 +203,23 @@ var _ = Context("Inside of a new namespace", func() {
Eventually(
func() int {
err := k8sClient.List(ctx, &runners, client.InNamespace(ns.Name))
selector, err := metav1.LabelSelectorAsSelector(
&metav1.LabelSelector{
MatchLabels: map[string]string{
"foo": "bar",
},
},
)
if err != nil {
logf.Log.Error(err, "failed to create labelselector")
return -1
}
err = k8sClient.List(
ctx,
&runners,
client.InNamespace(ns.Name),
client.MatchingLabelsSelector{Selector: selector},
)
if err != nil {
logf.Log.Error(err, "list runners")
}
@@ -219,6 +263,7 @@ var _ = Context("Inside of a new namespace", func() {
err := k8sClient.List(ctx, &runners, client.InNamespace(ns.Name))
if err != nil {
logf.Log.Error(err, "list runners")
return -1
}
for i, runner := range runners.Items {

@@ -17,6 +17,8 @@ limitations under the License.
package controllers
import (
"github.com/onsi/ginkgo/config"
"os"
"path/filepath"
"testing"
@@ -43,6 +45,8 @@ var testEnv *envtest.Environment
func TestAPIs(t *testing.T) {
RegisterFailHandler(Fail)
config.GinkgoConfig.FocusString = os.Getenv("GINKGO_FOCUS")
RunSpecsWithDefaultAndCustomReporters(t,
"Controller Suite",
[]Reporter{envtest.NewlineReporter{}})

@@ -0,0 +1,373 @@
{
"action": "created",
"check_run": {
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"head_sha": "1234567890123456789012345678901234567890",
"external_id": "92058b04-f16a-5035-546c-cae3ad5e2f5f",
"url": "https://api.github.com/repos/MYORG/MYREPO/check-runs/123467890",
"html_url": "https://github.com/MYORG/MYREPO/runs/123467890",
"details_url": "https://github.com/MYORG/MYREPO/runs/123467890",
"status": "queued",
"conclusion": null,
"started_at": "2021-02-18T06:16:31Z",
"completed_at": null,
"output": {
"title": null,
"summary": null,
"text": null,
"annotations_count": 0,
"annotations_url": "https://api.github.com/repos/MYORG/MYREPO/check-runs/123467890/annotations"
},
"name": "validate",
"check_suite": {
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"head_branch": "MYNAME/actions-runner-controller-webhook",
"head_sha": "1234567890123456789012345678901234567890",
"status": "queued",
"conclusion": null,
"url": "https://api.github.com/repos/MYORG/MYREPO/check-suites/1234567890",
"before": "1234567890123456789012345678901234567890",
"after": "1234567890123456789012345678901234567890",
"pull_requests": [
{
"url": "https://api.github.com/repos/MYORG/MYREPO/pulls/2033",
"id": 1234567890,
"number": 1234567890,
"head": {
"ref": "feature",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
},
"base": {
"ref": "master",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
}
}
],
"app": {
"id": 1234567890,
"slug": "github-actions",
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"owner": {
"login": "github",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/123467890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github",
"html_url": "https://github.com/github",
"followers_url": "https://api.github.com/users/github/followers",
"following_url": "https://api.github.com/users/github/following{/other_user}",
"gists_url": "https://api.github.com/users/github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github/subscriptions",
"organizations_url": "https://api.github.com/users/github/orgs",
"repos_url": "https://api.github.com/users/github/repos",
"events_url": "https://api.github.com/users/github/events{/privacy}",
"received_events_url": "https://api.github.com/users/github/received_events",
"type": "Organization",
"site_admin": false
},
"name": "GitHub Actions",
"description": "Automate your workflow from idea to production",
"external_url": "https://help.github.com/en/actions",
"html_url": "https://github.com/apps/github-actions",
"created_at": "2018-07-30T09:30:17Z",
"updated_at": "2019-12-10T19:04:12Z",
"permissions": {
"actions": "write",
"checks": "write",
"contents": "write",
"deployments": "write",
"issues": "write",
"metadata": "read",
"organization_packages": "write",
"packages": "write",
"pages": "write",
"pull_requests": "write",
"repository_hooks": "write",
"repository_projects": "write",
"security_events": "write",
"statuses": "write",
"vulnerability_alerts": "read"
},
"events": [
"check_run",
"check_suite",
"create",
"delete",
"deployment",
"deployment_status",
"fork",
"gollum",
"issues",
"issue_comment",
"label",
"milestone",
"page_build",
"project",
"project_card",
"project_column",
"public",
"pull_request",
"pull_request_review",
"pull_request_review_comment",
"push",
"registry_package",
"release",
"repository",
"repository_dispatch",
"status",
"watch",
"workflow_dispatch",
"workflow_run"
]
},
"created_at": "2021-02-18T06:15:32Z",
"updated_at": "2021-02-18T06:16:31Z"
},
"app": {
"id": 1234567890,
"slug": "github-actions",
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"owner": {
"login": "github",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github",
"html_url": "https://github.com/github",
"followers_url": "https://api.github.com/users/github/followers",
"following_url": "https://api.github.com/users/github/following{/other_user}",
"gists_url": "https://api.github.com/users/github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github/subscriptions",
"organizations_url": "https://api.github.com/users/github/orgs",
"repos_url": "https://api.github.com/users/github/repos",
"events_url": "https://api.github.com/users/github/events{/privacy}",
"received_events_url": "https://api.github.com/users/github/received_events",
"type": "Organization",
"site_admin": false
},
"name": "GitHub Actions",
"description": "Automate your workflow from idea to production",
"external_url": "https://help.github.com/en/actions",
"html_url": "https://github.com/apps/github-actions",
"created_at": "2018-07-30T09:30:17Z",
"updated_at": "2019-12-10T19:04:12Z",
"permissions": {
"actions": "write",
"checks": "write",
"contents": "write",
"deployments": "write",
"issues": "write",
"metadata": "read",
"organization_packages": "write",
"packages": "write",
"pages": "write",
"pull_requests": "write",
"repository_hooks": "write",
"repository_projects": "write",
"security_events": "write",
"statuses": "write",
"vulnerability_alerts": "read"
},
"events": [
"check_run",
"check_suite",
"create",
"delete",
"deployment",
"deployment_status",
"fork",
"gollum",
"issues",
"issue_comment",
"label",
"milestone",
"page_build",
"project",
"project_card",
"project_column",
"public",
"pull_request",
"pull_request_review",
"pull_request_review_comment",
"push",
"registry_package",
"release",
"repository",
"repository_dispatch",
"status",
"watch",
"workflow_dispatch",
"workflow_run"
]
},
"pull_requests": [
{
"url": "https://api.github.com/repos/MYORG/MYREPO/pulls/1234567890",
"id": 1234567890,
"number": 1234567890,
"head": {
"ref": "feature",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
},
"base": {
"ref": "master",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
}
}
]
},
"repository": {
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"name": "MYREPO",
"full_name": "MYORG/MYREPO",
"private": true,
"owner": {
"login": "MYORG",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MYORG",
"html_url": "https://github.com/MYORG",
"followers_url": "https://api.github.com/users/MYORG/followers",
"following_url": "https://api.github.com/users/MYORG/following{/other_user}",
"gists_url": "https://api.github.com/users/MYORG/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MYORG/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MYORG/subscriptions",
"organizations_url": "https://api.github.com/users/MYORG/orgs",
"repos_url": "https://api.github.com/users/MYORG/repos",
"events_url": "https://api.github.com/users/MYORG/events{/privacy}",
"received_events_url": "https://api.github.com/users/MYORG/received_events",
"type": "Organization",
"site_admin": false
},
"html_url": "https://github.com/MYORG/MYREPO",
"description": "MYREPO",
"fork": false,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"forks_url": "https://api.github.com/repos/MYORG/MYREPO/forks",
"keys_url": "https://api.github.com/repos/MYORG/MYREPO/keys{/key_id}",
"collaborators_url": "https://api.github.com/repos/MYORG/MYREPO/collaborators{/collaborator}",
"teams_url": "https://api.github.com/repos/MYORG/MYREPO/teams",
"hooks_url": "https://api.github.com/repos/MYORG/MYREPO/hooks",
"issue_events_url": "https://api.github.com/repos/MYORG/MYREPO/issues/events{/number}",
"events_url": "https://api.github.com/repos/MYORG/MYREPO/events",
"assignees_url": "https://api.github.com/repos/MYORG/MYREPO/assignees{/user}",
"branches_url": "https://api.github.com/repos/MYORG/MYREPO/branches{/branch}",
"tags_url": "https://api.github.com/repos/MYORG/MYREPO/tags",
"blobs_url": "https://api.github.com/repos/MYORG/MYREPO/git/blobs{/sha}",
"git_tags_url": "https://api.github.com/repos/MYORG/MYREPO/git/tags{/sha}",
"git_refs_url": "https://api.github.com/repos/MYORG/MYREPO/git/refs{/sha}",
"trees_url": "https://api.github.com/repos/MYORG/MYREPO/git/trees{/sha}",
"statuses_url": "https://api.github.com/repos/MYORG/MYREPO/statuses/{sha}",
"languages_url": "https://api.github.com/repos/MYORG/MYREPO/languages",
"stargazers_url": "https://api.github.com/repos/MYORG/MYREPO/stargazers",
"contributors_url": "https://api.github.com/repos/MYORG/MYREPO/contributors",
"subscribers_url": "https://api.github.com/repos/MYORG/MYREPO/subscribers",
"subscription_url": "https://api.github.com/repos/MYORG/MYREPO/subscription",
"commits_url": "https://api.github.com/repos/MYORG/MYREPO/commits{/sha}",
"git_commits_url": "https://api.github.com/repos/MYORG/MYREPO/git/commits{/sha}",
"comments_url": "https://api.github.com/repos/MYORG/MYREPO/comments{/number}",
"issue_comment_url": "https://api.github.com/repos/MYORG/MYREPO/issues/comments{/number}",
"contents_url": "https://api.github.com/repos/MYORG/MYREPO/contents/{+path}",
"compare_url": "https://api.github.com/repos/MYORG/MYREPO/compare/{base}...{head}",
"merges_url": "https://api.github.com/repos/MYORG/MYREPO/merges",
"archive_url": "https://api.github.com/repos/MYORG/MYREPO/{archive_format}{/ref}",
"downloads_url": "https://api.github.com/repos/MYORG/MYREPO/downloads",
"issues_url": "https://api.github.com/repos/MYORG/MYREPO/issues{/number}",
"pulls_url": "https://api.github.com/repos/MYORG/MYREPO/pulls{/number}",
"milestones_url": "https://api.github.com/repos/MYORG/MYREPO/milestones{/number}",
"notifications_url": "https://api.github.com/repos/MYORG/MYREPO/notifications{?since,all,participating}",
"labels_url": "https://api.github.com/repos/MYORG/MYREPO/labels{/name}",
"releases_url": "https://api.github.com/repos/MYORG/MYREPO/releases{/id}",
"deployments_url": "https://api.github.com/repos/MYORG/MYREPO/deployments",
"created_at": "2017-08-10T02:21:10Z",
"updated_at": "2021-02-18T04:40:55Z",
"pushed_at": "2021-02-18T06:15:30Z",
"git_url": "git://github.com/MYORG/MYREPO.git",
"ssh_url": "git@github.com:MYORG/MYREPO.git",
"clone_url": "https://github.com/MYORG/MYREPO.git",
"svn_url": "https://github.com/MYORG/MYREPO",
"homepage": null,
"size": 30782,
"stargazers_count": 2,
"watchers_count": 2,
"language": "Shell",
"has_issues": false,
"has_projects": true,
"has_downloads": true,
"has_wiki": false,
"has_pages": false,
"forks_count": 0,
"mirror_url": null,
"archived": false,
"disabled": false,
"open_issues_count": 6,
"license": null,
"forks": 0,
"open_issues": 6,
"watchers": 2,
"default_branch": "master"
},
"organization": {
"login": "MYORG",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"url": "https://api.github.com/orgs/MYORG",
"repos_url": "https://api.github.com/orgs/MYORG/repos",
"events_url": "https://api.github.com/orgs/MYORG/events",
"hooks_url": "https://api.github.com/orgs/MYORG/hooks",
"issues_url": "https://api.github.com/orgs/MYORG/issues",
"members_url": "https://api.github.com/orgs/MYORG/members{/member}",
"public_members_url": "https://api.github.com/orgs/MYORG/public_members{/member}",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"description": ""
},
"sender": {
"login": "MYNAME",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MYNAME",
"html_url": "https://github.com/MYNAME",
"followers_url": "https://api.github.com/users/MYNAME/followers",
"following_url": "https://api.github.com/users/MYNAME/following{/other_user}",
"gists_url": "https://api.github.com/users/MYNAME/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MYNAME/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MYNAME/subscriptions",
"organizations_url": "https://api.github.com/users/MYNAME/orgs",
"repos_url": "https://api.github.com/users/MYNAME/repos",
"events_url": "https://api.github.com/users/MYNAME/events{/privacy}",
"received_events_url": "https://api.github.com/users/MYNAME/received_events",
"type": "User",
"site_admin": false
}
}

@@ -0,0 +1,360 @@
{
"action": "completed",
"check_run": {
"id": 1949438388,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"head_sha": "1234567890123456789012345678901234567890",
"external_id": "ca395085-040a-526b-2ce8-bdc85f692774",
"url": "https://api.github.com/repos/MYORG/MYREPO/check-runs/123467890",
"html_url": "https://github.com/MYORG/MYREPO/runs/123467890",
"details_url": "https://github.com/MYORG/MYREPO/runs/123467890",
"status": "queued",
"conclusion": null,
"started_at": "2021-02-18T06:16:31Z",
"completed_at": null,
"output": {
"title": null,
"summary": null,
"text": null,
"annotations_count": 0,
"annotations_url": "https://api.github.com/repos/MYORG/MYREPO/check-runs/123467890/annotations"
},
"name": "build",
"name": "validate",
"check_suite": {
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"head_branch": "MYNAME/actions-runner-controller-webhook",
"head_sha": "1234567890123456789012345678901234567890",
"status": "queued",
"conclusion": null,
"url": "https://api.github.com/repos/MYORG/MYREPO/check-suites/1234567890",
"before": "1234567890123456789012345678901234567890",
"after": "1234567890123456789012345678901234567890",
"pull_requests": [
{
"url": "https://api.github.com/repos/MYORG/MYREPO/pulls/2033",
"id": 1234567890,
"number": 1234567890,
"head": {
"ref": "feature",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
},
"base": {
"ref": "master",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
}
}
],
"app": {
"id": 1234567890,
"slug": "github-actions",
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"owner": {
"login": "github",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/123467890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github",
"html_url": "https://github.com/github",
"followers_url": "https://api.github.com/users/github/followers",
"following_url": "https://api.github.com/users/github/following{/other_user}",
"gists_url": "https://api.github.com/users/github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github/subscriptions",
"organizations_url": "https://api.github.com/users/github/orgs",
"repos_url": "https://api.github.com/users/github/repos",
"events_url": "https://api.github.com/users/github/events{/privacy}",
"received_events_url": "https://api.github.com/users/github/received_events",
"type": "Organization",
"site_admin": false
},
"name": "GitHub Actions",
"description": "Automate your workflow from idea to production",
"external_url": "https://help.github.com/en/actions",
"html_url": "https://github.com/apps/github-actions",
"created_at": "2018-07-30T09:30:17Z",
"updated_at": "2019-12-10T19:04:12Z",
"permissions": {
"actions": "write",
"checks": "write",
"contents": "write",
"deployments": "write",
"issues": "write",
"metadata": "read",
"organization_packages": "write",
"packages": "write",
"pages": "write",
"pull_requests": "write",
"repository_hooks": "write",
"repository_projects": "write",
"security_events": "write",
"statuses": "write",
"vulnerability_alerts": "read"
},
"events": [
"check_run",
"check_suite",
"create",
"delete",
"deployment",
"deployment_status",
"fork",
"gollum",
"issues",
"issue_comment",
"label",
"milestone",
"page_build",
"project",
"project_card",
"project_column",
"public",
"pull_request",
"pull_request_review",
"pull_request_review_comment",
"push",
"registry_package",
"release",
"repository",
"repository_dispatch",
"status",
"watch",
"workflow_dispatch",
"workflow_run"
]
},
"created_at": "2021-02-18T06:15:32Z",
"updated_at": "2021-02-18T06:16:31Z"
},
"app": {
"id": 1234567890,
"slug": "github-actions",
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"owner": {
"login": "github",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/github",
"html_url": "https://github.com/github",
"followers_url": "https://api.github.com/users/github/followers",
"following_url": "https://api.github.com/users/github/following{/other_user}",
"gists_url": "https://api.github.com/users/github/gists{/gist_id}",
"starred_url": "https://api.github.com/users/github/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/github/subscriptions",
"organizations_url": "https://api.github.com/users/github/orgs",
"repos_url": "https://api.github.com/users/github/repos",
"events_url": "https://api.github.com/users/github/events{/privacy}",
"received_events_url": "https://api.github.com/users/github/received_events",
"type": "Organization",
"site_admin": false
},
"name": "GitHub Actions",
"description": "Automate your workflow from idea to production",
"external_url": "https://help.github.com/en/actions",
"html_url": "https://github.com/apps/github-actions",
"created_at": "2018-07-30T09:30:17Z",
"updated_at": "2019-12-10T19:04:12Z",
"permissions": {
"actions": "write",
"checks": "write",
"contents": "write",
"deployments": "write",
"issues": "write",
"metadata": "read",
"organization_packages": "write",
"packages": "write",
"pages": "write",
"pull_requests": "write",
"repository_hooks": "write",
"repository_projects": "write",
"security_events": "write",
"statuses": "write",
"vulnerability_alerts": "read"
},
"events": [
"check_run",
"check_suite",
"create",
"delete",
"deployment",
"deployment_status",
"fork",
"gollum",
"issues",
"issue_comment",
"label",
"milestone",
"page_build",
"project",
"project_card",
"project_column",
"public",
"pull_request",
"pull_request_review",
"pull_request_review_comment",
"push",
"registry_package",
"release",
"repository",
"repository_dispatch",
"status",
"watch",
"workflow_dispatch",
"workflow_run"
]
},
"pull_requests": [
{
"url": "https://api.github.com/repos/MYORG/MYREPO/pulls/1234567890",
"id": 1234567890,
"number": 1234567890,
"head": {
"ref": "feature",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
},
"base": {
"ref": "master",
"sha": "1234567890123456789012345678901234567890",
"repo": {
"id": 1234567890,
"url": "https://api.github.com/repos/MYORG/MYREPO",
"name": "MYREPO"
}
}
}
]
},
"repository": {
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"name": "MYREPO",
"full_name": "MYORG/MYREPO",
"private": true,
"owner": {
"login": "MYUSER",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MYUSER",
"html_url": "https://github.com/MYUSER",
"followers_url": "https://api.github.com/users/MYUSER/followers",
"following_url": "https://api.github.com/users/MYUSER/following{/other_user}",
"gists_url": "https://api.github.com/users/MYUSER/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MYUSER/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MYUSER/subscriptions",
"organizations_url": "https://api.github.com/users/MYUSER/orgs",
"repos_url": "https://api.github.com/users/MYUSER/repos",
"events_url": "https://api.github.com/users/MYUSER/events{/privacy}",
"received_events_url": "https://api.github.com/users/MYUSER/received_events",
"type": "User",
"site_admin": false
},
"html_url": "https://github.com/MYUSER/MYREPO",
"description": null,
"fork": false,
"url": "https://api.github.com/repos/MYUSER/MYREPO",
"forks_url": "https://api.github.com/repos/MYUSER/MYREPO/forks",
"keys_url": "https://api.github.com/repos/MYUSER/MYREPO/keys{/key_id}",
"collaborators_url": "https://api.github.com/repos/MYUSER/MYREPO/collaborators{/collaborator}",
"teams_url": "https://api.github.com/repos/MYUSER/MYREPO/teams",
"hooks_url": "https://api.github.com/repos/MYUSER/MYREPO/hooks",
"issue_events_url": "https://api.github.com/repos/MYUSER/MYREPO/issues/events{/number}",
"events_url": "https://api.github.com/repos/MYUSER/MYREPO/events",
"assignees_url": "https://api.github.com/repos/MYUSER/MYREPO/assignees{/user}",
"branches_url": "https://api.github.com/repos/MYUSER/MYREPO/branches{/branch}",
"tags_url": "https://api.github.com/repos/MYUSER/MYREPO/tags",
"blobs_url": "https://api.github.com/repos/MYUSER/MYREPO/git/blobs{/sha}",
"git_tags_url": "https://api.github.com/repos/MYUSER/MYREPO/git/tags{/sha}",
"git_refs_url": "https://api.github.com/repos/MYUSER/MYREPO/git/refs{/sha}",
"trees_url": "https://api.github.com/repos/MYUSER/MYREPO/git/trees{/sha}",
"statuses_url": "https://api.github.com/repos/MYUSER/MYREPO/statuses/{sha}",
"languages_url": "https://api.github.com/repos/MYUSER/MYREPO/languages",
"stargazers_url": "https://api.github.com/repos/MYUSER/MYREPO/stargazers",
"contributors_url": "https://api.github.com/repos/MYUSER/MYREPO/contributors",
"subscribers_url": "https://api.github.com/repos/MYUSER/MYREPO/subscribers",
"subscription_url": "https://api.github.com/repos/MYUSER/MYREPO/subscription",
"commits_url": "https://api.github.com/repos/MYUSER/MYREPO/commits{/sha}",
"git_commits_url": "https://api.github.com/repos/MYUSER/MYREPO/git/commits{/sha}",
"comments_url": "https://api.github.com/repos/MYUSER/MYREPO/comments{/number}",
"issue_comment_url": "https://api.github.com/repos/MYUSER/MYREPO/issues/comments{/number}",
"contents_url": "https://api.github.com/repos/MYUSER/MYREPO/contents/{+path}",
"compare_url": "https://api.github.com/repos/MYUSER/MYREPO/compare/{base}...{head}",
"merges_url": "https://api.github.com/repos/MYUSER/MYREPO/merges",
"archive_url": "https://api.github.com/repos/MYUSER/MYREPO/{archive_format}{/ref}",
"downloads_url": "https://api.github.com/repos/MYUSER/MYREPO/downloads",
"issues_url": "https://api.github.com/repos/MYUSER/MYREPO/issues{/number}",
"pulls_url": "https://api.github.com/repos/MYUSER/MYREPO/pulls{/number}",
"milestones_url": "https://api.github.com/repos/MYUSER/MYREPO/milestones{/number}",
"notifications_url": "https://api.github.com/repos/MYUSER/MYREPO/notifications{?since,all,participating}",
"labels_url": "https://api.github.com/repos/MYUSER/MYREPO/labels{/name}",
"releases_url": "https://api.github.com/repos/MYUSER/MYREPO/releases{/id}",
"deployments_url": "https://api.github.com/repos/MYUSER/MYREPO/deployments",
"created_at": "2021-02-18T06:16:31Z",
"updated_at": "2021-02-18T06:16:31Z",
"pushed_at": "2021-02-18T06:16:31Z",
"git_url": "git://github.com/MYUSER/MYREPO.git",
"ssh_url": "git@github.com:MYUSER/MYREPO.git",
"clone_url": "https://github.com/MYUSER/MYREPO.git",
"svn_url": "https://github.com/MYUSER/MYREPO",
"homepage": null,
"size": 4,
"stargazers_count": 0,
"watchers_count": 0,
"language": null,
"has_issues": true,
"has_projects": true,
"has_downloads": true,
"has_wiki": true,
"has_pages": false,
"forks_count": 0,
"mirror_url": null,
"archived": false,
"disabled": false,
"open_issues_count": 0,
"license": null,
"forks": 0,
"open_issues": 0,
"watchers": 0,
"default_branch": "main"
},
"sender": {
"login": "MYUSER",
"id": 1234567890,
"node_id": "ABCDEFGHIJKLMNOPQRSTUVWXYZ",
"avatar_url": "https://avatars.githubusercontent.com/u/1234567890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MYUSER",
"html_url": "https://github.com/MYUSER",
"followers_url": "https://api.github.com/users/MYUSER/followers",
"following_url": "https://api.github.com/users/MYUSER/following{/other_user}",
"gists_url": "https://api.github.com/users/MYUSER/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MYUSER/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MYUSER/subscriptions",
"organizations_url": "https://api.github.com/users/MYUSER/orgs",
"repos_url": "https://api.github.com/users/MYUSER/repos",
"events_url": "https://api.github.com/users/MYUSER/events{/privacy}",
"received_events_url": "https://api.github.com/users/MYUSER/received_events",
"type": "User",
"site_admin": false
}
}
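For reference, a minimal sketch of consuming a payload like the one above with go-github v33 (the version this changeset upgrades to). The route, port, and webhook secret are placeholders, not the controller's actual configuration:

package main

import (
    "fmt"
    "log"
    "net/http"

    "github.com/google/go-github/v33/github"
)

func main() {
    secret := []byte("my-webhook-secret") // placeholder

    http.HandleFunc("/webhook", func(w http.ResponseWriter, r *http.Request) {
        // Verify the signature before trusting the body.
        payload, err := github.ValidatePayload(r, secret)
        if err != nil {
            http.Error(w, "invalid signature", http.StatusUnauthorized)
            return
        }

        event, err := github.ParseWebHook(github.WebHookType(r), payload)
        if err != nil {
            http.Error(w, "unparsable payload", http.StatusBadRequest)
            return
        }

        // The fields logged by the webhook-based autoscaler, e.g.
        // checkRun.status and pullRequest.base.ref, come from these objects.
        switch e := event.(type) {
        case *github.CheckRunEvent:
            fmt.Println("check_run:", e.GetCheckRun().GetName(), e.GetCheckRun().GetStatus())
        case *github.PullRequestEvent:
            fmt.Println("pull_request:", e.GetAction(), e.GetPullRequest().GetBase().GetRef())
        }
    })

    log.Fatal(http.ListenAndServe(":8000", nil))
}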

controllers/utils.go (new file, 13 lines)

@@ -0,0 +1,13 @@
package controllers

func filterLabels(labels map[string]string, filter string) map[string]string {
    filtered := map[string]string{}

    for k, v := range labels {
        if k != filter {
            filtered[k] = v
        }
    }

    return filtered
}

controllers/utils_test.go (new file, 34 lines)

@@ -0,0 +1,34 @@
package controllers

import (
    "reflect"
    "testing"
)

func Test_filterLabels(t *testing.T) {
    type args struct {
        labels map[string]string
        filter string
    }
    tests := []struct {
        name string
        args args
        want map[string]string
    }{
        {
            name: "ok",
            args: args{
                labels: map[string]string{LabelKeyRunnerTemplateHash: "abc", LabelKeyPodTemplateHash: "def"},
                filter: LabelKeyRunnerTemplateHash,
            },
            want: map[string]string{LabelKeyPodTemplateHash: "def"},
        },
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            if got := filterLabels(tt.args.labels, tt.args.filter); !reflect.DeepEqual(got, tt.want) {
                t.Errorf("filterLabels() = %v, want %v", got, tt.want)
            }
        })
    }
}


@@ -24,13 +24,34 @@ const (
`
)
type Handler struct {
type ListRunnersHandler struct {
Status int
Body string
}
func (h *ListRunnersHandler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
w.WriteHeader(h.Status)
fmt.Fprintf(w, h.Body)
}
type Handler struct {
Status int
Body string
Statuses map[string]string
}
func (h *Handler) ServeHTTP(w http.ResponseWriter, req *http.Request) {
w.WriteHeader(h.Status)
status := req.URL.Query().Get("status")
if h.Statuses != nil {
if body, ok := h.Statuses[status]; ok {
fmt.Fprintf(w, body)
return
}
}
fmt.Fprintf(w, h.Body)
}
@@ -92,12 +113,21 @@ func NewServer(opts ...Option) *httptest.Server {
Status: http.StatusBadRequest,
Body: "",
},
"/enterprises/test/actions/runners/registration-token": &Handler{
Status: http.StatusCreated,
Body: fmt.Sprintf("{\"token\": \"%s\", \"expires_at\": \"%s\"}", RegistrationToken, time.Now().Add(time.Hour*1).Format(time.RFC3339)),
},
"/enterprises/invalid/actions/runners/registration-token": &Handler{
Status: http.StatusOK,
Body: fmt.Sprintf("{\"token\": \"%s\", \"expires_at\": \"%s\"}", RegistrationToken, time.Now().Add(time.Hour*1).Format(time.RFC3339)),
},
"/enterprises/error/actions/runners/registration-token": &Handler{
Status: http.StatusBadRequest,
Body: "",
},
// For ListRunners
"/repos/test/valid/actions/runners": &Handler{
Status: http.StatusOK,
Body: RunnersListBody,
},
"/repos/test/valid/actions/runners": config.FixedResponses.ListRunners,
"/repos/test/invalid/actions/runners": &Handler{
Status: http.StatusNoContent,
Body: "",
@@ -118,6 +148,18 @@ func NewServer(opts ...Option) *httptest.Server {
Status: http.StatusBadRequest,
Body: "",
},
"/enterprises/test/actions/runners": &Handler{
Status: http.StatusOK,
Body: RunnersListBody,
},
"/enterprises/invalid/actions/runners": &Handler{
Status: http.StatusNoContent,
Body: "",
},
"/enterprises/error/actions/runners": &Handler{
Status: http.StatusBadRequest,
Body: "",
},
// For RemoveRunner
"/repos/test/valid/actions/runners/1": &Handler{
@@ -144,6 +186,18 @@ func NewServer(opts ...Option) *httptest.Server {
Status: http.StatusBadRequest,
Body: "",
},
"/enterprises/test/actions/runners/1": &Handler{
Status: http.StatusNoContent,
Body: "",
},
"/enterprises/invalid/actions/runners/1": &Handler{
Status: http.StatusOK,
Body: "",
},
"/enterprises/error/actions/runners/1": &Handler{
Status: http.StatusBadRequest,
Body: "",
},
// For auto-scaling based on the number of queued(pending) workflow runs
"/repos/test/valid/actions/runs": config.FixedResponses.ListRepositoryWorkflowRuns,
@@ -159,3 +213,10 @@ func NewServer(opts ...Option) *httptest.Server {
return httptest.NewServer(mux)
}
func DefaultListRunnersHandler() *ListRunnersHandler {
return &ListRunnersHandler{
Status: http.StatusOK,
Body: RunnersListBody,
}
}


@@ -1,17 +1,24 @@
package fake
import "net/http"
type FixedResponses struct {
ListRepositoryWorkflowRuns *Handler
ListWorkflowJobs *MapHandler
ListRunners http.Handler
}
type Option func(*ServerConfig)
func WithListRepositoryWorkflowRunsResponse(status int, body string) Option {
func WithListRepositoryWorkflowRunsResponse(status int, body, queued, in_progress string) Option {
return func(c *ServerConfig) {
c.FixedResponses.ListRepositoryWorkflowRuns = &Handler{
Status: status,
Body: body,
Statuses: map[string]string{
"queued": queued,
"in_progress": in_progress,
},
}
}
}
@@ -25,6 +32,15 @@ func WithListWorkflowJobsResponse(status int, bodies map[int]string) Option {
}
}
func WithListRunnersResponse(status int, body string) Option {
return func(c *ServerConfig) {
c.FixedResponses.ListRunners = &ListRunnersHandler{
Status: status,
Body: body,
}
}
}
func WithFixedResponses(responses *FixedResponses) Option {
return func(c *ServerConfig) {
c.FixedResponses = responses
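A minimal sketch of wiring these options together, mirroring the TestMain pattern further down; the import path follows the tests, and the printed URL is just for illustration:

package main

import (
    "fmt"

    "github.com/summerwind/actions-runner-controller/github/fake"
)

func main() {
    // The ListRunners route serves the canned runner list unless a test
    // swaps in its own handler via WithListRunnersResponse.
    res := &fake.FixedResponses{
        ListRunners: fake.DefaultListRunnersHandler(),
    }

    srv := fake.NewServer(fake.WithFixedResponses(res))
    defer srv.Close()

    fmt.Println("fake GitHub API listening at", srv.URL)
}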


@@ -2,11 +2,12 @@ package fake
import (
"encoding/json"
"github.com/summerwind/actions-runner-controller/api/v1alpha1"
"net/http"
"net/http/httptest"
"strconv"
"github.com/google/go-github/v32/github"
"github.com/google/go-github/v33/github"
"github.com/gorilla/mux"
)
@@ -29,15 +30,15 @@ func (r *RunnersList) Add(runner *github.Runner) {
func (r *RunnersList) GetServer() *httptest.Server {
router := mux.NewRouter()
router.Handle("/repos/{owner}/{repo}/actions/runners", r.handleList())
router.Handle("/repos/{owner}/{repo}/actions/runners", r.HandleList())
router.Handle("/repos/{owner}/{repo}/actions/runners/{id}", r.handleRemove())
router.Handle("/orgs/{org}/actions/runners", r.handleList())
router.Handle("/orgs/{org}/actions/runners", r.HandleList())
router.Handle("/orgs/{org}/actions/runners/{id}", r.handleRemove())
return httptest.NewServer(router)
}
func (r *RunnersList) handleList() http.HandlerFunc {
func (r *RunnersList) HandleList() http.HandlerFunc {
return func(w http.ResponseWriter, res *http.Request) {
j, err := json.Marshal(github.Runners{
TotalCount: len(r.runners),
@@ -64,6 +65,20 @@ func (r *RunnersList) handleRemove() http.HandlerFunc {
}
}
func (r *RunnersList) Sync(runners []v1alpha1.Runner) {
r.runners = nil
for i, want := range runners {
r.Add(&github.Runner{
ID: github.Int64(int64(i)),
Name: github.String(want.Name),
OS: github.String("linux"),
Status: github.String("online"),
Busy: github.Bool(false),
})
}
}
func exists(runners []*github.Runner, runner *github.Runner) bool {
for _, r := range runners {
if *r.Name == *runner.Name {


@@ -4,70 +4,112 @@ import (
"context"
"fmt"
"net/http"
"net/url"
"os"
"strings"
"sync"
"time"
"github.com/bradleyfalzon/ghinstallation"
"github.com/google/go-github/v32/github"
"github.com/google/go-github/v33/github"
"github.com/summerwind/actions-runner-controller/github/metrics"
"golang.org/x/oauth2"
)
// Config contains configuration for Github client
type Config struct {
EnterpriseURL string `split_words:"true"`
AppID int64 `split_words:"true"`
AppInstallationID int64 `split_words:"true"`
AppPrivateKey string `split_words:"true"`
Token string
}
// Client wraps GitHub client with some additional
type Client struct {
*github.Client
regTokens map[string]*github.RegistrationToken
mu sync.Mutex
// GithubBaseURL to Github without API suffix.
GithubBaseURL string
}
// NewClient returns a client authenticated as a GitHub App.
func NewClient(appID, installationID int64, privateKeyPath string) (*Client, error) {
tr, err := ghinstallation.NewKeyFromFile(http.DefaultTransport, appID, installationID, privateKeyPath)
if err != nil {
return nil, fmt.Errorf("authentication failed: %v", err)
// NewClient creates a Github Client
func (c *Config) NewClient() (*Client, error) {
var transport http.RoundTripper
if len(c.Token) > 0 {
transport = oauth2.NewClient(context.Background(), oauth2.StaticTokenSource(&oauth2.Token{AccessToken: c.Token})).Transport
} else {
var tr *ghinstallation.Transport
if _, err := os.Stat(c.AppPrivateKey); err == nil {
tr, err = ghinstallation.NewKeyFromFile(http.DefaultTransport, c.AppID, c.AppInstallationID, c.AppPrivateKey)
if err != nil {
return nil, fmt.Errorf("authentication failed: using private key at %s: %v", c.AppPrivateKey, err)
}
} else {
tr, err = ghinstallation.New(http.DefaultTransport, c.AppID, c.AppInstallationID, []byte(c.AppPrivateKey))
if err != nil {
return nil, fmt.Errorf("authentication failed: using private key of size %d (%s...): %v", len(c.AppPrivateKey), strings.Split(c.AppPrivateKey, "\n")[0], err)
}
}
if len(c.EnterpriseURL) > 0 {
githubAPIURL, err := getEnterpriseApiUrl(c.EnterpriseURL)
if err != nil {
return nil, fmt.Errorf("enterprise url incorrect: %v", err)
}
tr.BaseURL = githubAPIURL
}
transport = tr
}
transport = metrics.Transport{Transport: transport}
httpClient := &http.Client{Transport: transport}
var client *github.Client
var githubBaseURL string
if len(c.EnterpriseURL) > 0 {
var err error
client, err = github.NewEnterpriseClient(c.EnterpriseURL, c.EnterpriseURL, httpClient)
if err != nil {
return nil, fmt.Errorf("enterprise client creation failed: %v", err)
}
githubBaseURL = fmt.Sprintf("%s://%s%s", client.BaseURL.Scheme, client.BaseURL.Host, strings.TrimSuffix(client.BaseURL.Path, "api/v3/"))
} else {
client = github.NewClient(httpClient)
githubBaseURL = "https://github.com/"
}
gh := github.NewClient(&http.Client{Transport: tr})
return &Client{
Client: gh,
regTokens: map[string]*github.RegistrationToken{},
mu: sync.Mutex{},
}, nil
}
// NewClientWithAccessToken returns a client authenticated with personal access token.
func NewClientWithAccessToken(token string) (*Client, error) {
tc := oauth2.NewClient(context.Background(), oauth2.StaticTokenSource(
&oauth2.Token{AccessToken: token},
))
return &Client{
Client: github.NewClient(tc),
regTokens: map[string]*github.RegistrationToken{},
mu: sync.Mutex{},
Client: client,
regTokens: map[string]*github.RegistrationToken{},
mu: sync.Mutex{},
GithubBaseURL: githubBaseURL,
}, nil
}
// GetRegistrationToken returns a registration token tied with the name of repository and runner.
func (c *Client) GetRegistrationToken(ctx context.Context, org, repo, name string) (*github.RegistrationToken, error) {
func (c *Client) GetRegistrationToken(ctx context.Context, enterprise, org, repo, name string) (*github.RegistrationToken, error) {
c.mu.Lock()
defer c.mu.Unlock()
key := getRegistrationKey(org, repo)
key := getRegistrationKey(org, repo, enterprise)
rt, ok := c.regTokens[key]
if ok && rt.GetExpiresAt().After(time.Now().Add(-10*time.Minute)) {
// give runners that are just starting up a chance; they may miss the expiration date by a bit
runnerStartupTimeout := 3 * time.Minute
if ok && rt.GetExpiresAt().After(time.Now().Add(runnerStartupTimeout)) {
return rt, nil
}
owner, repo, err := getOwnerAndRepo(org, repo)
enterprise, owner, repo, err := getEnterpriseOrganisationAndRepo(enterprise, org, repo)
if err != nil {
return rt, err
}
rt, res, err := c.createRegistrationToken(ctx, owner, repo)
rt, res, err := c.createRegistrationToken(ctx, enterprise, owner, repo)
if err != nil {
return nil, fmt.Errorf("failed to create registration token: %v", err)
@@ -86,17 +128,17 @@ func (c *Client) GetRegistrationToken(ctx context.Context, org, repo, name strin
}
// RemoveRunner removes a runner with specified runner ID from repository.
func (c *Client) RemoveRunner(ctx context.Context, org, repo string, runnerID int64) error {
owner, repo, err := getOwnerAndRepo(org, repo)
func (c *Client) RemoveRunner(ctx context.Context, enterprise, org, repo string, runnerID int64) error {
enterprise, owner, repo, err := getEnterpriseOrganisationAndRepo(enterprise, org, repo)
if err != nil {
return err
}
res, err := c.removeRunner(ctx, owner, repo, runnerID)
res, err := c.removeRunner(ctx, enterprise, owner, repo, runnerID)
if err != nil {
return fmt.Errorf("failed to remove runner: %v", err)
return fmt.Errorf("failed to remove runner: %w", err)
}
if res.StatusCode != 204 {
@@ -107,8 +149,8 @@ func (c *Client) RemoveRunner(ctx context.Context, org, repo string, runnerID in
}
// ListRunners returns a list of runners of specified owner/repository name.
func (c *Client) ListRunners(ctx context.Context, org, repo string) ([]*github.Runner, error) {
owner, repo, err := getOwnerAndRepo(org, repo)
func (c *Client) ListRunners(ctx context.Context, enterprise, org, repo string) ([]*github.Runner, error) {
enterprise, owner, repo, err := getEnterpriseOrganisationAndRepo(enterprise, org, repo)
if err != nil {
return nil, err
@@ -118,10 +160,10 @@ func (c *Client) ListRunners(ctx context.Context, org, repo string) ([]*github.R
opts := github.ListOptions{PerPage: 10}
for {
list, res, err := c.listRunners(ctx, owner, repo, &opts)
list, res, err := c.listRunners(ctx, enterprise, owner, repo, &opts)
if err != nil {
return runners, fmt.Errorf("failed to list runners: %v", err)
return runners, fmt.Errorf("failed to list runners: %w", err)
}
runners = append(runners, list.Runners...)
@@ -146,50 +188,102 @@ func (c *Client) cleanup() {
}
}
// wrappers for github functions (switch between organization/repository mode)
// wrappers for github functions (switch between enterprise/organization/repository mode)
// so the calling functions don't need to switch and their code is a bit cleaner
func (c *Client) createRegistrationToken(ctx context.Context, owner, repo string) (*github.RegistrationToken, *github.Response, error) {
func (c *Client) createRegistrationToken(ctx context.Context, enterprise, org, repo string) (*github.RegistrationToken, *github.Response, error) {
if len(repo) > 0 {
return c.Client.Actions.CreateRegistrationToken(ctx, owner, repo)
} else {
return CreateOrganizationRegistrationToken(ctx, c, owner)
}
}
func (c *Client) removeRunner(ctx context.Context, owner, repo string, runnerID int64) (*github.Response, error) {
if len(repo) > 0 {
return c.Client.Actions.RemoveRunner(ctx, owner, repo, runnerID)
} else {
return RemoveOrganizationRunner(ctx, c, owner, runnerID)
}
}
func (c *Client) listRunners(ctx context.Context, owner, repo string, opts *github.ListOptions) (*github.Runners, *github.Response, error) {
if len(repo) > 0 {
return c.Client.Actions.ListRunners(ctx, owner, repo, opts)
} else {
return ListOrganizationRunners(ctx, c, owner, opts)
}
}
// Validates owner and repo arguments. Both are optional, but at least one should be specified
func getOwnerAndRepo(org, repo string) (string, string, error) {
if len(repo) > 0 {
return splitOwnerAndRepo(repo)
return c.Client.Actions.CreateRegistrationToken(ctx, org, repo)
}
if len(org) > 0 {
return org, "", nil
return c.Client.Actions.CreateOrganizationRegistrationToken(ctx, org)
}
return "", "", fmt.Errorf("organization and repository are both empty")
return c.Client.Enterprise.CreateRegistrationToken(ctx, enterprise)
}
func getRegistrationKey(org, repo string) string {
if len(org) > 0 {
return org
} else {
return repo
func (c *Client) removeRunner(ctx context.Context, enterprise, org, repo string, runnerID int64) (*github.Response, error) {
if len(repo) > 0 {
return c.Client.Actions.RemoveRunner(ctx, org, repo, runnerID)
}
if len(org) > 0 {
return c.Client.Actions.RemoveOrganizationRunner(ctx, org, runnerID)
}
return c.Client.Enterprise.RemoveRunner(ctx, enterprise, runnerID)
}
func (c *Client) listRunners(ctx context.Context, enterprise, org, repo string, opts *github.ListOptions) (*github.Runners, *github.Response, error) {
if len(repo) > 0 {
return c.Client.Actions.ListRunners(ctx, org, repo, opts)
}
if len(org) > 0 {
return c.Client.Actions.ListOrganizationRunners(ctx, org, opts)
}
return c.Client.Enterprise.ListRunners(ctx, enterprise, opts)
}
func (c *Client) ListRepositoryWorkflowRuns(ctx context.Context, user string, repoName string) ([]*github.WorkflowRun, error) {
queued, err := c.listRepositoryWorkflowRuns(ctx, user, repoName, "queued")
if err != nil {
return nil, fmt.Errorf("listing queued workflow runs: %w", err)
}
inProgress, err := c.listRepositoryWorkflowRuns(ctx, user, repoName, "in_progress")
if err != nil {
return nil, fmt.Errorf("listing in_progress workflow runs: %w", err)
}
var workflowRuns []*github.WorkflowRun
workflowRuns = append(workflowRuns, queued...)
workflowRuns = append(workflowRuns, inProgress...)
return workflowRuns, nil
}
func (c *Client) listRepositoryWorkflowRuns(ctx context.Context, user string, repoName, status string) ([]*github.WorkflowRun, error) {
var workflowRuns []*github.WorkflowRun
opts := github.ListWorkflowRunsOptions{
ListOptions: github.ListOptions{
PerPage: 100,
},
Status: status,
}
for {
list, res, err := c.Client.Actions.ListRepositoryWorkflowRuns(ctx, user, repoName, &opts)
if err != nil {
return workflowRuns, fmt.Errorf("failed to list workflow runs: %v", err)
}
workflowRuns = append(workflowRuns, list.WorkflowRuns...)
if res.NextPage == 0 {
break
}
opts.Page = res.NextPage
}
return workflowRuns, nil
}
// Validates enterprise, organisation and repo arguments. All are optional, but at least one must be specified
func getEnterpriseOrganisationAndRepo(enterprise, org, repo string) (string, string, string, error) {
if len(repo) > 0 {
owner, repository, err := splitOwnerAndRepo(repo)
return "", owner, repository, err
}
if len(org) > 0 {
return "", org, "", nil
}
if len(enterprise) > 0 {
return enterprise, "", "", nil
}
return "", "", "", fmt.Errorf("enterprise, organization and repository are all empty")
}
func getRegistrationKey(org, repo, enterprise string) string {
return fmt.Sprintf("org=%s,repo=%s,enterprise=%s", org, repo, enterprise)
}
func splitOwnerAndRepo(repo string) (string, string, error) {
@@ -199,3 +293,55 @@ func splitOwnerAndRepo(repo string) (string, string, error) {
}
return chunk[0], chunk[1], nil
}
func getEnterpriseApiUrl(baseURL string) (string, error) {
baseEndpoint, err := url.Parse(baseURL)
if err != nil {
return "", err
}
if !strings.HasSuffix(baseEndpoint.Path, "/") {
baseEndpoint.Path += "/"
}
if !strings.HasSuffix(baseEndpoint.Path, "/api/v3/") &&
!strings.HasPrefix(baseEndpoint.Host, "api.") &&
!strings.Contains(baseEndpoint.Host, ".api.") {
baseEndpoint.Path += "api/v3/"
}
// Trim trailing slash, otherwise there's double slash added to token endpoint
return fmt.Sprintf("%s://%s%s", baseEndpoint.Scheme, baseEndpoint.Host, strings.TrimSuffix(baseEndpoint.Path, "/")), nil
}
type RunnerNotFound struct {
runnerName string
}
func (e *RunnerNotFound) Error() string {
return fmt.Sprintf("runner %q not found", e.runnerName)
}
type RunnerOffline struct {
runnerName string
}
func (e *RunnerOffline) Error() string {
return fmt.Sprintf("runner %q offline", e.runnerName)
}
func (r *Client) IsRunnerBusy(ctx context.Context, enterprise, org, repo, name string) (bool, error) {
runners, err := r.ListRunners(ctx, enterprise, org, repo)
if err != nil {
return false, err
}
for _, runner := range runners {
if runner.GetName() == name {
if runner.GetStatus() == "offline" {
return false, &RunnerOffline{runnerName: name}
}
return runner.GetBusy(), nil
}
}
return false, &RunnerNotFound{runnerName: name}
}
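Two behaviours in this file are worth calling out: getEnterpriseApiUrl only appends /api/v3/ when the base URL does not already look like an API endpoint, and IsRunnerBusy reports "offline" and "not registered" through typed errors. A sketch of how a caller might use the latter (credentials, hostnames, and runner names below are made up, and a real token is needed for the call to succeed):

package main

import (
    "context"
    "errors"
    "fmt"

    "github.com/summerwind/actions-runner-controller/github"
)

func main() {
    // getEnterpriseApiUrl (unexported, shown above) maps, for example:
    //   https://ghes.example.com         -> https://ghes.example.com/api/v3
    //   https://ghes.example.com/api/v3/ -> https://ghes.example.com/api/v3
    //   https://api.ghes.example.com     -> https://api.ghes.example.com

    c := &github.Config{Token: "ghp_example"} // placeholder credentials
    client, err := c.NewClient()
    if err != nil {
        panic(err)
    }

    busy, err := client.IsRunnerBusy(context.Background(), "", "myorg", "", "runner-abc123")

    var offline *github.RunnerOffline
    var notFound *github.RunnerNotFound
    switch {
    case errors.As(err, &offline), errors.As(err, &notFound):
        // Both cases mean "not busy yet": the runner is registered but
        // offline, or has not registered itself at all.
        fmt.Println("runner not usable yet:", err)
    case err != nil:
        fmt.Println("unexpected error:", err)
    default:
        fmt.Println("busy:", busy)
    }
}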


@@ -1,95 +0,0 @@
package github
// this contains BETA API clients, that are currently not (yet) in go-github
// once these functions have been added there, they can be removed from here
// code was reused from https://github.com/google/go-github
import (
"context"
"fmt"
"net/url"
"reflect"
"github.com/google/go-github/v32/github"
"github.com/google/go-querystring/query"
)
// CreateOrganizationRegistrationToken creates a token that can be used to add a self-hosted runner on an organization.
//
// GitHub API docs: https://developer.github.com/v3/actions/self-hosted-runners/#create-a-registration-token-for-an-organization
func CreateOrganizationRegistrationToken(ctx context.Context, client *Client, owner string) (*github.RegistrationToken, *github.Response, error) {
u := fmt.Sprintf("orgs/%v/actions/runners/registration-token", owner)
req, err := client.NewRequest("POST", u, nil)
if err != nil {
return nil, nil, err
}
registrationToken := new(github.RegistrationToken)
resp, err := client.Do(ctx, req, registrationToken)
if err != nil {
return nil, resp, err
}
return registrationToken, resp, nil
}
// ListOrganizationRunners lists all the self-hosted runners for an organization.
//
// GitHub API docs: https://developer.github.com/v3/actions/self-hosted-runners/#list-self-hosted-runners-for-an-organization
func ListOrganizationRunners(ctx context.Context, client *Client, owner string, opts *github.ListOptions) (*github.Runners, *github.Response, error) {
u := fmt.Sprintf("orgs/%v/actions/runners", owner)
u, err := addOptions(u, opts)
if err != nil {
return nil, nil, err
}
req, err := client.NewRequest("GET", u, nil)
if err != nil {
return nil, nil, err
}
runners := &github.Runners{}
resp, err := client.Do(ctx, req, &runners)
if err != nil {
return nil, resp, err
}
return runners, resp, nil
}
// RemoveOrganizationRunner forces the removal of a self-hosted runner in a repository using the runner id.
//
// GitHub API docs: https://developer.github.com/v3/actions/self_hosted_runners/#remove-a-self-hosted-runner
func RemoveOrganizationRunner(ctx context.Context, client *Client, owner string, runnerID int64) (*github.Response, error) {
u := fmt.Sprintf("orgs/%v/actions/runners/%v", owner, runnerID)
req, err := client.NewRequest("DELETE", u, nil)
if err != nil {
return nil, err
}
return client.Do(ctx, req, nil)
}
// addOptions adds the parameters in opt as URL query parameters to s. opt
// must be a struct whose fields may contain "url" tags.
func addOptions(s string, opts interface{}) (string, error) {
v := reflect.ValueOf(opts)
if v.Kind() == reflect.Ptr && v.IsNil() {
return s, nil
}
u, err := url.Parse(s)
if err != nil {
return s, err
}
qs, err := query.Values(opts)
if err != nil {
return s, err
}
u.RawQuery = qs.Encode()
return u.String(), nil
}


@@ -7,14 +7,17 @@ import (
"testing"
"time"
"github.com/google/go-github/v32/github"
"github.com/google/go-github/v33/github"
"github.com/summerwind/actions-runner-controller/github/fake"
)
var server *httptest.Server
func newTestClient() *Client {
client, err := NewClientWithAccessToken("token")
c := Config{
Token: "token",
}
client, err := c.NewClient()
if err != nil {
panic(err)
}
@@ -29,29 +32,36 @@ func newTestClient() *Client {
}
func TestMain(m *testing.M) {
server = fake.NewServer()
res := &fake.FixedResponses{
ListRunners: fake.DefaultListRunnersHandler(),
}
server = fake.NewServer(fake.WithFixedResponses(res))
defer server.Close()
m.Run()
}
func TestGetRegistrationToken(t *testing.T) {
tests := []struct {
org string
repo string
token string
err bool
enterprise string
org string
repo string
token string
err bool
}{
{org: "", repo: "test/valid", token: fake.RegistrationToken, err: false},
{org: "", repo: "test/invalid", token: "", err: true},
{org: "", repo: "test/error", token: "", err: true},
{org: "test", repo: "", token: fake.RegistrationToken, err: false},
{org: "invalid", repo: "", token: "", err: true},
{org: "error", repo: "", token: "", err: true},
{enterprise: "", org: "", repo: "test/valid", token: fake.RegistrationToken, err: false},
{enterprise: "", org: "", repo: "test/invalid", token: "", err: true},
{enterprise: "", org: "", repo: "test/error", token: "", err: true},
{enterprise: "", org: "test", repo: "", token: fake.RegistrationToken, err: false},
{enterprise: "", org: "invalid", repo: "", token: "", err: true},
{enterprise: "", org: "error", repo: "", token: "", err: true},
{enterprise: "test", org: "", repo: "", token: fake.RegistrationToken, err: false},
{enterprise: "invalid", org: "", repo: "", token: "", err: true},
{enterprise: "error", org: "", repo: "", token: "", err: true},
}
client := newTestClient()
for i, tt := range tests {
rt, err := client.GetRegistrationToken(context.Background(), tt.org, tt.repo, "test")
rt, err := client.GetRegistrationToken(context.Background(), tt.enterprise, tt.org, tt.repo, "test")
if !tt.err && err != nil {
t.Errorf("[%d] unexpected error: %v", i, err)
}
@@ -63,22 +73,26 @@ func TestGetRegistrationToken(t *testing.T) {
func TestListRunners(t *testing.T) {
tests := []struct {
org string
repo string
length int
err bool
enterprise string
org string
repo string
length int
err bool
}{
{org: "", repo: "test/valid", length: 2, err: false},
{org: "", repo: "test/invalid", length: 0, err: true},
{org: "", repo: "test/error", length: 0, err: true},
{org: "test", repo: "", length: 2, err: false},
{org: "invalid", repo: "", length: 0, err: true},
{org: "error", repo: "", length: 0, err: true},
{enterprise: "", org: "", repo: "test/valid", length: 2, err: false},
{enterprise: "", org: "", repo: "test/invalid", length: 0, err: true},
{enterprise: "", org: "", repo: "test/error", length: 0, err: true},
{enterprise: "", org: "test", repo: "", length: 2, err: false},
{enterprise: "", org: "invalid", repo: "", length: 0, err: true},
{enterprise: "", org: "error", repo: "", length: 0, err: true},
{enterprise: "test", org: "", repo: "", length: 2, err: false},
{enterprise: "invalid", org: "", repo: "", length: 0, err: true},
{enterprise: "error", org: "", repo: "", length: 0, err: true},
}
client := newTestClient()
for i, tt := range tests {
runners, err := client.ListRunners(context.Background(), tt.org, tt.repo)
runners, err := client.ListRunners(context.Background(), tt.enterprise, tt.org, tt.repo)
if !tt.err && err != nil {
t.Errorf("[%d] unexpected error: %v", i, err)
}
@@ -90,21 +104,25 @@ func TestListRunners(t *testing.T) {
func TestRemoveRunner(t *testing.T) {
tests := []struct {
org string
repo string
err bool
enterprise string
org string
repo string
err bool
}{
{org: "", repo: "test/valid", err: false},
{org: "", repo: "test/invalid", err: true},
{org: "", repo: "test/error", err: true},
{org: "test", repo: "", err: false},
{org: "invalid", repo: "", err: true},
{org: "error", repo: "", err: true},
{enterprise: "", org: "", repo: "test/valid", err: false},
{enterprise: "", org: "", repo: "test/invalid", err: true},
{enterprise: "", org: "", repo: "test/error", err: true},
{enterprise: "", org: "test", repo: "", err: false},
{enterprise: "", org: "invalid", repo: "", err: true},
{enterprise: "", org: "error", repo: "", err: true},
{enterprise: "test", org: "", repo: "", err: false},
{enterprise: "invalid", org: "", repo: "", err: true},
{enterprise: "error", org: "", repo: "", err: true},
}
client := newTestClient()
for i, tt := range tests {
err := client.RemoveRunner(context.Background(), tt.org, tt.repo, int64(1))
err := client.RemoveRunner(context.Background(), tt.enterprise, tt.org, tt.repo, int64(1))
if !tt.err && err != nil {
t.Errorf("[%d] unexpected error: %v", i, err)
}


@@ -0,0 +1,63 @@
// Package metrics provides monitoring of the GitHub related metrics.
//
// This depends on the metrics exporter of kubebuilder.
// See https://book.kubebuilder.io/reference/metrics.html for details.
package metrics

import (
    "net/http"
    "strconv"

    "github.com/prometheus/client_golang/prometheus"
    "sigs.k8s.io/controller-runtime/pkg/metrics"
)

func init() {
    metrics.Registry.MustRegister(metricRateLimit, metricRateLimitRemaining)
}

var (
    // https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting
    metricRateLimit = prometheus.NewGauge(
        prometheus.GaugeOpts{
            Name: "github_rate_limit",
            Help: "The maximum number of requests you're permitted to make per hour",
        },
    )
    metricRateLimitRemaining = prometheus.NewGauge(
        prometheus.GaugeOpts{
            Name: "github_rate_limit_remaining",
            Help: "The number of requests remaining in the current rate limit window",
        },
    )
)

const (
    // https://docs.github.com/en/rest/overview/resources-in-the-rest-api#rate-limiting
    headerRateLimit          = "X-RateLimit-Limit"
    headerRateLimitRemaining = "X-RateLimit-Remaining"
)

// Transport wraps a transport with metrics monitoring
type Transport struct {
    Transport http.RoundTripper
}

func (t Transport) RoundTrip(req *http.Request) (*http.Response, error) {
    resp, err := t.Transport.RoundTrip(req)
    if resp != nil {
        parseResponse(resp)
    }
    return resp, err
}

func parseResponse(resp *http.Response) {
    rateLimit, err := strconv.Atoi(resp.Header.Get(headerRateLimit))
    if err == nil {
        metricRateLimit.Set(float64(rateLimit))
    }

    rateLimitRemaining, err := strconv.Atoi(resp.Header.Get(headerRateLimitRemaining))
    if err == nil {
        metricRateLimitRemaining.Set(float64(rateLimitRemaining))
    }
}
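A sketch of how any HTTP client gains these metrics by wrapping its transport, as the Config.NewClient change above does; the request target is just an example:

package main

import (
    "fmt"
    "net/http"

    "github.com/summerwind/actions-runner-controller/github/metrics"
)

func main() {
    // Every response carrying X-RateLimit-* headers updates the
    // github_rate_limit and github_rate_limit_remaining gauges, which are
    // registered on controller-runtime's registry and therefore show up on
    // the manager's /metrics endpoint.
    client := &http.Client{
        Transport: metrics.Transport{Transport: http.DefaultTransport},
    }

    resp, err := client.Get("https://api.github.com/rate_limit")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    fmt.Println("status:", resp.Status)
}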

go.mod (8 changed lines)

@@ -1,16 +1,18 @@
module github.com/summerwind/actions-runner-controller
go 1.13
go 1.15
require (
github.com/bradleyfalzon/ghinstallation v1.1.1
github.com/davecgh/go-spew v1.1.1
github.com/go-logr/logr v0.1.0
github.com/google/go-github/v32 v32.1.1-0.20200822031813-d57a3a84ba04
github.com/google/go-querystring v1.0.0
github.com/google/go-cmp v0.3.1
github.com/google/go-github/v33 v33.0.1-0.20210204004227-319dcffb518a
github.com/gorilla/mux v1.8.0
github.com/kelseyhightower/envconfig v1.4.0
github.com/onsi/ginkgo v1.8.0
github.com/onsi/gomega v1.5.0
github.com/prometheus/client_golang v0.9.2
github.com/stretchr/testify v1.4.0 // indirect
golang.org/x/oauth2 v0.0.0-20190604053449-0f29369cfe45
k8s.io/api v0.0.0-20190918155943-95b840bb6a1f

go.sum (6 changed lines)

@@ -118,8 +118,8 @@ github.com/google/go-cmp v0.3.1 h1:Xye71clBPdm5HgqGwUkwhbynsUJZhDbS20FvLhQ2izg=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-github/v29 v29.0.2 h1:opYN6Wc7DOz7Ku3Oh4l7prmkOMwEcQxpFtxdU8N8Pts=
github.com/google/go-github/v29 v29.0.2/go.mod h1:CHKiKKPHJ0REzfwc14QMklvtHwCveD0PxlMjLlzAM5E=
github.com/google/go-github/v32 v32.1.1-0.20200822031813-d57a3a84ba04 h1:wEYk2h/GwOhImcVjiTIceP88WxVbXw2F+ARYUQMEsfg=
github.com/google/go-github/v32 v32.1.1-0.20200822031813-d57a3a84ba04/go.mod h1:rIEpZD9CTDQwDK9GDrtMTycQNA4JU3qBsCizh3q2WCI=
github.com/google/go-github/v33 v33.0.1-0.20210204004227-319dcffb518a h1:Z9Nzq8ntvvXCLnFGOkzzcD8HDOzOo+obuwE5oK85vNQ=
github.com/google/go-github/v33 v33.0.1-0.20210204004227-319dcffb518a/go.mod h1:GMdDnVZY/2TsWgp/lkYnpSAh6TrzhANBBwm6k6TTEXg=
github.com/google/go-querystring v1.0.0 h1:Xkwi/a1rcvNg1PPYe5vI8GbeBY/jrVuDX5ASuANWTrk=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gofuzz v0.0.0-20161122191042-44d81051d367/go.mod h1:HP5RmnzzSNb993RKQDq4+1A4ia9nllfqcQFTQJedwGI=
@@ -158,6 +158,8 @@ github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCV
github.com/json-iterator/go v1.1.7 h1:KfgG9LzI+pYjr4xvmz/5H4FXjokeP+rlHLhv3iH62Fo=
github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/kelseyhightower/envconfig v1.4.0 h1:Im6hONhd3pLkfDFsbRgu68RDNkGF1r3dvMUtDTo2cv8=
github.com/kelseyhightower/envconfig v1.4.0/go.mod h1:cccZRl6mQpaq41TPp5QxidR+Sa3axMbJDNb//FQX6Gg=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=

hack/make-env.sh (new executable file, 19 lines)

@@ -0,0 +1,19 @@
#!/usr/bin/env bash

COMMIT=$(git rev-parse HEAD)
TAG=$(git describe --exact-match --abbrev=0 --tags "${COMMIT}" 2> /dev/null || true)
BRANCH=$(git branch | grep \* | cut -d ' ' -f2 | sed -e 's/[^a-zA-Z0-9+=._:/-]*//g' || true)
VERSION=""

if [ -z "$TAG" ]; then
  [[ -n "$BRANCH" ]] && VERSION="${BRANCH}-"
  VERSION="${VERSION}${COMMIT:0:8}"
else
  VERSION=$TAG
fi

if [ -n "$(git diff --shortstat 2> /dev/null | tail -n1)" ]; then
  VERSION="${VERSION}-dirty"
fi

export VERSION

hash/fnv.go (new file, 17 lines)

@@ -0,0 +1,17 @@
package hash

import (
    "fmt"
    "hash/fnv"

    "k8s.io/apimachinery/pkg/util/rand"
)

func FNVHashStringObjects(objs ...interface{}) string {
    hash := fnv.New32a()

    for _, obj := range objs {
        DeepHashObject(hash, obj)
    }

    return rand.SafeEncodeString(fmt.Sprint(hash.Sum32()))
}
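A sketch of why this helper suits template-hash labels: the encoding is deterministic for deeply-equal values and the output is safe for use as a Kubernetes label value. The struct here is illustrative, not the controller's actual template type:

package main

import (
    "fmt"

    "github.com/summerwind/actions-runner-controller/hash"
)

type runnerTemplate struct { // illustrative only
    Image  string
    Labels []string
}

func main() {
    a := runnerTemplate{Image: "runner:v1", Labels: []string{"linux"}}
    b := runnerTemplate{Image: "runner:v1", Labels: []string{"linux"}}

    // Equal templates hash identically, so the label only changes when the
    // template itself changes.
    fmt.Println(hash.FNVHashStringObjects(a) == hash.FNVHashStringObjects(b)) // true
    fmt.Println(hash.FNVHashStringObjects(a, "extra"))                        // a different, label-safe string
}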

hash/hash.go (new file, 25 lines)

@@ -0,0 +1,25 @@
// Copyright 2015 The Kubernetes Authors.
// hash.go is copied from kubernetes's pkg/util/hash.go
// See https://github.com/kubernetes/kubernetes/blob/e1c617a88ec286f5f6cb2589d6ac562d095e1068/pkg/util/hash/hash.go#L25-L37
package hash

import (
    "hash"

    "github.com/davecgh/go-spew/spew"
)

// DeepHashObject writes specified object to hash using the spew library
// which follows pointers and prints actual values of the nested objects
// ensuring the hash does not change when a pointer changes.
func DeepHashObject(hasher hash.Hash, objectToWrite interface{}) {
    hasher.Reset()
    printer := spew.ConfigState{
        Indent:         " ",
        SortKeys:       true,
        DisableMethods: true,
        SpewKeys:       true,
    }
    printer.Fprintf(hasher, "%#v", objectToWrite)
}

main.go (111 changed lines)

@@ -20,9 +20,10 @@ import (
"flag"
"fmt"
"os"
"strconv"
"strings"
"time"
"github.com/kelseyhightower/envconfig"
actionsv1alpha1 "github.com/summerwind/actions-runner-controller/api/v1alpha1"
"github.com/summerwind/actions-runner-controller/controllers"
"github.com/summerwind/actions-runner-controller/github"
@@ -62,74 +63,42 @@ func main() {
runnerImage string
dockerImage string
namespace string
ghToken string
ghAppID int64
ghAppInstallationID int64
ghAppPrivateKey string
commonRunnerLabels commaSeparatedStringSlice
)
var c github.Config
err = envconfig.Process("github", &c)
if err != nil {
fmt.Fprintln(os.Stderr, "Error: Environment variable read failed.")
}
flag.StringVar(&metricsAddr, "metrics-addr", ":8080", "The address the metric endpoint binds to.")
flag.BoolVar(&enableLeaderElection, "enable-leader-election", false,
"Enable leader election for controller manager. Enabling this will ensure there is only one active controller manager.")
flag.StringVar(&runnerImage, "runner-image", defaultRunnerImage, "The image name of self-hosted runner container.")
flag.StringVar(&dockerImage, "docker-image", defaultDockerImage, "The image name of docker sidecar container.")
flag.StringVar(&ghToken, "github-token", "", "The personal access token of GitHub.")
flag.Int64Var(&ghAppID, "github-app-id", 0, "The application ID of GitHub App.")
flag.Int64Var(&ghAppInstallationID, "github-app-installation-id", 0, "The installation ID of GitHub App.")
flag.StringVar(&ghAppPrivateKey, "github-app-private-key", "", "The path of a private key file to authenticate as a GitHub App")
flag.StringVar(&c.Token, "github-token", c.Token, "The personal access token of GitHub.")
flag.Int64Var(&c.AppID, "github-app-id", c.AppID, "The application ID of GitHub App.")
flag.Int64Var(&c.AppInstallationID, "github-app-installation-id", c.AppInstallationID, "The installation ID of GitHub App.")
flag.StringVar(&c.AppPrivateKey, "github-app-private-key", c.AppPrivateKey, "The path of a private key file to authenticate as a GitHub App")
flag.DurationVar(&syncPeriod, "sync-period", 10*time.Minute, "Determines the minimum frequency at which K8s resources managed by this controller are reconciled. When you use autoscaling, set to a lower value like 10 minute, because this corresponds to the minimum time to react on demand change")
flag.Var(&commonRunnerLabels, "common-runner-labels", "Runner labels in the K1=V1,K2=V2,... format that are inherited all the runners created by the controller. See https://github.com/summerwind/actions-runner-controller/issues/321 for more information")
flag.StringVar(&namespace, "watch-namespace", "", "The namespace to watch for custom resources. Set to empty for letting it watch for all namespaces.")
flag.Parse()
if ghToken == "" {
ghToken = os.Getenv("GITHUB_TOKEN")
}
if ghAppID == 0 {
appID, err := strconv.ParseInt(os.Getenv("GITHUB_APP_ID"), 10, 64)
if err == nil {
ghAppID = appID
}
}
if ghAppInstallationID == 0 {
appInstallationID, err := strconv.ParseInt(os.Getenv("GITHUB_APP_INSTALLATION_ID"), 10, 64)
if err == nil {
ghAppInstallationID = appInstallationID
}
}
if ghAppPrivateKey == "" {
ghAppPrivateKey = os.Getenv("GITHUB_APP_PRIVATE_KEY")
}
logger := zap.New(func(o *zap.Options) {
o.Development = true
})
if ghAppID != 0 {
if ghAppInstallationID == 0 {
fmt.Fprintln(os.Stderr, "Error: The installation ID must be specified.")
os.Exit(1)
}
if ghAppPrivateKey == "" {
fmt.Fprintln(os.Stderr, "Error: The path of a private key file must be specified.")
os.Exit(1)
}
ghClient, err = github.NewClient(ghAppID, ghAppInstallationID, ghAppPrivateKey)
if err != nil {
fmt.Fprintf(os.Stderr, "Error: Failed to create GitHub client: %v\n", err)
os.Exit(1)
}
} else if ghToken != "" {
ghClient, err = github.NewClientWithAccessToken(ghToken)
if err != nil {
fmt.Fprintf(os.Stderr, "Error: Failed to create GitHub client: %v\n", err)
os.Exit(1)
}
} else {
fmt.Fprintln(os.Stderr, "Error: GitHub App credentials or personal access token must be specified.")
ghClient, err = c.NewClient()
if err != nil {
fmt.Fprintln(os.Stderr, "Error: Client creation failed.", err)
os.Exit(1)
}
ctrl.SetLogger(zap.New(func(o *zap.Options) {
o.Development = true
}))
ctrl.SetLogger(logger)
mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
Scheme: scheme,
@@ -137,6 +106,7 @@ func main() {
LeaderElection: enableLeaderElection,
Port: 9443,
SyncPeriod: &syncPeriod,
Namespace: namespace,
})
if err != nil {
setupLog.Error(err, "unable to start manager")
@@ -170,9 +140,10 @@ func main() {
}
runnerDeploymentReconciler := &controllers.RunnerDeploymentReconciler{
Client: mgr.GetClient(),
Log: ctrl.Log.WithName("controllers").WithName("RunnerDeployment"),
Scheme: mgr.GetScheme(),
Client: mgr.GetClient(),
Log: ctrl.Log.WithName("controllers").WithName("RunnerDeployment"),
Scheme: mgr.GetScheme(),
CommonRunnerLabels: commonRunnerLabels,
}
if err = runnerDeploymentReconciler.SetupWithManager(mgr); err != nil {
@@ -181,10 +152,11 @@ func main() {
}
horizontalRunnerAutoscaler := &controllers.HorizontalRunnerAutoscalerReconciler{
Client: mgr.GetClient(),
Log: ctrl.Log.WithName("controllers").WithName("HorizontalRunnerAutoscaler"),
Scheme: mgr.GetScheme(),
GitHubClient: ghClient,
Client: mgr.GetClient(),
Log: ctrl.Log.WithName("controllers").WithName("HorizontalRunnerAutoscaler"),
Scheme: mgr.GetScheme(),
GitHubClient: ghClient,
CacheDuration: syncPeriod - 10*time.Second,
}
if err = horizontalRunnerAutoscaler.SetupWithManager(mgr); err != nil {
@@ -212,3 +184,20 @@ func main() {
os.Exit(1)
}
}
type commaSeparatedStringSlice []string
func (s *commaSeparatedStringSlice) String() string {
return fmt.Sprintf("%v", *s)
}
func (s *commaSeparatedStringSlice) Set(value string) error {
for _, v := range strings.Split(value, ",") {
if v == "" {
continue
}
*s = append(*s, v)
}
return nil
}
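A sketch of the env-to-flag flow introduced above: envconfig.Process fills github.Config from GITHUB_* variables (the split_words tags produce GITHUB_ENTERPRISE_URL, GITHUB_APP_ID, GITHUB_APP_INSTALLATION_ID, and so on), and the flags then default to those values. The environment values below are made up:

package main

import (
    "fmt"
    "os"

    "github.com/kelseyhightower/envconfig"
    "github.com/summerwind/actions-runner-controller/github"
)

func main() {
    os.Setenv("GITHUB_TOKEN", "ghp_example")                       // placeholder
    os.Setenv("GITHUB_ENTERPRISE_URL", "https://ghes.example.com") // placeholder

    var c github.Config
    if err := envconfig.Process("github", &c); err != nil {
        panic(err)
    }

    // These would become the defaults for -github-token and friends.
    fmt.Println(c.Token, c.EnterpriseURL)
}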


@@ -0,0 +1,8 @@
This package is an implementation of glob that is intended to simulate the behaviour of
https://github.com/actions/toolkit/tree/master/packages/glob in many cases.
This isn't a complete reimplementation of the referenced Node.js package.
Differences:
- This package doesn't implement `**`
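A test-style sketch of the matching semantics (the job names are hypothetical), which could sit alongside the implementation below:

package actionsglob

import "testing"

func TestMatchExamples(t *testing.T) {
    cases := []struct {
        pat, s string
        want   bool
    }{
        {"build", "build", true},         // exact match
        {"build-*", "build-linux", true}, // trailing wildcard
        {"*-linux", "build-linux", true}, // leading wildcard
        {"!build", "deploy", true},       // '!' inverts the result
        {"!build", "build", false},
    }

    for _, c := range cases {
        if got := Match(c.pat, c.s); got != c.want {
            t.Errorf("Match(%q, %q) = %v, want %v", c.pat, c.s, got, c.want)
        }
    }
}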


@@ -0,0 +1,78 @@
package actionsglob

import (
    "fmt"
    "strings"
)

func Match(pat string, s string) bool {
    if len(pat) == 0 {
        panic(fmt.Sprintf("unexpected length of pattern: %d", len(pat)))
    }

    var inverse bool

    if pat[0] == '!' {
        pat = pat[1:]
        inverse = true
    }

    tokens := strings.SplitAfter(pat, "*")

    var wildcardInHead bool

    for i := 0; i < len(tokens); i++ {
        p := tokens[i]

        if p == "" {
            s = ""
            break
        }

        if p == "*" {
            if i == len(tokens)-1 {
                s = ""
                break
            }

            wildcardInHead = true
            continue
        }

        wildcardInTail := p[len(p)-1] == '*'
        if wildcardInTail {
            p = p[:len(p)-1]
        }

        subs := strings.SplitN(s, p, 2)
        if len(subs) != 2 {
            // The literal segment does not occur in the remainder of s,
            // so the pattern cannot match.
            return inverse
        }

        if subs[0] != "" {
            if !wildcardInHead {
                break
            }
        }

        if subs[1] != "" {
            if !wildcardInTail {
                break
            }
        }

        s = subs[1]
        // A trailing "*" on this segment acts as a leading wildcard for the next one.
        wildcardInHead = wildcardInTail
    }

    r := s == ""

    if inverse {
        r = !r
    }

    return r
}

Some files were not shown because too many files have changed in this diff.